
Search theory

Search theory is a branch of microeconomics that analyzes how economic agents, such as workers or consumers, make decisions when facing uncertainty and costs in acquiring information to find suitable trading partners or opportunities. It models scenarios where buyers and sellers cannot instantly match, leading to frictions that affect market outcomes like prices, wages, and unemployment. Core concepts include sequential search—where agents evaluate one option at a time—and the reservation wage or reservation price, the threshold below which an offer is rejected in favor of continued searching.

The field originated in the 1960s with George Stigler's 1961 work on consumer search for prices, which introduced the idea of fixed-sample-size search to explain price dispersion despite competition. This was extended to dynamic sequential models, notably by John J. McCall in 1970 for job search, where unemployed workers balance search costs against the value of better offers, leading to the reservation wage concept. In the 1970s, economists like Peter Diamond incorporated search into equilibrium models, showing how frictions can sustain monopolistic outcomes even with many agents. Further advances in the 1980s and 1990s developed search and matching frameworks, integrating individual search behavior with aggregate market dynamics. Dale Mortensen and Christopher Pissarides built the canonical Diamond-Mortensen-Pissarides (DMP) model, which explains equilibrium unemployment as arising from matching inefficiencies in labor markets. Their contributions, along with Diamond's foundational work, earned the 2010 Nobel Memorial Prize in Economic Sciences for analyzing markets with search frictions.

Search theory has broad applications beyond labor economics, including housing markets where tenants search for affordable units, consumer product search amid online price comparisons, and monetary economics, where it helps explain the role of money as a medium of exchange.
Recent developments incorporate heterogeneous agents, endogenous search intensity, and computational methods to model complex equilibria, with ongoing research addressing digital markets and policy interventions like unemployment insurance.

Fundamentals

Core Concepts and Assumptions

Search theory in operations research focuses on developing mathematical models to optimize the allocation of limited search resources, such as time, personnel, or sensors, to maximize the probability of detecting a hidden target that may be stationary or moving. It quantifies uncertainties in the target's location, path, and detectability, using concepts like probability of containment (the chance the target is within the searched area) and probability of detection given containment (dependent on effort applied and environmental factors). The overall probability of success is the product of containment and detection probabilities, guiding optimal search planning to minimize resource waste. Key assumptions include: the target exists and is somewhere in the search space with a known prior distribution (e.g., uniform or based on the last known position); detection is probabilistic, modeled by a detection function that relates search effort to detection probability, often assuming independence of detections; resources are limited, requiring trade-offs in effort distribution; and the environment affects visibility, speed, and sensor performance (e.g., weather, terrain). Models often assume rational optimization to maximize detection probability or minimize expected search time, with extensions for moving targets using kinematic models of motion. These assumptions capture real-world frictions in military, search-and-rescue (SAR), or wildlife tracking scenarios, where exhaustive search is impractical. The foundational framework is the optimal allocation problem, where search planners distribute effort across space and time to maximize the cumulative detection probability. For a stationary target, this involves solving for effort that equalizes the marginal increase in detection per unit effort across areas. Below a certain effort threshold, additional search in low-probability areas yields diminishing returns, prompting focus on high-likelihood regions. This embodies utility maximization: planners continue allocating effort where the expected gain in success probability exceeds the cost in resources.
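The equal-marginal-return allocation can be sketched numerically. This is a minimal sketch, assuming the classic exponential detection law b(e) = 1 - exp(-w e) and hypothetical containment priors; `allocate_effort` and its bisection over the Lagrange multiplier are illustrative, not a standard library routine.

```python
import math

def allocate_effort(priors, total_effort, sweep_rate=1.0):
    """Split a fixed search effort across cells to maximize total
    detection probability sum_i p_i * (1 - exp(-w * e_i)), using the
    equal-marginal-return condition p_i * w * exp(-w * e_i) = lam."""
    def effort_for(lam):
        # cells with p_i * w <= lam receive no effort (diminishing returns)
        return [max(0.0, math.log(p * sweep_rate / lam) / sweep_rate)
                for p in priors]
    lo, hi = 1e-12, max(priors) * sweep_rate
    for _ in range(200):               # bisection on the multiplier
        lam = (lo + hi) / 2.0
        if sum(effort_for(lam)) > total_effort:
            lo = lam                   # budget exceeded: raise the bar
        else:
            hi = lam
    return effort_for((lo + hi) / 2.0)

priors = [0.5, 0.3, 0.2]               # hypothetical containment priors
efforts = allocate_effort(priors, total_effort=3.0)
p_success = sum(p * (1.0 - math.exp(-e)) for p, e in zip(priors, efforts))
```

At the solution, the marginal detection rate p_i exp(-e_i) is equalized across all cells that receive positive effort, which is exactly the text's condition for an optimal static allocation.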
Such principles underpin advanced models, including dynamic programming for adaptive searches. In search theory, simultaneous search involves deploying multiple search units or resources concurrently across different areas or paths to cover more ground in parallel, typically incurring fixed costs for coordination and communication, such as multiple aircraft in a search-and-rescue operation. This approach assumes commitment to the full coverage plan upfront, allowing rapid accumulation of effort but requiring accurate prior estimates of target location to optimize allocation. For example, in antisubmarine operations, convoys might use several ships searching sectors simultaneously to compare detections and refine estimates. The overall detection probability combines the units' independent contributions:
P_d = 1 - \prod_{i=1}^n (1 - p_i(e_i)),
where p_i(e_i) is the detection probability in area i given effort e_i, and n is the number of simultaneous units, subject to total effort constraint \sum e_i \leq E.
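The product form is straightforward to compute. A quick sketch with hypothetical per-unit efforts, assuming an exponential detection law p_i(e_i) = 1 - exp(-e_i):

```python
import math

def joint_detection(probs):
    """P_d = 1 - prod_i (1 - p_i): overall detection probability when
    n units search independently with per-unit probabilities p_i."""
    miss = 1.0
    for p in probs:
        miss *= (1.0 - p)
    return 1.0 - miss

# per-unit probabilities from an exponential detection law p_i = 1 - exp(-e_i)
efforts = [1.0, 0.5, 0.25]
probs = [1.0 - math.exp(-e) for e in efforts]
p_d = joint_detection(probs)  # equals 1 - exp(-(1.0 + 0.5 + 0.25))
```

Under the exponential law the product collapses to 1 - exp(-sum of efforts), so only total effort matters; with other detection functions the allocation across units matters as well.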
In contrast, sequential search deploys resources one at a time or in phases, observing outcomes (e.g., no detection) to update probabilities via Bayesian inference and adapt subsequent effort, common in ground searches or single-asset scenarios like a lone rescue helicopter sweeping areas serially. This allows for conditional stopping or redirection if a detection occurs or new information emerges, but risks delaying success due to serial processing. The process incorporates discounting for time-sensitive targets (e.g., drifting vessels), often over a finite horizon. The expected detection probability balances updated posteriors against costs:
V_t = \max_{A} \left[ P(C_t \mid A) P(D \mid C_t, e_t) + \left(1 - P(C_t \mid A) P(D \mid C_t, e_t)\right) \beta V_{t+1} \right] - c_t,
where V_t is the value at time t, A is the action (area chosen), P(C_t | A) is the containment probability, P(D | C_t, e_t) is detection given containment and effort, \beta is the discount factor, and c_t is the cost per period. This enables Bayesian updates for refining search areas, but introduces risks from incomplete early coverage.
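The Bayesian update driving this adaptivity has a simple closed form: after an unsuccessful sweep of one cell, posterior containment mass shifts toward unsearched cells. A minimal sketch with hypothetical numbers (`update_after_miss` is illustrative):

```python
def update_after_miss(priors, searched, p_detect):
    """Bayesian update of containment probabilities after an
    unsuccessful sweep of one cell.
    priors: containment probability per cell
    searched: index of the cell just searched
    p_detect: detection probability given the target was there."""
    miss = 1.0 - priors[searched] * p_detect   # P(no detection overall)
    post = [p / miss for p in priors]          # unsearched cells gain mass
    post[searched] = priors[searched] * (1.0 - p_detect) / miss
    return post

post = update_after_miss([0.5, 0.3, 0.2], searched=0, p_detect=0.8)
# probability mass shifts away from the searched cell toward the others
```

Iterating this update after each sweep, and re-planning effort against the posterior, is the core loop of adaptive sequential search.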
The key differences lie in resource utilization and adaptability: simultaneous search accelerates coverage and reduces time to detection in resource-rich environments, lowering variance in outcomes but demanding high upfront commitment and precluding real-time adjustments based on interim results. Sequential search conserves resources through adaptive planning and is suited to information-scarce or dynamic settings, though it prolongs exposure time and may miss optimal paths due to path dependencies. These approaches trade off based on operational constraints. Simultaneous search is efficient in scenarios with abundant assets and targets, such as multi-unit aerial patrols maximizing broad-area coverage. Sequential search excels in limited-resource or highly uncertain environments, like single-vehicle tracking of moving targets, where updating beliefs conserves effort. Historically, simultaneous models emerged in WWII convoy protection optimizations, while sequential extensions developed in post-war SAR planning with Bayesian methods.

Search with Known Offer Distributions

Homogeneous Search Costs

In the homogeneous search costs framework, the searcher faces a known distribution F(p) of offers, such as wages in job search or prices in product markets, with a constant cost c > 0 incurred for each independent draw from this distribution. The model typically assumes an infinite horizon with a discount factor \delta \in (0,1), where the searcher sequentially observes offers one at a time and decides whether to accept or continue searching, without recall of previous offers. This setup captures the trade-off between the cost of additional search and the potential benefit of better offers, leading to an optimal strategy characterized by a reservation value. The optimal policy is a reservation rule: accept an offer x if x \geq r and reject otherwise, where the reservation value r solves the equation \int_r^\infty (x - r) \, dF(x) = \frac{c}{1 - \delta}. This condition equates the expected benefit of one more search—the option value representing the anticipated gain from surpassing r—to the capitalized value of the search cost over the infinite horizon. The left side decreases in r, ensuring a unique solution under standard assumptions on F, such as continuity and positive density above some lower bound. Under this strategy, the number of searches until acceptance follows a geometric distribution with success probability q = 1 - F(r), yielding an expected number of searches of 1/q = 1 / (1 - F(r)). The expected total search cost is thus c / (1 - F(r)), while the expected duration (in periods) is similarly 1 / (1 - F(r)) if one search occurs per period. These quantities highlight how the reservation value balances frictions: a lower r shortens expected duration but risks settling for mediocre offers, while a higher r prolongs search in pursuit of better draws. Several implications follow from the reservation equation. Greater variance in F raises r, as the potential upside from searching increases relative to the mean, prompting more selective behavior. Likewise, a lower search cost c raises r, encouraging extended search to exploit the known distribution more fully.
In job search applications, this implies that more variable wage offers lead to higher reservation wages and prolonged unemployment spells. A representative example arises in consumer goods markets with prices drawn from a uniform distribution F(p) = p / \bar{p} on [0, \bar{p}], where \bar{p} is the maximum price. Here, r is the maximum acceptable price, and it satisfies \frac{r^2}{2 \bar{p}} = \frac{c}{1 - \delta}, illustrating how r increases with c to equate the expected savings from potentially lower prices below r against discounted costs, often resulting in limited search unless c is small relative to price dispersion. The homogeneity of costs imparts a memoryless character to the optimal rule: the reservation value r remains constant across searches and independent of past rejected offers, yielding myopic decisions that simplify computation and reveal how uniform frictions sustain stationary behavior in decentralized markets.
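For the uniform-price example, the reservation equation has the closed form r = sqrt(2 c \bar{p} / (1 - \delta)). A short sketch with hypothetical parameter values:

```python
import math

def reservation_price(c, p_bar, delta):
    """Reservation price for sequential price search with offers
    uniform on [0, p_bar]: the buyer accepts any price <= r, where
    r**2 / (2 * p_bar) = c / (1 - delta)."""
    r = math.sqrt(2.0 * c * p_bar / (1.0 - delta))
    return min(r, p_bar)          # r cannot exceed the worst possible price

def expected_searches(r, p_bar):
    """With uniform offers, P(accept per draw) = F(r) = r / p_bar, so
    the number of draws is geometric with mean p_bar / r."""
    return p_bar / r

r = reservation_price(c=0.05, p_bar=10.0, delta=0.5)   # r = sqrt(2) ~ 1.41
n = expected_searches(r, p_bar=10.0)                   # ~ 7 draws on average
```

Raising c pushes r toward \bar{p} (accept almost anything), while cheap search drives r toward zero and lengthens the expected number of draws, mirroring the comparative statics in the text.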

Heterogeneous Search Costs

In search theory with known offer distributions, the heterogeneous search costs model extends the homogeneous framework to scenarios where the cost c_i of inspecting option i varies across a finite set of alternatives, while each option draws its value x_i independently from the same known distribution F(x). This setup captures realistic frictions, such as travel costs in housing markets where agents evaluate properties at different distances, or dealership visits in product search differentiated by location. The agent's objective is to select an order of inspection to maximize expected net payoff, accounting for the irrevocable decision to pay c_i upon inspecting option i and the option to stop and accept the best observed value. The optimal strategy involves computing a reservation value r_i for each option i, defined by the indifference condition where the expected gain from inspection equals the cost: \int_{r_i}^{\infty} (x - r_i) \, dF(x) = c_i. This equation yields a higher r_i for lower c_i, as the expected improvement must justify the smaller cost. Options are then ranked and searched in decreasing order of r_i, meaning lower-cost options are inspected first since they offer the highest reservation values. The search proceeds sequentially, updating the best observed value after each inspection, and stops when this best value exceeds the reservation values of all remaining uninspected options; this generalized reservation rule ensures cost-effectiveness by prioritizing options with the greatest potential improvement per unit cost, where the condition \int_{r_i}^{\infty} (x - r_i) \, dF(x) / c_i = 1 serves as the uniform threshold for viability. This directed search approach allows agents to select subsets or sequences that minimize total expected cost, often resulting in only a fraction of options being inspected and leading to segmented search patterns. For instance, in housing markets, agents with heterogeneous costs focus searches on nearby segments, creating distinct pools of inspected properties that limit cross-segment competition.
A representative example is consumer search for automobiles, where search costs rise with distance to dealerships (estimated at €148 per kilometer on average); closer dealerships receive higher search intensity, with consumers typically visiting only about two dealers, thereby reducing overall coverage and increasing average prices by 13% relative to full-information scenarios. A key result is that cost heterogeneity amplifies dispersion in search intensity across options: low-cost alternatives are searched more intensively, concentrating evaluation efforts and potentially generating price stickiness in high-cost segments due to reduced competitive pressure from fewer inspections. Compared to the homogeneous search costs case, where a single reservation value applies uniformly and search order is irrelevant under identical distributions, heterogeneity introduces order dependence—the sequence of inspections influences outcomes through early stopping—and imposes finite search horizons, as agents may halt before exhausting all options.
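The ranking-and-stopping rule above can be sketched concretely. A minimal illustration assuming values uniform on [0, 1], where the indifference condition gives the closed form r_i = 1 - sqrt(2 c_i); the function names and cost values are hypothetical:

```python
import math
import random

def reservation_value(cost):
    """r_i for values uniform on [0, 1]: (1 - r)^2 / 2 = c_i, so
    r_i = 1 - sqrt(2 * c_i); cheaper options get higher r_i."""
    return 1.0 - math.sqrt(2.0 * cost)

def pandora_search(costs, rng=random.Random(0)):
    """Inspect options in decreasing order of r_i; stop once the best
    observed value exceeds every remaining reservation value."""
    order = sorted(range(len(costs)),
                   key=lambda i: -reservation_value(costs[i]))
    best, paid = 0.0, 0.0
    for i in order:
        if best >= reservation_value(costs[i]):
            break                        # no remaining option worth opening
        paid += costs[i]                 # pay the inspection cost
        best = max(best, rng.random())   # draw x_i ~ U[0, 1]
    return best, paid

best, paid = pandora_search([0.08, 0.02, 0.045])
```

Because r_i is decreasing in c_i, the cheapest option is always opened first, and a strong early draw can terminate the search after a single inspection, which is the order-dependence the text describes.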

Advanced Search Models

Endogenous Price Distributions

In models of endogenous price distributions within search theory, firms compete in a monopolistically competitive setting where consumers incur a known positive search cost c to observe each successive offer from a large number of potential sellers. Assuming unit marginal production costs normalized to 1 and unit demand, firms strategically select prices p \geq 1 to maximize expected profits, recognizing that search frictions limit the extent to which consumers compare offers across the market. Equilibria in these models are symmetric Nash equilibria in mixed strategies, where identical firms randomize prices according to a cumulative distribution function G(p) with compact support [p_{\min}, p_{\max}] bounded below by the marginal cost of 1. This randomization ensures that no firm can profitably deviate to a pure strategy price within the support, as consumers' search behavior responds endogenously to the prevailing price distribution. The consumer's reservation price r (the maximum price they are willing to accept), which determines whether to accept the current offer or continue searching, satisfies the indifference condition c = \int_{1}^{r} (r - x) \, dG(x), where the integral represents the expected benefit (savings) from one additional search. Firms mix over prices so that expected profits are equalized across the support, balancing lower sales probabilities at higher prices against increased margins. A foundational insight is the Diamond (1971) paradox, which shows that even vanishingly small search costs c > 0 result in a unique equilibrium of rigid monopoly pricing, where all firms charge the monopoly price (here p = 2, the full markup over the marginal cost of 1), as consumers effectively search only once due to the anticipated uniformity of high prices.
In contrast, the Burdett-Judd (1983) framework with noisy search—where some consumers observe a single price quote per round while others compare several—produces equilibria featuring price dispersion, with G(p) strictly increasing over its support and the upper bound at the monopoly price, even among homogeneous agents. These results highlight how search costs erode price competition, elevating average markups above competitive levels; furthermore, higher c intensifies this effect by expanding the range of the price distribution and reducing the mass at low prices. In retail markets, such endogenous dispersion manifests through loyalty programs that reduce effective search costs for repeat customers, enabling firms to offer lower prices to frequent searchers while sustaining higher markups from less mobile buyers.
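The consumer side of the indifference condition is easy to sketch for an assumed distribution. This is a minimal illustration, not the equilibrium G: here prices are taken to be uniform on [1, p_max], for which the integral has the closed form (r - 1)^2 / (2 (p_max - 1)):

```python
import math

def reservation_price(c, p_max=2.0):
    """Solve c = \\int_1^r (r - x) dG(x) for G uniform on [1, p_max]
    (an assumed, non-equilibrium distribution for illustration):
    (r - 1)^2 / (2 * (p_max - 1)) = c, so r = 1 + sqrt(2c (p_max - 1))."""
    return min(1.0 + math.sqrt(2.0 * c * (p_max - 1.0)), p_max)

r = reservation_price(0.02)  # small search cost -> r close to the low end
```

As c rises, r moves toward p_max and consumers stop after one quote, which is the force behind the Diamond paradox; as c falls toward zero, r collapses toward the bottom of the support.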

Unknown Offer Distributions

In search models with unknown offer distributions, the searcher begins with prior beliefs about the cumulative distribution function F(p) of offers, typically parameterized by unknown parameters, and observes offers one at a time under a sequential search protocol. Each observation incurs a search cost, and the searcher updates their beliefs using Bayes' rule after each draw, incorporating the new information into the posterior distribution. The optimal strategy involves dynamically adjusting the reservation value r_t at each period t, where the searcher accepts an offer if it meets or exceeds r_t and continues searching otherwise, balancing the immediate cost against the expected benefit from future information. The optimal policy in this setting generally requires solving an exploration-exploitation trade-off, as continuing to search yields both potentially better offers and valuable information about the underlying distribution, while the reservation value r_t evolves based on the tightening posterior. If search costs are constant and the horizon is infinite, the policy can be myopic, meaning the searcher acts as if there is only one remaining period, though this simplifies only under specific assumptions such as conjugate priors. In more general cases, the reservation value decreases over time as uncertainty resolves, reflecting discouragement after runs of low offers. A seminal analysis by Rothschild (1974) examines finite samples drawn from a multinomial distribution with a Dirichlet prior (which reduces to a beta prior in the two-outcome case), demonstrating that the initial reservation value is high due to the option value of learning, leading to more selective behavior early on, and converges toward the full-information reservation value as observations accumulate and the posterior sharpens. Unknown offer distributions effectively increase search costs compared to known cases, as the searcher must allocate effort to learning, which can lead to misestimation of the distribution's variance and premature stopping if early observations suggest low prospects.
For instance, in job search models, an unemployed worker with prior beliefs over the wage distribution may encounter early low offers that update the posterior downward via Bayes' rule, prompting a lower reservation wage and reduced search intensity to avoid prolonged costs. In finite-horizon settings, the optimal r_t is derived via backward induction, yielding time-varying thresholds that decline more rapidly than in known-distribution models, whereas infinite-horizon approximations often rely on stationary policies adjusted for learning. These dynamics highlight how ignorance of the distribution amplifies search frictions, influencing search duration and outcomes in uncertain environments.
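The learning dynamic can be sketched in the simplest two-outcome case, which (as noted above) reduces the Dirichlet setting to a Beta-Bernoulli problem. This is a minimal sketch with hypothetical wages, a hypothetical search cost, and a myopic one-step-ahead stopping rule; all names are illustrative:

```python
def update_beliefs(a, b, offer_high):
    """Beta(a, b) prior on the unknown probability q of drawing the
    high offer; conjugate Bayes update after one observation."""
    return (a + 1, b) if offer_high else (a, b + 1)

def gain_from_search(a, b, w_low, w_high, best):
    """Expected improvement over the offer in hand from one more draw,
    under the posterior predictive P(high) = a / (a + b)."""
    p_high = a / (a + b)
    return (p_high * max(w_high - best, 0.0)
            + (1.0 - p_high) * max(w_low - best, 0.0))

# a worker holding the low offer: keep searching while the expected
# one-draw gain exceeds the per-draw cost c (myopic rule)
a, b = 2, 2                              # weakly informative prior on q
for saw_high in [False, False, False]:   # a run of low offers
    a, b = update_beliefs(a, b, saw_high)
worth_it = gain_from_search(a, b, w_low=1.0, w_high=2.0, best=1.0) > 0.3
```

After three low offers the posterior P(high) falls from 1/2 to 2/7, the expected gain drops below the cost, and the worker stops: exactly the downward belief revision and reduced search intensity described above.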

Equilibrium and Matching Frameworks

Search Equilibria

In search theory, equilibria in frictional markets arise from integrating individual search behaviors with market-clearing conditions in large economies featuring free entry on both sides. Firms post vacancies at rate μ, while unemployed workers search at rate ν, leading to matching through arrival processes where the market tightness θ is defined as θ = μ / ν, representing the ratio of vacancies to searchers. This setup captures the bilateral nature of matches in decentralized markets without full information, where agents decide on search intensity and entry based on expected returns. A stationary equilibrium is characterized by constant search intensities, offer prices or wages, and free entry such that expected profits for entrants are zero, ensuring market balance. In this equilibrium, the Beveridge curve illustrates the inverse relationship between unemployment and tightness θ, as higher θ raises job-finding rates but lowers vacancy-filling probabilities. Key results highlight efficiency losses due to externalities, where individual searchers impose costs on others by competing for limited matches; the Hosios (1990) condition achieves constrained efficiency when the bargaining share of the searching side equals the elasticity of the matching function with respect to searchers, aligning private incentives with social optima. Wage determination in these equilibria often relies on Nash bargaining in bilateral matches, yielding the wage w = β p + (1 - β) b + β c θ, where β is the worker's bargaining share, p is match productivity, b is the worker's outside-option utility (typically unemployment benefits), and c θ reflects search costs scaled by tightness. This formula derives from splitting the match surplus according to the bargaining shares, incorporating continuation values from individual search models where agents accept offers above a reservation wage. Implications include the effects of interventions like unemployment insurance, which raise outside-option utilities b and thus raise bargained wages, discouraging vacancy creation and lowering θ unless entry fully adjusts.
For example, in frictional unemployment equilibria, higher UI benefits raise b, pushing up bargained wages via the formula and making workers more selective; with vacancy posting squeezed by lower firm surplus, tightness θ declines and unemployment duration lengthens. This dynamic underscores how search equilibria propagate policy effects through market tightness, balancing individual insurance gains against aggregate frictions.
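The bargained wage and the stock-flow accounting behind the Beveridge curve are both one-liners. A minimal sketch with hypothetical parameter values (β, p, b, c, θ, and the separation and job-finding rates are all made up for illustration):

```python
def nash_wage(beta, p, b, c, theta):
    """Bargained wage w = beta*p + (1 - beta)*b + beta*c*theta from
    splitting the match surplus with shares beta / (1 - beta)."""
    return beta * p + (1.0 - beta) * b + beta * c * theta

def steady_state_unemployment(s, f):
    """Flow balance s*(1 - u) = f*u gives u = s / (s + f), with s the
    separation rate and f the job-finding rate."""
    return s / (s + f)

w = nash_wage(beta=0.5, p=1.0, b=0.4, c=0.2, theta=0.7)   # 0.77
u = steady_state_unemployment(s=0.02, f=0.3)               # 6.25%
```

Raising b mechanically raises w in the formula; in equilibrium the induced drop in firm surplus feeds back into lower vacancy posting, lower θ, a lower job-finding rate f, and hence higher steady-state u.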

Matching Theory

Matching theory extends search models to two-sided markets where agents on each side, such as workers and firms, actively search and form pairs based on heterogeneous attributes like skill or preferences. In a typical bipartite setup, workers are characterized by skill levels x \in [0,1] and firms by productivity y \in [0,1], with match output given by a production function f(x,y) that reflects complementarities between types. Agents form strict ordinal preferences over potential matches, and meetings occur randomly through frictional processes, such as the urn-ball model where agents are drawn without replacement from finite pools to simulate limited interactions, or phone models where agents initiate contacts but face random responses. Unmatched agents receive a reservation payoff, often normalized to zero, while frictions prevent instantaneous sorting into optimal pairs. A central concept is the stable matching, defined as an assignment where no pair of agents prefers each other over their current partners, ensuring no blocking pairs that could destabilize the outcome. The Gale-Shapley algorithm (1962) provides a constructive method to find such a stable matching through deferred acceptance: one side proposes in rounds, and the other side tentatively accepts or rejects based on preferences, iterating until no further proposals occur. This yields the proposer-optimal stable matching, which is stable and individually rational. The set of all stable matchings forms a lattice under a suitable partial order, where the proposer-optimal and receiver-optimal matchings bound the extremes, and any pointwise meet or join of stable matchings is also stable. This structure, established in Roth and Sotomayor (1990), implies multiple stable outcomes are possible, with varying degrees of assortative pairing.
Frictional extensions incorporate ongoing search dynamics via the Diamond-Mortensen-Pissarides (DMP) framework, where aggregate matches are governed by a constant-returns-to-scale matching function m(u,v) linking unemployed workers u and vacancies v, often specified as Cobb-Douglas m(u,v) = \mu u^\gamma v^{1-\gamma} with elasticity \gamma \in (0,1). Recent computational methods, including machine-learning-based techniques, enable global solutions and calibration of these models with heterogeneous agents and aggregate shocks. Upon meeting, agents bargain over the match surplus—the difference between joint output and continuation values from further search—using Rubinstein's alternating-offer protocol, which converges to a subgame-perfect equilibrium splitting the surplus according to discount rates and outside options. In high-friction environments (low meeting rates), random search leads to positive assortative matching (high types pairing together) only under strong supermodularity conditions, such as log-supermodularity of payoffs, as weaker complementarities result in cross-type matches or mismatch. Directed search, where agents target specific submarkets based on posted terms, facilitates better sorting by allowing price-like signals to guide applications, achieving positive assortative matching under milder conditions than random search, though multiplicity persists. Advances in artificial intelligence are enhancing labor market matching by leveraging data-driven algorithms to reduce frictions and improve outcomes. Key results highlight the multiplicity of equilibria, where coordination failures can trap markets in suboptimal stable matchings with mismatches, such as low-productivity pairings, even when superior alternatives exist. For instance, in decentralized markets with high frictions, agents may settle for inferior matches due to fear of prolonged search, reducing overall efficiency.
Subsidies, such as hiring incentives or reduced search costs, can shift equilibria toward more stable and assortative outcomes by altering surplus shares or meeting probabilities, improving welfare without eliminating frictions. An illustrative example is college admissions, implemented via the Gale-Shapley deferred acceptance algorithm where students propose and colleges respond, ensuring stability in quota-constrained matching. Similarly, marriage markets with search costs exhibit frictional stable pairings, where frictions lead to delayed or mismatched unions despite ordinal preferences over partners.
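Deferred acceptance itself is compact enough to sketch in full. A minimal implementation of the Gale-Shapley procedure described above, on a hypothetical two-worker, two-firm instance:

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley (1962): proposers propose in rounds; each receiver
    tentatively holds its best proposal so far. Returns the
    proposer-optimal stable matching as {proposer: receiver}."""
    # rank[r][p] = position of proposer p in receiver r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)          # proposers without a tentative match
    next_choice = {p: 0 for p in proposer_prefs}
    held = {}                            # receiver -> currently held proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # p's best untried receiver
        next_choice[p] += 1
        if r not in held:
            held[r] = p                  # tentatively accept
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])         # bump the worse-ranked proposer
            held[r] = p
        else:
            free.append(p)               # rejected; p tries its next choice
    return {p: r for r, p in held.items()}

match = deferred_acceptance(
    {"w1": ["f1", "f2"], "w2": ["f1", "f2"]},
    {"f1": ["w2", "w1"], "f2": ["w1", "w2"]},
)
```

Both workers rank f1 first, but f1 prefers w2, so w1 is bumped to f2; the result has no blocking pair, illustrating how tentative holding rather than immediate acceptance delivers stability.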

Applications and Extensions

Labor and Housing Markets

In labor markets, job search models, such as the sequential search model developed by McCall (1970), explain duration dependence in unemployment through the gradual decline in workers' reservation wages as the value of continued search diminishes over time. Empirical analyses of unemployment insurance (UI) data further demonstrate that higher benefit levels elevate reservation wages, thereby prolonging job search durations, with evidence showing reservation wages falling faster when UI entitlements are shorter. Data from the Current Population Survey (CPS) reveal that informal networks account for around 30-50% of job findings, outperforming formal methods like public employment services in terms of speed and wage outcomes, which underscores the role of social connections in mitigating search frictions. The Diamond-Mortensen-Pissarides (DMP) model extends these ideas to aggregate dynamics, where labor market tightness—measured by the ratio θ of vacancies to unemployed workers—fluctuates over business cycles, driving cyclical unemployment through varying matching efficiency. Calibrations of DMP frameworks indicate that search frictions contribute substantially to fluctuations in unemployment, as imperfect information and matching inefficiencies amplify shocks to productivity and separations. Key empirical patterns include a mean U.S. unemployment duration of about 24.5 weeks as of August 2025, as reported by the Bureau of Labor Statistics, alongside search costs equivalent to 6.5-14% of the average accepted wage, reflecting time and effort expenditures in job search. Housing markets apply similar sequential search principles, where buyers incur significant moving costs and sequentially evaluate offers, resulting in segmented submarkets divided by location and housing types that limit broad exploration. These models predict persistent price dispersion arising from location-specific heterogeneity, as searchers prioritize nearby options to minimize relocation expenses, leading to localized pricing inefficiencies.
For instance, broad searchers who connect segments help equalize prices across areas, but narrow local search dominates, sustaining up to 20-30% price dispersion among comparable properties. Policy implications in these domains center on unemployment insurance design, which optimally trades off moral hazard—where benefits reduce search effort and extend unemployment spells—against consumption insurance for the risk-averse unemployed. Randomized trials, such as those in Hungary varying UI durations and monitoring, confirm that longer benefits or reduced oversight can increase completed unemployment spells, validating the moral hazard channel while highlighting the need for experience-rated premiums to curb distortions. These bilateral aspects align with broader matching theory frameworks, emphasizing coordinated search by workers and firms. Recent evidence through 2025 shows that the post-COVID surge in remote work has reduced spatial frictions in both labor and housing search, effectively lowering the per-period search cost c by enabling wider geographic job and home options without relocation penalties. This shift accounted for over half of U.S. house price growth from 2019 to 2023 and accelerated job matching across regions, as workers bypassed traditional location constraints. As of 2025, ongoing integration of artificial intelligence in recruitment platforms, such as automated candidate matching on major job sites, has further decreased search frictions, with recent studies reporting gains of 10-20% in matching efficiency.

Consumer Product Search and Recent Developments

In consumer product search models, buyers sequentially compare prices across retailers or platforms, where the internet has drastically lowered search costs c, thereby reducing consumers' reservation price r—the maximum price at which they are willing to purchase without further searching. This reduction in c intensifies price competition, as modeled in sequential search frameworks adapted to online settings, where consumers face low physical and informational barriers to accessing multiple offers. Empirically, despite these low costs, price dispersion persists in markets like online retail, with studies showing 16-22% variation in posted prices for homogeneous goods, largely attributable to retailer differentiation and consumer loyalty that limits full price sensitivity. A seminal result in this domain is Varian's (1980) model of sales, which explains persistent dispersion through mixed strategies: firms randomize prices to capture demand from informed shoppers while securing loyal customers at higher markups, leading to equilibrium price dispersion even under low search costs. In online contexts, this extends to directed search via price-comparison platforms, where consumers target specific products and compare prices algorithmically, reducing sequential effort but still yielding dispersed outcomes due to seller differentiation and platform fees. Recent developments from 2010 to 2025 have integrated machine learning and big data into consumer search models, with recommendation algorithms functioning as a form of directed search by predicting and surfacing offers based on user history, thereby lowering effective search costs while tailoring price exposure. For instance, joint household search models account for couples' coordinated decisions in product purchases, incorporating shared preferences and intra-household bargaining that amplify gains from digital tools, as explored in frameworks extending classical search to multi-agent settings. The rise of mobile shopping apps has further reduced sequential search costs, with evidence indicating that updated apps decrease browsing time by up to 1,446 seconds per session, enabling quicker price comparisons and higher purchase rates.
Post-2020 studies highlight how chatbots endogenize price distributions F(p) through personalized pricing, where algorithms adjust offers in real time based on chat interactions and demand signals, creating endogenous variation that influences search behavior. These frictions, while lowering average markups by enhancing price transparency (e.g., 5-15% price drops post-platform redesigns), also increase price discrimination by consumer type, as personalized recommendations match high-value buyers to targeted offers. Policy implications include data privacy regulations, which can enhance search efficiency by building trust but may raise effective costs if they limit personalization, potentially reducing welfare for privacy-sensitive users. An illustrative example is Uber's surge pricing, which generates an endogenous price distribution responsive to local supply and demand, effectively directing consumer search toward available rides while balancing platform equilibrium.
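The informed-versus-loyal trade-off behind Varian-style mixed pricing can be illustrated with a Monte Carlo sketch. This is not Varian's equilibrium solution: rival prices, demand sizes, and the candidate posted prices are all hypothetical, chosen only to show the tension between margin and the chance of winning the informed shoppers.

```python
import random

def profit(p, rival_prices, loyal, informed):
    """A firm's profit at posted price p: loyal shoppers always buy
    from it; informed shoppers buy only if p undercuts every rival."""
    wins = p < min(rival_prices)
    return p * (loyal + (informed if wins else 0))

rng = random.Random(1)
# three rivals mixing uniformly over [0.5, 1.0]; compare a high posted
# price (milking the loyals) with a low one (chasing the informed)
trials = [[rng.uniform(0.5, 1.0) for _ in range(3)] for _ in range(10_000)]
hi = sum(profit(1.0, t, loyal=30, informed=60) for t in trials) / len(trials)
lo = sum(profit(0.5, t, loyal=30, informed=60) for t in trials) / len(trials)
```

Here the high price sells only to loyal customers while the low price nearly always wins the informed segment; in the actual mixed-strategy equilibrium, firms randomize so that every price in the support earns the same expected profit.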
