
Event study

An event study is a widely used empirical methodology in finance and economics to assess the impact of a specific event—such as a corporate announcement, regulatory change, or macroeconomic development—on the value of a firm or asset, typically by measuring abnormal returns relative to expected returns around the event date. This approach relies on the efficient-market hypothesis, assuming that stock prices rapidly incorporate new information, allowing researchers to isolate the event's effect from general market movements. The origins of event studies trace back to the late 1960s, with foundational work by Ray Ball and Philip Brown in 1968, who examined how accounting earnings announcements influence stock prices, and by Eugene F. Fama, Lawrence Fisher, Michael C. Jensen, and Richard Roll in 1969, who analyzed the adjustment of stock prices to information from stock splits and dividends. These seminal papers established the core framework for detecting market reactions to news, building on earlier asset pricing models like the capital asset pricing model (CAPM). Over time, the methodology evolved to address issues like event clustering and thin trading, with influential surveys such as A. Craig MacKinlay's 1997 review highlighting its robustness for daily and monthly data analysis. At its core, an event study involves three main steps: estimating a normal return model (e.g., market model or CAPM) using historical data from an estimation window prior to the event; calculating abnormal returns as the difference between actual and predicted returns during the event window; and aggregating these abnormal returns across securities and time to test for statistical significance, often using t-tests or bootstrap methods to account for cross-sectional dependencies. The event window is typically short (days or weeks) to minimize confounding influences, while the estimation window (e.g., 100–250 days) ensures reliable parameter estimates. Robustness checks, such as alternative benchmark models or non-parametric tests, are common to validate results against assumptions like market efficiency. Event studies have become a cornerstone of empirical research in finance, accounting, and economics, applied to evaluate phenomena like merger announcements, earnings surprises, and policy interventions. For instance, they quantify the market's valuation of intangible assets or the costs of adverse corporate events by observing cumulative abnormal returns (CARs). In recent extensions, the method has been adapted for broader economic contexts, including staggered treatment designs in difference-in-differences frameworks to study dynamic policy effects. Despite challenges like identifying clean event dates or handling global market integration, event studies remain influential due to their ability to provide causal insights into how new information is priced in financial markets.

Overview

Definition and Purpose

An event study is a statistical method in finance and economics that measures the impact of specific events—such as corporate announcements or mergers—on asset prices, particularly stock returns, by isolating abnormal changes from expected returns based on historical patterns. This approach uses financial market data to detect deviations in prices attributable to the event, assuming markets process new information efficiently. The methodology underpins much of empirical research in finance and accounting, originating from early tests of information's role in price formation. The primary purpose of event studies is to evaluate market efficiency under the efficient-market hypothesis (EMH), which asserts that asset prices incorporate all available information instantaneously and without bias, and to quantify how events alter firm value. They test whether markets react rationally to news, providing evidence on the speed and magnitude of price adjustments, while also informing policy analysis through assessments of regulatory impacts and supporting legal claims in securities litigation by linking disclosures to price effects. For instance, studies of CEO successions often reveal significant abnormal stock returns, indicating market perceptions of leadership changes' value implications. Event studies are broadly categorized into short-window and long-window types, with short-window designs—typically spanning days around the event—serving as the primary form to capture immediate impacts while reducing interference from unrelated factors. Long-window studies, by contrast, extend over months or years to evaluate cumulative effects on long-term performance. Abnormal returns, the key metric, represent the event-induced portion of price changes isolated from normal fluctuations.

Historical Development

The event study methodology originated in the late 1960s amid burgeoning interest in capital markets research and the efficient-market hypothesis (EMH), pioneered by finance scholars such as Eugene Fama, Ray Ball, and Michael Jensen. Early applications focused on testing how quickly stock prices adjusted to new information, reflecting the EMH's premise that markets efficiently incorporate all available data. A foundational contribution came from Ball and Brown (1968), who examined stock price reactions to earnings announcements, demonstrating that abnormal returns clustered around announcement dates and providing empirical support for market efficiency in accounting contexts. Key milestones in the 1960s and 1970s solidified the method's foundations. Fama, Fisher, Jensen, and Roll (1969) applied it to stock splits, finding rapid price adjustments post-event with no long-term drift, which became a benchmark for EMH testing and introduced the cumulative abnormal return framework. The methodology expanded in the 1970s to corporate events like mergers, with studies such as Mandelker (1974) analyzing merging firms and revealing positive abnormal returns for target firms, highlighting value creation in acquisitions. By the 1980s, efforts by Stephen Brown and Jerold Warner enhanced robustness; their 1980 paper evaluated monthly return models for event studies, while their 1985 work adapted techniques to daily data, addressing non-normality and improving statistical power for shorter windows. The evolution from manual calculations to computerized methods accelerated in the 1990s, driven by accessible databases like CRSP and computational tools that enabled large-scale analyses. Influences from accounting, as seen in Ball and Brown's integration of financial reporting with market reactions, and from economics, through broader EMH debates, broadened its interdisciplinary appeal. As of 2025, the core methodology remains largely unchanged since the 1980s, though integrations with machine learning for automated event detection—such as using neural networks to identify news-driven anomalies—have emerged to handle complex, high-frequency data in modern finance.

Methodology

Event Identification and Sample Selection

Event identification forms the foundational step in an event study, where researchers pinpoint specific occurrences anticipated to influence asset prices, such as corporate announcements or regulatory changes. The event date is conventionally defined as the first trading day on which the information becomes publicly available, designated as t=0, to isolate the market's response to unanticipated news. Events are selected based on their materiality, meaning they possess a substantial likelihood of altering firm value, often verified through objective sources like U.S. Securities and Exchange Commission (SEC) regulatory filings (e.g., Form 8-K for material events) or verified news releases from reputable outlets. This criterion ensures the study captures economically significant reactions rather than routine fluctuations. To address potential information leakage, researchers incorporate pre-event windows to detect anticipation effects, where abnormal returns prior to t=0 may signal premature market awareness. In cases of clustered events—such as multiple announcements for a single firm or contemporaneous events across firms—standard assumptions of independence are violated, necessitating adjustments such as portfolio aggregation or clustering-robust test statistics to prevent understated significance levels. Sample selection follows event identification and requires criteria that promote reliable inference, focusing on publicly traded firms with adequate liquidity and data availability. Typical requirements include exchange listing (e.g., NYSE or NASDAQ), minimum trading volume to mitigate thin-trading issues, and at least 100-200 non-missing return observations in relevant periods. Data sources commonly include the Center for Research in Security Prices (CRSP) database for historical stock returns and trading details, paired with Compustat for firm characteristics like market capitalization. These databases enable comprehensive coverage of U.S. equities from the 1960s onward, though international studies may draw from equivalents like Datastream. Biases in sample construction can distort results, including survivorship bias from excluding delisted firms, which overrepresents successful entities and inflates average returns. Small-firm effects pose another challenge, as smaller, less liquid stocks exhibit higher volatility and infrequent trading, leading to noisier estimates and reduced statistical power. To counteract these, stratified random sampling by firm size, industry, or prior performance is recommended, ensuring the sample reflects broader market dynamics and enhances generalizability. Window specification delineates the temporal scope for analysis: the event window encompasses days around t=0 for assessing cumulative abnormal returns, often spanning [-1, +1] to focus on short-term impacts. In contrast, the estimation window precedes this period, typically [-250, -11] trading days, to parameterize expected returns without event contamination. Shorter event windows minimize exposure to confounding events—unrelated occurrences that could mask the target effect—while longer ones risk dilution from noise, balancing precision with comprehensiveness based on event type. Best practices emphasize rigorous documentation of selection processes to facilitate replication and robustness checks. Random sampling within strata promotes unbiased representation, while modern implementations leverage news analytics from specialized providers like RavenPack for event detection via natural language processing, improving accuracy in identifying announcement dates as of 2025.
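To make the window and screening conventions concrete, the following R sketch (using hypothetical data frames returns, with columns firm, date, and ret, and events, with columns firm and event_date, and with calendar-day offsets standing in for trading-day counts) defines the standard windows and keeps only firms with at least 100 usable estimation-period observations:
library(dplyr)

estimation_window <- c(-250, -11)   # trading days relative to the event date (t = 0)
event_window      <- c(-1, 1)       # short announcement window

eligible_firms <- returns %>%
  inner_join(events, by = "firm") %>%
  mutate(rel_day = as.integer(date - event_date)) %>%  # crude offset; production code would count trading days
  filter(rel_day >= estimation_window[1], rel_day <= estimation_window[2]) %>%
  group_by(firm) %>%
  summarise(n_obs = sum(!is.na(ret))) %>%
  filter(n_obs >= 100)               # minimum estimation-window observations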

Abnormal Returns Calculation

Abnormal returns represent the difference between the actual return on a security and the return expected under a benchmark model, isolating the impact of an event from normal market movements. In event studies, these returns are computed for individual securities over the event window to assess the event's economic significance. The approach relies on estimating expected returns using historical data from a pre-event estimation window, typically 120 to 250 days long, to avoid contamination from the event itself. The most widely adopted model for expected returns is the market model, which posits a linear relationship between the security's return and the market return:
R_{it} = \alpha_i + \beta_i R_{mt} + \epsilon_{it}
Here, R_{it} is the return on security i at time t, R_{mt} is the market return, \alpha_i and \beta_i are parameters estimated via ordinary least squares (OLS) on the estimation window, and \epsilon_{it} is the error term. This model, rooted in the capital asset pricing model (CAPM), captures systematic market risk while assuming residuals are uncorrelated with market returns. Alternative models include the market-adjusted approach, which simplifies estimation by assuming \alpha_i = 0 and \beta_i = 1, yielding abnormal returns as AR_{it} = R_{it} - R_{mt}. This method is computationally straightforward but ignores security-specific risk. For more robust specifications, the Fama-French three-factor model extends the market model by incorporating size (SMB) and value (HML) factors:
R_{it} - R_{ft} = \alpha_i + \beta_i (R_{mt} - R_{ft}) + s_i \text{SMB}_t + h_i \text{HML}_t + \epsilon_{it}
where R_{ft} is the risk-free rate, and parameters are again estimated via OLS. This adjustment accounts for additional risk premia beyond market beta, improving accuracy in diverse market conditions. The abnormal return for security i at time t is then:
AR_{it} = R_{it} - E[R_{it}]
where E[R_{it}] is the predicted return from the chosen model applied to event window data. To capture the total event impact over a multi-day window [t_1, t_2], the cumulative abnormal return (CAR) is calculated as:
CAR_i(t_1, t_2) = \sum_{t = t_1}^{t_2} AR_{it}
This summation assumes no compounding or leakage across days, focusing on additive effects. The step-by-step process begins with parameter estimation on the pre-event window to derive \alpha_i and \beta_i, followed by applying these to compute E[R_{it}] in the event window. Predicted returns are then subtracted from actual returns to obtain abnormal returns, and CARs are aggregated if needed. In cases of thin trading or non-synchronous prices, which can bias beta estimates due to infrequent trades, the Scholes-Williams adjustment refines the OLS beta by regressing security returns on lagged, contemporaneous, and leading market returns, yielding a consistent estimator:
\hat{\beta}_i^{SW} = \frac{\hat{\beta}_{i,-1} + \hat{\beta}_i + \hat{\beta}_{i,+1}}{1 + 2\hat{\rho}_m}
where \hat{\beta}_{i,-1} and \hat{\beta}_{i,+1} are betas from regressions on one-period lagged and led market returns, respectively, and \hat{\rho}_m is the estimated first-order autocorrelation of the market return. This method corrects for trading delays without altering the core abnormal return formula. For illustration, consider a hypothetical stock with daily returns over an estimation window yielding market model parameters \alpha = 0.001, \beta = 1.2, and market returns in a three-day event window of 0.5%, 1.0%, and -0.2%. The stock's actual returns are 1.0%, 2.5%, and 0.5%. Expected returns are E[R_{1}] = 0.001 + 1.2 \times 0.005 = 0.007 (0.7%), E[R_{2}] = 0.001 + 1.2 \times 0.01 = 0.013 (1.3%), and E[R_{3}] = 0.001 + 1.2 \times (-0.002) = -0.0014 (-0.14%).
Abnormal returns are AR_1 = 1.0\% - 0.7\% = 0.3\%, AR_2 = 2.5\% - 1.3\% = 1.2\%, AR_3 = 0.5\% - (-0.14\%) = 0.64\%, and the CAR over the window is 0.3\% + 1.2\% + 0.64\% = 2.14\%, indicating a positive event impact.
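The worked example above can be reproduced in a few lines of R; this is a minimal sketch in which the market-model parameters are taken as given (in practice they would come from an OLS fit such as lm(ret ~ mret) on estimation-window data):
alpha <- 0.001
beta  <- 1.2
r_mkt   <- c(0.005, 0.010, -0.002)   # market returns over the three-day event window
r_stock <- c(0.010, 0.025,  0.005)   # the stock's actual returns

expected <- alpha + beta * r_mkt     # E[R_it] from the market model
ar  <- r_stock - expected            # abnormal returns: 0.003, 0.012, 0.0064
car <- sum(ar)                       # cumulative abnormal return: 0.0214 (2.14%)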

Statistical Significance Testing

Statistical significance testing in event studies assesses whether observed abnormal returns (ARs) or cumulative abnormal returns (CARs) around an event differ significantly from zero, indicating an event's impact on security prices. These tests evaluate the null hypothesis that the event has no effect, controlling for estimation errors and potential dependencies in returns data. Parametric tests assume normality and independence of abnormal returns, while non-parametric alternatives provide robustness against violations of these assumptions. The choice of test depends on event clustering, sample size, and return characteristics. Cross-sectional tests aggregate ARs across securities on a given event day or window to test for average effects. The standard t-test for the mean AR on the event day is given by t = \frac{\overline{AR_0}}{\sigma_{AR_0} / \sqrt{N}}, where \overline{AR_0} is the cross-sectional mean AR, \sigma_{AR_0} is the cross-sectional standard deviation of ARs, and N is the sample size. For CARs over a multi-day window, a similar t-statistic applies, but adjustments for cross-sectional correlation among securities (e.g., via portfolio variance or Sefcik-Thompson methods) are necessary when events cluster in time, as unadjusted tests can inflate Type I errors. Time-series tests leverage the variability of standardized residuals across the estimation and event periods to detect anomalies, particularly useful for clustered events. The standardized residual method, introduced by Patell (1976), normalizes each security's abnormal return by its estimation-period standard deviation before aggregating, reducing heteroscedasticity and cross-sectional dependence. Boehmer et al. (1991) extend this with a standardized cross-sectional t-test that uses event-period variance estimates, performing robustly under event-induced variance increases and clustering; simulations show it maintains nominal size (e.g., a 5% rejection rate under the null) and achieves high power (e.g., over 90% for 1% abnormal performance). Non-parametric tests offer alternatives when returns exhibit non-normality or outliers, focusing on ranks or signs rather than distributions. The sign test, as generalized by Cowan (1992), examines the proportion of positive abnormal returns relative to the estimation period, with a z-statistic based on the difference in signing frequencies; it is robust to asymmetry and single outliers. The Wilcoxon signed-rank test ranks the absolute abnormal returns and assigns signs, testing whether the median AR deviates from zero and providing greater power than the simple sign test for symmetric distributions. For studies with multiple events per firm, the J-test aggregates standardized t-statistics across days or events into a chi-squared statistic to assess overall significance, while generalized methods such as the Kolmogorov–Smirnov test handle cross-event dependencies. Test power—the probability of detecting true abnormal performance—and size (Type I error rate) are critical for reliable inference, with empirical evidence from the 1980s highlighting trade-offs. Brown and Warner (1985) simulations using daily CRSP data demonstrate that the market model yields well-specified tests (rejection rates of 4-6% under the null) and superior power (e.g., 80% for 1% drifts) compared to mean-adjusted models, especially with event clustering, though power declines for thin trading or long windows. Type II errors rise with small samples or weak effects, underscoring the market model's efficiency in short-horizon studies.
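As an illustration of the cross-sectional test, the sketch below assumes a hypothetical vector ar0 holding one event-day abnormal return per firm; it computes the t-statistic defined above and, as a simple non-parametric check, a binomial test on the share of positive ARs (against a naive 50% benchmark rather than the estimation-period frequency used by the generalized sign test):
cross_sectional_t <- function(ar0) {
  ar0 <- ar0[!is.na(ar0)]
  n <- length(ar0)
  t_stat <- mean(ar0) / (sd(ar0) / sqrt(n))        # t = mean(AR) / (sd(AR) / sqrt(N))
  p_val  <- 2 * pt(-abs(t_stat), df = n - 1)       # two-sided p-value
  c(t = t_stat, p = p_val)
}

sign_check <- function(ar0) {
  ar0 <- ar0[!is.na(ar0)]
  binom.test(sum(ar0 > 0), length(ar0), p = 0.5)   # share of positive ARs vs. 50%
}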

Practical Implementation

In practical implementations of event studies, researchers typically rely on established databases for stock returns and event information. The Center for Research in Security Prices (CRSP) database provides comprehensive historical stock price, return, and volume data for U.S. exchanges, serving as the standard source for daily returns in most applications. Event data, such as merger announcements, are commonly sourced from SDC Platinum, which offers detailed records on global mergers and acquisitions since the 1970s, including deal dates, values, and parties involved. FactSet provides event-driven data feeds covering corporate events like earnings releases and shareholder meetings, enabling integration with returns data for timely analysis. As of 2025, the Wharton Research Data Services (WRDS) platform aggregates these sources, including CRSP and SDC, facilitating secure access for academic and institutional users. For cost-effective alternatives, open-source tools and free market-data APIs allow retrieval of historical stock prices and returns, though they may lack the depth of proprietary datasets for delisted securities or intraday data. A range of software tools supports event study execution, from proprietary suites to open-source languages. Stata and SAS are widely used in academic and professional settings for their robust statistical capabilities and built-in procedures for returns computation. In R, the eventstudies package streamlines analysis by handling estimation windows, abnormal returns, and cumulative abnormal returns (CARs) for both daily and intraday data. Python's eventstudy library offers similar functionality through an open-source framework, supporting data import, model estimation, and visualization for financial event analyses. A typical workflow in R using the eventstudies package involves several steps to compute abnormal returns (ARs) and CARs. First, install and load the package along with the zoo package, since the analysis works with zoo time-series objects:
install.packages("eventstudies")
library(eventstudies)
library(zoo)
Next, prepare the inputs: a zoo object of daily firm returns with one column per firm, a zoo series of daily market returns over the same period, and an event list with columns name (matching the return columns) and when (the event dates). The estimation period (e.g., roughly 250 trading days before the event) should be covered by the returns supplied, while the event window (e.g., [-5, +5] days) is passed to the estimation call:
# Event list: one row per firm-event (name must match a column of firm_returns)
events <- data.frame(name = c("FIRM_A", "FIRM_B"),
                     when = as.Date(c("2021-03-01", "2021-06-15")))
# firm_returns: zoo object of daily returns, one column per firm
# market_returns: zoo series of daily market index returns over the same period
Then, estimate the market model parameters and compute ARs and CARs over the event window:
# Market-model event study; argument names follow the package vignette and may vary by version
results <- eventstudy(firm.returns = firm_returns, event.list = events,
                      event.window = 5, type = "marketModel",
                      model.args = list(market.returns = market_returns))
Finally, aggregate and visualize the abnormal returns across events, such as plotting the mean CAR path with confidence bands to assess significance. This process ensures efficient handling of multiple events while maintaining reproducibility through scripted execution. For large samples, extensions like data.table can optimize computation by vectorizing operations. Implementation challenges often arise in data handling and scalability. Data cleaning is critical, particularly for delisted firms, where CRSP includes delisting returns to avoid survivorship bias, but mismatches in identifiers or missing values require manual reconciliation to prevent biased estimates. Computational efficiency becomes an issue with large samples, as estimating models for thousands of events demands optimized code or parallel computing resources to avoid excessive runtime. Proprietary databases like CRSP and SDC incur significant subscription costs for institutional access via WRDS, prompting researchers to weigh open-source alternatives against data-quality trade-offs. Best practices emphasize reproducibility and validation to ensure reliable results. Sharing complete code via platforms like GitHub allows replication, reducing errors in parameter estimation and window definitions. Researchers should validate implementations by replicating published findings, such as those on earnings surprises where positive announcements yield average CARs of 1–2% over [0, +1] days, confirming alignment with benchmarks like Ball and Brown (1968).
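As a minimal sketch of the aggregation and visualization step, independent of any particular package and assuming a hypothetical data frame ars with columns firm, day (event time), and ar, the following cumulates ARs into per-firm CAR paths, averages them by event day, and attaches a cross-sectional t-statistic:
library(dplyr)

car_paths <- ars %>%
  arrange(firm, day) %>%
  group_by(firm) %>%
  mutate(car = cumsum(ar)) %>%                      # per-firm CAR path
  ungroup()

car_summary <- car_paths %>%
  group_by(day) %>%
  summarise(mean_car = mean(car),
            t_stat   = mean(car) / (sd(car) / sqrt(n())))   # cross-sectional t-test by day

plot(car_summary$day, car_summary$mean_car, type = "b",
     xlab = "Event day", ylab = "Mean CAR")
abline(h = 0, lty = 2)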

Applications

Merger and Acquisition Analysis

Event study methodology is widely applied in merger and acquisition (M&A) analysis to assess the market's valuation of these corporate events, with the primary event date defined as the initial public announcement of the deal. This setup captures the immediate reaction to the anticipated synergies or risks associated with the transaction. Target firms consistently exhibit strong positive abnormal returns (ARs) upon announcement, typically in the range of 20-30%, driven by the acquisition premium paid by the acquirer. In contrast, acquirer firms display mixed short-term abnormal returns, often small and positive or insignificant, while long-run post-merger performance frequently turns negative, suggesting overpayment or integration challenges. Key empirical findings from decades of research underscore asymmetric value creation in M&A. Seminal studies, including those synthesized by Jensen and Ruback, documented substantial short-run gains for target shareholders but limited or zero benefits for acquirers, highlighting the transfer of wealth from acquirer to target owners. A comprehensive survey by Betton, Eckbo, and Thorburn in the 2000s reviewed over 200 studies and confirmed average short-window acquirer abnormal returns of approximately 2% for completed deals, with variations tied to deal characteristics. Notably, payment method plays a critical role: cash-financed acquisitions generate higher acquirer returns (around 1-2 percentage points greater) than stock-financed ones, as cash signals managerial confidence and avoids dilution concerns. Similarly, deal size influences outcomes, with larger relative acquisitions (exceeding 10% of acquirer market capitalization) often yielding negative returns due to heightened integration risks and overvaluation. Methodological adaptations in M&A event studies extend beyond the standard short window to account for multi-stage processes. Researchers frequently analyze longer event windows encompassing both announcement and completion dates to isolate completion effects, such as regulatory approvals, which can add 0.5-1% to combined returns if the deal proceeds smoothly. Cross-border versus domestic mergers also warrant tailored approaches; studies show developed-market acquirers in cross-border deals achieve higher abnormal returns (up to a 1% premium) compared to domestic ones, reflecting geographic diversification and growth opportunities, though with added currency and regulatory risks. As of 2025, recent trends indicate declining acquirer returns in technology and mega-deals amid intensified antitrust scrutiny. Post-2020 event studies report average short-window abnormal returns below 1% (e.g., -0.49%) for acquirers in technology acquisitions due to prolonged regulatory reviews and blocked deals, contrasting earlier eras of more consistent modest gains, while overall M&A activity shows mixed results with potential improvements anticipated.

Securities Litigation

Event studies serve a pivotal role in U.S. securities litigation by providing empirical evidence to establish "loss causation," a key element required under the Private Securities Litigation Reform Act (PSLRA) of 1995. The PSLRA mandates that plaintiffs plead and prove that an alleged misrepresentation or omission in financial disclosures proximately caused their economic losses, typically demonstrated through statistically significant abnormal returns (ARs) coinciding with corrective announcements that reveal the fraud. For instance, a negative abnormal return on the day of a disclosure event links the price drop directly to the revelation of concealed information, distinguishing it from general market movements. This application helps courts assess materiality, reliance, and damages in suits under Rule 10b-5 of the Securities Exchange Act of 1934. Methodologically, event studies in litigation contexts incorporate adaptations to ensure robustness and admissibility in court. Practitioners favor outlier-robust tests like the Patell standardized-residual test, which standardizes abnormal returns across events to detect significance while reducing sensitivity to extreme observations that could skew results. Estimation windows are selected to precede the alleged fraud period or avoid contamination from ongoing litigation, often spanning 120 to 250 trading days to establish a reliable model without incorporating post-event distortions. Expert analyses must also satisfy the Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993) standard for scientific reliability, requiring peer-reviewed methodologies, known error rates, and general acceptance in the relevant scientific community to be admissible as evidence. These safeguards address the adversarial nature of litigation, where challenges to model assumptions can undermine findings. Prominent case examples illustrate the impact of event studies in quantifying damages. In the Enron Corporation scandal of 2001, analyses of key disclosure events showed substantial negative cumulative abnormal returns (CARs), with stock price declines exceeding 50% tied to revelations of accounting irregularities, supporting claims of billions in investor losses and contributing to a $7.2 billion recovery. Likewise, the WorldCom, Inc. securities litigation utilized event studies to measure the market's reaction to fraud disclosures, isolating abnormal losses that informed a record $6.15 billion settlement, the largest at the time for defrauded shareholders. These cases highlight how event studies translate complex financial data into actionable evidence for judicial determinations of economic harm. As of 2025, event studies remain a cornerstone of securities litigation practices, though they face ongoing criticisms for over-reliance on short event windows, typically one to three days, which can introduce confounding effects from unrelated news and yield low statistical power in isolating causation. Such narrow windows may overlook gradual price adjustments or multi-day impacts, potentially leading to unreliable inferences in high-stakes class actions. Despite these concerns, courts continue to accept well-constructed studies as dispositive evidence, emphasizing the need for rigorous peer-reviewed validation to counter methodological challenges.

Policy and Regulatory Events

Event studies have been extensively applied to assess market reactions to policy and regulatory announcements, which often represent macroeconomic shocks affecting broad segments of the economy rather than individual firms. These analyses typically measure abnormal returns around announcement dates to gauge the efficiency and magnitude of investor responses to changes in monetary policy, such as interest rate decisions, or fiscal measures like environmental regulations. For instance, studies of Fed rate announcements reveal significant abnormal returns, with markets reacting swiftly to surprises in the expected policy path, often resulting in intraday price adjustments of 0.5-2% across equity indices. Similarly, announcements of environmental regulations, such as carbon pricing schemes, elicit heterogeneous responses, with polluting industries experiencing negative abnormal returns while cleaner firms see gains, highlighting the redistributive effects of such policies. To capture industry-wide impacts, researchers frequently employ portfolio approaches, constructing equally weighted portfolios of affected firms to aggregate returns and enhance statistical power for detecting systematic effects. Prominent examples illustrate the versatility of event studies in this domain. During the 2008 global financial crisis, the announcement of capital injections into major banks under the Troubled Asset Relief Program (TARP) on October 14, 2008, triggered a positive reaction, with banking sector cumulative abnormal returns (CARs) reaching approximately +5% over a three-day window, reflecting investor relief from anticipated government intervention. The 2016 Brexit referendum provides another case, where the UK stock market exhibited abnormal returns of around -10% in the immediate post-vote period, underscoring the uncertainty introduced by the Leave outcome. In 2020, COVID-19 stimulus packages, including the U.S. CARES Act, produced variable sector impacts: technology and healthcare stocks recorded positive CARs of 2-4%, while travel and energy sectors faced declines of up to -8%, demonstrating the targeted nature of fiscal support. Methodologically, these applications often involve broader samples, such as index constituents or sector indices, to account for economy-wide spillovers rather than firm-specific events. Handling anticipation is crucial for partially expected announcements; option-implied probabilities from derivatives markets can adjust event windows by estimating ex-ante event likelihoods, thereby isolating the surprise component and avoiding biased return estimates. For statistical significance, the standard testing procedures, such as t-statistics or bootstrapped methods, are used to validate these abnormal returns. As of 2025, recent developments emphasize climate policy shocks, with event studies on green policy announcements, including aspects of the European Green Deal—announced in December 2019—showing positive cumulative abnormal returns for green sectors, including renewables, driven by anticipated subsidies and regulatory support for green transitions, while carbon-intensive sectors experience negative returns. These findings reinforce the growing use of event studies to evaluate the financial implications of climate regulations.
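A portfolio approach of the kind described above can be sketched in R as follows, assuming a hypothetical matrix ret_mat of daily returns for the affected firms (rows are days, columns are firms), a market return vector mret on the same dates, and event_day giving the row index of the announcement:
port_ret <- rowMeans(ret_mat, na.rm = TRUE)          # equally weighted portfolio return

est_idx <- (event_day - 250):(event_day - 11)        # estimation window
fit <- lm(port_ret[est_idx] ~ mret[est_idx])         # portfolio-level market model

evt_idx <- (event_day - 1):(event_day + 1)           # [-1, +1] event window
expected <- coef(fit)[1] + coef(fit)[2] * mret[evt_idx]
port_car <- sum(port_ret[evt_idx] - expected)        # portfolio CAR around the announcement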

Limitations and Extensions

Key Assumptions and Violations

Event studies rely on several foundational assumptions to ensure the validity of inferences about the economic impact of specific events on asset prices. A primary assumption is market efficiency, which posits that prices fully and rapidly incorporate all available information, allowing abnormal returns to reflect only the unanticipated component of the event. Another key assumption is the stability of the security's risk parameters, meaning the beta relative to the market remains constant before and after the event window, enabling accurate estimation of expected returns using pre-event data. Additionally, the methodology assumes the absence of confounding events that could simultaneously influence prices, ensuring that observed abnormal returns are attributable solely to the event of interest. Finally, the normality of residuals in the abnormal returns model is assumed, which underpins the validity of standard t-tests for statistical inference. These assumptions are frequently violated in practice, potentially compromising the reliability of event study results. Event clustering, where multiple firms experience similar events around the same time, induces cross-sectional dependence among abnormal returns, leading to downward-biased standard errors and inflated t-statistics that increase the likelihood of Type I errors. Model misspecification, such as failing to account for size or other risk-factor effects or using an inappropriate model like the CAPM in non-stationary environments, can generate spurious abnormal returns, particularly in long-horizon studies where cumulative errors amplify. In emerging markets, thin trading—characterized by infrequent transactions—biases return estimates by introducing non-synchronous trading effects, which distort the measurement of abnormal performance and reduce the power of tests. To detect and mitigate these violations, researchers employ diagnostics such as variance inflation factors to assess cross-sectional dependence from clustering, and conduct robustness checks by applying alternative models like the Fama-French three-factor model to verify results under different specifications. For thin trading, adjustments like the Scholes-Williams beta estimator can correct for non-synchronous trading by incorporating lead-lag correlations in returns. Simulation evidence underscores the risks of these violations; for instance, even modest cross-correlations (e.g., 0.02) in clustered events with 100 firms can inflate the ratio of true to assumed standard deviations by up to 1.73, leading to rejection rates of the null hypothesis exceeding 10% in short-horizon tests when no true effect exists.
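The quoted inflation factor follows directly from the variance of the mean of equicorrelated abnormal returns, Var(\overline{AR}) = (\sigma^2 / N)(1 + (N-1)\rho); the short R check below reproduces the 1.73 figure for \rho = 0.02 and N = 100:
N   <- 100
rho <- 0.02
inflation <- sqrt(1 + (N - 1) * rho)   # ratio of true to naively assumed standard deviation
inflation                              # approximately 1.73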

Advanced Methods and Alternatives

Long-run event studies extend the traditional short-window approach to assess the cumulative impact of events over extended periods, such as one to five years, capturing potential delayed market reactions or persistent effects. A key method in this domain is the buy-and-hold abnormal return (BHAR), which measures the difference between the actual buy-and-hold return of an event firm or portfolio and the expected buy-and-hold return based on a benchmark, addressing compounding effects that simple cumulative abnormal returns might overlook. The BHAR is formally defined as:
\text{BHAR}_{i,t,T} = \left[ \prod_{k=t}^{T} (1 + R_{i,k}) \right] - \left[ \prod_{k=t}^{T} (1 + E(R_{i,k})) \right]
where R_{i,k} is the realized return for firm i in period k, and E(R_{i,k}) is the expected return, often estimated using reference portfolios matched on size and book-to-market ratios to mitigate biases from cross-sectional dependence and skewness. This approach, introduced as a robust alternative to calendar-time methods for detecting long-term anomalies, performs well in simulations when benchmarks are appropriately adjusted, though it requires careful handling of outliers and non-stationarity in long horizons. To address cross-sectional dependence among event firms clustered in time, calendar-time portfolios aggregate event firms into monthly portfolios and estimate abnormal performance via the intercept (alpha) from a factor model, such as the Fama-French three-factor model. This method standardizes returns across overlapping events, reducing the risk of inflated test statistics from correlated abnormal returns, and is particularly effective for value-weighted portfolios where larger firms dominate. Empirical evidence shows that calendar-time tests yield more reliable inferences for long-run performance compared to buy-and-hold approaches, as they incorporate market-wide risk factors and avoid the "bad model" problem where expected returns are misspecified over long periods. However, both long-run methods remain sensitive to the choice of benchmark and weighting scheme, with calendar-time portfolios preferred for their alignment with asset pricing theory. As alternatives to market-based event studies, regression discontinuity designs (RDD) exploit sharp cutoffs to identify causal effects, particularly for regulatory events where treatment assignment changes discontinuously at a threshold, such as eligibility rules for financial incentives. In RDD, the treatment effect is estimated as the discontinuity in outcomes at the cutoff, using local polynomial regressions on either side to control for smooth trends, providing a quasi-experimental analogue to randomized trials without relying on parallel trends assumptions. This method is especially valuable for policy evaluations in finance, where events like banking regulations create natural experiments, offering higher internal validity than traditional event windows that may confound anticipation effects. Synthetic control methods serve as another alternative, particularly for non-market data or single-unit treatments, by constructing a counterfactual from a weighted combination of control units that best matches the treated unit's pre-event trajectory. The weights are optimized to minimize differences in predictors like GDP or financial indicators, enabling estimation of event impacts in settings without parallel comparison groups, such as national policy shocks. Unlike event studies, which assume market efficiency, synthetic controls focus on observable covariates for counterfactual construction, making them suitable for macroeconomic or non-financial events where abnormal returns are unavailable.
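As small illustrations of the long-run measures discussed above (sketches under assumed inputs, not the procedure of any specific study), the first line below computes a single firm's BHAR from hypothetical post-event return vectors r_firm and r_bench (the firm and its matched benchmark), and the regression estimates a calendar-time portfolio alpha from a hypothetical data frame ff holding monthly portfolio excess returns (exret) and Fama-French factors (mktrf, smb, hml):
bhar <- prod(1 + r_firm) - prod(1 + r_bench)          # buy-and-hold abnormal return

ct_fit <- lm(exret ~ mktrf + smb + hml, data = ff)    # calendar-time factor regression
summary(ct_fit)$coefficients["(Intercept)", ]         # alpha = average monthly abnormal return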
Machine learning techniques, including natural language processing (NLP), enable automated event detection by classifying news articles or regulatory filings into event categories, addressing manual identification biases in large datasets. For instance, hierarchical multi-label text classification models, such as those using bidirectional encoders, extract financial events like mergers from unstructured text with high precision, integrating sentiment analysis to filter relevant announcements. These approaches outperform rule-based methods in accuracy and scalability, allowing processing of vast news streams to identify unanticipated events that trigger market reactions. Hybrid approaches combine event studies with propensity score matching (PSM) to enhance causal inference by balancing treated and control firms on observables like size, book-to-market, and leverage before estimating abnormal returns. PSM estimates the probability of event exposure given covariates via logistic regression, then matches firms to create a counterfactual group, reducing selection bias in non-random events. Applied to post-event performance, this integration yields unbiased abnormal return estimates, as demonstrated in studies of seasoned equity offerings where PSM-adjusted returns are insignificant, contrasting with unmatched samples. As of 2025, studies increasingly incorporate high-frequency intraday returns to capture immediate impacts, using tick-level data to narrow event windows to minutes and mitigate contamination from overnight gaps or unrelated news. This shift improves precision in volatile markets, with bootstrapped tests accounting for microstructure effects like bid-ask bounce. In cryptocurrency markets, event studies analyze blockchain-specific events, such as protocol upgrades or regulatory announcements. These applications highlight the method's adaptability to decentralized assets, where continuous 24/7 trading amplifies sensitivity to news.
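A bare-bones version of the propensity-score-matching step, assuming a hypothetical data frame firms with a treatment indicator treated, covariates size, btm, and leverage, and a post-event cumulative abnormal return car, might look as follows:
ps_fit <- glm(treated ~ size + btm + leverage, data = firms, family = binomial)
firms$pscore <- fitted(ps_fit)                         # estimated propensity scores

treated_set <- subset(firms, treated == 1)
control_set <- subset(firms, treated == 0)

# Nearest-neighbor match on the propensity score, with replacement
match_idx <- sapply(treated_set$pscore,
                    function(p) which.min(abs(control_set$pscore - p)))
att <- mean(treated_set$car - control_set$car[match_idx])   # matched difference in CARs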

References

  1. [1]
    [PDF] Event Studies in Economics and Finance
    Event studies provide an ideal tool for examining the information content of the disclosures. In this section the description of an example selected to ...
  2. [2]
    An Introductory Guide to Event Study Models
    The event study model is a powerful econometric tool used for the purpose of estimating dynamic treatment effects.
  3. [3]
    [PDF] An Empirical Evaluation of Accounting Income Numbers - Ray Ball
    Mar 25, 2006 · * Another approach pursued by Beaver (1968) is to use the investment decision, as it is reflected in transactions volume, for a predictive ...
  4. [4]
    [PDF] THE ADJUSTMENT OF STOCK PRICES TO NEW INFORMATION
    A precise definition of "unusual” behavior of security returns will be provided below. 1. Page 2. 2. FAMA, FISHER, JENSEN AND ROLL ... adjustment see Fisher [5].
  5. [5]
    (PDF) Understanding and Conducting Event Studies - ResearchGate
Aug 9, 2025 · Event studies have become one of the most important methodological approaches to empirical research in finance and accounting.
  6. [6]
    Event studies in international finance research - PMC - NIH
    Event studies are widely used in finance research to investigate the implications of announcements of corporate initiatives, regulatory changes, ...
  7. [7]
    Event Studies in Economics and Finance - jstor
    Using financial market data, an event study measures the impact of a specific event on the value of a firm.
  8. [8]
    An Empirical Evaluation of Accounting Income Numbers - jstor
    There is some evidence that industry eff ects might account for more than 10 per cent when the association is estimated in first differences [Brealey (1968)].
  9. [9]
    Event Study - an overview | ScienceDirect Topics
    An event study is defined as a methodology that analyzes the impact of a specific event on the performance of securities by simulating various event study ...
  10. [10]
    Reservations on the Use of Event Studies to Evaluate Economic Policy
    Event studies represent a highly useful methodology in the context of their original objective, i.e. research aiming to identify what drives stock market prices ...
  11. [11]
    The Logic and Limits of Event Studies in Securities Fraud Litigation
    This Article explores an array of considerations related to the use of event studies in securities fraud litigation.
  12. [12]
    [PDF] Market Reactions to Unexpected CEO Deaths, 1950 - 2009
    Dec 18, 2015 · Using an event study methodology and a sample of 240 sudden and unexpected CEO deaths, we show that absolute (unsigned) market reactions to ...
  13. [13]
    [PDF] “Econometrics of Event Studies”
    In contrast to the short-horizon tests, long-horizon event studies (even when they are well- specified) generally have low power to detect abnormal performance, ...
  14. [14]
    Fama, Fisher, Jensen and Roll (1969): Retrospective Comments
    Feb 16, 2014 · This essay provides a retrospective view of one of Gene Fama's many seminal papers, Fama, Fisher, Jensen, and Roll (1969).
  15. [15]
    Using daily stock returns: The case of event studies - ScienceDirect
This paper examines properties of daily stock returns and how the particular characteristics of these data affect event study methodologies.
  16. [16]
    (PDF) The Event Study Methodology Since 1969 - ResearchGate
    Aug 6, 2025 · This paper discusses the event study methodology, beginning with FFJR (1969), including hypothesis testing, the use of different benchmarks for the normal rate ...
  17. [17]
    Event Study: Advanced Machine Learning and Statistical Technique ...
    Dec 20, 2021 · The goal of this paper is to analyze how a specific banking news event (such as a fraud or a bank merger) and other co-related news events
  18. [18]
    [PDF] Sample selection and event study estimation
    Jan 31, 2009 · The present paper provides a number of contributions to the event study methods literature.
  19. [19]
    The event study in international business research - NIH
    Mar 31, 2022 · The event study or event study method (ESM) is an empirical technique for capturing investors' reaction to an event affecting one or more publicly traded firms.
  20. [20]
    Ravenpack - Wharton Research Data Services
    RavenPack provides real-time news analytics, including sentiment analysis and event data focused on business and financial applications.
  21. [21]
    [PDF] USING DAILY STOCK RETURNS The Case of Event Studies
This paper examines properties of daily stock returns and how the particular characteristics of these data affect event study methodologies.
  22. [22]
    Estimating betas from nonsynchronous data - ScienceDirect.com
    In this paper properties of the observed market model and associated ordinary least squares estimators are developed in detail.
  23. [23]
    Scholes-Williams Betas in Event Studies - EconWPA
    We examine the effects of thin trading on the specification of event study tests. Simulations of upper and lower tail tests are reported.
  24. [24]
    [PDF] Event-study methodology under conditions of event-induced variance
Boehmer et al., Event-study methodology under conditions of event-induced variance ... The test statistic ...
  25. [25]
    Nonparametric event study tests | Review of Quantitative Finance ...
    This paper provides the first documentation of the power and specification of the generalized sign test, which is based on the percentage of positive abnor.
  26. [26]
    Center for Research in Security Prices, LLC (CRSP)
The Center for Research in Security Prices, LLC (CRSP) maintains the most comprehensive collection of security price, return, and volume data for the NYSE, AMEX ...
  27. [27]
    ​SDC Platinum Financial Securities Data | Data Analytics - LSEG
    Drive your investment banking decisions with latest news, market and deals data - for mergers and acquisitions, debt capital markets, or equity capital markets.
  28. [28]
    Event-Driven Data - FactSet
    Analyze corporate events with a range of best-in-class tools in the FactSet Workstation and gain direct, continuous access via comprehensive data feeds and APIs ...
  29. [29]
    Event Study with Stata: A Step-by-Step Guide - Research Guides
Feb 21, 2025 · A financial event study is a method used to examine how the market reacts to a significant event of interest (e.g., regulatory changes, ...
  30. [30]
    eventstudies package - RDocumentation
    Jun 2, 2020 · An R package for conducting event studies and a platform for methodological research on event studies.
  31. [31]
    eventstudy - PyPI
Event Study package is an open-source python project created to facilitate the computation of financial event study analysis. Install. $ pip install eventstudy ...
  32. [32]
    eventstudy Perform event study analysis - RDocumentation
    'eventstudy' provides an easy interface that integrates all functionalities of package eventstudies to undertake event study analysis.
  33. [33]
    (R) Super Efficient Event Study Code - Yu.Z
    May 27, 2023 · In this article, we'll build a highly efficient event study program in R. We'll use all the earnings call events in 2021 in the CRSP stock universe as an ...
  34. [34]
    [PDF] Best Practices for Transparent, Reproducible, and Ethical Research
Feb 1, 2019 · Reproducible research practices facilitate collaboration with other researchers and provide a strong foundation for future researchers to build ...
  35. [35]
    33.3 Steps for Conducting an Event Study | A Guide on Data Analysis
    This is a guide on how to conduct data analysis in the field of data science, statistics, or machine learning.
  36. [36]
    The market for corporate control: The scientific evidence
    The evidence indicates that corporate takeovers generate positive gains, that target firm shareholders benefit, and that bidding firm shareholders do not lose.
  37. [37]
    Domestic and cross-border effect of acquisition announcements
    Moreover, when developed-market acquirers announce cross-border acquisitions, higher abnormal returns are reported than when announcing domestic acquisitions.
  38. [38]
    Market Reactions to Fintech M&A: Evidence from Event Study ...
    The results show that, on average, financial institutions experience negative abnormal returns around announcement dates, suggesting limited short-term value ...
  39. [39]
    [PDF] A New Era of Midnight Mergers: Antitrust Risk and Investor Disclosures
    Antitrust authorities search public documents to discover anticompetitive mergers. Thus, investor disclosures may alert them to deals that would otherwise ...
  40. [40]
    [PDF] Correct Application of Event Studies in Securities Litigation
    An event study analyzes the effects of economic events on security prices. In efficient markets where prices reflect all publicly available information and ...
  41. [41]
    What Is Loss Causation—and Why It Matters for Damages in ...
    Aug 27, 2025 · Under the Private Securities Litigation Reform Act (PSLRA), plaintiffs must plead loss causation with particularity, often relying on expert ...
  42. [42]
    The Logic and Limits of Event Studies in Securities Fraud Litigation
    This Article provides a primer explaining the event study methodology and identifying the limitations on its use in securities fraud litigation.
  43. [43]
    Significance Tests for Event Studies | EST
    Significance Tests for Event Studies. Event studies are concerned with the question of whether abnormal returns on an event date or, more generally, during a ...
  44. [44]
    [PDF] Single-Firm Event Studies, Securities Fraud, and Financial Crisis
    May 5, 2016 · Abstract. Lawsuits brought pursuant to section 10(b) of the Securities and Exchange Act depend on the reliability of a statistical tool ...
  45. [45]
    In re Enron Corp. Sec. Litig. - Robbins Geller Rudman & Dowd LLP
Largest Securities Fraud Class Action Recovery in History. Investors lost billions of dollars as a result of the massive fraud at Enron.
  46. [46]
    WorldCom | Bernstein Litowitz Berger & Grossmann LLP
The WorldCom lawsuit was filed due to overstated earnings and accounting fraud, resulting in a $6.15 billion settlement for investors between 1999 and 2002.
  47. [47]
    [PDF] Damages and the Use of Event Studies
    DAMAGES IN SECURITIES LITIGATION. PLAINTIFFS' STRATEGIES FOR A DEFENSIBLE. DAMAGES STUDY. By. Jeffrey C. Block. Kathleen M. Donovan-Maher. And Kyle G. DeValerio.
  48. [48]
    [PDF] Event Studies in Securities Litigation: Low Power, Confounding ...
An event study is a statistical method for determining whether some event—such as the announcement of earnings or the announcement of a proposed merger—is ...
  49. [49]
    The Troubling Dispositive Role of Event Studies in Securities Fraud ...
    Yet the law governing event studies has become inseparable from the substantive law governing securities fraud litigation.
  50. [50]
    [PDF] Market Reaction as an Impact of Announcement Increase Fed ...
    ... announcement of the Fed's interest rate increase is an event study aimed at examining market responses and abnormal returns. Empirically, the form of ...
  51. [51]
    Just “blah blah blah”? Stock market expectations and reactions to ...
    Several studies have investigated the financial impacts of climate regulations and policies announcements by using an event study methodology (e.g., Blacconiere ...
  52. [52]
    [PDF] Market response to policy initiatives during the global financial crisis
    positive immediate market reaction ... policy announcements. Section 3 provides a brief overview of the event study methodology and describes how the event study ...
  53. [53]
    The Economic Effects of Brexit: Evidence from the Stock Market
    Dec 18, 2018 · We follow the standard practice in the event-study literature of using abnormal returns because we want to examine the part of a stock's return ...
  54. [54]
    Hang in There: Stock Market Reactions to Withdrawals of COVID-19 ...
    Dec 18, 2020 · We empirically examine the impact of a withdrawal of fiscal stimulus policies on the stock markets. ... Keywords: COVID-19, Event study, Exit ...
  55. [55]
    [PDF] Estimating the Market's Probability of Uncertain Events
    Apr 26, 2019 · These probabilities suggest that a conventional event study would draw incorrect conclusions about which events were most consequential for ...
  56. [56]
    (PDF) Climate Change and Monetary Policy in the Euro Area
    European Green Deal will increase funding for the transition through the EU budget ... Event-study. analysis looking at the impact of news-related transition ...
  57. [57]
    [PDF] The Empirical Power and Specification of Test Statistics
We analyze the empirical power and specification of test statistics in event studies designed to detect long-run (one- to five-year) abnormal stock returns. We ...
  58. [58]
    [PDF] Market efficiency, long-term returns, and behavioral finance1
    Market efficiency survives the challenge from the literature on long-term return anomalies. Consistent with the market efficiency hypothesis that the ...
  59. [59]
    [PDF] Managerial Decisions and Long-Term Stock Price Performance
    Oct 14, 2002 · Mitchell, Erik Stafford. The Journal of Business, Volume 73 ... tive long-term abnormal returns for growth firms and positive abnormal.
  60. [60]
    [PDF] The regression discontinuity design—Theory and applications
    Regression discontinuity (RD) designs for evaluating causal effects of interventions where assignment to a treatment is determined at least partly by the ...
  61. [61]
    [PDF] Synthetic Control Methods For Comparative Case Studies
    Building on an idea in Abadie and Gardeazabal (2003), this article investigates the application of synthetic control methods to comparative case studies. We ...
  62. [62]
    [PDF] F-HMTC: Detecting Financial Events for Investment Decisions Based ...
We model financial event detection as a hierarchical multi-label text classification problem, and propose a neural event detection model, namely F-HMTC, for our.
  63. [63]
    Propensity Score Matching: An Application in Empirical Finance
    This study contributes to the literature by implementing the propensity score estimator which is able to match firms in multiple dimensions simultaneously, and ...
  64. [64]
    Propensity score matching and abnormal performance after ...
    Stocks underperform after SEOs, but this is insignificant when using propensity score matching, which addresses the multi-dimensional matching problem.
  65. [65]
    [PDF] IT IS IMPERATIVE TO PERFORM EVENT STUDIES ONLY WITH ...
    Jan 16, 2024 · event studies and market efficiency ... would be necessary for an objective, systematic and ordinal direct measure of market efficiency.
  66. [66]
    Investigating the impact of global events on cryptocurrency ...
This paper investigates the impact of various crypto and global events from 2017 to 2023 on the performance of cryptocurrencies.