Quadratic variation

Quadratic variation is a fundamental concept in stochastic calculus that measures the total squared fluctuation of a stochastic process along its paths, defined as the limit in probability (or almost surely, under suitable conditions) of the sum of squared increments over refining partitions of a time interval. For a continuous process X_t on [0, T], it is given by \langle X \rangle_T = \plim_{\|\Pi\| \to 0} \sum_{i=1}^n (X_{t_i} - X_{t_{i-1}})^2, where \Pi = \{0 = t_0 < t_1 < \cdots < t_n = T\} is a partition with mesh \|\Pi\| = \max_i (t_i - t_{i-1}). This quantity distinguishes processes with "rough" paths, such as Brownian motion, from smoother ones like continuously differentiable deterministic functions, for which the quadratic variation is zero. For standard Brownian motion B_t, the quadratic variation over [0, T] equals T almost surely, reflecting the process's infinite total variation but finite quadratic variation, consistent with its nowhere-differentiable paths. In contrast, continuous processes of finite variation have zero quadratic variation, highlighting a key dichotomy in path regularity.

Quadratic variation plays a central role in the theory of semimartingales and Itô processes, where for an Itô process X_t = X_0 + \int_0^t \mu_s \, ds + \int_0^t \sigma_s \, dB_s, it equals \langle X \rangle_t = \int_0^t \sigma_s^2 \, ds. This property underpins Itô's formula, the stochastic integration framework, and applications in mathematical finance, such as volatility estimation in option pricing models. For square-integrable martingales M_t, the process M_t^2 - \langle M \rangle_t is itself a martingale, enabling the characterization of Brownian motion via Lévy's theorem: a continuous martingale with \langle M \rangle_t = t and M_0 = 0 is a standard Brownian motion. Extensions to the quadratic covariation \langle X, Y \rangle_t further generalize these ideas to multivariate settings.
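
The following minimal sketch (all parameters are assumed, not taken from the sources above) simulates a standard Brownian path on a fine uniform partition of [0, T] and shows that the sum of squared increments concentrates near T, the theoretical quadratic variation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 100_000                                  # horizon and number of subintervals (assumed)
dt = T / n
increments = rng.normal(0.0, np.sqrt(dt), size=n)    # B_{t_i} - B_{t_{i-1}} ~ N(0, dt)
realized_qv = float(np.sum(increments**2))           # sum of squared increments over the partition

print(f"realized quadratic variation: {realized_qv:.4f}  (theoretical value: {T})")
```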

Fundamentals

Definition

Quadratic variation is defined in the context of a complete filtered probability space (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, P), where the stochastic process in question is adapted to the filtration (\mathcal{F}_t)_{t \geq 0}. This setup ensures that the process incorporates all available information up to each time t, allowing for the rigorous construction of limits involving its paths.

To build intuition, consider the deterministic case of a continuously differentiable function f: [0, t] \to \mathbb{R}. The quadratic variation of f over [0, t] is given by the limit \lim_{\|\pi\| \to 0} \sum_{i=1}^n \left( f(t_i) - f(t_{i-1}) \right)^2, where \pi = \{0 = t_0 < t_1 < \cdots < t_n = t\} is a partition of [0, t] and \|\pi\| denotes the mesh size (maximum subinterval length). By the mean value theorem, each increment satisfies f(t_i) - f(t_{i-1}) = f'(c_i) (t_i - t_{i-1}) for some c_i \in (t_{i-1}, t_i), so the sum equals \sum_{i=1}^n [f'(c_i)]^2 (t_i - t_{i-1})^2 \leq \sup |f'|^2 \sum_{i=1}^n (t_i - t_{i-1})^2. As \|\pi\| \to 0, the sum of squared subinterval lengths tends to zero, yielding a quadratic variation of zero for such smooth functions. This example demonstrates how the construction emphasizes second-order fluctuations in path behavior.

For a general semimartingale X adapted to (\mathcal{F}_t) with càdlàg paths (right-continuous with left limits), the quadratic variation process [X] = ([X]_t)_{t \geq 0} is defined as the limit in probability [X]_t = \plim_{\|\pi\| \to 0} \sum_{i=1}^n (X_{t_i} - X_{t_{i-1}})^2, taken over refining partitions \pi of [0, t]. For a locally square-integrable martingale X, it may equivalently be characterized through the unique nondecreasing predictable process \langle X \rangle such that X^2 - \langle X \rangle is a local martingale. These constructions capture the cumulative squared increments along typical paths, distinguishing stochastic roughness from deterministic smoothness.

The definition admits distinctions based on path regularity. For continuous local martingales (whose paths have no jumps), [X]_t coincides with the predictable quadratic variation \langle X \rangle_t, ensuring compatibility with stochastic integration. In the general, possibly discontinuous case, the quadratic variation decomposes as [X]_t = [X^c]_t + \sum_{0 < s \leq t} (\Delta X_s)^2, separating the continuous component [X^c] from the sum of squared jumps \Delta X_s = X_s - X_{s-}. This structure accommodates processes with discontinuities while preserving the convergence of the approximating sums.
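
As a small numerical check of the mean-value-theorem argument above (illustrative only; the function and partition sizes are assumed), the squared-increment sums of the smooth function f(t) = sin(2πt) on [0, 1] shrink roughly in proportion to the mesh.

```python
import numpy as np

def squared_increment_sum(f, T=1.0, n=100):
    """Sum of squared increments of f over the uniform partition of [0, T] with n subintervals."""
    t = np.linspace(0.0, T, n + 1)
    return float(np.sum(np.diff(f(t))**2))

f = lambda t: np.sin(2 * np.pi * t)                  # continuously differentiable path
for n in (10, 100, 1_000, 10_000):
    print(f"n = {n:6d}  mesh = {1.0/n:.0e}  squared-increment sum = {squared_increment_sum(f, n=n):.2e}")
```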

Historical Development

The concept of quadratic variation originated in the early 20th-century study of Brownian motion, whose sample paths possess infinite first-order variation but finite second-order variation, necessitating a new measure to capture path roughness. Norbert Wiener's 1923 rigorous construction of Brownian motion as a probability measure on the space of continuous paths provided the foundational model, demonstrating paths with this distinctive property. Paul Lévy advanced this in 1940 by defining quadratic variation as the almost-sure limit of sums of squared increments over refining partitions of [0, t], proving it equals t for standard Brownian motion and distinguishing it from processes of bounded variation.

In the 1940s, Kiyosi Itô pioneered stochastic integrals to address integration against such irregular paths, motivated by the need to solve differential equations for diffusion processes. His 1944 paper introduced the Itô integral for square-integrable integrands adapted to the Brownian filtration, establishing its martingale properties and laying groundwork for handling quadratic terms in Taylor-type expansions. This development underscored quadratic variation as essential for a "second-order" calculus suited to Brownian motion's infinite variation. Itô's 1951 memoir on stochastic differential equations further formalized this through a chain rule incorporating quadratic variation, enabling the analysis of function compositions along paths.

The 1960s brought refinements via approximation theorems, particularly from E. Wong and M. Zakai, who showed that piecewise linear approximations to Brownian motion yield ordinary integrals converging to Stratonovich-type stochastic integrals, with discrepancies attributable to quadratic variation. Their 1965 result clarified the interplay between deterministic and stochastic integration, influencing subsequent work on numerical approximations and pathwise properties.

By the 1970s, the theory expanded dramatically with semimartingale frameworks, as Catherine Doléans-Dade and Paul-André Meyer generalized stochastic integration to local martingales without quasi-left-continuity assumptions on the filtration. Their 1970 paper defined quadratic variation for local martingales as the compensator in the decomposition of the square, enabling integration against discontinuous processes and unifying quadratic variation across broader classes of stochastic paths. This milestone shifted the focus from Markovian diffusions to martingale-based probability. These advancements drew from analytical traditions, including adaptations of Taylor expansions to account for finite quadratic variation in rough paths, providing conceptual tools for a higher-order calculus that does not rely on smoothness.

Properties

For Finite Variation Processes

A stochastic process X = (X_t)_{t \geq 0} is said to have finite variation if, almost surely, its paths are functions of bounded variation on every compact interval [0, T]. This means that the total variation V(X; [0, T]) = \sup \sum_{i=1}^n |X_{t_i} - X_{t_{i-1}}| < \infty, where the supremum is taken over all partitions 0 = t_0 < t_1 < \cdots < t_n = T of [0, T].

For continuous processes of finite variation, the quadratic variation [X]_T is zero almost surely. More precisely, if X is a continuous process with paths of bounded variation almost surely, then the limit in probability of the sums \sum_{i=1}^n (X_{t_i} - X_{t_{i-1}})^2 over partitions with mesh tending to zero is zero. This result holds because finite variation processes lack the "roughness" that generates non-zero quadratic variation, unlike processes such as Brownian motion, whose paths have unbounded variation but quadratic variation equal to elapsed time.

To see this, consider the proof sketch for a continuous path x on [0, T] with total variation V < \infty. For any partition \mathcal{P} = \{0 = t_0 < \cdots < t_n = T\} with mesh |\mathcal{P}| = \max_i (t_i - t_{i-1}), \sum_{i=1}^n (x_{t_i} - x_{t_{i-1}})^2 \leq \max_i |x_{t_i} - x_{t_{i-1}}| \sum_{i=1}^n |x_{t_i} - x_{t_{i-1}}| \leq \max_i |x_{t_i} - x_{t_{i-1}}| \cdot V. By uniform continuity of x, \max_i |x_{t_i} - x_{t_{i-1}}| \to 0 as |\mathcal{P}| \to 0, so the quadratic variation along the path is zero. For a process with such paths, this pathwise property implies [X]_T = 0 almost surely.

Examples of deterministic functions of bounded variation illustrate this. Consider an absolutely continuous function x(t) = \int_0^t g(s) \, ds, where g is bounded and integrable on [0, T] (hence x has finite variation, with V = \int_0^T |g(s)| \, ds < \infty). The increments satisfy x_{t_i} - x_{t_{i-1}} = \int_{t_{i-1}}^{t_i} g(s) \, ds, so \sum_{i=1}^n (x_{t_i} - x_{t_{i-1}})^2 \leq \sup |g|^2 \sum_{i=1}^n (t_i - t_{i-1})^2 \leq \sup |g|^2 \cdot |\mathcal{P}| \cdot T \to 0 as the mesh tends to zero. Thus, its quadratic variation is zero.

For a step function, such as x(t) = 0 for t < 1 and x(t) = 1 for t \geq 1 on [0, 2], the path has finite variation V = 1. However, since it is discontinuous, the quadratic variation computation yields the square of the jump: over fine partitions, the sum \sum (\Delta x)^2 includes (1)^2 = 1 from the subinterval containing the jump at t = 1 and zeros elsewhere, so the limit is 1, not zero. This highlights that the zero quadratic variation property strictly requires continuity for finite variation processes.
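
A brief sketch of the two deterministic examples above (illustrative grid sizes, assumed functions): the absolutely continuous path x(t) = ∫_0^t cos(s) ds = sin(t) has vanishing squared-increment sums as the mesh shrinks, while the unit jump at t = 1 contributes its square, 1, regardless of refinement.

```python
import numpy as np

def squared_increment_sum(values):
    """Sum of squared increments of a path sampled on a grid."""
    return float(np.sum(np.diff(values)**2))

T = 2.0
for n in (100, 1_000, 10_000):
    t = np.linspace(0.0, T, n + 1)
    smooth = np.sin(t)                       # x(t) = \int_0^t cos(s) ds, absolutely continuous
    step = (t >= 1.0).astype(float)          # unit jump at t = 1, finite variation V = 1
    print(f"n = {n:6d}  smooth QV: {squared_increment_sum(smooth):.2e}  step QV: {squared_increment_sum(step):.2f}")
```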

General Properties

The quadratic variation process of a semimartingale X, denoted \langle X \rangle, satisfies an additivity property with respect to sums of processes. Specifically, for semimartingales X and Y, \langle X + Y \rangle = \langle X \rangle + \langle Y \rangle + 2 \langle X, Y \rangle, where \langle X, Y \rangle denotes the quadratic covariation process, which measures the joint quadratic fluctuations between X and Y. This relation extends the notion of quadratic variation to interactions between processes and forms a foundational bilinearity property in stochastic calculus.

The quadratic variation \langle X \rangle is itself an increasing process, meaning its paths are almost surely non-decreasing, reflecting the cumulative nature of squared increments along the paths of X. For semimartingales, the quadratic variation admits a predictable compensator, often also denoted \langle X \rangle and coinciding with it when the continuous martingale part dominates; this compensator serves as the predictable counterpart of the full quadratic variation process and plays a key role in martingale representations and Itô's formula. The polarization identity further links quadratic variation to covariation via \langle X, Y \rangle = \frac{1}{4} \left( \langle X + Y \rangle - \langle X - Y \rangle \right), allowing the covariation to be recovered solely from quadratic variations of linear combinations, a property that underscores the bilinear structure of these processes.

If the semimartingale X has continuous paths, then its quadratic variation process \langle X \rangle is also continuous. Moreover, the quadratic variation of a semimartingale is unique up to indistinguishability, meaning any two versions agree almost surely, ensuring well-definedness in applications like stochastic integration.
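A minimal numerical check of the polarization identity (simulated increments with an assumed correlation ρ, not a statement about any particular source): the realized covariation computed directly from cross increments agrees with one quarter of the difference of the realized quadratic variations of X + Y and X − Y.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, rho = 1.0, 200_000, 0.6                        # assumed horizon, grid, correlation
dt = T / n
dZ = rng.normal(0.0, np.sqrt(dt), size=(2, n))
dX = dZ[0]
dY = rho * dZ[0] + np.sqrt(1.0 - rho**2) * dZ[1]     # Brownian increments with correlation rho

qv = lambda inc: float(np.sum(inc**2))               # realized quadratic variation of increments
direct = float(np.sum(dX * dY))                      # realized covariation from cross increments
polarized = 0.25 * (qv(dX + dY) - qv(dX - dY))       # polarization identity

print(f"direct covariation:   {direct:.4f}")
print(f"via polarization:     {polarized:.4f}")
print(f"theoretical <X,Y>_T:  {rho * T:.4f}")
```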

Applications to Stochastic Processes

Itô Processes

An Itô process is a continuous semimartingale defined by the stochastic differential dX_t = \mu_t \, dt + \sigma_t \, dW_t, where W_t is a standard Brownian motion, and \mu_t and \sigma_t are adapted processes satisfying suitable integrability conditions, such as \mu_t being progressively measurable and integrable in time, and \sigma_t being progressively measurable and locally square-integrable in time. This representation decomposes the process into a drift term \mu_t \, dt and a diffusion term \sigma_t \, dW_t, capturing both deterministic trends and random fluctuations driven by the Brownian motion.

For an Itô process X_t, the quadratic variation process \langle X \rangle_t is given explicitly by the formula \langle X \rangle_t = \int_0^t \sigma_s^2 \, ds. This expression arises because the quadratic variation accumulates the squared diffusion coefficient over time, reflecting the cumulative effect of the Brownian fluctuations. In the language of stochastic differentials, the multiplication rule (dX_t)^2 = \sigma_t^2 \, dt (ignoring higher-order terms like dt^2 and dt \, dW_t, which vanish) directly implies that the infinitesimal quadratic variation is d\langle X \rangle_t = \sigma_t^2 \, dt, linking it to the diffusion coefficient \sigma_t as a measure of local variance.

To derive this formula, apply Itô's formula to the function f(x) = x^2, yielding the differential d(X_t^2) = 2 X_t \, dX_t + \frac{1}{2} \cdot 2 \cdot (dX_t)^2 = 2 X_t (\mu_t \, dt + \sigma_t \, dW_t) + \sigma_t^2 \, dt. Integrating from 0 to t gives X_t^2 = X_0^2 + \int_0^t 2 X_s \mu_s \, ds + 2 \int_0^t X_s \sigma_s \, dW_s + \int_0^t \sigma_s^2 \, ds. Comparing this with the general identity X_t^2 = X_0^2 + 2 \int_0^t X_s \, dX_s + \langle X \rangle_t, which characterizes the quadratic variation, identifies \langle X \rangle_t = \int_0^t \sigma_s^2 \, ds. This derivation highlights how the second-order term in Itô's formula captures the quadratic variation, distinct from the linear drift effects.

A canonical example is the standard Brownian motion W_t itself, which is an Itô process with \mu_t = 0 and \sigma_t = 1, so its quadratic variation is \langle W \rangle_t = \int_0^t 1^2 \, ds = t. This result underscores that the quadratic variation of Brownian motion grows linearly with time, quantifying its inherent roughness despite pathwise continuity. In broader terms, the diffusion coefficient \sigma_t governs the scale of fluctuations in the stochastic differential, and the quadratic variation \langle X \rangle_t serves as the integrated squared volatility, providing a pathwise measure of accumulated variance essential for applications in finance and physics.
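
The following sketch uses an Euler-Maruyama discretization with assumed coefficients (μ_t = 0.1 and σ_t = 0.2 + 0.1 sin(2πt), chosen only for illustration) to compare the realized quadratic variation of a simulated Itô process with the integral of σ_t².

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 100_000
dt = T / n
t = np.linspace(0.0, T, n + 1)[:-1]                  # left endpoints of the subintervals

mu = 0.1 * np.ones(n)                                # assumed drift coefficient
sigma = 0.2 + 0.1 * np.sin(2 * np.pi * t)            # assumed time-varying diffusion coefficient
dW = rng.normal(0.0, np.sqrt(dt), size=n)
dX = mu * dt + sigma * dW                            # Euler-Maruyama increments of X

realized_qv = float(np.sum(dX**2))                   # sum of squared increments
integrated_var = float(np.sum(sigma**2) * dt)        # Riemann sum for \int_0^T sigma_s^2 ds
print(f"realized <X>_T:       {realized_qv:.5f}")
print(f"\\int_0^T sigma^2 ds:  {integrated_var:.5f}")
```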

Semimartingales

Semimartingales represent a broad class of stochastic processes that encompass both continuous and discontinuous paths, making them essential for stochastic calculus beyond purely continuous settings. A semimartingale X admits a decomposition X = X_0 + M + A, where M is a local martingale and A is an adapted càdlàg process of finite variation; the decomposition is unique when A is required to be predictable, the case of special semimartingales. This decomposition highlights the role of the martingale component in capturing the "random" fluctuations, while the finite variation part accounts for deterministic drifts or jumps of finite activity.

When the finite variation part A is continuous, the quadratic variation of X, denoted [X], is determined solely by its local martingale part M, so that [X] = [M], because continuous processes of finite variation have zero quadratic variation. For semimartingales with jumps, the quadratic variation process decomposes as [X]_t = \sum_{0 < s \leq t} (\Delta X_s)^2 + [X^c]_t, where \Delta X_s denotes the jump at time s and X^c is the continuous part of X. This formula captures the contribution from discontinuous jumps via the sum of their squared sizes, in addition to the continuous quadratic variation.

The Kunita-Watanabe decomposition provides a foundational result for square-integrable martingales, expressing them in terms of stochastic integrals with respect to a given martingale plus an orthogonal remainder, and it relies on quadratic covariation processes. For semimartingales, this machinery implies that the quadratic variation governs the L^2-boundedness and predictability of stochastic integrals, enabling the extension of Itô's formula to include jump terms and ensuring that the predictable quadratic variation acts as the compensator of the squared martingale part.

A concrete example is the compound Poisson process X_t = \sum_{i=1}^{N_t} Y_i, where N is a Poisson process with intensity \lambda and the Y_i are i.i.d. random variables independent of N. Its quadratic variation is [X]_t = \sum_{i=1}^{N_t} Y_i^2, consisting purely of the sum of squared jumps, as there is no continuous component.
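
A short sketch of the compound Poisson example (the intensity and the standard normal jump law are assumptions made only for illustration): the quadratic variation at the horizon is the sum of squared jump sizes, with no continuous part.

```python
import numpy as np

rng = np.random.default_rng(3)
T, lam = 1.0, 5.0                                    # horizon and Poisson intensity (assumed)
n_jumps = rng.poisson(lam * T)                       # N_T
jumps = rng.normal(0.0, 1.0, size=n_jumps)           # i.i.d. jump sizes Y_i (standard normal assumed)

qv = float(np.sum(jumps**2))                         # [X]_T = sum of squared jumps
print(f"N_T = {n_jumps}, quadratic variation [X]_T = {qv:.4f}")
```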

Martingales

In the theory of stochastic processes, the quadratic variation plays a central role for martingales, particularly local martingales. For a locally square-integrable local martingale M, the Doob-Meyer decomposition theorem guarantees the existence of a unique increasing predictable process \langle M \rangle, starting at zero, such that M^2 - \langle M \rangle is a local martingale. This process \langle M \rangle is known as the predictable quadratic variation, as it is the predictable compensator in the decomposition of M^2. It is distinguished from the optional quadratic variation, denoted [M], which is obtained as the limit in probability of sums of squared increments over refining partitions and coincides with \langle M \rangle almost surely for continuous local martingales. The angle-bracket notation \langle M \rangle specifically emphasizes its predictable nature, which is essential for stochastic integration and predictability arguments in martingale theory.

For square-integrable martingales, where \mathbb{E}[M_t^2] < \infty for each t, the predictable quadratic variation satisfies \mathbb{E}[\langle M \rangle_t] = \mathbb{E}[M_t^2], assuming M_0 = 0. This equality follows directly from the martingale property of M^2 - \langle M \rangle, implying that the expected value of the compensator matches the second moment of the martingale. Extending to the terminal time, if the martingale converges in L^2 to M_\infty, then \mathbb{E}[\langle M \rangle_\infty] = \mathbb{E}[M_\infty^2], highlighting the quadratic variation's role in quantifying the accumulated "uncertainty" or variance in the martingale's path. This relation underscores the predictability aspect, as \langle M \rangle is adapted in a way that allows conditional expectations to preserve the martingale structure.

The quadratic variation also governs stochastic integrals with respect to martingales. For a predictable integrand H such that the integral is well-defined, the stochastic integral \int H \, dM is itself a local martingale with predictable quadratic variation \int H^2 \, d\langle M \rangle. This property, often derived from the Itô isometry in the continuous case or from polarization identities, ensures that the quadratic variation scales quadratically with the integrand, facilitating computations in stochastic calculus and emphasizing the martingale's role in modeling unpredictable fluctuations without drift.
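
A Monte Carlo sketch of the identity E[M_T^2] = E[⟨M⟩_T] (the integrand H_s = cos(s), grid, and sample size are assumptions for illustration): for M_t = ∫_0^t H_s dW_s with deterministic H, ⟨M⟩_T = ∫_0^T H_s^2 ds is deterministic, and the sample mean of M_T^2 should lie close to it.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n, n_paths = 1.0, 500, 10_000                     # assumed horizon, grid size, sample size
dt = T / n
s = np.linspace(0.0, T, n + 1)[:-1]
H = np.cos(s)                                        # deterministic (hence predictable) integrand

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
M_T = (H * dW).sum(axis=1)                           # discretized \int_0^T H_s dW_s per path
bracket_T = float(np.sum(H**2) * dt)                 # <M>_T = \int_0^T H_s^2 ds (deterministic here)

print(f"Monte Carlo E[M_T^2]: {np.mean(M_T**2):.5f}")
print(f"E[<M>_T]:             {bracket_T:.5f}")
```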

Extensions and Applications

Quadratic Covariation

The quadratic covariation of two stochastic processes X and Y extends the univariate quadratic variation to capture their joint fluctuations. It is formally defined as the limit in probability of the sums \sum_{i=1}^n (X_{t_i} - X_{t_{i-1}})(Y_{t_i} - Y_{t_{i-1}}) over refining partitions \{t_i\} of [0, t] with mesh tending to zero, yielding the quadratic covariation process \langle X, Y \rangle_t. This construction is the unique bilinear extension of the quadratic variation via the polarization identity \langle X, Y \rangle_t = \frac{1}{4} \Bigl( \langle X + Y \rangle_t - \langle X - Y \rangle_t \Bigr), which preserves the inner-product-like structure and ensures \langle X, X \rangle_t = \langle X \rangle_t. Bilinearity yields properties such as \langle aX + bY, Z \rangle_t = a \langle X, Z \rangle_t + b \langle Y, Z \rangle_t for constants a, b.

For semimartingales X and Y, the quadratic covariation decomposes into continuous and jump components: \langle X, Y \rangle_t = \langle X^c, Y^c \rangle_t + \sum_{s \le t} \Delta X_s \Delta Y_s, where X^c and Y^c are the continuous parts of X and Y (primarily their continuous local martingale components), and \langle X^c, Y^c \rangle_t is the quadratic covariation of these continuous parts. This expression highlights how the covariation arises from the martingale portions, with continuous finite-variation parts contributing zero in the limit.

A representative example involves Brownian motion: the covariation of a standard Brownian motion B with itself satisfies \langle B, B \rangle_t = t, reflecting its linear variance growth, whereas \langle B, W \rangle_t = 0 for an independent Brownian motion W, underscoring the orthogonality of independent martingales. In multidimensional stochastic calculus, quadratic covariation is essential for the generalized Itô formula applied to functions of vector processes, incorporating cross terms d\langle X^i, X^j \rangle_t in the second-order expansion to account for interdependent diffusions. For instance, the integration-by-parts (product) rule becomes d(X_t Y_t) = X_t \, dY_t + Y_t \, dX_t + d\langle X, Y \rangle_t, enabling precise handling of correlated stochastic differentials in higher dimensions. Quadratic covariation is also central to pathwise and rough path approaches, which extend stochastic integration to paths with finite quadratic variation but infinite total variation, with applications in modeling turbulent flows and in the numerical solution of stochastic differential equations.
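
A minimal sketch of the Brownian example and the product rule (simulated paths, assumed step count): the realized covariation of two independent Brownian paths is near zero, and the discretized product rule reproduces X_T Y_T exactly, since the identity X_i Y_i − X_{i−1} Y_{i−1} = X_{i−1} ΔY_i + Y_{i−1} ΔX_i + ΔX_i ΔY_i telescopes.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 1.0, 100_000
dt = T / n
dX = rng.normal(0.0, np.sqrt(dt), size=n)
dY = rng.normal(0.0, np.sqrt(dt), size=n)            # independent of dX
X = np.concatenate(([0.0], np.cumsum(dX)))
Y = np.concatenate(([0.0], np.cumsum(dY)))

covariation = float(np.sum(dX * dY))                 # approximates <X, Y>_T (theory: 0)
lhs = X[-1] * Y[-1]                                  # X_T Y_T (with X_0 = Y_0 = 0)
rhs = float(np.sum(X[:-1] * dY + Y[:-1] * dX)) + covariation  # discrete product rule

print(f"covariation of independent Brownian paths: {covariation:.5f}")
print(f"product rule: X_T Y_T = {lhs:.5f}, sum of integral terms = {rhs:.5f}")
```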

Real-World Applications

In finance, quadratic variation plays a central role in volatility modeling, where realized volatility serves as a nonparametric estimator of the quadratic variation of asset prices modeled as semimartingales. This approach leverages high-frequency intraday data to compute the sum of squared log-returns, providing an ex-post measure of price variability that informs risk management, option pricing, and portfolio optimization. To handle discontinuities such as jumps in price processes, high-frequency estimation methods such as bipower variation separate the continuous component from jump contributions, yielding consistent estimates of integrated variance even in the presence of jumps. Similarly, kernel-based estimators, including realized kernels, mitigate the effects of market microstructure noise by applying weighted averages to high-frequency returns, improving the accuracy of quadratic variation proxies in noisy environments.

Numerically, realized quadratic variation is estimated from tick data as the sum of squared log-returns, \sum_{i=1}^M r_{t,i}^2, where r_{t,i} denotes the log-return over the i-th intraday interval on day t, converging to the true quadratic variation as the sampling frequency increases. However, microstructure noise from bid-ask spreads and order flow introduces upward bias in these estimators, necessitating corrections such as the two-scale realized volatility method, which combines estimators computed at sparse and dense sampling frequencies to achieve consistency.

Beyond finance, quadratic variation finds applications in physics for modeling rough paths in turbulent flows, where it quantifies the irregularity of fluid particle trajectories in regimes of intermittency and effectively infinite Reynolds numbers. In signal processing, particularly for bioelectrical signals such as the ECG, quadratic variation minimization enables baseline wander removal by identifying and subtracting low-frequency artifacts while preserving signal integrity. Recent developments since 2020 incorporate machine learning techniques, such as random forests and neural networks, to forecast realized volatility in volatile markets using panel data of high-frequency returns from multiple assets, often outperforming traditional econometric models amid economic uncertainty.
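
A simplified sketch of the realized variance computation (simulated one-minute returns with constant volatility and no microstructure noise; the day length, return count, and volatility level are assumptions): the sum of squared intraday log-returns approximates the integrated variance σ²T of the day.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 1.0 / 252                                        # one trading day in years (assumed convention)
M = 390                                              # one-minute returns over a 6.5-hour session
sigma = 0.2                                          # assumed constant annualized volatility
dt = T / M

log_returns = rng.normal(-0.5 * sigma**2 * dt, sigma * np.sqrt(dt), size=M)
realized_variance = float(np.sum(log_returns**2))    # sum of squared intraday log-returns

print(f"realized variance:   {realized_variance:.3e}")
print(f"integrated variance: {sigma**2 * T:.3e}")
```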

References

  1. [1]
    [PDF] A Brief Introduction to Stochastic Calculus - Columbia University
    Definition 4 (Quadratic Variation) The quadratic variation of a stochastic process, Xt, is equal to the limit of Qn(T) as ∆t := maxi(ti − ti−1) → 0 ...
  2. [2]
    [PDF] Quadratic variation - MIT OpenCourseWare
    Sep 30, 2013 · Unbounded variation of a Brownian motion. Any sequence of values 0 < t0 < t1 < ··· < tn < T is called a partition Π = Π(t0,...,tn) of an interval ...
  3. [3]
    [PDF] Introduction to Stochastic Calculus - Duke Mathematics Department
    Jan 8, 2020 · Quadratic variation for martingales. Recall the definition of quadratic variation of a stochastic process: Definition 4.1. The quadratic ...
  4. [4]
    [PDF] Sur certains processus stochastiques homogènes - Numdam
    PAUL LÉVY. Sur certains processus stochastiques homogènes. Compositio Mathematica, tome 7 (1940), p. 283-339. <http://www.numdam.org/item?id=CM_1940__7__283_0>.
  5. [5]
    [PDF] A short history of stochastic integration and mathematical finance
    The history of stochastic integration, from 1880-1970, starts with Brownian motion, and early models by Thiele, Bachelier, and Einstein. Bachelier is seen as a ...
  6. [6]
    On Stochastic Differential Equations - AMS Bookstore
    On Stochastic Differential Equations. K. Ito. On Stochastic Differential ... Volume: 1; 1951; 51 pp. MSC: Primary 60. Table of Contents. Chapters.
  7. [7]
    On the Convergence of Ordinary Integrals to Stochastic Integrals - jstor
    All the stochastic integrals considered in the remainder of this note are in Itô's sense. For the special case where yn(t) are polygonal approximations to y(t), ...
  8. [8]
    [PDF] Intégrales stochastiques par rapport aux martingales locales
    CATHERINE DOLÉANS-DADE. PAUL-ANDRÉ MEYER. Intégrales stochastiques par rapport aux martingales locales. Séminaire de probabilités (Strasbourg), tome 4 (1970) ...
  9. [9]
    [PDF] stochastic differential equations - People
    Since processes of bounded variation remain processes of bounded variation with respect to an abso- ... quadratic variation is zero. Conversely, if the ...
  10. [10]
    [PDF] Semimartingales and stochastic integration
    Jun 2, 2016 · 2.6 The quadratic variation of a semimartingale ... [X, Y] := XY − ∫ X₋ dY − ∫ Y₋ dX. Write [X] := [X, X]. The polarization identity is ...
  11. [11]
    [PDF] Semimartingales
    Mar 22, 2010 · The limit is called the quadratic variation process of the semimartingale. It is easiest to establish existence of the quadratic variation by ...
  12. [12]
    [PDF] Stochastic Calculus: An Introduction with Applications
    Feb 15, 2023 · There is a mathematical challenge in studying stochastic processes in- ... Definition If Xt is a process, the quadratic variation is defined by.
  13. [13]
    [PDF] Lecture 17: Ito process and formula - MIT OpenCourseWare
    Nov 13, 2013 · But there is a natural generalization of Ito integral to a broader family, which makes taking functional operations closed within the family.
  14. [14]
    Stochastic Integration and Differential Equations: A New Approach
    Apr 17, 2013 · This book assumes the reader has some knowledge of the theory of stochastic processes, including elementary martingale theory.
  15. [15]
    [PDF] Introduction to Semi-martingale Theory
    Quadratic variation [X,X] is an non-decreasing (hence finite variation) process for any good integrator. As a consequence of the previous approximation ...
  16. [16]
    ON SQUARE INTEGRABLE MARTINGALES - Project Euclid
    2) For the definitions see [8] or Appendix. Page 3. ON SQUARE INTEGRABLE MARTINGALES ... Kunita, T Watanabe, Notes on transformations of Markov processes ...
  17. [17]
    [PDF] Stochastic Analysis - IAM Bonn
    Jan 28, 2013 · and seems to require some background from the general theory of stochastic processes, ... formula for processes with finite quadratic variation ...
  18. [18]
    [PDF] On quadratic variation of martingales - Indian Academy of Sciences
    We are now in a position to prove an analogue of the Doob–Meyer decomposition theorem for the square of an r.c.l.l. locally square integrable martingale.
  19. [19]
    [PDF] Advanced computational methods-Lecture 2 1 Brief Introduction to ...
    2.2 Predictable quadratic variation. Using the Doob-Meyer decomposition, one may find another option to define quadratic variation. In fact, M2 is right ...
  20. [20]
    [PDF] STAT331 Some Key Results for Counting Process Martingales This ...
    By the Doob-Meyer decomposition, there exists a unique predictable process, which we will denote by < M,M > (·), such that M2(·)− < M,M > (·) is a martingale. • ...
  21. [21]
    [PDF] Lecture 19 Semimartingales
    Apr 13, 2015 · Theorem 19.4 (Quadratic variation of continuous local martingales). ... This decomposition is called the semimartingale decomposition of X.
  22. [22]
    Quadratic covariation and an extension of - Itô's formula
    We show that for any locally square integrable function ƒ the quadratic covariation [f(X), X] exists as the usual limit of sums converging in probability. For ...
  23. [23]
    [PDF] Estimating quadratic variation using realized variance
    SUMMARY. This paper looks at some recent work on estimating quadratic variation using realized variance (RV)—that is, sums of M squared returns.
  24. [24]
    [PDF] Realized Volatility - Torben G. Andersen and Luca Benzoni
    Realized volatility is a nonparametric ex-post estimate of the return variation. The most obvious realized volatility measure is the sum of finely-sampled.
  25. [25]
    [PDF] Power and Bipower Variation with Stochastic Volatility and Jumps
    We demonstrate that in special cases, realized bipower variation estimates integrated variance in stochastic volatility models, thus providing a model-free and ...
  26. [26]
    [PDF] Designing Realized Kernels to Measure the ex post Variation of ...
    In this paper we study the class of realized kernel estimators of quadratic variation. We show how to design these estimators to be robust to certain types of ...
  27. [27]
    [PDF] A Tale of Two Time Scales: Determining Integrated Volatility With ...
    It is a common practice in finance to estimate volatility from the sum of frequently sampled squared returns. However, market microstructure.
  28. [28]
    Spontaneous Stochasticity in the Presence of Intermittency
    Spontaneous stochasticity is a modern paradigm for turbulent transport at infinite Reynolds numbers. It suggests that tracer particles advected by rough ...
  29. [29]
    Baseline wander removal for bioelectrical signals by quadratic ...
    In this paper, we propose a novel approach to baseline wander estimation and removal for bioelectrical signals, based on the notion of quadratic variation ...
  30. [30]
    (PDF) Forecasting realized volatility with machine learning: Panel ...
    Aug 9, 2025 · This paper considers the problem of forecasting realized volatility with machine learning using high-frequency data.