
Hitting time

In probability theory and stochastic processes, the hitting time of a process to a given state or set is defined as the first time at which the process enters that state or set, formally expressed as \tau_A = \min\{n \geq 0 : X_n \in A\} for a discrete-time process \{X_n\} and a target set A, or analogously in continuous time. This captures the waiting time until an event of interest occurs, such as a random walk reaching a specific state x, where \tau_x = \min\{n \geq 0 : X_n = x\}. Hitting times are fundamental stopping times in stochastic processes, meaning the event \{\tau = n\} depends only on the process history up to time n, which enables powerful analytical tools like the strong Markov property, stating that, conditional on the state of the process at \tau, the future evolution is independent of the past and restarts as a new process from X_\tau. In Markov chains, key quantities include the hitting probability (the chance of ever reaching the target) and the expected hitting time E[\tau_A], which can be computed by solving systems of linear equations based on conditioning on the first step. For irreducible chains with stationary distribution \pi, the sum \sum_x E_a[\tau_x] \pi(x) equals a constant independent of the starting state a, bounding the maximum expected hitting time and relating it to mixing times in Markov chain Monte Carlo methods. Beyond theory, hitting times model real-world phenomena as first hitting time models for lifetime data, where the process \{X(t)\} represents degradation or risk accumulation until it crosses an absorbing boundary H, with applications in medicine (e.g., survival until disease progression), engineering (e.g., equipment failure), economics (e.g., default thresholds), and the social sciences. Common models include one- or two-dimensional Wiener processes with fixed or moving barriers, often analyzed via inverse Gaussian distributions for exact inference on parameters like drift and variance. These models address challenges in censored or longitudinal data, providing flexible alternatives to traditional survival analysis by incorporating covariates through marker processes.
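
As a quick numerical illustration of the constant-sum identity above, the following Python sketch computes E_a[\tau_x] for every pair of states by solving the first-step linear systems and checks that \sum_x E_a[\tau_x] \pi(x) does not depend on a. The 3-state transition matrix is illustrative, not taken from any source.

```python
import numpy as np

# Illustrative irreducible 3-state transition matrix (not from the text).
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.3, 0.3, 0.4]])
n = len(P)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi /= pi.sum()

def mean_hitting_times(P, x):
    """E_a[tau_x] for every start state a, with E_x[tau_x] = 0."""
    others = [i for i in range(n) if i != x]
    Q = P[np.ix_(others, others)]
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[others] = m
    return out

M = np.column_stack([mean_hitting_times(P, x) for x in range(n)])
print(M @ pi)  # the three entries coincide: independent of the start state
```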

Prerequisites

Stochastic Processes

A stochastic process is defined as a family of random variables \{X_t : t \in T\}, where T is an index set representing time, typically evolving in a random manner to describe phenomena such as stock prices or particle movements. This framework allows the process to capture uncertainty over time, with each X_t representing the state at time t. A key property often central to such processes is the Markov property, which states that the conditional distribution of future states depends only on the current state and not on the sequence of past states, formally expressed as P(X_{t+s} \in A \mid X_u, u \leq t) = P(X_{t+s} \in A \mid X_t) for suitable events A. This memoryless condition simplifies analysis and is foundational for processes where independence of the future from the past holds given the present. Relevant types include discrete-time processes, such as Markov chains, where time advances in integer steps and transitions occur according to fixed probabilities. In contrast, continuous-time processes, exemplified by Brownian motion, evolve over real-valued time with continuous sample paths and are Markov processes satisfying the property in a continuous setting. The state space of a stochastic process can be discrete, consisting of countable points like the integers in a random walk, or continuous, such as the real line in diffusion models. Within discrete-state Markov chains, states are classified as transient if the probability of eventual return is less than one, or recurrent if that probability equals one, influencing long-term behavior like absorption or persistence. To formalize the information structure, a filtration \{\mathcal{F}_t : t \in T\} is an increasing family of \sigma-algebras on the underlying probability space, where \mathcal{F}_t represents the information available up to time t, with \mathcal{F}_s \subseteq \mathcal{F}_t for s < t. A process is adapted to this filtration if X_t is \mathcal{F}_t-measurable for each t. Stopping times are random times \tau such that the event \{\tau \leq t\} belongs to \mathcal{F}_t for all t.
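
The following Python sketch simulates such a discrete-time Markov chain; the transition matrix is illustrative. The key point is that each step is sampled from the current state alone, which is precisely the Markov property.

```python
import numpy as np

# Illustrative transition matrix on states {0, 1, 2}.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def simulate_chain(P, x0, n_steps, rng):
    """Sample a path (X_0, ..., X_n): the next state depends only on
    the current one (the Markov property)."""
    path = [x0]
    for _ in range(n_steps):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

rng = np.random.default_rng(0)
print(simulate_chain(P, x0=0, n_steps=10, rng=rng))
```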

Stopping Times

In a filtered probability space (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, P), a stopping time \tau is a random variable taking values in [0, \infty] such that the event \{\tau \leq t\} belongs to the \sigma-algebra \mathcal{F}_t for every t \geq 0. This condition ensures that the decision to stop by time t depends only on the information available up to t, without foresight into the future. Common examples include deterministic times, such as a fixed constant \tau = c for some c \geq 0, where \{\tau \leq t\} = \Omega if t \geq c and \emptyset otherwise, both of which are trivially in \mathcal{F}_t. Another example is the first time an adapted process X exceeds a level a, defined as \tau = \inf\{t \geq 0 : X_t > a\} (with \tau = \infty if no such t exists); here, \{\tau \leq t\} = \bigcup_{0 \leq s \leq t} \{X_s > a\} (for right-continuous X the union may be taken over rational s \leq t together with t itself), which lies in \mathcal{F}_t since X is adapted. Key properties of stopping times include their compatibility with martingale theory via optional sampling: for a martingale M and a bounded stopping time \tau, the expectation satisfies \mathbb{E}[M_\tau] = \mathbb{E}[M_0]. In continuous time, the associated indicator process t \mapsto \mathbf{1}_{\{\tau \leq t\}} has right-continuous paths, which aligns with right-continuous filtrations and processes. Stopping times play a central role in the analysis of stochastic processes through the construction of stopped processes, such as X_{t \wedge \tau} = X_{\min(t, \tau)}, which equals X_t for t < \tau and X_\tau for t \geq \tau. If X is a martingale, then so is the stopped process X_{t \wedge \tau}, preserving the martingale property up to the random stopping time. This framework allows analysis of processes over random horizons without altering their probabilistic structure.
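
A minimal simulation of the level-crossing example above, with illustrative parameters: the first time a simple symmetric random walk exceeds a is a stopping time, and stopping the walk (a martingale) there preserves its mean.

```python
import numpy as np

# tau = first time a simple symmetric random walk exceeds level a.
# Whether {tau <= n} has occurred is decidable from the path up to n,
# so tau is a stopping time; the stopped walk keeps mean 0.
rng = np.random.default_rng(1)
a, horizon, n_paths = 5, 200, 20_000   # illustrative values

stopped = np.empty(n_paths)
for k in range(n_paths):
    walk = np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=horizon))))
    hits = np.nonzero(walk > a)[0]
    tau = hits[0] if hits.size else horizon   # truncate tau at the horizon
    stopped[k] = walk[tau]                    # X_{tau ^ horizon}

print(stopped.mean())   # close to E[X_0] = 0 by optional sampling
```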

Core Concepts

Definition of Hitting Time

In the theory of stochastic processes, the hitting time associated with a Borel set A in the state space is the earliest time at which the process enters A. For a continuous-time stochastic process (X_t)_{t \geq 0} taking values in a measurable space, the hitting time \tau_A is formally defined as \tau_A = \inf\{ t \geq 0 : X_t \in A \}, where the infimum over the empty set is taken to be \infty. In discrete time, for a stochastic process (X_n)_{n \geq 0}, the analogous definition uses the minimum instead of the infimum: \tau_A = \min\{ n \geq 0 : X_n \in A \}, with the convention that the minimum over the empty set is \infty. To ensure the measurability of \tau_A and its status as a stopping time in continuous time, the process (X_t) is typically assumed to have right-continuous paths with left limits (càdlàg paths); under this condition, the hitting time of any closed Borel set A is a stopping time, as established by the début theorem. The definition accommodates various boundary cases, such as when A is a singleton \{a\} (marking first entry to a specific state) or a larger set, without alteration to the form of \tau_A. If X_0 \in A, then \tau_A = 0 by definition. For processes featuring absorbing states (where, upon entry, the process remains indefinitely), the hitting time \tau_A precisely captures the moment of absorption into such a state. The first exit time from a set A, for a process starting inside A, is \tau_{A^c} = \inf\{t \geq 0 : X_t \notin A\}, the first moment the process leaves that set. This contrasts with the hitting time, which focuses on entering a target set from outside. The first return time, \tau_A^+ = \inf\{t > 0 : X_t \in A\} for a process starting in A, measures the initial re-entry into the set A after leaving it, explicitly excluding the starting time t = 0. It refines the hitting time by considering recurrence from a point inside the set. The term first passage time is frequently used interchangeably with hitting time, particularly when targeting specific points in discrete or continuous state spaces. However, in certain processes subtle distinctions may arise: while both denote the first attainment of a level, first passage time sometimes emphasizes crossing a threshold in models allowing overshoots, though continuity of paths often renders the two equivalent. Absorption time refers to the hitting time of an absorbing state in a Markov chain, where an absorbing state i satisfies p_{ii} = 1, preventing escape once entered. This concept applies specifically to chains with trapping states, differing from general hitting times by the permanence of the target. These notions, including hitting times, are specialized forms of stopping times in stochastic processes.
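
A small Python sketch of the discrete-time definition, including the min-over-empty-set-is-\infty convention; the sample path and target sets are illustrative.

```python
import numpy as np

def hitting_time(path, target):
    """Return the first index n with path[n] in target, or float('inf')
    if the path never enters the set (the empty-set convention)."""
    for n, x in enumerate(path):
        if x in target:
            return n
    return float("inf")

rng = np.random.default_rng(2)
walk = np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=100))))
print(hitting_time(walk, {3}))    # tau_{3} for this sample path
print(hitting_time(walk, {0}))    # X_0 = 0 lies in A, so tau_A = 0
```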

Mathematical Properties

Hitting Probabilities

In stochastic processes, the hitting probability h_x(A) = \mathbb{P}_x(\tau_A < \infty), where \tau_A = \inf\{ t \geq 0 : X_t \in A \}, quantifies the likelihood that the process, starting from state x \notin A, will reach the target set A at some finite time. This measure captures reachability without regard to the time taken, distinguishing it from temporal aspects like expected durations. For discrete-time Markov chains, the hitting probabilities satisfy a system of linear equations derived from the Markov property and first-step analysis. Specifically, for states i \notin A, h_{iA} = \sum_{j \in S} p_{ij} h_{jA}, with boundary conditions h_{iA} = 1 if i \in A, and the solution is the minimal non-negative function satisfying these relations. In finite-state chains, this system can be solved explicitly using linear algebra, yielding the unique solution as the absorption probabilities into A when treating A as absorbing. Hitting probabilities are bounded between 0 and 1, reflecting their probabilistic nature, and exhibit monotonicity: if A \subseteq B, then h_x(A) \leq h_x(B) for all x, as reaching A implies reaching the larger set B. In continuous-state processes like diffusions, these probabilities correspond to excessive (superharmonic) functions with respect to the infinitesimal generator, satisfying mean-value inequalities of the form \mathbb{E}_x[h(X_t)] \leq h(x) that ensure the minimal solution property. The concept connects to state classification in Markov chains: a state i is recurrent if and only if \mathbb{P}_i(\tau_i^+ < \infty) = 1, where \tau_i^+ = \inf\{n \geq 1 : X_n = i\} is the first return time, meaning the chain returns to i with probability 1. In recurrent classes, hitting probabilities to any state within the class are 1 from all starting points in the class, underscoring the inescapability of such communicating sets.
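
The first-step system above can be solved directly with linear algebra. The sketch below, a biased gambler's-ruin walk with illustrative parameters, computes h_i(\{4\}) on \{0, \dots, 4\} and compares with the known closed form for this chain.

```python
import numpy as np

# Biased walk on {0,...,4}, absorbed at 0 and 4; target A = {4}.
# First-step analysis: h_i = sum_j p_ij h_j with h_4 = 1, h_0 = 0.
p = 0.4                                   # illustrative up-step probability
P = np.zeros((5, 5))
P[0, 0] = P[4, 4] = 1.0                   # absorbing boundaries
for i in range(1, 4):
    P[i, i + 1], P[i, i - 1] = p, 1 - p

inner = [1, 2, 3]                         # states where h is unknown
Q = P[np.ix_(inner, inner)]               # transitions among inner states
r = P[np.ix_(inner, [4])].ravel()         # one-step mass into A = {4}
h = np.linalg.solve(np.eye(3) - Q, r)

# Closed form for the biased gambler's ruin: ((q/p)^i - 1) / ((q/p)^N - 1).
rho = (1 - p) / p
print(h)
print([(rho**i - 1) / (rho**4 - 1) for i in inner])
```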

Expected Hitting Times

The expected hitting time of a set A starting from a state x \notin A in a stochastic process is defined as m_x(A) = \mathbb{E}_x[\tau_A], where \tau_A = \inf\{t \geq 0 : X_t \in A\} is the hitting time of A; this expectation may be infinite, for instance when the process fails to reach A with positive probability. In discrete-time Markov chains with finite state space, the expected hitting times satisfy a system of linear equations derived from first-step analysis: for each i \notin A, m_i = 1 + \sum_{j \notin A} p_{ij} m_j, with boundary conditions m_i = 0 for i \in A, where p_{ij} are the transition probabilities. These equations can be solved explicitly for small state spaces or numerically for larger ones, providing the mean time to absorption in A. For continuous-time diffusions, such as those satisfying stochastic differential equations dX_t = \mu(X_t) dt + \sigma(X_t) dW_t, the expected hitting time m(x) to a domain boundary A solves the elliptic partial differential equation \mathcal{L} m = -1 in the interior, subject to m = 0 on A, where \mathcal{L} = \mu \frac{d}{dx} + \frac{\sigma^2}{2} \frac{d^2}{dx^2} is the infinitesimal generator in one dimension (or the analogous operator in higher dimensions). For standard one-dimensional Brownian motion (\mu = 0, \sigma = 1) starting at 0, the expected time to exit the interval [-r, r] is r^2. More generally, for a diffusion with constant volatility \sigma, this scales as r^2 / \sigma^2. Higher moments of the hitting time can be obtained by solving similar boundary value problems. The second moment v(x) = \mathbb{E}_x[\tau_A^2] satisfies \mathcal{L} v = -2 m with v = 0 on A. For the exit time from [-r, r] in standard Brownian motion starting at 0, \mathbb{E}[\tau^2] = \frac{5}{3} r^4, yielding a variance of \frac{2}{3} r^4. Asymptotic behavior for large sets A often follows scaling laws determined by the process's dimensionality and drift. In one-dimensional diffusions, expected hitting times to distant or large intervals scale quadratically with the distance or size, reflecting the diffusive spread; in higher dimensions, logarithmic corrections may appear for recurrent cases like two-dimensional Brownian motion hitting large compact sets.
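
A Monte Carlo sketch of the Brownian exit-time facts above, using a simple Euler discretization; the step size and path count are illustrative, and the grid slightly inflates the estimated times.

```python
import numpy as np

# Euler-discretized standard Brownian motion exiting [-r, r] from 0.
# Theory: E[tau] = r^2 and E[tau^2] = (5/3) r^4.
rng = np.random.default_rng(3)
r, dt, n_paths = 1.0, 1e-3, 2000          # illustrative values

exit_times = np.empty(n_paths)
for k in range(n_paths):
    x, t = 0.0, 0.0
    while abs(x) < r:
        x += np.sqrt(dt) * rng.standard_normal()
        t += dt
    exit_times[k] = t

print(exit_times.mean())        # about r^2 = 1.0
print((exit_times**2).mean())   # about (5/3) r^4 ~ 1.667
```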

Key Theorems

Début Theorem

The début theorem establishes that, under suitable conditions, the hitting time of a Borel set by a stochastic process is a stopping time with respect to the natural filtration generated by the process. Specifically, let (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \geq 0}, P) be a filtered probability space satisfying the usual conditions (right-continuity and completeness), and let X = (X_t)_{t \geq 0} be an adapted stochastic process with right-continuous paths that is progressively measurable with respect to (\mathcal{F}_t)_{t \geq 0}. For a Borel set A \subset \mathbb{R}^d, the hitting time \tau_A = \inf\{t \geq 0 : X_t \in A\} (with the convention \inf \emptyset = \infty) is a stopping time, meaning that \{\tau_A \leq t\} \in \mathcal{F}_t for all t \geq 0. The right-continuity of the paths of X and of the filtration (\mathcal{F}_t)_{t \geq 0} is essential for this result to hold, as it ensures the progressive measurability of the set \{(\omega, t) : X_t(\omega) \in A\} and allows the application of measurability results for infima over time. Without these assumptions, the hitting time may fail to be a stopping time; for instance, if X has jumps or the filtration is not right-continuous, the event \{\tau_A \leq t\} might not be \mathcal{F}_t-measurable. A proof of the début theorem relies on the fact that the set D = \{(\omega, t) : X_t(\omega) \in A\} is progressively measurable, and the début (first entrance time) of a progressively measurable set is a stopping time under the usual conditions. One approach uses the measurable projection theorem: the event \{\tau_A < t\} coincides with the projection onto \Omega of D \cap (\Omega \times [0, t)), and this projection preserves measurability under the given conditions. A simpler proof avoids capacity theory by constructing explicit approximating stopping times, using right-continuity to reduce to countably many (rational) times and verify the stopping time property directly. The converse does not hold in general: not every stopping time can be expressed as the hitting time of a Borel set for the given process X. However, every stopping time admits a natural representation as a hitting time with respect to an enlarged filtration generated by the coordinate process on path space.

Optional Sampling Theorem

The optional sampling theorem, developed by J. L. Doob in the 1950s, asserts that for a martingale M and a bounded stopping time \tau, the expected value satisfies \mathbb{E}[M_\tau] = \mathbb{E}[M_0]. This result extends to uniformly integrable martingales and unbounded stopping times under suitable conditions, such as \mathbb{E}[|M_\tau|] < \infty, ensuring that the stopped process retains the martingale property. In the context of hitting times, which qualify as stopping times, the theorem applies directly when the hitting time \tau_A of a set A meets the required conditions for the martingale M. Specifically, if M is uniformly integrable and \tau_A is such that the family \{M_{t \wedge \tau_A} : t \geq 0\} is uniformly integrable, then \mathbb{E}[M_{\tau_A}] = \mathbb{E}[M_0]. This enables computation of expectations at the first hitting time, as seen in applications to random walks where the position process is a martingale, yielding \mathbb{E}[S_{\tau}] = S_0 at the hitting time \tau of either level a or b (as in the gambler's ruin problem). For bounded stopping times, the theorem holds without additional integrability assumptions beyond the martingale property. In unbounded cases, such as typical hitting times, convergence requires conditions like \mathbb{E}[\tau_A] < \infty or bounded jumps to guarantee uniform integrability and prevent issues like infinite expectations.
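
The gambler's-ruin computation above is easy to check by simulation. The sketch below, with illustrative levels a and b, verifies both \mathbb{E}[S_\tau] = 0 and the hitting probability b/(a+b) that optional sampling implies (from a \cdot \mathbb{P}(\text{hit } a) - b \cdot (1 - \mathbb{P}(\text{hit } a)) = 0).

```python
import numpy as np

# Simple symmetric random walk S started at 0, stopped at
# tau = first hit of a or -b.  Optional sampling gives E[S_tau] = 0,
# hence P(hit a) = b / (a + b).  Values below are illustrative.
rng = np.random.default_rng(4)
a, b, n_paths = 3, 7, 20_000

final = np.empty(n_paths)
for k in range(n_paths):
    s = 0
    while -b < s < a:
        s += rng.choice([-1, 1])
    final[k] = s

print(final.mean())            # theory: E[S_tau] = S_0 = 0
print((final == a).mean())     # theory: b / (a + b) = 0.7
```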

Examples

Discrete Markov Chains

In discrete Markov chains, hitting times quantify the first passage from one state to a target state or set, leveraging the memoryless property to set up solvable recursive equations. These times are particularly tractable in finite-state settings, where linear systems arise naturally from conditioning on the first step. A canonical example is the simple symmetric random walk on the non-negative integers up to N, starting at position i (0 < i < N), with absorption at 0 or N. This models the gambler's ruin problem, where the gambler begins with i units and the house with N - i units, each step moving +1 or -1 with equal probability 1/2. The hitting time \tau is the first time the walk reaches either boundary. The probability of hitting N before 0 is i/N, while the expected hitting time satisfies the recurrence E_i[\tau] = 1 + \frac{1}{2} E_{i-1}[\tau] + \frac{1}{2} E_{i+1}[\tau] with boundary conditions E_0[\tau] = E_N[\tau] = 0. The solution is E_i[\tau] = i(N - i). For the case of hitting +a or -b starting from 0 (with total span a + b = N and i = a, say), this yields E[\tau] = ab. For general absorbing Markov chains with transient states B and absorbing states C, mean hitting times to absorption can be computed via matrix methods, as in the sketch below. Partition the transition matrix P into submatrices Q (transient-to-transient) and R (transient-to-absorbing). The fundamental matrix N = (I - Q)^{-1} gives the expected number of visits to transient states before absorption. The mean time to absorption from transient state i is the i-th entry of t = N \mathbf{1}, where \mathbf{1} is the column vector of ones, satisfying the system (I - Q) t = \mathbf{1}. This approach extends the random walk example, where Q encodes the interior transitions. Birth-death chains, a subclass of discrete chains on \{0, 1, \dots, M\} with transitions only to adjacent states, provide another setting for hitting times via recurrences. From interior state j, the expected time \phi(j) to hit 0 or M satisfies \phi(j) = 1 + q_j \phi(j-1) + r_j \phi(j) + p_j \phi(j+1) for j = 1, \dots, M-1, with \phi(0) = \phi(M) = 0, where p_j is the probability of moving to j+1, q_j of moving to j-1, and r_j = 1 - p_j - q_j the self-loop probability. Solving involves differencing: let \Delta \phi(j) = \phi(j+1) - \phi(j); then \Delta \phi(j) = \frac{q_j}{p_j} \Delta \phi(j-1) - \frac{1}{p_j}, which can be summed for explicit forms in homogeneous cases like the symmetric random walk, where \phi(j) = j(M - j). A numerical illustration is the two-state chain with states 0 and 1 and transition matrix P = \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix} (0 < p, q < 1). The expected hitting time from 0 to 1, E_0[\tau_1], satisfies E_0[\tau_1] = 1 + (1-p) E_0[\tau_1] by first-step analysis, solving to E_0[\tau_1] = 1/p. Similarly, E_1[\tau_0] = 1/q. The hitting time is geometrically distributed, since the chain persists in 0 until transitioning to 1.
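
The fundamental-matrix method described above, as a short Python sketch for the symmetric gambler's-ruin chain (N = 10 is illustrative); the output matches the closed form i(N - i).

```python
import numpy as np

# Mean absorption times for the symmetric gambler's-ruin walk on
# {0,...,N} via the fundamental matrix (I - Q)^{-1}.
N = 10
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0                      # absorbing boundaries
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

transient = list(range(1, N))
Q = P[np.ix_(transient, transient)]
fundamental = np.linalg.inv(np.eye(N - 1) - Q)   # expected visit counts
t = fundamental @ np.ones(N - 1)                 # mean times to absorption
print(np.round(t))   # i*(N-i): 9, 16, 21, 24, 25, 24, 21, 16, 9
```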

Continuous Processes

In continuous processes, such as continuous-time Markov chains (CTMCs) and diffusions like Brownian motion, the hitting time is defined as the first instant at which the process reaches a specified state or boundary, formally H_A = \inf \{ t \geq 0 : X_t \in A \}, where X = (X_t)_{t \geq 0} is the process and A is a subset of the state space. This notion extends the discrete case by accounting for continuous evolution, often governed by infinitesimal generators or transition rates, and relies on the strong Markov property to analyze distributions and expectations. Hitting times in these settings are stopping times, enabling applications of optional sampling theorems, and their finite expectation or probability of occurrence depends on recurrence properties of the process. For CTMCs with countable state space and transition rate matrix Q = (q_{ij}), the hitting time to a closed subset A satisfies recursive equations derived from the embedded jump chain. Specifically, the probability of eventual hitting starting from state i \notin A is h_A(i) = \sum_{j \neq i} h_A(j) \int_0^\infty e^{-q_i t} q_{ij} \, dt = \sum_{j \neq i} \frac{q_{ij}}{q_i} h_A(j), where q_i = -q_{ii} = \sum_{j \neq i} q_{ij} is the exit rate from i, and h_A(i) = 1 if i \in A. The expected hitting time k_A(i) follows k_A(i) = \frac{1}{q_i} + \sum_{j \notin A,\, j \neq i} \frac{q_{ij}}{q_i} k_A(j) for i \notin A, with k_A(i) = 0 if i \in A, assuming the chain is minimal and non-explosive. These equations can be solved via the embedded discrete-time chain, whose recurrence or transience determines whether h_A(i) = 1 almost surely. A canonical example is the M/M/1 queue, a birth-death CTMC with states representing queue length, arrival rate \lambda, and service rate \mu > \lambda. The hitting time to the empty state \{0\} starting from i \geq 1 has expected value solving the system k_0(i) = \frac{1 + \lambda k_0(i+1) + \mu k_0(i-1)}{\lambda + \mu} for i \geq 1, with boundary k_0(0) = 0, yielding explicit forms like the mean return time to 0, m_0 = \frac{\mu}{\lambda (\mu - \lambda)}. In the positive recurrent case \lambda < \mu, the chain hits 0 almost surely, and ergodic theorems imply long-run proportions converge to the invariant distribution \pi_i = (1 - \rho) \rho^i, where \rho = \lambda / \mu. In diffusion processes, exemplified by standard one-dimensional Brownian motion B = (B_t)_{t \geq 0} starting at 0 with continuous paths and independent Gaussian increments, the hitting time to level a > 0 is T_a = \inf \{ t \geq 0 : B_t = a \}. This T_a is finite almost surely, and its distribution is the Lévy distribution with density f_{T_a}(t) = \frac{|a|}{\sqrt{2\pi t^3}} \exp\left( -\frac{a^2}{2t} \right) for t > 0, derived via the reflection principle. The process (T_a)_{a \geq 0} forms a stable subordinator of index 1/2, with \mathbb{E}[T_a] = \infty, reflecting the heavy tail of the distribution. For exit times from an interval (-b, b) with b > 0, \mathbb{E}[T_{(-b,b)}] = b^2 by scaling invariance and martingale properties. Higher-dimensional extensions, such as planar Brownian motion hitting the boundary of a domain, leverage conformal invariance: the hitting distribution on the boundary is the harmonic measure, which transforms naturally under conformal mappings. In dimensions d \geq 3, Brownian motion is transient and hits compact sets with positive probability less than 1, bounded by capacity estimates. These results underpin applications in potential theory and boundary value problems for the Laplace equation.
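
A simulation sketch of the Brownian first-passage law above: it compares the empirical probability \mathbb{P}(T_a \leq t) on a discrete grid with the reflection-principle value 2(1 - \Phi(a/\sqrt{t})). Grid size and sample counts are illustrative, and discretization makes the empirical value slightly low (the path may cross between grid points).

```python
import math
import numpy as np

# First passage of standard Brownian motion to level a, simulated on a
# grid; reflection principle: P(T_a <= t) = 2 * (1 - Phi(a / sqrt(t))).
rng = np.random.default_rng(5)
a, t_max, dt, n_paths = 1.0, 4.0, 1e-3, 4000   # illustrative values

n_steps = int(t_max / dt)
hit_by_tmax = 0
for _ in range(n_paths):
    path = np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))
    if path.max() >= a:
        hit_by_tmax += 1

phi = 0.5 * (1 + math.erf(a / math.sqrt(t_max) / math.sqrt(2)))
print(hit_by_tmax / n_paths)    # empirical P(T_a <= t_max)
print(2 * (1 - phi))            # theory: about 0.617 for a = 1, t = 4
```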

References

  1. [1]
    [PDF] 18.445 Introduction to Stochastic Processes, Lecture 10
    Suppose that (Xn)n≥0 is an irreducible Markov chain with transition matrix P and stationary measure π. Let τx be the hitting time : τx = min{n ≥ 0 : Xn = x}.
  2. [2]
    [PDF] 1 Stopping Times
Definition 1.1 Let X = {Xn : n ≥ 0} be a stochastic process. A stopping time with respect to X is a random time such that for each n ≥ 0, the event {τ = n} is ...
  3. [3]
    First Hitting Time Models for Lifetime Data - ScienceDirect.com
    In other words, the first hitting time is the time until the stochastic process first enters or hits set H. The state space of the process {X(t)} may be one- ...
  4. [4]
    [PDF] An Introduction to Stochastic Processes in Continuous Time
    Loosely speaking, a stochastic process is a phenomenon that can be thought of as evolving in time in a random manner. Common examples are the location of a ...
  5. [5]
    [PDF] INTRODUCTION TO BASIC PROPERTIES OF MARKOV CHAIN ...
    Feb 2, 2023 · A stochastic process is any process describing the evolution in time of a random phenomenon, a collection or ensemble of random variables ...
  6. [6]
    [PDF] Markov Processes
    The Markov property is the independence of the future from the past, given the present. Let us be more formal. Definition 102 (Markov Property) A one-parameter ...
  7. [7]
    [PDF] Introduction to Stochastic Processes - Lecture Notes - UT Math
    Dec 24, 2010 · 5.1 Definition and first properties . ... Simply put, a stochastic process has the Markov property if its future evolution depends only.
  8. [8]
    [PDF] Lecture 5 : Stochastic Processes I - MIT OpenCourseWare
    A stochastic process with the Markov property is called a Markov chain. Note that a finite Markov chain can be described in terms of the transition.
  9. [9]
    [PDF] 9. Diffusion proceses. A diffusion process is a Markov process with ...
    A diffusion process is a Markov process with continuous paths with values in some Rd. Given the past history up to time s the conditional distribution at a ...
  10. [10]
    [PDF] Recurrence and Transience
    Definition 1.1 (Recurrent states, Transient states). • The state i is recurrent if ρii = 1. • The state i is transient if ρii < 1. Proposition 1.2. • State i ...
  11. [11]
    [PDF] Stochastic calculus and Arbitrage-free options pricing
Definition 2.2. A filtration {Ft} is a collection of increasing σ-algebras for stochastic process Xt on (Ω, F, P) such ...
  12. [12]
    [PDF] Chapter 6 - Random Times and Their Properties
    Section 6.2 defines various sort of “waiting” times, including hit- ting, first-passage, and return or recurrence times. Section 6.3 proves the Kac recurrence ...
  13. [13]
    An essay on the general theory of stochastic processes - Project Euclid
Definition 2.1. A stopping time is a mapping T : Ω → R+ such that {T ≤ t} ∈ Ft for all t ≥ 0.
  14. [14]
    Stopping Times - Random Services
A stopping time is a random time that does not require that we see into the future. That is, we can tell if τ ≤ t from the information available at time t.
  15. [15]
    [PDF] 4. Stochastic Integral - 4.1. Continuous Time Processes
    Let X and Y be right-continuous and modifications of each other. Since X and ... If X is a continuous adapted process and V is closed, then τV is a stopping time.
  16. [16]
    [PDF] Stochastic Processes in Continuous Time - Arizona Math
    Dec 14, 2007 · A continuous stochastic process X is called a time homogeneous Itô diffusion is there exists measurable mappings. 1. σ : Rd → Rd×r, (the ...
  17. [17]
    [PDF] Chapter 7 Markov chain background - Arizona Math
First, there can be transient states even if the chain is irreducible. Second, irreducible chains need not have stationary distributions when they are recurrent.
  18. [18]
    [PDF] Absorbing Markov Chains - UMD Math Department
    Jul 21, 2021 · An absorbing Markov chain has states from which it is impossible to leave, and it is possible to go from any transient state to an absorbing ...
  19. [19]
    Section 8 Hitting times | MATH2750 Introduction to Markov Processes
    Section 8 Hitting times. Definitions: Hitting probability, expected hitting time, return probability, expected return time; Finding these by conditioning on ...
  20. [20]
    [PDF] Chapter 8: Markov Chains
    We have been calculating hitting probabilities for Markov chains since Chapter 2, using First-Step. Analysis. The hitting probability describes the ...
  21. [21]
    [PDF] Hitting Probabilities
    Consider a Markov chain with a countable state space S and a transition matrix P. Suppose we want to find Pi(chain hits A before C) for some i ∈ S and ...
  22. [22]
    [PDF] Brownian Motion - UC Berkeley Statistics
... hitting probability can be approximated by the capacity of A with respect to ... subharmonic. To begin with we give two useful reformulations of the ...
  23. [23]
    11.2.4 Classification of States - Probability Course
    The states in Class 4 are called recurrent states, while the other states in this chain are called transient. In general, a state is said to be recurrent if, ...
  24. [24]
    4. Transience and Recurrence - Random Services
    The first thing to notice is that the hitting probability is a class property. Suppose that \( x \) is transient and that \( A \) is a recurrent class. Then \( ...
  25. [25]
    [PDF] Markov Chains
hA i = Pi(hit A), kA i = Ei(time to hit A). Remarkably, these quantities can be calculated from certain simple linear equations. Let us consider an example.
  26. [26]
    [PDF] 1 IEOR 4701: Notes on Brownian Motion
    What is the expected length of time until either 10 or −2 are hit? SOLUTION ... Now let Tx = min{t ≥ 0 : B(t) = x | B(0) = 0}, the hitting time to x > 0.
  27. [27]
    [PDF] Notes 18 : Optional Sampling Theorem
    Lecture 18: Optional Sampling Theorem. 4. 1.3 Optional Sampling Theorem (OST). We show that the MG property extends to stopping times under UI MGs. THM 18.13 ...
  28. [28]
    [PDF] Lecture 11: Martingales II - MIT OpenCourseWare
    Oct 9, 2013 · 1. Second stopping theorem. 2. Doob-Kolmogorov inequality. 3. Applications of stopping theorems to hitting times of a Brownian motion.
  29. [29]
    [PDF] Doob's Optional Stopping Theorem
Doob's optional stopping theorem is contained in many basic texts on probability and martingales. (See, for example, Theorem 10.10 of ...)
  30. [30]
    [PDF] A Mathematical Introduction to Markov Chains1 - Virginia Tech
    May 13, 2018 · Calculation of hitting probabilities, mean hitting times, determining recurrence vs. transience, and explosion vs. non-explosion, are all ...
  31. [31]
    [PDF] random walks in one dimension - steven p. lalley
    Then the gambler's ruin problem can be re-formulated as follows: Problem ... Therefore, the probability of return is 1. D. 3. GAMBLER'S RUIN: EXPECTED DURATION OF ...
  32. [32]
    [PDF] MTH 565 Lectures 9 - 19 - Oregon State University
    Moreover, each state is null recurrent as Ex[Tx] = E0[T0] for all x ∈ Z. Page 46. MTH 565. 45. Expected first hitting time. Consider a birth-and-death chain ...
  33. [33]
    [PDF] Lecture 3: Discrete-Time Markov Chain – Part I 3.1 Introduction
    Ti is interpreted as the first time the chain returns to state i. • Successive Returns. Let τk be the time of the k-th return to state i (note that τ1 = Ti).
  34. [34]
    [PDF] Markov Chains - CAPE
    writing down and solving a system of linear equations. This situation is familiar from hitting probabilities and expected hitting times. Indeed, these are ...