
Recurrence relation

A recurrence relation is an equation that defines a sequence recursively by expressing each term as a function of one or more preceding terms in the sequence. These relations are fundamental in mathematics and computer science, providing a framework for modeling sequences where explicit formulas are difficult to derive directly.

Recurrence relations can be classified into various types based on their structure. Linear recurrence relations express each term as a linear combination of previous terms, often with constant coefficients, such as the Fibonacci sequence, where F_n = F_{n-1} + F_{n-2} for n \geq 2, with initial conditions F_0 = 0 and F_1 = 1. Homogeneous linear recurrences have no additional non-recursive terms, while non-homogeneous ones include an extra function of n; higher-order relations depend on more previous terms, with the order equal to the number of such terms. Nonlinear recurrences, in contrast, involve nonlinear functions of prior terms and are generally harder to solve.

Solving recurrence relations typically involves finding a closed-form expression that eliminates the recursion. For linear homogeneous relations with constant coefficients, the characteristic-equation method is used: the roots of the characteristic equation r^k - c_1 r^{k-1} - \cdots - c_k = 0 determine the general solution as a linear combination of terms like r_i^n. Initial conditions then fix the coefficients. Generating functions offer another approach, transforming the recurrence into an algebraic equation for the generating function of the sequence.

Recurrence relations have broad applications across mathematics and related fields. In computer science, they model the time complexity of recursive algorithms, such as divide-and-conquer methods like merge sort, where T(n) = 2T(n/2) + n. In combinatorics, they enumerate structures like paths in graphs or permutations, while in probability, they describe Markov chains and related stochastic processes. These tools also appear in physics for modeling discrete dynamical systems, highlighting their versatility in capturing iterative dependencies.
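The merge-sort recurrence mentioned above can be evaluated directly by recursion; a minimal Python sketch, assuming the conventional base case T(1) = 0 and restricting to powers of two so that n/2 stays integral:

```python
def T(n):
    """Merge-sort cost recurrence T(n) = 2*T(n/2) + n (base case T(1) = 0 assumed)."""
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n

# At n = 2^k the solution is exactly n * log2(n)
print([T(2**k) for k in range(1, 6)])  # [2, 8, 24, 64, 160]
```

The printed values match n log2 n at powers of two, the closed form of this recurrence.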

Fundamentals

Definition

A recurrence relation is an equation that defines a sequence recursively by expressing each term as a function of one or more preceding terms in the sequence. More formally, for a sequence \{a_n\} where n is a non-negative integer, a recurrence relation of order k takes the form a_n = f(a_{n-1}, a_{n-2}, \dots, a_{n-k}) for n \geq k, where f is some function. This recursive structure contrasts with an explicit or closed-form definition, which provides a direct formula for a_n in terms of n alone, without reference to other terms in the sequence; solving a recurrence relation typically involves deriving such a closed-form expression from the recursive one. To uniquely determine the sequence generated by a recurrence relation, one or more initial conditions—specific values for the first few terms, such as a_0, a_1, \dots, a_{k-1}—must be provided alongside the relation. These boundary values ensure that the recursion produces a single, well-defined sequence rather than a family of possible sequences. The order of the recurrence is defined as the largest k such that a_{n-k} appears in the equation, representing the "depth" of dependency on prior terms. Recurrence relations are further classified by linearity: a linear recurrence expresses a_n as a linear combination of previous terms (additive, with coefficients multiplying the terms), possibly plus a non-homogeneous term, whereas a nonlinear recurrence involves products, powers, or other nonlinear functions of the preceding terms. For instance, a simple first-order linear recurrence is a_n = 2a_{n-1} for n \geq 1, which defines each term as twice its immediate predecessor, requiring an initial value like a_0 = 1 to specify the sequence.
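A small sketch of how a first-order relation plus an initial condition pins down the whole sequence, using the example a_n = 2a_{n-1} with a_0 = 1 (the helper `iterate` is our own, not a standard function):

```python
def iterate(f, a0, n):
    """Unfold a first-order recurrence a_k = f(a_{k-1}) from the initial value a0."""
    seq = [a0]
    for _ in range(n):
        seq.append(f(seq[-1]))
    return seq

# a_n = 2*a_{n-1} with a_0 = 1 yields the powers of two
print(iterate(lambda a: 2 * a, 1, 5))  # [1, 2, 4, 8, 16, 32]
```

Changing the initial value a_0 rescales the whole sequence, illustrating why initial conditions are needed for uniqueness.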

Notation and Basic Properties

Recurrence relations are typically expressed using sequences denoted by lowercase letters with subscripts, such as a_n for the nth term, where n is a non-negative integer. The forward difference operator \Delta is defined as \Delta a_n = a_{n+1} - a_n, which measures the discrete change between consecutive terms. Complementing this, the shift operator E advances the sequence by one index, satisfying E a_n = a_{n+1}, and higher powers like E^k a_n = a_{n+k} allow compact representation of relations involving multiple prior terms. A key property of recurrence relations is that, when supplemented with appropriate initial conditions (such as the first k terms for a kth-order relation), they determine a unique sequence satisfying the relation. For linear recurrence relations, this uniqueness extends to well-posedness, ensuring existence and a single solution given the initial values, as the relations map linearly from prior terms to the next. Recurrence relations are classified as homogeneous or non-homogeneous based on the form of the defining equation. A homogeneous relation has the form a_n = \sum_{i=1}^k c_i a_{n-i}, where the right-hand side involves only previous terms of the sequence and no additional forcing term; for instance, the case a_n - p a_{n-1} = 0 is homogeneous. In contrast, a non-homogeneous relation includes an extra term, written as a_n - \sum_{i=1}^k c_i a_{n-i} = f(n), where f(n) is a forcing term independent of the sequence. For linear homogeneous recurrence relations with constant coefficients, the characteristic equation provides a foundational tool for analysis. Assuming solutions of the form a_n = r^n (where r \neq 0), substituting into the relation a_n = \sum_{i=1}^k c_i a_{n-i}, and dividing by r^{n-k} yields r^k - \sum_{i=1}^k c_i r^{k-i} = 0. Systems of recurrence relations, involving multiple interdependent sequences such as \mathbf{v}_n = (a_n, b_n)^T, can be expressed in matrix form as \mathbf{v}_n = M \mathbf{v}_{n-1}, where M is a companion matrix encoding the coefficients of the coupled relations.
This matrix form facilitates studying the system's behavior through linear algebra, preserving the uniqueness property when initial vectors are specified.
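As a sketch of the matrix form, consider the hypothetical coupled pair a_n = a_{n-1} + b_{n-1}, b_n = a_{n-1} (chosen so that a_n reproduces the Fibonacci numbers); iterating \mathbf{v}_n = M \mathbf{v}_{n-1}:

```python
def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

M = [[1, 1], [1, 0]]  # encodes a_n = a_{n-1} + b_{n-1}, b_n = a_{n-1}
v = [1, 0]            # initial vector (a_0, b_0)
for _ in range(6):
    v = mat_vec(M, v)
print(v)  # [13, 8]
```

The two components after six steps are consecutive Fibonacci numbers, confirming that the coupled system and the scalar recurrence generate the same values.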

Examples

Linear Recurrences

Linear recurrence relations express each term of a sequence as a linear combination of one or more preceding terms, often with constant coefficients. A classic example is the Fibonacci sequence, defined by the second-order recurrence F_n = F_{n-1} + F_{n-2} for n \geq 2, with initial conditions F_0 = 0 and F_1 = 1. Computing the first few terms yields: F_2 = 1, F_3 = 2, F_4 = 3, F_5 = 5, F_6 = 8, F_7 = 13, and F_8 = 21. This sequence exhibits exponential growth, approximately proportional to \phi^n, where \phi \approx 1.618 is the golden ratio. The factorial sequence provides another illustration of a first-order linear recurrence (with variable coefficient n), given by n! = n \cdot (n-1)! for n \geq 1, with the base case 0! = 1. Explicit computations for small values include: 1! = 1, 2! = 2, 3! = 6, 4! = 24, and 5! = 120. This relation highlights how factorials build multiplicatively from prior terms. Binomial coefficients also satisfy a linear recurrence, specifically \binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k} for 1 \leq k \leq n-1, with boundary conditions \binom{n}{0} = 1 and \binom{n}{n} = 1 for all n \geq 0. For instance, computing the third row (n=3) from the second row (n=2): from \binom{2}{0}=1, \binom{2}{1}=2, \binom{2}{2}=1, we get \binom{3}{1} = \binom{2}{0} + \binom{2}{1} = 1 + 2 = 3 and \binom{3}{2} = \binom{2}{1} + \binom{2}{2} = 2 + 1 = 3. This recurrence generates Pascal's triangle row by row. First-order linear recurrences encompass arithmetic and geometric sequences. An arithmetic sequence follows a_n = a_{n-1} + d for n \geq 1, where d is the constant difference; starting with a_0 = 5 and d = 2, the terms are 5, 7, 9, 11, 13. A geometric sequence obeys a_n = r \cdot a_{n-1} for n \geq 1, with common ratio r; for a_0 = 2 and r = 3, the sequence is 2, 6, 18, 54, 162. These simple cases demonstrate steady progression governed by a single preceding term, without higher-order dependencies.
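The binomial-coefficient recurrence generates Pascal's triangle row by row; a short sketch:

```python
def pascal_row(n):
    """Row n of Pascal's triangle via C(n,k) = C(n-1,k-1) + C(n-1,k)."""
    row = [1]
    for _ in range(n):
        # Each interior entry is the sum of the two entries above it
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]
    return row

print(pascal_row(3))  # [1, 3, 3, 1]
print(pascal_row(5))  # [1, 5, 10, 10, 5, 1]
```

Row 3 reproduces the hand computation in the text: \binom{3}{1} = \binom{3}{2} = 3.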

Nonlinear Recurrences

Nonlinear recurrence relations involve terms where the dependent variable appears in a non-additive manner, often leading to complex dynamical behaviors such as bifurcations and chaos, unlike the predictable patterns in linear cases. These relations are typically defined with a given order and specified initial conditions, as per the general framework of recurrences. A prominent example is the logistic map, introduced in the context of population modeling and popularized in studies of discrete dynamical systems, given by the recurrence x_{n+1} = r x_n (1 - x_n), where r is a parameter ranging from 0 to 4, and the initial value x_0 lies in [0,1] to keep iterates bounded in that interval. This first-order nonlinear recurrence models growth limited by carrying capacity, with x_n representing the population fraction at generation n. The behavior of the logistic map varies dramatically with r, exhibiting bifurcations that mark transitions from stability to periodicity and chaos. For r = 2, the sequence converges to the fixed point x = 0.5, as iterations dampen toward equilibrium regardless of x_0 in (0,1). Increasing past r = 3, the map enters a regime of period-2 oscillations, where even and odd iterates alternate between two values after transients decay. At r = 4, the system becomes fully chaotic, with a positive Lyapunov exponent indicating exponential divergence of nearby trajectories. To illustrate sensitivity to initial conditions at r = 4, consider x_0 = 0.1: the first few iterates are x_1 = 0.36, x_2 = 0.9216, x_3 \approx 0.2890, x_4 \approx 0.8220, and so on, producing a seemingly random sequence dense in [0,1]. In contrast, starting from x_0 = 0.100001, the early iterates track the previous trajectory closely (again x_3 \approx 0.2890), but the two orbits separate rapidly after a modest number of steps, highlighting the map's extreme dependence on precise initial values. Fixed points exist at x = 0 and x = (r-1)/r, and their stability shifts with r, driving the qualitative changes in dynamics described above.
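The sensitivity computation above can be reproduced numerically; a sketch (the iteration count and the 1e-6 perturbation are illustrative choices):

```python
def logistic_orbit(r, x0, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(4.0, 0.1, 30)
b = logistic_orbit(4.0, 0.100001, 30)
print(a[1], a[2])          # ~0.36 and ~0.9216, matching the text
print(abs(a[30] - b[30]))  # typically an order-one separation from a 1e-6 initial gap
```

Because each chaotic step roughly doubles small errors at r = 4, the 1e-6 gap saturates well before 30 iterations.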
Another illustrative nonlinear recurrence arises in variants of the Tower of Hanoi puzzle, which extend the classic linear problem of counting disk moves. The standard puzzle satisfies the linear (non-homogeneous) recurrence T_n = 2 T_{n-1} + 1 with T_1 = 1, yielding exponential growth T_n = 2^n - 1. However, variants incorporating restricted moves or multi-peg configurations with interdependent states can introduce multiplicative or higher-order dependencies; for instance, a framed variant where the allowed moves depend on the current tower configuration can lead to recurrences of the form T_n = 2 T_{n-1} + f(n) with f(n) depending nonlinearly on prior states, resulting in growth rates that outpace simple exponentials. These modifications model more realistic constraint-based puzzles and demonstrate how nonlinearity amplifies complexity. The Ackermann function provides an extreme example of a nonlinear recurrence embodying hyperoperations, growing faster than any primitive recursive function and highlighting the limits of primitive recursion in recursive definitions. Defined as a double recurrence with base cases A(0,n) = n+1 for n \geq 0, A(m,0) = A(m-1,1) for m \geq 1, and A(m,n) = A(m-1, A(m,n-1)) for m,n \geq 1, it nests iterations hyperoperationally: up to small offsets, A(1,n) = n + 2 corresponds to addition, A(2,n) = 2n + 3 to multiplication, A(3,n) = 2^{n+3} - 3 to exponentiation, and A(4,n) to tetration, with higher levels escalating rapidly. For small values, A(4,1) = 2^{2^{2^2}} - 3 = 65533, while A(4,2) = 2^{65536} - 3, a number vastly larger than exponential scales, illustrating the explosive nonlinearity from recursive self-application. This function, originally formulated to show that not every total computable function is primitive recursive, underscores how nested nonlinear recurrences can encode fast-growing hierarchies.
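The double recurrence defining the Ackermann function translates directly into code; a sketch using naive recursion (fine for small m, n), which checks small values consistent with the closed forms quoted above:

```python
def ackermann(m, n):
    """Ackermann's double recurrence; grows faster than any primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# A(2,n) = 2n + 3 and A(3,n) = 2^(n+3) - 3
print(ackermann(2, 3), ackermann(3, 3))  # 9 61
```

Already A(4, 1) is far too deep for naive recursion, mirroring the explosive growth described in the text.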

Difference Equations

Forward and Backward Differences

In the context of recurrence relations, forward and backward differences serve as discrete analogs to derivatives, enabling the analysis of sequences by measuring changes between successive terms. The forward difference operator, denoted by Δ, is defined for a sequence {a_n} as Δa_n = a_{n+1} - a_n. Higher-order forward differences are obtained iteratively, with the k-th order difference Δ^k a_n = Δ(Δ^{k-1} a_n), providing a way to capture accelerated changes in the sequence, similar to higher derivatives in continuous calculus. The backward difference operator, denoted by ∇, offers an alternative perspective by looking retrospectively: ∇a_n = a_n - a_{n-1}. Higher-order backward differences follow analogously, ∇^k a_n = ∇(∇^{k-1} a_n). Mixed differences, combining forward and backward operators, arise in more complex analyses, such as the averaged central difference δa_n = (Δa_n + ∇a_n)/2 = (a_{n+1} - a_{n-1})/2, which approximates symmetric changes around a point. Recurrence relations can be reformulated as difference equations using these operators, facilitating solution techniques akin to those for differential equations. For instance, a general first-order recurrence might be expressed as Δa_n = f(a_n, n), where f encapsulates the dependency on the current term and index. Linear difference equations take the form Δa_n + p_n a_n = q_n, where p_n and q_n are functions of n; this structure mirrors linear ordinary differential equations and admits analogous solution methods in discrete settings. Newton's forward difference formula provides an interpolation tool for sequences, approximating a_n from initial values and differences: a_n \approx \sum_{k=0}^m \binom{n}{k} \Delta^k a_0 (exact for integer n once m \geq n), where the binomial coefficient \binom{n}{k} = \frac{n(n-1)\cdots(n-k+1)}{k!} generalizes to non-integer n, enabling interpolation beyond tabulated points. This formula is particularly useful in numerical analysis for constructing interpolating polynomials that fit discrete data from recurrences.
A key relation links differences to summation: solving the simple difference equation Δa_n = g_n yields a_n = a_0 + \sum_{i=0}^{n-1} g_i, as the forward difference telescopes over the sum, providing an exact solution for non-homogeneous linear cases with zero p_n.
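The telescoping relation above gives a direct solver for Δa_n = g_n; a sketch using the illustrative choice g(i) = 2i + 1, whose partial sums give a_n = n^2:

```python
def solve_first_order(g, a0, n):
    """Solve the difference equation Δa_k = g(k): a_n = a0 + sum_{i=0}^{n-1} g(i)."""
    return a0 + sum(g(i) for i in range(n))

# Δa_n = 2n + 1 with a_0 = 0 telescopes to a_n = n^2
print([solve_first_order(lambda i: 2 * i + 1, 0, n) for n in range(6)])  # [0, 1, 4, 9, 16, 25]
```

The perfect squares appear because consecutive squares differ by the odd numbers, exactly the telescoping identity in the text.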

From Sequences to Multidimensional Grids

Extending difference equations from one-dimensional sequences to multidimensional settings involves considering functions defined on lattices or grids, where values depend on neighbors in multiple directions. This generalization leads to partial difference equations, which are discrete analogs of partial differential equations and are used to model phenomena on two- or higher-dimensional discrete domains. Partial difference operators are defined similarly to their one-dimensional counterparts but act along specific directions. For a function u(m, n) on a two-dimensional integer lattice, the forward partial difference operator in the x- (or m-) direction is given by \Delta_x u(m, n) = u(m+1, n) - u(m, n), and analogously for the y-direction: \Delta_y u(m, n) = u(m, n+1) - u(m, n). Higher-order or mixed differences, such as \Delta_x \Delta_y u(m, n), can be formed by composition, enabling the expression of more complex relations. These operators facilitate the study of equations where changes occur along each direction. Grid-based recurrences arise when the value at a grid point is determined by values at adjacent points, often resembling path-counting problems in combinatorics. A simple example is the recurrence u(m, n) = u(m-1, n) + u(m, n-1), with boundary conditions u(0, n) = 1 and u(m, 0) = 1 for m, n \geq 0, which generates the binomial coefficients u(m, n) = \binom{m+n}{m} and corresponds to the number of paths from (0,0) to (m,n) using right and up steps. This extends the one-dimensional Pascal recurrence by distributing dependencies across dimensions. To embed a one-dimensional recurrence into a two-dimensional grid, one approach uses bivariate generating functions, where the generating function for the 1D sequence becomes a slice or diagonal of a bivariate function satisfying a partial difference equation. Alternatively, one can interpret the 1D terms as path counts on a grid, promoting the sequence to a row, column, or diagonal of the array. For instance, the Fibonacci numbers appear as sums along the shallow diagonals of Pascal's triangle, so the sequence can be recovered from a grid array satisfying a related recurrence through summation over paths.
A notable example of a multidimensional grid recurrence is provided by the Delannoy numbers D(m, n), which count the number of paths from (0,0) to (m,n) on a grid allowing steps right (1,0), up (0,1), or diagonal (1,1). These satisfy the recurrence D(m, n) = D(m-1, n) + D(m, n-1) + D(m-1, n-1), with boundary conditions D(0,0) = 1, D(m,0) = 1, and D(0,n) = 1 for m, n \geq 0. This equation captures interactions across both axes and the diagonal, generalizing simpler lattice-path recurrences. Boundary conditions on grids specify values or differences along the edges (e.g., m=0 or n=0) to initiate the recurrence and ensure a unique solution. For well-posedness, these conditions must provide sufficient data to determine all interior values without ambiguity or inconsistency, analogous to initial-value problems in continuous partial differential equations; insufficient or over-specified boundaries can lead to multiple solutions or none. In discrete settings, well-posed problems on bounded grids typically require conditions on the appropriate boundary faces, guaranteeing a unique solution via forward propagation.
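The Delannoy recurrence fills a table one cell at a time from its boundary conditions; a sketch:

```python
def delannoy(m, n):
    """Delannoy numbers via D(m,n) = D(m-1,n) + D(m,n-1) + D(m-1,n-1),
    with the boundary row and column initialized to 1."""
    D = [[1] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = D[i - 1][j] + D[i][j - 1] + D[i - 1][j - 1]
    return D[m][n]

print([delannoy(k, k) for k in range(5)])  # central Delannoy numbers: [1, 3, 13, 63, 321]
```

Dropping the diagonal term D(m-1, n-1) recovers the binomial-coefficient grid from the previous paragraph.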

Solution Methods

Linear Homogeneous with Constant Coefficients

A linear homogeneous recurrence relation with constant coefficients has the form a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_k a_{n-k} for n \geq k, where the c_i are constants and there is no additional non-recursive (forcing) term. To solve such relations, the method of characteristic equations is employed, which transforms the recurrence into an algebraic equation whose roots determine the form of the solution. The characteristic equation for the recurrence a_n - c_1 a_{n-1} - c_2 a_{n-2} - \cdots - c_k a_{n-k} = 0 is given by r^k - c_1 r^{k-1} - c_2 r^{k-2} - \cdots - c_k = 0. The roots r_1, r_2, \dots, r_k of this degree-k equation are the characteristic roots. If all roots are distinct, the general solution is a_n = A_1 r_1^n + A_2 r_2^n + \cdots + A_k r_k^n, where the coefficients A_i are constants determined by initial conditions. This form arises because each term A_i r_i^n satisfies the recurrence individually, and the principle of linear superposition allows their sum to also satisfy it. For an illustrative example, consider the Fibonacci sequence defined by F_n = F_{n-1} + F_{n-2} for n \geq 2, with initial conditions F_0 = 0 and F_1 = 1. The characteristic equation is r^2 - r - 1 = 0, with roots \phi = \frac{1 + \sqrt{5}}{2} (the golden ratio) and \hat{\phi} = \frac{1 - \sqrt{5}}{2}. The general solution is thus F_n = A \phi^n + B \hat{\phi}^n. Applying the initial conditions: for n=0, A + B = 0; for n=1, A \phi + B \hat{\phi} = 1. Solving yields A = \frac{1}{\sqrt{5}} and B = -\frac{1}{\sqrt{5}}, resulting in Binet's formula F_n = \frac{\phi^n - \hat{\phi}^n}{\sqrt{5}}. To verify, substitute a_n = A r^n into the recurrence a_n - c_1 a_{n-1} - \cdots - c_k a_{n-k} = 0, yielding A r^n - c_1 A r^{n-1} - \cdots - c_k A r^{n-k} = A r^{n-k} (r^k - c_1 r^{k-1} - \cdots - c_k) = 0, which holds if r is a characteristic root. The coefficients in the general solution are found by solving the linear system arising from the initial k terms, ensuring uniqueness for the given boundary values.
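Binet's formula can be checked against the recurrence directly; a small sketch (rounding absorbs floating-point error, which is valid for moderate n):

```python
from math import sqrt

def binet(n):
    """Closed form built from the roots of r^2 - r - 1 = 0."""
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return round((phi**n - psi**n) / sqrt(5))

# Compare against the recurrence F_n = F_{n-1} + F_{n-2}
fib = [0, 1]
for _ in range(2, 11):
    fib.append(fib[-1] + fib[-2])
print([binet(n) for n in range(11)] == fib)  # True
```

Since |\hat{\phi}| < 1, the second term shrinks rapidly, which is why rounding \phi^n / \sqrt{5} alone also yields F_n for n \geq 1.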
When the characteristic equation has repeated roots, the solution form must account for multiplicity to ensure linear independence. Suppose a root r has multiplicity m; then the corresponding terms are (A_0 + A_1 n + A_2 n^2 + \cdots + A_{m-1} n^{m-1}) r^n. For a full derivation, consider a second-order recurrence with repeated root r, so the characteristic equation is (s - r)^2 = s^2 - 2r s + r^2 = 0. Assume a solution of the form a_n = (A + B n) r^n. Substitute into the recurrence a_n = 2r a_{n-1} - r^2 a_{n-2}: (A + B n) r^n = 2r (A + B (n-1)) r^{n-1} - r^2 (A + B (n-2)) r^{n-2}. Simplify the right-hand side: 2r \cdot (A + B (n-1)) r^{n-1} = (2A + 2B(n-1)) r^n, and -r^2 \cdot (A + B (n-2)) r^{n-2} = -(A + B(n-2)) r^n. Combining: (2A + 2B(n-1) - A - B(n-2)) r^n = (A + 2Bn - 2B - Bn + 2B) r^n = (A + Bn) r^n, which matches the left-hand side, confirming that the assumed form satisfies the recurrence. For higher multiplicity m, the terms up to n^{m-1} r^n are verified similarly, by differentiating the geometric solution with respect to r or by induction on the multiplicity. The coefficients are again determined by initial conditions, forming a Vandermonde-like system.
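The repeated-root derivation can be confirmed numerically for arbitrary constants; a sketch with the hypothetical values r = 0.5, A = 3, B = -2:

```python
def check_repeated_root(r, A, B, n_max=10):
    """Verify that a_n = (A + B*n) * r^n satisfies a_n = 2r*a_{n-1} - r^2*a_{n-2}."""
    a = lambda n: (A + B * n) * r**n
    return all(abs(a(n) - (2 * r * a(n - 1) - r**2 * a(n - 2))) < 1e-9
               for n in range(2, n_max))

print(check_repeated_root(0.5, 3.0, -2.0))  # True
```

Any choice of A and B passes the check, reflecting that (A + Bn) r^n spans the full two-dimensional solution space of the repeated-root case.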

Linear Non-Homogeneous with Constant Coefficients

A linear non-homogeneous recurrence relation with constant coefficients takes the form a_n = c_1 a_{n-1} + c_2 a_{n-2} + \dots + c_k a_{n-k} + g(n), where the c_i are constants and g(n) is the non-homogeneous term. The general solution is the sum of the general solution to the corresponding homogeneous equation and a particular solution to the full non-homogeneous relation. The method of undetermined coefficients provides a systematic way to find a particular solution when g(n) is a polynomial, exponential, or sinusoidal function (or a product thereof). For a polynomial g(n) of degree m, assume a particular solution a_n^{(p)} that is also a polynomial of degree m, unless 1 is a root of the characteristic equation of multiplicity s, in which case multiply by n^s and use degree m + s. Similarly, for g(n) = p(n) r^n where p(n) is a polynomial, assume a_n^{(p)} = n^s q(n) r^n with \deg q = \deg p and s the multiplicity of r as a characteristic root; for sinusoids like g(n) = \sin(\theta n) or \cos(\theta n), assume a_n^{(p)} = A \cos(\theta n) + B \sin(\theta n), adjusting with n^s if e^{i\theta} is a characteristic root. Consider the first-order example a_n - a_{n-1} = n. The homogeneous solution is a_n^{(h)} = A, from the characteristic root 1. For the particular solution, since g(n) = n has degree 1 and 1 is a root of multiplicity 1, assume a_n^{(p)} = b n^2 + c n. Substituting yields b n^2 + c n - b (n-1)^2 - c (n-1) = n, which simplifies to 2 b n + (c - b) = n. Thus 2b = 1, so b = \frac{1}{2}, and c - b = 0, so c = \frac{1}{2}. The particular solution is a_n^{(p)} = \frac{1}{2} n^2 + \frac{1}{2} n, and the general solution is a_n = A + \frac{1}{2} n^2 + \frac{1}{2} n. For more general g(n), variation of parameters uses the discrete analog of the Wronskian, called the Casoratian, to construct the particular solution.
For a second-order relation, if y_1(n) and y_2(n) are independent homogeneous solutions with Casoratian W(n) = y_1(n) y_2(n+1) - y_1(n+1) y_2(n) \neq 0, the particular solution is a_n^{(p)} = u_1(n) y_1(n) + u_2(n) y_2(n), where u_1(n) = -\sum_{k=0}^{n-1} \frac{y_2(k+1) g(k+1)}{W(k)} and u_2(n) = \sum_{k=0}^{n-1} \frac{y_1(k+1) g(k+1)}{W(k)} (adjusted for initial conditions). This method applies broadly but requires solving the homogeneous case first. As an example of a driven harmonic recurrence, consider a_n - 2 a_{n-1} + a_{n-2} = \sin n. The characteristic equation r^2 - 2r + 1 = 0 has the root 1 with multiplicity 2, so the homogeneous solution is a_n^{(h)} = (A + B n) \cdot 1^n. Since \sin n corresponds to the roots e^{\pm i}, which differ from 1, assume a_n^{(p)} = C \cos n + D \sin n. Substituting into the recurrence gives C(\cos n - 2 \cos(n-1) + \cos(n-2)) + D(\sin n - 2 \sin(n-1) + \sin(n-2)) = \sin n. Using trigonometric identities, the left-hand side simplifies to -2 C (1 - \cos 1) \cos(n - 1) - 2 D (1 - \cos 1) \sin(n - 1), and equating coefficients against \sin n = \cos 1 \sin(n-1) + \sin 1 \cos(n-1), with 2(1 - \cos 1) = 4 \sin^2(1/2), yields C = -\frac{\sin 1}{4 \sin^2 (1/2)} and D = -\frac{\cos 1}{4 \sin^2 (1/2)}. The full general solution is a_n = (A + B n) + C \cos n + D \sin n.
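The particular solution found for a_n - a_{n-1} = n can be verified term by term; a minimal sketch:

```python
def particular(n):
    """Particular solution a_n = n^2/2 + n/2 for the recurrence a_n - a_{n-1} = n."""
    return n * n / 2 + n / 2

# The first difference of consecutive terms recovers g(n) = n exactly
print(all(particular(n) - particular(n - 1) == n for n in range(1, 20)))  # True
```

Adding any constant A to `particular` still satisfies the relation, since A is the homogeneous solution for the characteristic root 1.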

First-Order Nonlinear and Rational Equations

First-order nonlinear recurrence relations take the form a_n = f(a_{n-1}), where f is a nonlinear function. Exact solutions are rare for arbitrary f, but certain classes admit closed forms through substitutions or telescoping sums. A separable-type form arises when the relation can be transformed so that a known function of consecutive terms telescopes, analogous to separation of variables in the continuous case: if the recurrence permits a reduction to g(a_n) - g(a_{n-1}) = h(n), where g and h are known functions, the solution is g(a_n) = g(a_0) + \sum_{k=1}^n h(k), provided the sum has a closed form. This approach is effective for recurrences whose nonlinearity permits such a reduction, prioritizing cases with explicit closed-form sums. Rational difference equations represent a key solvable subclass, typically expressed as a_n = \frac{p a_{n-1} + q}{r a_{n-1} + s}, where p, q, r, s are constants (or functions of n). This linear fractional form can be linearized by embedding it into a matrix recurrence. Specifically, the relation is equivalent to \begin{pmatrix} a_n \\ 1 \end{pmatrix} = \frac{1}{r a_{n-1} + s} \begin{pmatrix} p & q \\ r & s \end{pmatrix} \begin{pmatrix} a_{n-1} \\ 1 \end{pmatrix}. Ignoring the scalar factor (which normalizes on the projective line), the solution involves computing the n-th power of the matrix M = \begin{pmatrix} p & q \\ r & s \end{pmatrix}, yielding a_n = \frac{ m_{11}^{(n)} a_0 + m_{12}^{(n)} }{ m_{21}^{(n)} a_0 + m_{22}^{(n)} }, where m_{ij}^{(n)} are entries of M^n. The matrix power can be found using diagonalization if M is diagonalizable, or the Jordan form otherwise, providing an exact closed form when the eigenvalues are known. This method extends to variable coefficients by taking products of the per-step matrices. A prominent example is the Riccati-type rational equation a_n = \frac{a_{n-1}^2 + b}{c a_{n-1} + d}, which arises in optimization and control theory as a scalar case of the discrete Riccati equation.
To solve it, identify a fixed point \alpha satisfying \alpha = \frac{\alpha^2 + b}{c \alpha + d}, or use any known particular solution. The substitution b_n = \frac{1}{a_n - \alpha} transforms the recurrence into a linear first-order form b_n = k b_{n-1} + m, where k and m are constants derived from the coefficients, solvable by standard linear methods. The inverse substitution then recovers a_n. This Riccati transformation parallels the continuous case and enables exact solutions whenever the linear form is explicit. Iterative analysis of general first-order nonlinear recurrences involves computing fixed points of f, solutions to a = f(a), which indicate equilibria; convergence to a fixed point is governed by the contraction condition |f'(a)| < 1 there, though stability is analyzed separately. For rational cases without closed forms, numerical iteration a_n = f(a_{n-1}) from an initial a_0 suffices, often converging quickly for contractive f. A Bernoulli-type analog provides another solvable class. In the continuous setting, the substitution v = y^{1-k} linearizes the Bernoulli equation; the closest directly solvable discrete counterpart is the multiplicative form a_n = p_n a_{n-1}^k with k \neq 1, where (for positive terms) the substitution v_n = \ln a_n yields the linear recurrence v_n = k v_{n-1} + \ln p_n, solvable explicitly and then inverted via a_n = e^{v_n}.
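The matrix-power linearization of the linear fractional form can be sketched as follows; the coefficients p=1, q=2, r=1, s=1 are an arbitrary illustration (this particular recurrence converges to \sqrt{2}):

```python
def mobius_solve(p, q, r, s, a0, n):
    """Solve a_n = (p*a_{n-1} + q) / (r*a_{n-1} + s) via powers of M = [[p, q], [r, s]]."""
    def matmul(X, Y):
        return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
                [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]
    Mn = [[1, 0], [0, 1]]  # identity, then repeated multiplication gives M^n
    for _ in range(n):
        Mn = matmul([[p, q], [r, s]], Mn)
    return (Mn[0][0] * a0 + Mn[0][1]) / (Mn[1][0] * a0 + Mn[1][1])

# Cross-check against direct iteration of a_n = (a_{n-1} + 2) / (a_{n-1} + 1)
a = 1.0
for _ in range(8):
    a = (a + 2) / (a + 1)
print(abs(mobius_solve(1, 2, 1, 1, 1.0, 8) - a) < 1e-9)  # True
```

The agreement reflects that composing Möbius maps corresponds exactly to multiplying their coefficient matrices.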

Matrix and Higher-Order Methods

Higher-order linear recurrence relations can be reformulated using matrix methods to leverage linear algebra techniques for solution. For a k-th order linear homogeneous recurrence a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_k a_{n-k}, the companion matrix A is the k \times k matrix A = \begin{pmatrix} c_1 & c_2 & \cdots & c_{k-1} & c_k \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}. The state vector \mathbf{v}_n = \begin{pmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_{n-k+1} \end{pmatrix} satisfies \mathbf{v}_n = A \mathbf{v}_{n-1}, leading to \mathbf{v}_n = A^{n-k+1} \mathbf{v}_{k-1}, where \mathbf{v}_{k-1} = (a_{k-1}, \dots, a_0)^T collects the initial conditions. This matrix representation transforms the scalar recurrence into a vector iteration, enabling efficient computation via matrix exponentiation. A classic example is the Fibonacci sequence, defined by F_n = F_{n-1} + F_{n-2} with F_0 = 0, F_1 = 1. The companion matrix is A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}, and A^n = \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix}, so F_n is the off-diagonal entry of A^n and the state vector is \mathbf{v}_n = \begin{pmatrix} F_n \\ F_{n-1} \end{pmatrix} = A^{n-1} \begin{pmatrix} 1 \\ 0 \end{pmatrix} for n \geq 1. This formulation allows rapid calculation of terms for large n using fast exponentiation algorithms. To obtain closed forms, matrix powers A^n can be computed using the Jordan canonical form when A is not diagonalizable. If A = P J P^{-1} where J is the Jordan form (block diagonal with Jordan blocks for the eigenvalues), then A^n = P J^n P^{-1}, and J^n has explicit binomial expansions on the superdiagonals. For diagonalizable A (distinct eigenvalues, linking to the characteristic roots from the homogeneous case), J = D is diagonal, simplifying A^n = P D^n P^{-1} with entries \lambda_i^n. The components of the iterated state vector then yield a_n. Generating functions provide another matrix-compatible approach for linear recurrences.
The ordinary generating function G(x) = \sum_{n=0}^\infty a_n x^n of a k-th order linear homogeneous recurrence with constant coefficients is a rational function G(x) = \frac{P(x)}{Q(x)}, where Q(x) = 1 - c_1 x - \cdots - c_k x^k is the reciprocal of the characteristic polynomial and P(x) is a polynomial of degree less than k determined by the initial conditions. Multiplying the recurrence by x^n and summing over n eliminates the relation, producing this rational form directly. For non-homogeneous linear recurrences a_n = c_1 a_{n-1} + \cdots + c_k a_{n-k} + f(n), the Z-transform offers a discrete analog of the Laplace transform. The Z-transform Z\{a_n\} = \sum_{n=0}^\infty a_n z^{-n} converts the recurrence into an algebraic equation; the response to the forcing term f(n) is a convolution with the system's impulse response in the time domain, which becomes a product in the Z-domain. Solving for Z\{a_n\} and inverting via partial fractions or residues yields the solution, handling forcing functions like polynomials or exponentials efficiently.
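The fast-exponentiation route to Fibonacci numbers via the companion matrix can be sketched as:

```python
def mat_mult(X, Y):
    """2x2 integer matrix product."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def mat_pow(M, n):
    """Binary exponentiation: O(log n) matrix multiplications."""
    R = [[1, 0], [0, 1]]
    while n:
        if n & 1:
            R = mat_mult(R, M)
        M = mat_mult(M, M)
        n >>= 1
    return R

def fib(n):
    """F_n is the off-diagonal entry of [[1, 1], [1, 0]]^n."""
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

print(fib(10), fib(50))  # 55 12586269025
```

With Python's arbitrary-precision integers this computes exact values of F_n for very large n in logarithmic time.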

Stability Analysis

Linear Higher-Order Stability

For linear higher-order homogeneous recurrence relations with constant coefficients, asymptotic stability is determined by the locations of the roots of the characteristic equation relative to the unit circle in the complex plane. A solution sequence \{a_n\} converges to zero as n \to \infty for all initial conditions if and only if every characteristic root r_i satisfies |r_i| < 1. This condition ensures that each term in the general solution, of the form \sum c_i r_i^n (or including polynomial factors for repeated roots), decays exponentially. If the root with the largest magnitude, known as the dominant root, has |r| > 1, the sequence diverges exponentially, with the long-term growth rate governed by this dominant term, overwhelming contributions from roots with smaller magnitudes. Conversely, the sequence is bounded if all roots satisfy |r_i| \leq 1 and any roots with |r_i| = 1 have algebraic multiplicity one; higher multiplicity for unit-magnitude roots introduces polynomial growth factors like n^k r^n (with k \geq 1), rendering the sequence unbounded unless the corresponding coefficients vanish. For example, consider the second-order recurrence a_n = 1.5 a_{n-1} - 0.5 a_{n-2}. The characteristic equation is r^2 - 1.5r + 0.5 = 0, with roots r = 1 and r = 0.5. The general solution is a_n = A \cdot 1^n + B \cdot (0.5)^n = A + B (0.5)^n, which remains bounded for arbitrary initial conditions but converges to zero only if A = 0. To assess whether all roots lie inside the unit circle without solving the characteristic equation explicitly, the Jury test offers an algebraic procedure. Developed for discrete-time systems, it constructs a tabular array from the polynomial coefficients and verifies stability through a series of inequalities on the table entries, equivalent to sign conditions on determinants of certain matrices derived from the coefficients. This method is particularly useful for higher-order recurrences where root-finding is computationally intensive.
Stability can be sensitive to perturbations in the recurrence coefficients, as small changes may shift root magnitudes across the unit circle, altering convergence behavior. The sensitivity of a root r to a coefficient \alpha_j follows from implicit differentiation of the characteristic polynomial p: \frac{\partial r}{\partial \alpha_j} = -\frac{\partial p/\partial \alpha_j}{p'(r)}, evaluated at the root; the effect is amplified near multiple roots (where p'(r) \to 0) or when roots cluster near the unit circle. Such sensitivity underscores the need for robust coefficient estimation in applications involving noisy data.
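For second-order recurrences, the Jury conditions reduce to three inequalities that can be checked without root-finding; a sketch of the standard degree-2 criterion:

```python
def second_order_stable(c1, c2):
    """Jury test for a_n = c1*a_{n-1} + c2*a_{n-2}: both roots of
    z^2 - c1*z - c2 lie strictly inside the unit circle iff all three hold."""
    return (1 - c1 - c2 > 0) and (1 + c1 - c2 > 0) and (abs(c2) < 1)

print(second_order_stable(1.5, -0.5))  # False: a root sits exactly at z = 1
print(second_order_stable(0.5, 0.2))   # True: roots approx. 0.76 and -0.26
```

The first two conditions are p(1) > 0 and p(-1) > 0 for the characteristic polynomial p, and the third bounds the product of the roots.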

Matrix Recurrence Stability

In the context of linear recurrence relations formulated in matrix form, such as \mathbf{x}_{n+1} = A \mathbf{x}_n where A is a constant matrix and \mathbf{x}_n is a state vector, stability refers to the asymptotic behavior of solutions originating from initial conditions \mathbf{x}_0. The zero solution \mathbf{x}_n = \mathbf{0} is asymptotically stable if \mathbf{x}_n \to \mathbf{0} as n \to \infty for every \mathbf{x}_0. This property holds if and only if the spectral radius \rho(A), defined as the maximum modulus of the eigenvalues of A, satisfies \rho(A) < 1. A matrix A is termed Schur stable when all its eigenvalues lie strictly inside the unit disk in the complex plane, i.e., |\lambda_i| < 1 for every eigenvalue \lambda_i of A. This condition is equivalent to asymptotic stability of the recurrence. Schur stability can be verified without explicitly computing eigenvalues through the discrete-time Lyapunov equation: there exists a positive definite matrix P > 0 such that A^T P A - P = -Q for some positive definite Q > 0. Solving this equation confirms stability, as the existence of such P and Q guarantees that the Lyapunov function V(\mathbf{x}) = \mathbf{x}^T P \mathbf{x} decreases along trajectories. An illustrative application arises in the Leslie matrix model for age-structured populations, where the state vector represents age-class abundances and A is the Leslie matrix incorporating survival and fecundity rates. The population trajectory declines to extinction (the stable zero equilibrium) if the dominant eigenvalue of A—the largest in modulus, which is real and positive by the Perron-Frobenius theorem—has magnitude less than 1. For instance, if fecundity rates are low relative to mortality, \rho(A) < 1 implies eventual population collapse. Norm-based analysis provides quantitative bounds on the decay rate. For any consistent matrix norm \|\cdot\| (submultiplicative, satisfying \|AB\| \leq \|A\| \|B\|), if \rho(A) < 1, then \|A^n\| \to 0 as n \to \infty. Moreover, the Gelfand formula gives \rho(A) = \lim_{n \to \infty} \|A^n\|^{1/n}, ensuring geometric decay governed by \rho(A).
This convergence holds uniformly, with the rate approaching \rho(A) for large n. When A is not diagonalizable, its Jordan canonical form reveals the precise asymptotic behavior. Decompose A = P J P^{-1}, where J consists of Jordan blocks. For a block of size m corresponding to eigenvalue \lambda with |\lambda| < 1, the nth power is J^n = \sum_{k=0}^{m-1} \binom{n}{k} \lambda^{n-k} N^k, where N is the nilpotent matrix with 1's on the first superdiagonal. Thus, A^n = P J^n P^{-1} decays as O(n^{m-1} |\lambda|^n), introducing a polynomial prefactor n^{m-1} for non-semisimple eigenvalues, though the exponential decay \rho(A)^n still dominates overall stability. Larger block sizes slow the transient but do not alter asymptotic convergence to zero.
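The spectral-radius criterion above can be checked numerically. The following is a minimal sketch with a made-up 2×2 matrix; for a 2×2 system the eigenvalues follow directly from the trace and determinant, so no linear algebra library is needed:

```python
import math

# A made-up 2x2 update matrix for x_{n+1} = A x_n (values chosen for illustration).
A = [[0.5, 0.3],
     [0.1, 0.4]]

# For a 2x2 matrix the eigenvalues solve lambda^2 - tr(A)*lambda + det(A) = 0.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = tr * tr - 4.0 * det          # positive here, so both eigenvalues are real
rho = max(abs((tr + math.sqrt(disc)) / 2), abs((tr - math.sqrt(disc)) / 2))
print(rho < 1)                      # True: A is Schur stable

# Iterate the recurrence; the state decays geometrically, roughly like rho^n.
x = [1.0, -2.0]
for _ in range(200):
    x = [A[0][0] * x[0] + A[0][1] * x[1],
         A[1][0] * x[0] + A[1][1] * x[1]]
print(math.hypot(x[0], x[1]))       # essentially zero after 200 steps
```

Here \rho(A) \approx 0.63, so after 200 iterations the state norm is far below machine-visible scales, consistent with the geometric decay bound.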

Nonlinear First-Order Stability

In first-order nonlinear recurrence relations of the form x_{n+1} = f(x_n), where f is a differentiable function, fixed points are solutions to the equation a = f(a). These points represent equilibria where the sequence remains constant if started exactly at a. The local stability of such a fixed point is determined by the derivative f'(a): the fixed point is attracting (stable) if |f'(a)| < 1, meaning nearby trajectories converge to a, and repelling (unstable) if |f'(a)| > 1, meaning nearby trajectories diverge from a; the borderline case |f'(a)| = 1 requires further analysis. A canonical example is the logistic map, x_{n+1} = r x_n (1 - x_n), which models population growth with parameter r > 0. The fixed points are a = 0 and a = 1 - 1/r (for r > 1). The point a = 0 is stable for 0 < r < 1, while a = 1 - 1/r is stable for 1 < r < 3, where |f'(a)| = |2 - r| < 1. For r > 3, the nonzero fixed point becomes unstable, leading to more complex dynamics. As r increases beyond 3, the map undergoes a period-doubling bifurcation, where the fixed point loses stability and a period-2 cycle emerges. This process cascades through successive period doublings, culminating in the onset of chaos at the accumulation point r \approx 3.57, known as the Feigenbaum point, beyond which the behavior becomes aperiodic and sensitive to initial conditions. Cobweb plots provide a graphical tool to visualize convergence toward or divergence from fixed points in such recurrences. Constructed by plotting y = f(x) alongside the line y = x on the same axes, the plot traces iterations starting from an initial x_0 by alternating vertical lines to the curve y = f(x) and horizontal lines to the line y = x; spiraling inward indicates attraction to a fixed point, while spiraling outward signals repulsion. To quantify chaotic behavior in nonlinear recurrences, the Lyapunov exponent measures the average rate of divergence of nearby trajectories.
For a one-dimensional map, it is defined as \lambda = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \log |f'(x_i)|, where x_i follows the orbit of the map; if \lambda > 0, the system exhibits chaos with exponential separation of nearby trajectories, whereas \lambda < 0 implies convergence to a stable state.
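Both the derivative criterion and the Lyapunov exponent can be illustrated on the logistic map. The sketch below estimates \lambda numerically along an orbit; the burn-in length, iteration count, and starting point are arbitrary choices, and a tiny floor inside the logarithm guards against an orbit point landing exactly where f' vanishes:

```python
import math

def logistic(x, r):
    return r * x * (1.0 - x)

def dlogistic(x, r):
    return r * (1.0 - 2.0 * x)       # f'(x) for the logistic map

def lyapunov(r, x0=0.2, n=50_000, burn=1_000):
    """Numerically estimate the Lyapunov exponent along the orbit of x0."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = logistic(x, r)
    total = 0.0
    for _ in range(n):
        total += math.log(max(abs(dlogistic(x, r)), 1e-12))  # floor avoids log(0)
        x = logistic(x, r)
    return total / n

# Stable fixed point a = 1 - 1/r for 1 < r < 3: |f'(a)| = |2 - r| < 1.
r = 2.5
a = 1.0 - 1.0 / r
print(abs(dlogistic(a, r)) < 1)   # True: |f'(a)| = 0.5
print(lyapunov(2.5))              # negative: orbit converges to the fixed point
print(lyapunov(4.0))              # positive: chaotic regime
```

At r = 2.5 the estimate settles near \log 0.5 \approx -0.69, the log-derivative at the attracting fixed point, while at r = 4 it approaches the known value \log 2.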

Connections to Other Areas

Relation to Differential Equations

Recurrence relations arise naturally as discrete approximations to continuous differential equations through numerical discretization methods, bridging discrete and continuous mathematical modeling. A primary example is the Euler method, which approximates the solution of a first-order ordinary differential equation \frac{da}{dn} = f(a) by replacing the derivative with a forward difference: \frac{a_{n+1} - a_n}{h} \approx f(a_n), yielding the explicit recurrence a_{n+1} = a_n + h f(a_n), where h is the step size. This discretization transforms the continuous evolution into a sequence of iterative updates, preserving the qualitative behavior of the original system for sufficiently small h. Higher-order differential equations can similarly be reduced to systems of first-order recurrences via this approach, highlighting recurrences as computational proxies for differential dynamics. The Z-transform further underscores this connection by serving, for linear difference equations, a purpose analogous to that of the Laplace transform for differential equations. For linear recurrence relations with constant coefficients, the Z-transform converts difference equations into algebraic forms in the z-domain, much like the Laplace transform algebraizes differential equations in the s-domain, facilitating solution through polynomial or rational manipulations followed by inverse transformation. This parallelism extends to stability analysis: discrete systems are asymptotically stable if all characteristic roots lie strictly inside the unit circle (|z| < 1) in the z-plane, mirroring the requirement for continuous systems that all poles reside in the open left half-plane (\Re(s) < 0) of the s-plane. Such analogies arise from mappings like the exponential substitution z = e^{s h}, which relates discrete sampling to continuous time evolution. Consider the undamped harmonic oscillator differential equation y'' + y = 0, whose characteristic roots are \pm i and whose exact solutions are oscillatory.
Applying the central finite difference scheme for the second derivative, \frac{y_{n+1} - 2y_n + y_{n-1}}{h^2} + y_n = 0, results in the second-order linear recurrence y_{n+1} = (2 - h^2) y_n - y_{n-1}. The characteristic equation r^2 - (2 - h^2)r + 1 = 0 has roots r = \frac{2 - h^2 \pm \sqrt{(2 - h^2)^2 - 4}}{2}, which for small h approximate e^{\pm i h}, matching the continuous solution factors e^{\pm i t} sampled at time intervals of h. As h \to 0, the recurrence solutions converge pointwise to the exact differential equation solutions, with the discrete oscillations approaching the continuous sinusoidal behavior, thus validating recurrences as limiting approximations of differential systems.
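Both discretizations above can be exercised directly. This is a minimal sketch with arbitrary step size and endpoint: forward Euler is applied to dy/dt = -y (exact solution e^{-t}), and the central-difference recurrence to y'' + y = 0 with y(0) = 1, y'(0) = 0 (exact solution \cos t):

```python
import math

# Forward Euler, y_{n+1} = y_n + h * f(y_n), for dy/dt = -y.
h, steps = 0.001, 1000              # integrate from t = 0 to t = 1
y = 1.0
for _ in range(steps):
    y = y + h * (-y)                # closed form: y_n = (1 - h)^n
print(abs(y - math.exp(-1.0)))      # small O(h) error

# Central-difference recurrence y_{n+1} = (2 - h^2) y_n - y_{n-1} for y'' + y = 0.
y_prev = 1.0                        # y_0 = cos(0)
y_curr = 1.0 - h * h / 2.0          # y_1 ~ cos(h) to second order
for _ in range(steps - 1):
    y_prev, y_curr = y_curr, (2.0 - h * h) * y_curr - y_prev
print(abs(y_curr - math.cos(1.0)))  # much smaller O(h^2) error
```

The comparison also illustrates the order of accuracy: the first-order Euler error at t = 1 is on the order of h, while the central-difference recurrence tracks \cos t to order h^2.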

Generating Functions and Closed Forms

Generating functions provide a powerful algebraic method for solving recurrence relations by transforming sequences into formal power series, allowing the recurrence to be expressed as a functional equation that can often be solved explicitly. The ordinary generating function for a sequence \{a_n\}_{n=0}^\infty is defined as G(x) = \sum_{n=0}^\infty a_n x^n. For a linear homogeneous recurrence relation with constant coefficients, such as a_n = c_1 a_{n-1} + \cdots + c_k a_{n-k} for n \geq k, multiplying the relation by x^n and summing over n \geq k yields an equation involving G(x) and initial terms, typically of the form G(x)(1 - c_1 x - \cdots - c_k x^k) = P(x), where P(x) is a polynomial determined by the initial conditions. Solving for G(x) results in a rational function whose denominator is 1 - c_1 x - \cdots - c_k x^k. To extract the coefficients a_n, partial fraction decomposition of G(x) is applied, expressing it as a sum of terms of the form \frac{A}{(1 - r x)^m}, where the r are the roots of the characteristic equation. The coefficients then follow from the binomial series generalized to negative exponents, giving a_n = \sum A \binom{n + m - 1}{m-1} r^n, or alternatively using polar form for complex roots. A canonical example is the Fibonacci sequence, defined by F_0 = 0, F_1 = 1, and F_n = F_{n-1} + F_{n-2} for n \geq 2. Its generating function is G(x) = \sum_{n=0}^\infty F_n x^n = \frac{x}{1 - x - x^2}. Decomposing via partial fractions, with \phi = \frac{1 + \sqrt{5}}{2} and \hat{\phi} = \frac{1 - \sqrt{5}}{2}, yields G(x) = \frac{1}{\sqrt{5}} \left( \frac{1}{1 - \phi x} - \frac{1}{1 - \hat{\phi} x} \right), leading to Binet's closed-form formula F_n = \frac{\phi^n - \hat{\phi}^n}{\sqrt{5}}. For recurrences involving factorials or permutations, exponential generating functions are more appropriate, defined as A(x) = \sum_{n=0}^\infty a_n \frac{x^n}{n!}. These transform linear recurrences into differential or algebraic equations that are easier to solve when the sequence grows faster than exponentially, such as in combinatorial contexts like derangements or tree enumerations. The coefficients a_n are recovered via a_n = n!
[x^n] A(x), where [x^n] denotes the coefficient-extraction operator. Beyond exact closed forms, generating functions enable asymptotic estimation of a_n through singularity analysis, which examines the dominant singularities of G(x) on its circle of convergence. For rational generating functions, the singularity closest to the origin determines the growth rate; for instance, a pole of order k at x = \rho yields a_n \sim c \rho^{-n} n^{k-1} for some constant c, providing precise estimates without computing all terms. This method, rooted in analytic combinatorics, is particularly useful for large-n behavior in linear recurrences.
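Binet's closed form can be verified directly against the recurrence. The sketch below compares the two for the first several Fibonacci numbers; the range of n is capped because floating-point evaluation of \phi^n stays exact after rounding only while F_n fits comfortably in a double:

```python
import math

def fib_recurrence(n):
    """F_n from the recurrence F_n = F_{n-1} + F_{n-2}, F_0 = 0, F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_binet(n):
    """Binet's closed form, obtained from partial fractions of x / (1 - x - x^2)."""
    phi = (1 + math.sqrt(5)) / 2
    psi = (1 - math.sqrt(5)) / 2
    return round((phi ** n - psi ** n) / math.sqrt(5))

# The closed form and the recurrence agree term by term in this range.
for n in range(71):
    assert fib_binet(n) == fib_recurrence(n)
print(fib_recurrence(10))   # 55
```

The rounding step is what makes the floating-point evaluation exact here: the |\hat{\phi}^n / \sqrt{5}| correction is always below 1/2, so the nearest integer to \phi^n / \sqrt{5} is F_n.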

Applications

Computer Science and Algorithms

Recurrence relations play a fundamental role in computer science, particularly in the analysis of algorithms, where they model the time or space complexity of recursive procedures. Originating in the systematic study of algorithms during the 1970s, these relations were extensively explored by Donald Knuth in his seminal work The Art of Computer Programming, which formalized methods for solving recurrences arising from sorting, searching, and combinatorial problems. This approach shifted algorithm analysis from empirical testing to mathematical rigor, enabling precise asymptotic bounds. In time complexity analysis, recurrence relations capture the divide-and-conquer paradigm, where a problem of size n is decomposed into smaller subproblems. A classic example is merge sort, whose running time satisfies the recurrence T(n) = 2T\left(\frac{n}{2}\right) + \Theta(n), with T(1) = \Theta(1), reflecting the equal split into two subarrays and the linear-time merge step. This solves to T(n) = \Theta(n \log n) using the Master theorem, a tool for recurrences of the form T(n) = aT(n/b) + f(n) where a \geq 1, b > 1, providing three cases based on the growth of f(n) relative to n^{\log_b a}. The theorem, introduced in standard algorithms texts, simplifies analysis for balanced divide-and-conquer algorithms like binary search or variants. Dynamic programming leverages recurrence relations to optimize computations by storing intermediate results, avoiding redundant work in recursive formulations. For the Fibonacci sequence, defined by F(n) = F(n-1) + F(n-2) with F(0) = 0, F(1) = 1, a naive recursive implementation recomputes subproblems exponentially, yielding O(2^n) time. Memoization resolves this by caching results in a lookup table, reducing the complexity to O(n) time and space, as each F(k) for k \leq n is computed exactly once. This technique, foundational to dynamic programming, extends to optimization problems where recurrences define optimal substructure. More general divide-and-conquer recurrences, such as T(x) = \sum_{i=1}^k a_i T(b_i x + h_i(x)) + g(x) for x \geq x_0, arise from unbalanced partitions or subproblem sizes perturbed by small offsets.
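The memoization technique mentioned above can be sketched in a few lines; using the standard library's lru_cache as the lookup table is one implementation choice, not part of the recurrence itself:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """F(n) via the recurrence; the cache ensures each subproblem is solved once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Linear in n with memoization; the naive O(2^n) recursion is infeasible here.
print(fib(90))   # 2880067194370816120
```

Each F(k) for k \leq n is computed exactly once and then served from the cache, which is precisely the O(n)-time behavior described above.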
The Akra-Bazzi method generalizes the Master theorem, yielding the asymptotic solution T(x) = \Theta\left(x^p \left(1 + \int_1^x \frac{g(u)}{u^{p+1}} \, du\right)\right), where p solves \sum_{i=1}^k a_i b_i^p = 1. Developed in 1998, it applies to algorithms like median-finding or certain geometric computations, handling non-constant subproblem sizes provided the integral converges for polynomially bounded g(x). In graph algorithms, recurrence relations model dependencies in directed acyclic graphs (DAGs). For single-source shortest paths, topological ordering enables a linear-time dynamic programming approach: process vertices in topological order, relaxing edges to update distances via the recurrence d(v) = \min_{u \to v} (d(u) + w(u,v)), with d(s) = 0 for the source s. This computes single-source shortest paths in O(|V| + |E|) time, exploiting the acyclic structure to avoid cycles in relaxation. Such recurrences underpin efficient solutions in scheduling, dependency resolution, and network routing without feedback loops.
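The topological-order relaxation can be sketched as follows; the small graph, its edge weights, and the vertex names are invented for illustration:

```python
import math

# Single-source shortest paths on a DAG via the recurrence
# d(v) = min over edges (u, v) of d(u) + w(u, v), in topological order.
edges = {                       # adjacency list: u -> [(v, weight), ...]
    's': [('a', 2), ('b', 6)],
    'a': [('b', 3), ('c', 4)],
    'b': [('c', 1)],
    'c': [],
}
topo = ['s', 'a', 'b', 'c']     # a valid topological order of this DAG

dist = {v: math.inf for v in edges}
dist['s'] = 0                   # base case d(s) = 0
for u in topo:                  # each edge is relaxed exactly once
    for v, w in edges[u]:
        dist[v] = min(dist[v], dist[u] + w)

print(dist)   # {'s': 0, 'a': 2, 'b': 5, 'c': 6}
```

Because every edge is relaxed exactly once, the total work is O(|V| + |E|), and processing in topological order guarantees d(u) is final before any edge out of u is relaxed.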

Mathematical Biology and Population Models

In mathematical biology, recurrence relations provide essential tools for modeling population dynamics, particularly in discrete-time frameworks that align with the non-overlapping generations or seasonal cycles common in many species. These models capture how population sizes evolve over successive time steps, incorporating factors such as birth rates, survival probabilities, and density-dependent effects. By representing populations as vectors or scalars updated via matrix multiplications or nonlinear functions, recurrences enable predictions of growth, stability, and extinction risks in ecological systems. The Leslie matrix model exemplifies a linear recurrence for age-structured populations, where the vector \mathbf{n}_{t+1} of age-class abundances at time t+1 is given by \mathbf{n}_{t+1} = L \mathbf{n}_t, with L a non-negative matrix featuring age-specific fertilities on the first row and survival probabilities on the subdiagonal. Introduced by P. H. Leslie in 1945, this model projects population trajectories based on demographic rates, revealing long-term growth via the dominant eigenvalue of L, which determines stability and asymptotic behavior. Applications include forecasting wildlife populations, where matrix stability analysis via eigenvalue conditions assesses persistence under varying vital rates. Nonlinear recurrences extend these ideas to density-dependent growth, as in the Ricker model, a discrete analog of the logistic equation: N_{t+1} = N_t \exp\left(r \left(1 - \frac{N_t}{K}\right)\right), where N_t is the population size, r the intrinsic growth rate, and K the carrying capacity. Developed by W. E. Ricker in 1954 for fish stock-recruitment dynamics, it exhibits complex behaviors like cycles and chaos for r > 2, reflecting overcompensation in recruitment at high densities. This model highlights how discrete time can amplify oscillations beyond continuous counterparts, aiding in understanding boom-bust cycles in exploited populations.
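A minimal simulation of the Ricker recurrence shows both regimes; the initial population, growth rates, and carrying capacity below are chosen only for illustration:

```python
import math

def ricker(n0, r, K, steps):
    """Iterate the Ricker recurrence N_{t+1} = N_t * exp(r * (1 - N_t / K))."""
    traj = [n0]
    for _ in range(steps):
        n = traj[-1]
        traj.append(n * math.exp(r * (1.0 - n / K)))
    return traj

K = 100.0

# Low growth rate (0 < r < 2): the population settles at the carrying capacity.
stable = ricker(n0=10.0, r=0.5, K=K, steps=200)
print(abs(stable[-1] - K))        # essentially zero: convergence to K

# High growth rate (r > 2): overcompensation drives sustained oscillations.
wild = ricker(n0=10.0, r=2.6, K=K, steps=200)
print(min(wild[-20:]), max(wild[-20:]))   # boom-bust fluctuations around K
```

The equilibrium N = K is stable exactly when |1 - r| < 1, i.e. 0 < r < 2; beyond r = 2 the fixed point loses stability and the trajectory cycles between low and high abundances.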
Host-parasite interactions often rely on coupled nonlinear recurrences to model oscillatory cycles, such as in the Nicholson-Bailey host-parasitoid system: H_{t+1} = \lambda H_t \exp(-a P_t) and P_{t+1} = c H_t (1 - \exp(-a P_t)), where H_t and P_t are host and parasitoid abundances, \lambda the host reproductive rate, a the attack rate, and c the number of parasitoid offspring per host. Formulated by A. J. Nicholson and V. A. Bailey in 1935, this Poisson-based model predicts potential instability leading to diverging cycles or extinction, capturing search efficiency in the discrete generations typical of insect systems. Extensions incorporate aggregation or refuges to stabilize equilibria, informing biological control strategies. Stability in ecological recurrences is profoundly influenced by the Allee effect, where per capita growth declines at low densities, creating a critical threshold below which extinction occurs. In discrete models like modified Ricker or logistic forms, this manifests as bistability: populations above the threshold converge to a positive equilibrium, while those below decline to zero, with the threshold A satisfying f(A) = A, where f is the growth function. As analyzed by S. J. Schreiber in 2003, Allee effects can suppress chaotic fluctuations and promote persistence in constant environments but heighten extinction risk from demographic noise, as is evident in rare or mating-limited populations. Historically, adaptations of the Lotka-Volterra predator-prey equations of the 1920s, originally continuous differential models by A. J. Lotka and V. Volterra, emerged in discrete form to handle generational discreteness, inspiring nonlinear recurrences for cyclic dynamics in hosts and parasites. Lotka's 1920 extensions to organic systems laid the groundwork, with early discrete formulations in the mid-20th century adapting the coupled form x_{t+1} = x_t + r x_t - a x_t y_t, y_{t+1} = y_t - d y_t + b x_t y_t to reveal amplified oscillations.
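The Nicholson-Bailey equilibrium and its characteristic instability can be checked numerically. This sketch uses arbitrary parameter values; the interior equilibrium follows from \lambda e^{-aP^*} = 1, giving P^* = \ln\lambda / a and H^* = \lambda P^* / (c(\lambda - 1)):

```python
import math

# Nicholson-Bailey recurrence (illustrative parameter values):
#   H_{t+1} = lam * H_t * exp(-a * P_t)
#   P_{t+1} = c * H_t * (1 - exp(-a * P_t))
lam, a, c = 2.0, 0.1, 1.0

def step(H, P):
    escape = math.exp(-a * P)        # Poisson probability a host escapes attack
    return lam * H * escape, c * H * (1.0 - escape)

# Interior equilibrium.
P_star = math.log(lam) / a
H_star = lam * P_star / (c * (lam - 1.0))
H_next, P_next = step(H_star, P_star)
print(abs(H_next - H_star), abs(P_next - P_star))   # both ~0: a true fixed point

# A small perturbation spirals away from the equilibrium, the model's
# well-known instability, here measured by the maximum excursion of the host.
H, P = 1.01 * H_star, P_star
peak = 0.0
for _ in range(60):
    H, P = step(H, P)
    peak = max(peak, abs(H - H_star))
print(peak > 0.01 * H_star)   # True: oscillations grow rather than damp
```

Verifying the fixed point algebraically and then watching a 1% perturbation amplify mirrors the analytic result that the interior equilibrium is unstable for all \lambda > 1.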

Signal Processing and Economics

In signal processing, recurrence relations underpin the behavior of digital filters, especially infinite impulse response (IIR) filters, which implement linear constant-coefficient difference equations relating the current output to past outputs and inputs. These recurrences define the filter's response to an input signal, producing an impulse response that theoretically extends indefinitely. The Z-transform converts such recurrences into algebraic equations in the z-domain, enabling the computation of the transfer function H(z) = \frac{Y(z)}{X(z)}, where Y(z) and X(z) are the Z-transforms of the output and input sequences, respectively. This approach facilitates stability analysis via pole locations and filter design for applications like audio processing and communications. Autoregressive moving average (ARMA) models further integrate recurrences into time series analysis, representing stationary processes as the output of a recursive filter driven by white noise. A general ARMA(p, q) model follows the recurrence y_t - \sum_{i=1}^p \phi_i y_{t-i} = \epsilon_t + \sum_{j=1}^q \theta_j \epsilon_{t-j}, where \{\phi_i\} are autoregressive coefficients capturing dependence on past values, \{\theta_j\} are moving average coefficients modeling short-term noise correlations, and \epsilon_t is an uncorrelated noise term with zero mean and constant variance. This structure allows ARMA models to approximate the transfer functions of linear time-invariant systems, aiding in spectral estimation and forecasting tasks. The Z-transform of the ARMA recurrence, evaluated on the unit circle, yields the model's power spectral density, essential for understanding frequency-domain behavior. The systematic framework for ARMA modeling, including identification via autocorrelation patterns and estimation through maximum likelihood, was established by Box and Jenkins, revolutionizing practical applications in time series forecasting. In economics, recurrence relations model intertemporal dynamics in markets, notably through the cobweb model, which captures lagged supply adjustments leading to price fluctuations.
Developed in the 1930s, the model posits that the quantity supplied in period t+1 depends on the price in period t, while the price in t+1 clears the market based on that quantity, yielding the iterative relations q_{t+1} = S(p_t) and p_{t+1} = D^{-1}(q_{t+1}), where S and D denote the supply and demand functions. This discrete dynamical system generates price trajectories that converge to equilibrium, oscillate, or diverge depending on elasticities. Market stability requires the absolute value of the ratio of the supply slope to the demand slope to be less than 1; otherwise, oscillations amplify, illustrating potential cobweb cycles in agricultural commodities. The postwar decades marked a pivotal era in econometric analysis, with autoregressive models increasingly applied to decompose economic fluctuations and forecast variables like GDP and inflation, leveraging methods such as the Yule-Walker equations for parameter estimation from autocovariances. These efforts, amid postwar data abundance, integrated recurrences into structural models, enhancing understanding of serial dependence in macroeconomic series and paving the way for ARMA extensions.
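With linear supply and demand curves the cobweb iteration reduces to a first-order linear recurrence in the price, which makes the stability condition easy to demonstrate. The slopes and intercepts below are made up for illustration:

```python
# Cobweb model with linear curves:
#   supply:  q = s * p + s0     (quantity supplied from last period's price)
#   demand:  q = d0 - d * p     (inverse demand sets the market-clearing price)
# Substitution gives p_{t+1} = (d0 - s0 - s * p_t) / d, a linear recurrence
# with multiplier -(s / d): stable exactly when s / d < 1.

def cobweb(p0, s, s0, d, d0, steps):
    p = p0
    for _ in range(steps):
        q = s * p + s0          # producers respond to last period's price
        p = (d0 - q) / d        # price clears the market via inverse demand
    return p

# Flat supply relative to demand (s/d = 0.5): prices converge to equilibrium.
s, s0, d, d0 = 0.5, 2.0, 1.0, 10.0
p_star = (d0 - s0) / (s + d)    # equilibrium where supply meets demand
print(abs(cobweb(3.0, s, s0, d, d0, 100) - p_star))   # essentially zero

# Steep supply (s/d = 1.5): oscillations amplify away from equilibrium.
s = 1.5
p_star = (d0 - s0) / (s + d)
print(abs(cobweb(p_star + 0.1, s, s0, d, d0, 20) - p_star))   # large: unstable
```

Each iteration multiplies the deviation from p^* by -(s/d), so the sign alternates (the characteristic cobweb oscillation) while the magnitude shrinks or grows according to whether s/d is below or above 1.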

  73. [73]
    3. Generating Functions
    3.16. Use generating functions to solve the following recurrences: an=−an−1+6an−2for n>1 with a0=0 and a1=1;an=11an−2−6an−3for n>2 with a0=0 and a1=a2=1;an=3an ...
  74. [74]
    [PDF] Generating functions - Penn Math
    These are all simple roots (multiplicity 1) so there is a partial fraction expansion of the form. H(z) = c1. 1 − z/ρ1. + c2. 1 − z/ρ2. + c3. 1 − z/ρ3 . This ...
  75. [75]
    [PDF] AC.pdf - Analytic Combinatorics
    Singularities determine a function's coefficients in asymptotic form and lead to precise estimates for counting sequences. ... relations be- tween counting ...
  76. [76]
    [PDF] Generating Function Constructions
    The partial fraction decomposition gives Binet's formula: F(x) = 1. √. 5. ( 1. 1−φx. − 1. 1+ψx. ) =⇒ Fk = 1. √. 5. (φk − (−ψ)k) for the golden ratio φ = √. 5+1.
  77. [77]
    [PDF] Exponential Generating Functions and Recurrence Relations
    A sequence is called a solution of a recurrence relation if its terms satisfy the recurrence relation. 1. 2. 3. 2. 0. 1. 2.
  78. [78]
    [PDF] Asymptotic Enumeration Methods
    Asymptotic enumeration methods provide quantitative information about the rate of growth of functions that count combinatorial objects.
  79. [79]
    The Art of Computer Programming (TAOCP)
    This PDF includes the complete indexes of Volumes 1, 2, 3, 4A, and 4B, as well as the index to Volume 1 Fascicle 1. Registered owners of the earlier four-volume ...Missing: recurrences 1970s
  80. [80]
    [PDF] Lecture 18: Dynamic Programming I: Memoization, Fibonacci, Crazy ...
    This technique of remembering previously computed values is called memoization. Recursive Formulation of Algorithm: memo = { } fib(n): if n in memo: return memo ...
  81. [81]
    Shortest Paths - Algorithms, 4th Edition
    Jan 10, 2025 · By relaxing vertices in topological order, we can solve the single-source shortest-paths and longest-paths problems for edge-weighted DAGs in ...Missing: recurrence | Show results with:recurrence
  82. [82]
    ON THE USE OF MATRICES IN CERTAIN POPULATION ...
    This paper, 'ON THE USE OF MATRICES IN CERTAIN POPULATION MATHEMATICS', by P.H. Leslie, was published in Biometrika, Volume 33, Issue 3, November 1945.
  83. [83]
    Stock and Recruitment - Canadian Science Publishing
    ... 1954Stock and Recruitment · Article. Share on. Stock ... An explicit solution for calculating optimum spawning stock size from Ricker's stock recruitment model.
  84. [84]
    [PDF] Allee effects, extinctions, and chaotic transients in simple population ...
    The models exhibit four behaviors: persistence for all initial population densities, bistability in which a population persists for intermediate initial ...Missing: recurrence | Show results with:recurrence
  85. [85]
    [PDF] The z-Transform - Analog Devices
    Just as analog filters are designed using the Laplace transform, recursive digital filters are developed with a parallel technique called the z-transform.
  86. [86]
    [PDF] z-transforms and linear recursions
    Apply the z-transform method to solve x(n + 1)=(1 + i)x(n), x−1 = 1/(1 + i). 2. Use sympy.rsolve to solve the recursion. Industrial Math & Computation (MCS 472).
  87. [87]
    Understand AR, MA and ARMA models - GaussianWaves
    May 22, 2014 · AR, MA & ARMA models express the nature of transfer function of LTI system. Understand the basic idea behind those models & know their frequency responses.Lti System Model · Auto Regressive (ar) Models... · Auto Regressive Moving...<|separator|>
  88. [88]
    Autoregressive Moving Average Model - ScienceDirect.com
    The autoregressive moving average (ARMA) model is defined as a statistical model for time series analysis that consists of two polynomials: one for ...
  89. [89]
    [PDF] Box-Jenkins modelling - Rob J Hyndman
    May 25, 2001 · The Box-Jenkins approach to modelling ARIMA processes was described in a highly in- fluential book by statisticians George Box and Gwilym ...<|separator|>
  90. [90]
    Cobweb Theorem | The Quarterly Journal of Economics
    Summary of cobweb theorem: (1) continuous fluctuation, 263; (2) divergent fluctuation, 263; (3) Convergent fluctuation, 265.— Extension of the cobweb analysis ...
  91. [91]
    COBWEB THEORY, MARKET STABILITY, AND PRICE ...
    Cobweb theory appears in models of endoge- nous cycles in prices and production and empirical studies of agricultural phenomena such as the hog price cycle. In ...Missing: recurrence | Show results with:recurrence
  92. [92]
    [PDF] Twenty Years of Time Series Econometrics in Ten Pictures
    There was also an understanding that vector autoregressions, because they impose as little structure on the data as possible, cannot answer questions about.
  93. [93]
    The Past, Present, and Future of Macroeconomic Forecasting
    Here we introduce a few to help convey a feel for the breadth of modern time-series econometrics and fore- casting. The discussion is necessarily brief; for a ...