
Gradient descent

Gradient descent is an iterative optimization algorithm used to minimize a differentiable objective by repeatedly updating parameters in the direction opposite to the gradient of the objective at the current point, scaled by a learning rate that controls the step size. The method was first proposed by the French mathematician Augustin-Louis Cauchy in 1847 as a technique for solving systems of equations and minimizing quadratic functions, where each iteration applies a step proportional to the negative gradient, often termed the method of steepest descent. In modern applications, particularly in machine learning and deep learning, gradient descent has become the cornerstone for training models such as neural networks by minimizing loss functions over large datasets, with its stochastic variant enabling efficient computation on massive data. Key variants include batch gradient descent, which computes the gradient over the entire dataset for precise but computationally expensive updates; stochastic gradient descent (SGD), which uses individual training examples for faster, noisier progress suitable for online and large-scale learning; and mini-batch gradient descent, a compromise that processes small subsets of data to balance efficiency and stability, widely adopted in practice.

Fundamentals

Definition and Intuition

Gradient descent is an iterative optimization algorithm designed to minimize a differentiable objective by repeatedly updating parameters in the direction opposite to the gradient, which points toward the steepest local increase of the function. This approach relies solely on evaluations of the function and its first derivatives, making it computationally efficient for high-dimensional problems where higher-order information, such as Hessians, is impractical to compute. Intuitively, gradient descent can be likened to navigating down a foggy mountainside: at each step, one assesses the steepest downhill direction based on the slope immediately underfoot (the gradient) and takes a step in the opposite direction, gradually approaching the valley floor, which represents a local minimum. This analogy highlights the method's reliance on local information to make globally informed progress, though it may zigzag or slow down in rugged landscapes with flat regions or narrow valleys. In optimization problems, gradient descent serves to locate local minima of the objective function, a task central to fields like machine learning, where it tunes model parameters to minimize loss functions measuring prediction errors. Its simplicity and scalability have made it foundational for training neural networks and other data-driven models. The technique traces its origins to Augustin-Louis Cauchy, who in 1847 introduced it as a method for solving systems of equations by iteratively reducing residuals along descent directions. It was later generalized in the early twentieth century, notably by Jacques Hadamard in 1907, who applied similar iterative gradient-based steps to variational problems.

Mathematical Formulation

Gradient descent seeks to minimize an objective function f: \mathbb{R}^n \to \mathbb{R} that is differentiable, with the parameters \theta \in \mathbb{R}^n representing the variables to optimize. This formulation arises in unconstrained optimization problems where the goal is to find \theta^* = \arg\min_\theta f(\theta). The gradient of the function, denoted \nabla f(\theta), is the vector of partial derivatives: \nabla f(\theta) = \begin{pmatrix} \frac{\partial f}{\partial \theta_1}(\theta) \\ \vdots \\ \frac{\partial f}{\partial \theta_n}(\theta) \end{pmatrix}. This vector points in the direction of steepest ascent, so the method proceeds by moving in the opposite direction to reduce f. The core update rule of gradient descent is the iterative step \theta_{k+1} = \theta_k - \alpha_k \nabla f(\theta_k), where k indexes the iteration, \theta_0 is an initial parameter vector, and \alpha_k > 0 is the step size (also called the learning rate) at step k. In the basic form, \alpha_k is fixed as a constant \alpha, though more advanced schemes adjust it dynamically. Convergence of this method relies on certain assumptions about f. Specifically, f must be continuously differentiable, ensuring the gradient exists and is well-behaved everywhere in the domain. Additionally, the gradient \nabla f is assumed to be Lipschitz continuous with constant L > 0, meaning \| \nabla f(\theta) - \nabla f(\theta') \| \leq L \| \theta - \theta' \| for all \theta, \theta' \in \mathbb{R}^n. Under these conditions, if f is convex and \alpha_k \leq 1/L, the iterates \theta_k converge to a minimizer at a rate of O(1/k) in function value. To illustrate the formulation, consider the quadratic objective f(\theta) = \frac{1}{2} \theta^T A \theta - b^T \theta, where A \in \mathbb{R}^{n \times n} is symmetric positive definite and b \in \mathbb{R}^n. The gradient simplifies to \nabla f(\theta) = A \theta - b, and applying the update rule yields a sequence that solves the linear system A \theta = b in the limit as k \to \infty, provided 0 < \alpha < 2 / \lambda_{\max}(A), where \lambda_{\max}(A) is the largest eigenvalue of A. This example highlights how the method systematically reduces the function value toward the unique minimum \theta^* = A^{-1} b.
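To make this concrete, the following minimal sketch (using NumPy, with a small 2×2 symmetric positive definite system chosen purely for illustration) applies the update rule with a fixed step size below 2/\lambda_{\max}(A) and converges toward \theta^* = A^{-1} b.

```python
import numpy as np

# Illustrative symmetric positive definite system (values chosen for demonstration)
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

def grad(theta):
    """Gradient of f(theta) = 0.5 * theta^T A theta - b^T theta."""
    return A @ theta - b

alpha = 1.0 / np.linalg.eigvalsh(A).max()  # fixed step size, safely below 2 / lambda_max(A)
theta = np.zeros(2)                        # initial point theta_0

for k in range(500):
    theta = theta - alpha * grad(theta)    # gradient descent update

print(theta)                  # iterate after 500 steps
print(np.linalg.solve(A, b))  # exact minimizer theta* = A^{-1} b for comparison
```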

Standard Batch Gradient Descent

Algorithm Description

Batch gradient descent, also known as vanilla gradient descent, is an iterative optimization algorithm that minimizes an objective function by repeatedly computing the gradient using the entire training dataset and updating the parameters in the direction opposite to the gradient. The process begins with the selection of an initial parameter vector θ₀, which is often chosen randomly from a normal or uniform distribution to avoid poor starting points that could lead to suboptimal convergence, or set to zero for simplicity in certain convex problems. In each iteration, the algorithm evaluates the gradient of the loss function ∇f(θ) with respect to the parameters θ across all n training samples, ensuring a precise estimate of the direction of steepest descent for the full dataset. The parameters are then updated according to the rule θ ← θ - η ∇f(θ), where η is the learning rate, a positive scalar that controls the step size and must be carefully chosen to balance convergence speed and stability. This full-batch computation repeats until a convergence criterion is met, such as the norm of the gradient falling below a small tolerance ε (e.g., ||∇f(θ)|| < ε), indicating that the parameters are near a stationary point, or after a fixed number of iterations to prevent excessive computation. The following pseudocode outlines the core procedure:
Initialize parameters θ ← θ₀ (e.g., random or zero)
Set learning rate η > 0 and tolerance ε > 0
While ||∇f(θ)|| ≥ ε:
    Compute gradient ∇f(θ) = (1/n) Σ_{i=1}^n ∇f(θ; x_i, y_i) over entire dataset
    Update θ ← θ - η ∇f(θ)
Return θ
This structure highlights the algorithm's reliance on complete passes through the dataset in every iteration. Computationally, each iteration requires O(n d) operations, where n is the number of samples and d is the dimensionality of the parameter space, due to the sum over all data points for the gradient evaluation; this makes batch gradient descent suitable for small-to-medium datasets but inefficient for large-scale problems where memory and time constraints arise from full-batch processing. In practice, initialization strategies like random draws from a Gaussian distribution with mean zero and small variance help mitigate issues such as vanishing or exploding gradients in non-convex settings, while stopping criteria based on the gradient-norm tolerance (typically ε = 10^{-4} to 10^{-6}) or a maximum number of iterations (e.g., 1000) ensure termination without infinite loops, with the choice depending on the desired precision and computational budget.
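As an illustration, the pseudocode above can be translated into the following Python sketch; the function name, the toy least-squares objective, and the chosen hyperparameters are illustrative assumptions rather than part of the algorithm's standard statement.

```python
import numpy as np

def batch_gradient_descent(grad_f, theta0, eta=0.1, eps=1e-6, max_iters=1000):
    """Vanilla batch gradient descent: follow -grad until the gradient norm falls below eps."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iters):              # cap iterations to guarantee termination
        g = grad_f(theta)                   # full-dataset gradient at the current parameters
        if np.linalg.norm(g) < eps:         # convergence test on the gradient norm
            break
        theta = theta - eta * g             # descent step
    return theta

# Example: least-squares loss f(theta) = (1/2n) * ||X theta - y||^2 on toy data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + 0.01 * rng.normal(size=100)

grad_f = lambda theta: X.T @ (X @ theta - y) / len(y)   # full-batch gradient
print(batch_gradient_descent(grad_f, np.zeros(3), eta=0.5))
```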

Linear Systems Solution

Batch gradient descent applied to linear regression problems provides an iterative method for solving linear systems of the form A\theta = b, where A is an m \times n matrix with m \geq n, by minimizing the least-squares objective \frac{1}{2} \|A\theta - b\|^2_2. The gradient of this objective is \nabla f(\theta) = A^T (A\theta - b), and each iteration updates \theta_{k+1} = \theta_k - \alpha_k \nabla f(\theta_k), where \alpha_k is typically chosen via exact line search to minimize f along the descent direction, yielding \alpha_k = \frac{\nabla f(\theta_k)^T \nabla f(\theta_k)}{\nabla f(\theta_k)^T A^T A \nabla f(\theta_k)}. This process is equivalent to gradient descent on the quadratic f(\theta) = \frac{1}{2} \theta^T (A^T A) \theta - (A^T b)^T \theta, assuming A^T A is symmetric positive definite (which holds if A has full column rank). The minimizer \theta^* satisfies the normal equations (A^T A) \theta^* = A^T b, and the method converges linearly to this solution, with the error reduction factor bounded by \left( \frac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}} \right)^2, where \lambda_{\min} and \lambda_{\max} are the smallest and largest eigenvalues of A^T A. Geometrically, each iteration of steepest descent takes a step aligned with the negative gradient -\nabla f(\theta_k) = A^T r_k, where r_k = b - A \theta_k is the current residual; this is simpler than the A-orthogonal projections used in conjugate gradient methods but still reduces the residual progressively. The update at step k is therefore parallel to A^T r_k, and under exact line search successive gradients satisfy \nabla f(\theta_{k+1})^T \nabla f(\theta_k) = 0, ensuring orthogonality between the new descent direction and the previous one (and thus the previous update direction). For a fixed step size \alpha, convergence is guaranteed if 0 < \alpha < \frac{2}{\lambda_{\max}}, where \lambda_{\max} is the largest eigenvalue of A^T A; the optimal fixed step size that minimizes the worst-case convergence rate is \alpha = \frac{2}{\lambda_{\min} + \lambda_{\max}}. This choice ensures monotonic decrease in the objective for symmetric positive definite A^T A and achieves the tightest linear convergence bound among constant-step variants.
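The following sketch, assuming NumPy and a small overdetermined toy system invented for illustration, implements this steepest-descent iteration with the exact line-search step size given above and compares the result with the normal-equations solution.

```python
import numpy as np

def steepest_descent_least_squares(A, b, theta0, iters=200):
    """Steepest descent for min 0.5 * ||A theta - b||^2 with exact line search at each step."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        g = A.T @ (A @ theta - b)            # gradient of the least-squares objective
        denom = np.dot(A @ g, A @ g)         # g^T (A^T A) g
        if denom == 0.0:                     # gradient (numerically) zero: stop
            break
        alpha = np.dot(g, g) / denom         # exact line-search step size
        theta = theta - alpha * g
    return theta

# Small overdetermined toy system (values are illustrative)
A = np.array([[2.0, 0.0], [1.0, 3.0], [0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
theta = steepest_descent_least_squares(A, b, np.zeros(2))
print(theta)
print(np.linalg.lstsq(A, b, rcond=None)[0])  # normal-equations solution for comparison
```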

Stochastic and Mini-Batch Variants

Stochastic Gradient Descent

Stochastic gradient descent (SGD) approximates the gradient of the objective function f(\theta) = \frac{1}{n} \sum_{i=1}^n f_i(\theta) by using the gradient from a single randomly selected training example i_k at each iteration k. The parameter update is given by \theta_{k+1} = \theta_k - \alpha_k \nabla f_{i_k}(\theta_k), where \alpha_k > 0 is the learning rate. This approach, introduced as a stochastic approximation method, enables efficient optimization for large-scale problems by avoiding the need to compute the full gradient over the entire dataset. The stochastic gradient \nabla f_{i_k}(\theta_k) serves as an unbiased estimator of the true gradient \nabla f(\theta_k), satisfying \mathbb{E}[\nabla f_{i_k}(\theta_k)] = \nabla f(\theta_k), where the expectation is taken over the random selection of i_k. Despite this unbiasedness, the estimator exhibits high variance, which introduces noise into the optimization trajectory and can cause oscillations around the minimum. This variance is a key characteristic that distinguishes SGD from batch gradient descent, where the full gradient provides a low-variance but computationally expensive update. To ensure convergence, particularly in non-convex settings, the learning rate \alpha_k is typically scheduled to decay over iterations, such as \alpha_k = \frac{\alpha_0}{\sqrt{k}}, where \alpha_0 > 0 is an initial learning rate. Under suitable assumptions like Lipschitz continuity of the gradients and bounded variance, this scheduling yields an expected convergence rate of \mathcal{O}(1/\sqrt{k}) for the gradient norm \mathbb{E}[\|\nabla f(\theta_k)\|^2]. SGD offers significant advantages in computational efficiency, with each iteration requiring only \mathcal{O}(d) time complexity, where d is the parameter dimension, making it scalable to massive datasets processed on-the-fly. The inherent noise from the stochastic estimates also aids in escaping sharp local minima, promoting exploration of the loss landscape toward better solutions. For instance, in logistic regression for binary classification, the update uses the gradient of the cross-entropy loss for one data point (x_{i_k}, y_{i_k}), given by \nabla f_{i_k}(\theta_k) = ( \sigma(\theta_k^\top x_{i_k}) - y_{i_k} ) x_{i_k}, where \sigma is the sigmoid function.
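A minimal sketch of SGD for this logistic-regression example follows; the synthetic data, the \alpha_k = \alpha_0 / \sqrt{k} schedule, and the hyperparameter values are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_logistic_regression(X, y, alpha0=0.5, epochs=20, seed=0):
    """SGD on the cross-entropy loss, one example per update, with alpha_k = alpha0 / sqrt(k)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    k = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):               # visit examples in a random order
            k += 1
            alpha_k = alpha0 / np.sqrt(k)               # decaying learning-rate schedule
            g = (sigmoid(theta @ X[i]) - y[i]) * X[i]   # single-example gradient
            theta -= alpha_k * g
    return theta

# Toy linearly separable data (illustrative)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(float)
print(sgd_logistic_regression(X, y))
```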

Mini-Batch Gradient Descent

Mini-batch gradient descent is a variant of gradient descent that computes the gradient estimate as the average over a small subset, or mini-batch, of the training data, serving as a compromise between the full-batch approach and stochastic gradient descent. This method updates the model parameters \theta using the rule \theta_{k+1} = \theta_k - \alpha_k \frac{1}{b} \sum_{i=1}^b \nabla f_i(\theta_k), where b is the mini-batch size, \alpha_k is the learning rate at iteration k, and \nabla f_i(\theta_k) is the gradient of the loss function for the i-th example in the batch. The choice of mini-batch size b balances computational efficiency, gradient accuracy, and training stability, with b=1 reducing to stochastic gradient descent and b=n (the full dataset size) corresponding to batch gradient descent; in deep learning applications, typical values range from 32 to 256, often selected as powers of 2 to align with hardware memory allocation. Smaller batches introduce more noise in the gradient estimate, which can act as a form of regularization but may necessitate smaller learning rates to maintain stability, while larger batches provide more accurate gradients at the cost of increased memory usage. Compared to stochastic gradient descent, mini-batch gradient descent exhibits lower gradient variance due to the averaging over multiple samples, which reduces the noise by a factor of approximately 1/b, though this variance remains higher than in full-batch gradient descent; this trade-off enables more stable updates while still allowing for frequent parameter adjustments. Additionally, mini-batches facilitate parallel computation on GPUs, as the gradients for samples within a batch can be computed simultaneously, improving training throughput for large-scale models. Training with mini-batch gradient descent typically proceeds in epochs, where each epoch constitutes one complete pass through the entire training set, divided into non-overlapping mini-batches; to ensure unbiased gradient estimates and prevent sensitivity to data ordering, the dataset is randomly shuffled before forming mini-batches at the start of each epoch. Empirically, this approach yields smoother convergence curves than stochastic gradient descent by mitigating erratic updates, while being faster than full-batch gradient descent for large datasets due to reduced per-iteration time and better hardware utilization.
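The epoch-based training loop described above might look like the following sketch, where the per-example gradient function, toy data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def minibatch_gradient_descent(grad_example, n, theta0, batch_size=32,
                               alpha=0.1, epochs=10, seed=0):
    """Mini-batch gradient descent: reshuffle each epoch, average gradients over each batch."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(epochs):
        order = rng.permutation(n)                      # reshuffle the dataset every epoch
        for start in range(0, n, batch_size):
            batch = order[start:start + batch_size]     # indices of one mini-batch
            g = np.mean([grad_example(theta, i) for i in batch], axis=0)
            theta = theta - alpha * g                   # update with the averaged gradient
    return theta

# Toy least-squares example (illustrative data and loss)
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + 0.01 * rng.normal(size=1000)
grad_example = lambda theta, i: (X[i] @ theta - y[i]) * X[i]   # per-example gradient
print(minibatch_gradient_descent(grad_example, len(y), np.zeros(3)))
```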

Accelerated and Modified Methods

Momentum Method

The momentum method, also known as the heavy-ball method, modifies standard gradient descent by incorporating a velocity term that accumulates contributions from past updates, thereby accelerating convergence, particularly in challenging loss landscapes. Introduced by Boris T. Polyak in 1964 for solving systems of linear equations and optimizing quadratic functions, this technique draws an analogy to a heavy ball rolling down a potential surface, where inertia from prior motion helps overcome local oscillations and maintain progress along the primary descent direction. The update rule for the momentum method is defined as follows: \mathbf{v}_{k+1} = \beta \mathbf{v}_k - \alpha \nabla f(\theta_k), \quad \theta_{k+1} = \theta_k + \mathbf{v}_{k+1}, where \beta \in (0,1) is the momentum coefficient that weights the previous velocity, \alpha > 0 is the learning rate, and \mathbf{v}_k represents the accumulated velocity (initialized to zero). This formulation effectively averages gradients over recent steps, smoothing the trajectory and reducing sensitivity to noise in the gradient estimates compared to plain gradient descent. For quadratic functions f(\theta) = \frac{1}{2} \theta^T A \theta with eigenvalues bounded between the strong convexity parameter \mu > 0 and the smoothness constant L \geq \mu, Polyak derived optimal hyperparameters to achieve accelerated convergence, including \beta = \left( \frac{\sqrt{L} - \sqrt{\mu}}{\sqrt{L} + \sqrt{\mu}} \right)^2, which yields a geometric rate superior to that of standard gradient descent for high condition numbers \kappa = L/\mu. In practical implementations, especially in deep learning, a fixed momentum of \beta = 0.9 is commonly used, often in conjunction with a decaying learning rate \alpha (e.g., linearly or exponentially reduced over iterations) to stabilize training and prevent divergence. This method significantly reduces the number of iterations needed for convergence in ill-conditioned problems, where narrow ravines in the loss surface cause standard descent to oscillate and progress slowly; the velocity term dampens these oscillations while building speed in the flatter dimensions.
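A minimal sketch of the heavy-ball update on an ill-conditioned quadratic is shown below; the test function and hyperparameters are illustrative assumptions.

```python
import numpy as np

def momentum_descent(grad_f, theta0, alpha=0.01, beta=0.9, iters=500):
    """Heavy-ball (Polyak) momentum: v accumulates past gradient steps, theta follows v."""
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)                  # velocity, initialized to zero
    for _ in range(iters):
        v = beta * v - alpha * grad_f(theta)  # blend previous velocity with the new gradient step
        theta = theta + v
    return theta

# Ill-conditioned quadratic bowl f(theta) = 0.5 * theta^T diag(1, 100) theta (illustrative)
grad_f = lambda theta: np.array([1.0, 100.0]) * theta
print(momentum_descent(grad_f, np.array([1.0, 1.0])))  # approaches the minimum at the origin
```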

Nesterov Accelerated Gradient

Nesterov Accelerated Gradient (NAG), also known as Nesterov's method, is an optimization technique that enhances the momentum method by incorporating a lookahead step to evaluate the gradient at an anticipated future position, thereby achieving faster convergence rates for convex problems. Introduced by Yurii Nesterov in 1983, this approach addresses limitations in classical gradient descent by accelerating the search process through a combination of momentum and predictive adjustments. The core intuition behind NAG lies in its use of a "lookahead" step, where the gradient is computed not at the current parameters but at a point extrapolated based on previous updates, allowing the algorithm to anticipate the upcoming trajectory and reduce overshooting in the optimization path. This predictive evaluation helps dampen oscillations and directs updates more efficiently toward the minimum, particularly in scenarios with smooth convex functions. The update rules for NAG are defined as follows: \mathbf{y}_k = \boldsymbol{\theta}_k + \beta (\boldsymbol{\theta}_k - \boldsymbol{\theta}_{k-1}), \quad \boldsymbol{\theta}_{k+1} = \mathbf{y}_k - \alpha \nabla f(\mathbf{y}_k), \quad \mathbf{y}_{k+1} = \boldsymbol{\theta}_{k+1} + \beta (\boldsymbol{\theta}_{k+1} - \boldsymbol{\theta}_k). Here, \boldsymbol{\theta}_k represents the parameters at iteration k, \alpha is the learning rate, \beta is the momentum coefficient (typically set to a value like 0.9), and \nabla f(\mathbf{y}_k) is the gradient of the objective function f evaluated at the lookahead point \mathbf{y}_k. Theoretically, NAG achieves an optimal convergence rate of O(1/k^2) for smooth convex functions, improving upon the O(1/k) rate of vanilla gradient descent and providing a substantial reduction in the number of iterations required to reach a given accuracy. In implementation, NAG is equivalent to the standard momentum method except that the gradient is computed at the lookahead position, which can be realized with minimal modifications to momentum-based code. This method is commonly employed in optimization tasks in deep learning, including the training of neural networks.
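The lookahead update can be sketched as follows, reusing the same illustrative ill-conditioned quadratic as in the momentum sketch; the function name and hyperparameters are assumptions for demonstration.

```python
import numpy as np

def nesterov_accelerated_gradient(grad_f, theta0, alpha=0.01, beta=0.9, iters=500):
    """NAG: evaluate the gradient at a lookahead point extrapolated along the momentum direction."""
    theta = np.asarray(theta0, dtype=float)
    theta_prev = theta.copy()
    for _ in range(iters):
        y = theta + beta * (theta - theta_prev)  # lookahead point
        theta_prev = theta
        theta = y - alpha * grad_f(y)            # gradient step taken from the lookahead point
    return theta

# Ill-conditioned quadratic bowl f(theta) = 0.5 * theta^T diag(1, 100) theta (illustrative)
grad_f = lambda theta: np.array([1.0, 100.0]) * theta
print(nesterov_accelerated_gradient(grad_f, np.array([1.0, 1.0])))
```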

Adaptive Optimization Techniques

RMSprop

RMSprop is an adaptive variant of gradient descent that normalizes the learning rate for each parameter by the root mean square of recent gradient magnitudes, enabling efficient handling of parameters with disparate scales and sparse updates. Proposed by Geoffrey Hinton in a 2012 lecture series on neural networks for machine learning, it addresses limitations in earlier adaptive methods like AdaGrad by using an exponentially decaying average rather than a cumulative sum of squared gradients, preventing premature decay of the learning rate. This makes RMSprop particularly suited to non-stationary optimization problems, such as training recurrent neural networks (RNNs), where gradient statistics shift over time. The core of RMSprop lies in its update mechanism for the exponentially weighted average of squared gradients, denoted E[g^2]_t, which captures the magnitude of recent gradients without accumulating all past information: E[g^2]_t = \rho E[g^2]_{t-1} + (1 - \rho) g_t^2. Here, g_t = \nabla_\theta f(\theta_{t-1}) represents the gradient of the objective f with respect to the parameters \theta at timestep t, and \rho is the decay rate that controls the memory of past gradients. The parameters are then updated as: \theta_t = \theta_{t-1} - \frac{\alpha g_t}{\sqrt{E[g^2]_t} + \epsilon}, where \alpha is the global learning rate and \epsilon is a small constant to avoid division by zero. Typical hyperparameter values include \rho = 0.9 for the decay rate, \alpha = 0.001 for the learning rate, and \epsilon = 10^{-8} for numerical stability. These choices ensure a balance between adapting to recent gradient information and maintaining robustness across iterations. Intuitively, RMSprop scales the effective learning rate inversely with the root mean square of recent gradients, allowing larger steps in directions where gradients are small or sparse, such as in high-dimensional spaces with many near-zero components, while constraining updates where gradients are large. This per-parameter adaptation reduces the need for manual hyperparameter tuning, as the algorithm automatically compensates for varying gradient scales across different model components. Compared to standard stochastic gradient descent, which can suffer from high variance in noisy or sparse settings, RMSprop provides more stable progress by normalizing these fluctuations. Among its key advantages, RMSprop excels in environments with ill-conditioned or non-stationary objectives by adapting learning rates dynamically, leading to faster convergence without the aggressive decay seen in cumulative methods. It has been widely adopted in deep learning frameworks for its simplicity and effectiveness in training models on complex datasets, though it requires careful selection of the decay rate to avoid over-smoothing recent gradients.
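A minimal sketch of the RMSprop update follows; the badly scaled quadratic test objective and the learning rate (raised above the typical default so the toy run finishes quickly) are illustrative assumptions.

```python
import numpy as np

def rmsprop(grad_f, theta0, alpha=0.001, rho=0.9, eps=1e-8, iters=2000):
    """RMSprop: divide each coordinate's step by the RMS of its recent gradients."""
    theta = np.asarray(theta0, dtype=float)
    sq_avg = np.zeros_like(theta)                    # E[g^2], exponentially weighted average
    for _ in range(iters):
        g = grad_f(theta)
        sq_avg = rho * sq_avg + (1.0 - rho) * g**2   # decaying average of squared gradients
        theta = theta - alpha * g / (np.sqrt(sq_avg) + eps)
    return theta

# Quadratic with very different curvatures per coordinate (illustrative)
grad_f = lambda theta: np.array([0.01, 10.0]) * theta
print(rmsprop(grad_f, np.array([5.0, 5.0]), alpha=0.01))  # both coordinates approach zero
```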

Adam Optimizer

The Adam optimizer is a gradient-based optimization algorithm designed for training machine learning models, particularly deep neural networks, by adaptively estimating lower-order moments of the gradients. Introduced by Diederik P. Kingma and Jimmy Ba in 2014, it combines the concepts of momentum from classical gradient descent variants and adaptive per-parameter learning rates from methods like RMSprop, enabling efficient convergence in noisy or sparse gradient environments. At its core, Adam maintains adaptive estimates of the first moment (mean) and second moment (uncentered variance) of the gradients, which allow for per-parameter learning rates that adjust dynamically based on the historical gradient information. This provides a robust framework for handling the varying scales and noise typical in stochastic optimization, leading to faster training and better generalization compared to fixed-rate methods. The update rule for Adam proceeds in two main steps: first computing exponentially decaying averages of the gradient and its square, followed by bias correction to account for initialization biases, especially in early iterations. Specifically, the first moment estimate is updated as: \mathbf{m}_{t} = \beta_1 \mathbf{m}_{t-1} + (1 - \beta_1) \mathbf{g}_t and the second moment estimate as: \mathbf{v}_{t} = \beta_2 \mathbf{v}_{t-1} + (1 - \beta_2) \mathbf{g}_t^2, where \mathbf{g}_t = \nabla_{\theta} J(\theta_{t-1}) is the stochastic gradient at timestep t, and \beta_1, \beta_2 are exponential decay rates. Bias-corrected estimates are then: \hat{\mathbf{m}}_t = \frac{\mathbf{m}_t}{1 - \beta_1^t}, \quad \hat{\mathbf{v}}_t = \frac{\mathbf{v}_t}{1 - \beta_2^t}. Finally, the parameters are updated via: \theta_t = \theta_{t-1} - \alpha \frac{\hat{\mathbf{m}}_t}{\sqrt{\hat{\mathbf{v}}_t} + \epsilon}, with learning rate \alpha and a small constant \epsilon for numerical stability. This formulation ensures that the effective learning rate is inversely proportional to the root-mean-square of recent gradients, promoting well-scaled per-parameter steps. Default hyperparameters recommended for Adam include \beta_1 = 0.9, \beta_2 = 0.999, \alpha = 0.001, and \epsilon = 10^{-8}, which have been shown to work well across a variety of tasks without extensive tuning. Due to its empirical effectiveness and ease of implementation, Adam has become one of the most widely adopted optimizers in deep learning, with its original paper garnering over 150,000 citations as of 2024. A notable variant is AdamW, proposed by Ilya Loshchilov and Frank Hutter in 2017, which decouples weight decay regularization from the gradient-based updates to better align with the original intent of weight decay regularization in adaptive-gradient settings. This modification improves generalization in tasks such as image classification by applying weight decay directly to the parameters rather than incorporating it into the gradient, often leading to superior performance over standard Adam when regularization is needed.
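The Adam update with bias correction can be sketched as follows; the badly scaled quadratic test objective and the non-default learning rate used in the call are illustrative assumptions.

```python
import numpy as np

def adam(grad_f, theta0, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8, iters=2000):
    """Adam: bias-corrected first and second moment estimates give per-parameter step sizes."""
    theta = np.asarray(theta0, dtype=float)
    m = np.zeros_like(theta)   # first moment estimate (mean of gradients)
    v = np.zeros_like(theta)   # second moment estimate (mean of squared gradients)
    for t in range(1, iters + 1):
        g = grad_f(theta)
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2 * v + (1.0 - beta2) * g**2
        m_hat = m / (1.0 - beta1**t)          # bias correction for the first moment
        v_hat = v / (1.0 - beta2**t)          # bias correction for the second moment
        theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Quadratic with very different curvatures per coordinate (illustrative)
grad_f = lambda theta: np.array([0.01, 10.0]) * theta
print(adam(grad_f, np.array([5.0, 5.0]), alpha=0.01))  # both coordinates approach zero
```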

Theoretical Analysis

Convergence Properties

Gradient descent exhibits well-established convergence properties under specific assumptions on the objective f. For a convex and L-smooth function (meaning the gradient is Lipschitz continuous with constant L), batch gradient descent with step size \alpha = 1/L converges to the global minimum at a sublinear rate of O(1/k), where k is the number of iterations; specifically, the suboptimality satisfies f(x_k) - f(x^*) \leq O(1/k), with x^* denoting the minimizer. This rate is derived from the descent lemma and convexity, ensuring monotonic decrease toward the optimum. For \mu-strongly convex and L-smooth functions, batch gradient descent achieves linear convergence, with \|x_k - x^*\|^2 \leq (1 - \mu/L)^k \|x_0 - x^*\|^2 under appropriate step sizes. In the stochastic setting, for convex and L-smooth objectives, stochastic gradient descent (SGD) achieves an expected convergence rate of O(1/\sqrt{k}) to the minimum, assuming bounded variance in the stochastic gradients; this slower rate compared to batch methods arises from the inherent gradient noise, but appropriate step size schedules like diminishing \alpha_k = O(1/\sqrt{k}) yield the bound \mathbb{E}[f(\bar{x}_k) - f(x^*)] \leq O(1/\sqrt{k}), where \bar{x}_k is an average of iterates. Variance reduction techniques can improve this, but the standard SGD rate holds under these assumptions. Accelerated variants attain faster rates: for convex and L-smooth functions, Nesterov's accelerated gradient converges at O(1/k^2), i.e., f(x_k) - f(x^*) \leq O(1/k^2), by incorporating momentum to achieve the optimal first-order oracle complexity, while for \mu-strongly convex functions (with condition number \kappa = L/\mu) it attains a linear rate whose contraction factor depends on \sqrt{\kappa} rather than \kappa. For non-convex L-smooth functions, gradient descent converges to a stationary point where \|\nabla f(x_k)\| \leq \epsilon, in expectation (for SGD) or deterministically, requiring on the order of O(1/\epsilon^2) iterations in the deterministic case; however, no global minimum guarantee exists, as local minima or saddle points may trap the algorithm. In SGD, the inherent noise enables escape from saddle points with high probability, facilitating progress toward better stationary points in non-convex landscapes. Recent analyses of over-parameterized models, such as deep neural networks, reveal that gradient descent induces implicit regularization, converging to solutions that minimize norms or promote sparsity beyond explicit penalties; for instance, in over-parameterized linear regression, full-batch GD initialized near zero preferentially finds minimum-norm solutions, akin to \ell_2 regularization. These results highlight how continuous-time limits and initialization scales influence the implicit bias toward generalizable solutions in high dimensions.

Geometric Interpretations

Gradient descent can be geometrically interpreted as following the direction of steepest descent on the surface defined by the objective function, often visualized using contour plots that represent level sets of the function value. In such plots, particularly for ill-conditioned functions with elongated valleys, the algorithm's path exhibits a characteristic zigzagging behavior, where updates alternate between the two principal axes of the valley, leading to slow progress toward the minimum. This arises because each step is orthogonal to the previous one, causing the trajectory to bounce between the valley walls rather than proceeding directly downhill. In the linear case, where gradient descent solves systems of the form Ax = b by minimizing the quadratic f(x) = \frac{1}{2} x^T A x - b^T x, the search directions span the Krylov subspace generated by the initial residual and powers of A. Specifically, under exact line search the residuals r_k = b - A x_k at consecutive iterations are orthogonal, so each new search direction is orthogonal to the previous one, and the error is progressively reduced along directions determined by the prior residuals; this step-by-step orthogonalization against the matrix-weighted gradients is what produces the zigzag pattern. The momentum method introduces a geometric smoothing to these trajectories by incorporating a velocity term that accumulates past updates, akin to a heavy ball rolling down the optimization surface. In elongated quadratic bowls, this results in less oscillatory paths compared to vanilla gradient descent, as the accumulated velocity dampens the zigzagging and allows the algorithm to maintain direction through flat regions or narrow passes, leading to more direct progress along the valley floor. Visualizations of such dynamics reveal coupled oscillatory modes in which the momentum and step-size parameters control the damping of ripples, enabling larger effective step sizes without divergence. Stochastic gradient descent produces jagged trajectories due to the noisy estimates of the gradient from individual samples, causing the path to deviate erratically around the true minimum in contour plots. Despite this noise, multiple stochastic runs average to approximate the smoother trajectory of full-batch gradient descent, providing a geometric intuition for why the method converges in expectation while exploring the landscape more broadly. Two-dimensional visualizations of gradient descent on non-convex functions, such as the Rosenbrock banana-shaped surface, illustrate convergence basins as regions from which trajectories flow toward local minima, with the algorithm's path curving along contours to settle in the nearest basin. At saddle points, where the gradient vanishes but curvature changes sign, pure gradient descent may slow dramatically, but perturbations enable escapes by injecting noise that pushes the trajectory out of the flat direction, as seen in simulated paths that veer toward lower regions rather than stagnating.
