
Least-squares adjustment

Least-squares adjustment is a statistical method employed in geodesy, surveying, and related fields to estimate unknown parameters—such as coordinates or heights—from a set of redundant observations by minimizing the sum of the weighted squares of residuals between observed and model-predicted values. The technique assumes that observation errors follow a normal distribution and uses weights inversely proportional to the variances of the observations to account for their relative precisions, ensuring unbiased estimates with minimum variance. The method traces its origins to the late 18th century, with foundational contributions from Pierre-Simon Laplace in 1774, Adrien-Marie Legendre's publication of Nouvelles méthodes pour la détermination des orbites des comètes in 1805, and Carl Friedrich Gauss's earlier work from around 1794, later given a probabilistic justification under Gaussian error assumptions. Friedrich Robert Helmert advanced its application in geodesy in 1872 by introducing constraint-based adjustments for network configurations. By the mid-20th century, computational tools enabled its widespread use, and the method evolved through texts like Paul R. Wolf's 1968 lecture notes and subsequent editions of Adjustment Computations: Spatial Data Analysis (6th edition in 2017 by Charles D. Ghilani), which incorporated modern elements such as GPS and three-dimensional networks.

At its core, least-squares adjustment operates through mathematical models like the Gauss-Markov model for linear (or linearized) cases and the Gauss-Helmert model for problems in which observations and parameters are related implicitly. Observations are related to parameters via design matrices, and solutions are derived—iteratively for nonlinear models—using matrix algebra, such as the normal equations \hat{\xi} = (A^T P A)^{-1} A^T P y for parameter estimates \hat{\xi}, with A as the Jacobian, P the weight matrix, and y the observation vector. It incorporates constraints to handle datum deficiencies in networks, computes variance-covariance matrices for uncertainty propagation (e.g., D\{\hat{\xi}\} = \sigma_0^2 N^{-1}, where N = A^T P A), and employs statistical tests based on the chi-square and F-distributions for blunder detection and hypothesis validation. Redundancy in the observations—calculated as r = m - n, with m observations and n unknowns—enables error analysis, including standardized residuals for identifying outliers at thresholds such as 2.58 for 99% confidence.

In practice, least-squares adjustment is indispensable for processing data from geodetic networks, including leveling (e.g., orthometric heights via GNSS integration), triangulation, trilateration, traverses, and GPS observations, often requiring minimal control such as one fixed point and one fixed azimuth for horizontal surveys. It supports coordinate transformations (e.g., from NAD 27 to NAD 83 using 2D/3D conformal models with 4–7 parameters), instrument calibrations such as those for electronic distance measurement (EDM) equipment, and curve fitting to models such as lines or parabolas. Advanced variants, including variance component estimation and stochastic constraints, address heterogeneous error structures in large-scale applications like the North American-Pacific Geodetic Datum of 2022 (NAPGD2022), with implementation beginning in 2025. Overall, it enhances reliability by distributing errors optimally across interconnected data, forming the backbone of modern spatial data analysis.

Background and Principles

Definition and Purpose

Least-squares adjustment is an optimization technique employed to determine the best-fitting parameters for a model by minimizing the sum of the squared residuals between observed and predicted values. It provides a statistically rigorous approach to estimating the most probable values of unknowns from a set of measurements that are typically inconsistent due to random errors. In fields such as geodesy and surveying, the primary purpose of least-squares adjustment is to process redundant observations—measurements that exceed the minimum required to solve for the unknowns—thereby enhancing the accuracy and reliability of the results. By distributing errors across the network of observations, it yields adjusted values that best satisfy all measurements while accounting for their relative precisions, ultimately reducing the impact of individual measurement inaccuracies.

The basic principle involves solving an overdetermined system of equations expressed as Ax = l + v, where A is the design matrix relating observations to unknowns, x is the vector of unknown parameters, l is the vector of observed values, and v is the vector of residuals. The objective is to find the values of x that minimize the weighted sum of squared residuals, v^T P v, where P is the weight matrix incorporating the variances and covariances of the observations. Unlike ordinary least squares, which assumes equal weights and independence among observations, least-squares adjustment explicitly handles unequal precisions and correlated measurements in observational networks by using the weight matrix P to reflect these dependencies. This adaptation makes it particularly suited to complex systems where observations are interconnected, ensuring a more accurate representation of the underlying geometry.
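
The principle can be illustrated with a minimal numerical sketch in Python using NumPy. The three-observation example, the numerical values, and the variable names below are illustrative assumptions rather than data from any particular survey; the fragment simply sets up an overdetermined system Ax = l + v, forms a weight matrix from assumed standard deviations, and solves the normal equations.

```python
# Minimal weighted least-squares sketch (illustrative values only).
# Model: A x = l + v, minimizing v^T P v with P built from assumed observation variances.
import numpy as np

# Three observations of two unknowns (x1, x2): x1 itself, x2 itself, and their sum.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # design matrix (m = 3, n = 2, redundancy r = 1)
l = np.array([10.02, 20.01, 30.06])   # observed values
sigma = np.array([0.01, 0.01, 0.02])  # assumed standard deviations of the observations
P = np.diag(1.0 / sigma**2)           # weight matrix (inverse variances)

# Normal equations: (A^T P A) x_hat = A^T P l
N = A.T @ P @ A
x_hat = np.linalg.solve(N, A.T @ P @ l)
v = A @ x_hat - l                     # residuals under the convention A x = l + v
print(x_hat, v)
```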

Historical Development

The method of least squares was first formally introduced to the scientific community by the French mathematician Adrien-Marie Legendre in 1805, in an appendix to his astronomical treatise Nouvelles méthodes pour la détermination des orbites des comètes. Legendre applied the technique to minimize errors in orbital calculations for comets, presenting it as a practical algebraic procedure for fitting observational data when exact solutions were unattainable due to measurement inaccuracies. Prior to Legendre's publication, the German mathematician Carl Friedrich Gauss had independently developed the fundamentals of the method between 1795 and 1801 while working on astronomical predictions, including the orbit of the asteroid Ceres. Gauss publicly claimed priority in his 1809 work Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientium, where he not only described the technique but also provided a probabilistic justification based on the assumption of normally distributed errors. This led to a notable priority dispute with Legendre that persisted for decades, though modern historians recognize both contributions as foundational, with Gauss's earlier private use predating Legendre's formalization.

In the late 19th century, the method gained prominence in geodesy through the work of Friedrich Robert Helmert, a geodesist who adapted it for adjusting large-scale survey networks. Helmert's influential 1872 textbook Die Ausgleichungsrechnung nach der Methode der kleinsten Quadrate systematized least squares for geodetic computations, emphasizing its role in error propagation and network adjustment, and it became a standard reference for practical applications in triangulation and leveling surveys.

Advancements in the 20th century included the adoption of matrix notation to streamline formulations, with early notable uses appearing in the 1920s and gaining traction through works like Aitken's 1935 paper on linear combinations of observations. After World War II, the advent of electronic computers revolutionized implementations; the U.S. Coast and Geodetic Survey performed its first computer-based least-squares adjustment of a triangulation network in 1946, enabling efficient handling of complex datasets previously limited by manual calculation. Since the 1970s, least-squares adjustment has been integral to satellite geodesy, particularly with the development of the Global Positioning System (GPS), where it processes pseudorange and carrier-phase observations to achieve high-precision positioning amid noisy signals and geometric dilutions of precision. This integration addressed the need for global-scale adjustments in dynamic environments, marking a shift from terrestrial networks to space-based systems.

Mathematical Formulation

Parametric Adjustment Model

The parametric adjustment model in least-squares adjustment formulates the relationship between observed quantities and unknown parameters in a deterministic functional structure, typically expressed as \mathbf{l} = \mathbf{f}(\mathbf{x}) + \mathbf{v}, where \mathbf{l} is the m \times 1 vector of observations, \mathbf{f}(\mathbf{x}) is the (generally nonlinear) functional model depending on the n \times 1 vector of unknown parameters \mathbf{x}, and \mathbf{v} is the m \times 1 vector of residuals representing measurement errors. The model assumes that the true values of the observations satisfy the functional relationship exactly, with the residuals accounting for discrepancies due to imperfect measurements. For practical computation, especially in geodetic applications where the model may be nonlinear, the relationship is linearized around an initial approximation \mathbf{x}^0 of the parameters, yielding the approximate form \mathbf{A} \Delta \mathbf{x} = \mathbf{l} - \mathbf{f}(\mathbf{x}^0) + \mathbf{v}, or more compactly \mathbf{A} \mathbf{x} = \mathbf{l} + \mathbf{v} after redefining \mathbf{x} = \Delta \mathbf{x} and adjusting the observation vector accordingly. This linearization is valid under the assumption of small corrections, where higher-order terms are negligible, and the initial guess \mathbf{x}^0 is chosen so that the system is near-linear; iterative refinement may be applied if nonlinearity persists.

The design matrix \mathbf{A}, of dimensions m \times n, encapsulates the sensitivity of the observations to changes in the parameters and is constructed from the matrix of partial derivatives of the functional model: A_{ij} = \frac{\partial f_i(\mathbf{x})}{\partial x_j}, evaluated at the initial approximation \mathbf{x}^0. Each element A_{ij} quantifies how the i-th observation varies with the j-th parameter, providing a geometric or physical interpretation of the parameter's influence—for instance, in spatial networks, rows correspond to observation equations and columns to parameter dependencies. The matrix \mathbf{A} must have full column rank (\operatorname{rank}(\mathbf{A}) = n) to ensure a unique solution of the normal equations.

A defining feature of the parametric model is its overdetermined nature, where the number of observations m exceeds the number of parameters n (m > n), creating a redundancy r = m - n > 0 that allows for error detection, reliability assessment, and optimal parameter estimation by minimizing the residuals in a least-squares sense. This ensures the system has more equations than unknowns, distributing inconsistencies across the network rather than forcing an exact fit, which is essential for robust adjustments in the measurement sciences.

In a leveling network, for example, the unknown parameters \mathbf{x} represent the heights of benchmarks at stations, while the observations \mathbf{l} are measured height differences between connected stations; the design matrix \mathbf{A} then consists of coefficients of +1 or -1 indicating the direction of each difference relative to the heights. Consider a simple closed loop with three stations A, B, and C: observations might include l_1 = H_B - H_A, l_2 = H_C - H_B, and l_3 = H_A - H_C, yielding \mathbf{A} = \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 1 \\ 1 & 0 & -1 \end{bmatrix} for the full parameter vector [H_A, H_B, H_C]^T. Fixing one height as the datum removes the corresponding column, leaving m = 3 observations and n = 2 unknowns and providing r = 1 redundant observation for error checking. This setup highlights how the model propagates adjustments through the network while respecting the overdetermined structure.
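
A brief numerical sketch of this leveling-loop example, written in Python with NumPy, is given below. The measured height differences are invented for illustration, H_A is fixed to zero as the datum, and equal weights are assumed; the single redundant observation lets the adjustment distribute the loop misclosure across the three measurements.

```python
# Sketch of the three-station leveling loop from the text (illustrative measurements).
# H_A is held fixed as the datum, so the unknowns are H_B and H_C; equal weights assumed.
import numpy as np

# Observations: l1 = H_B - H_A, l2 = H_C - H_B, l3 = H_A - H_C, with H_A = 0.
l = np.array([1.503, 0.998, -2.505])    # measured height differences in metres
A = np.array([[ 1.0,  0.0],             # l1 depends on +H_B
              [-1.0,  1.0],             # l2 depends on -H_B and +H_C
              [ 0.0, -1.0]])            # l3 depends on -H_C
# m = 3 observations, n = 2 unknowns -> redundancy r = 1.

N = A.T @ A                              # unit weights (P = I)
x_hat = np.linalg.solve(N, A.T @ l)      # adjusted heights [H_B, H_C]
v = A @ x_hat - l                        # residuals absorb the loop misclosure
print(x_hat, v, l.sum())                 # l.sum() is the raw misclosure (-0.004 m here)
```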

Functional and Stochastic Components

In least-squares adjustment, the functional model establishes the mathematical relationship between the observed quantities and the unknown parameters to be estimated. It is typically expressed as \mathbf{l} = \mathbf{f}(\mathbf{x}) + \mathbf{v}, where \mathbf{l} is the vector of observations, \mathbf{x} represents the unknown parameters, \mathbf{f}(\mathbf{x}) denotes the functional relationship (often nonlinear and linearized via Taylor series expansion for computation), and \mathbf{v} is the vector of residuals accounting for measurement errors. The functional model assumes that the observations are direct or indirect measurements of the parameters, with the residuals capturing deviations due to imperfections in the measurement process.

The stochastic model complements the functional model by describing the probabilistic nature of the errors. It assumes that the residuals \mathbf{v} follow a normal distribution with zero mean, \mathbf{v} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}_l), where the covariance matrix \boldsymbol{\Sigma}_l = \sigma^2 \mathbf{P}^{-1}; here, \sigma^2 is the a priori variance factor, and \mathbf{P} is the weight matrix that inversely reflects the error variances and covariances of the observations. The assumptions of normality and zero mean ensure unbiased estimates, while the covariance structure accounts for the statistical dependencies among the observations.

Weighting within the stochastic model is crucial for incorporating the relative precisions of the observations. For uncorrelated observations, \mathbf{P} is a diagonal matrix, with elements p_{ii} proportional to 1/\sigma_i^2, where \sigma_i^2 is the a priori variance of the i-th observation, often derived from instrument specifications or empirical calibrations. In cases of correlated errors, such as those arising from shared equipment or environmental factors in GNSS measurements, \mathbf{P} becomes a full matrix that captures off-diagonal covariances, enabling more accurate adjustments by down-weighting less reliable observations. A priori variances are typically estimated using methods like variance component estimation (VCE) or minimum norm quadratic unbiased estimation (MINQUE), based on prior knowledge and historical data.

The redundancy r = m - n, where m is the number of observations and n the number of unknowns, quantifies the degrees of freedom available for error estimation in the adjustment. It enables the computation of the a posteriori variance factor \hat{\sigma}^2 = \frac{\mathbf{v}^T \mathbf{P} \mathbf{v}}{r}, which provides an estimate of the overall adjustment quality by scaling the weighted sum of squared residuals by the redundancy; a value close to the a priori \sigma^2 indicates a well-fitted model.

Outlier detection in least-squares adjustment relies on standardized residuals to identify observations that deviate significantly from the model. The standardized residual for the i-th observation is computed as \frac{v_i}{\hat{\sigma} \sqrt{Q_{v_{ii}}}}, where Q_{v_{ii}} is the i-th diagonal element of the cofactor matrix of the residuals; values exceeding thresholds such as 3.29 (corresponding to a 0.1% significance level) signal potential blunders, prompting further investigation or data rejection. This approach, often implemented through Baarda's data snooping procedure, ensures the reliability of the adjustment by isolating gross errors without assuming their prior location.
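
The following short Python/NumPy sketch ties these pieces together for a hypothetical set of four observations of two unknowns (all values are assumptions chosen for illustration): it builds the weight matrix from assumed standard deviations, computes the a posteriori variance factor from the redundancy, and forms standardized residuals from the cofactor matrix of the residuals.

```python
# Sketch: a posteriori variance factor and standardized residuals
# (hypothetical observation equations and precisions, a priori sigma_0^2 = 1 assumed).
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
l = np.array([5.01, 3.02, 8.05, 1.97])
sigma = np.array([0.01, 0.01, 0.02, 0.02])    # assumed observation standard deviations
P = np.diag(1.0 / sigma**2)

N = A.T @ P @ A
x_hat = np.linalg.solve(N, A.T @ P @ l)
v = A @ x_hat - l
m, n = A.shape
r = m - n                                     # redundancy (degrees of freedom)

sigma0_sq_hat = (v @ P @ v) / r               # a posteriori variance factor
Q_l = np.diag(sigma**2)                       # cofactor matrix of the observations
Q_v = Q_l - A @ np.linalg.inv(N) @ A.T        # cofactor matrix of the residuals
w = v / (np.sqrt(sigma0_sq_hat) * np.sqrt(np.diag(Q_v)))   # standardized residuals
print(sigma0_sq_hat, w)                       # large |w_i| would flag a suspect observation
```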

Solution Methods

Normal Equations Derivation

In the least-squares adjustment model, the objective is to minimize the weighted sum of squared residuals \Phi = \mathbf{v}^T \mathbf{P} \mathbf{v}, subject to the linear constraint \mathbf{A} \mathbf{x} + \mathbf{v} = \mathbf{l}, where \mathbf{l} is the vector of observations, \mathbf{A} is the design matrix, \mathbf{x} is the vector of unknown parameters, \mathbf{v} is the vector of residuals, and \mathbf{P} is the weight matrix reflecting the stochastic properties of the observations. To solve this constrained optimization problem, the method of Lagrange multipliers is employed by introducing a vector of multipliers \boldsymbol{\lambda}. The Lagrangian function is formed as \mathcal{L}(\mathbf{x}, \mathbf{v}, \boldsymbol{\lambda}) = \frac{1}{2} \mathbf{v}^T \mathbf{P} \mathbf{v} + \boldsymbol{\lambda}^T (\mathbf{A} \mathbf{x} + \mathbf{v} - \mathbf{l}), where the factor of 1/2 is included for computational convenience in differentiation.

The necessary conditions for a minimum are obtained by setting the partial derivatives to zero: \frac{\partial \mathcal{L}}{\partial \mathbf{v}} = \mathbf{P} \mathbf{v} + \boldsymbol{\lambda} = \mathbf{0}, \quad \frac{\partial \mathcal{L}}{\partial \mathbf{x}} = \mathbf{A}^T \boldsymbol{\lambda} = \mathbf{0}, \quad \frac{\partial \mathcal{L}}{\partial \boldsymbol{\lambda}} = \mathbf{A} \mathbf{x} + \mathbf{v} - \mathbf{l} = \mathbf{0}. From the first equation, \boldsymbol{\lambda} = -\mathbf{P} \mathbf{v}. Substituting into the second yields \mathbf{A}^T (-\mathbf{P} \mathbf{v}) = \mathbf{0}, or \mathbf{A}^T \mathbf{P} \mathbf{v} = \mathbf{0}. Combining with the third equation, \mathbf{v} = \mathbf{l} - \mathbf{A} \mathbf{x}, gives \mathbf{A}^T \mathbf{P} (\mathbf{l} - \mathbf{A} \mathbf{x}) = \mathbf{0}, which simplifies to the normal equations \mathbf{N} \mathbf{x} = \mathbf{n}, where \mathbf{N} = \mathbf{A}^T \mathbf{P} \mathbf{A} is the normal matrix and \mathbf{n} = \mathbf{A}^T \mathbf{P} \mathbf{l} is the right-hand side vector. Assuming \mathbf{N} is invertible (which requires \mathbf{P} to be positive definite and \mathbf{A} to have full column rank), the least-squares estimator is \hat{\mathbf{x}} = \mathbf{N}^{-1} \mathbf{n} = (\mathbf{A}^T \mathbf{P} \mathbf{A})^{-1} \mathbf{A}^T \mathbf{P} \mathbf{l}.

Under the Gauss-Markov assumptions—where the observations are unbiased with expectation \mathbb{E}(\mathbf{l}) = \mathbf{A} \mathbf{x}_0 and covariance \sigma^2 \mathbf{P}^{-1}, with \mathbf{v} uncorrelated—this estimator is the best linear unbiased estimator (BLUE). The variance-covariance matrix of the estimator is then \mathbf{Q}_{\hat{\mathbf{x}}} = \sigma^2 (\mathbf{A}^T \mathbf{P} \mathbf{A})^{-1}, where \sigma^2 is the variance of unit weight, estimated post-adjustment from the residuals. The adjusted observations are computed as \hat{\mathbf{l}} = \mathbf{A} \hat{\mathbf{x}}, and the residuals as \mathbf{v} = \mathbf{l} - \hat{\mathbf{l}}.

To confirm that this solution corresponds to a minimum, the second-order conditions are examined: substituting \mathbf{v} = \mathbf{l} - \mathbf{A} \mathbf{x} gives the reduced objective \Phi(\mathbf{x}) = (\mathbf{l} - \mathbf{A} \mathbf{x})^T \mathbf{P} (\mathbf{l} - \mathbf{A} \mathbf{x}), whose Hessian \mathbf{A}^T \mathbf{P} \mathbf{A} is positive definite under the stated assumptions, ensuring the critical point is a global minimum. The Hessian of the Lagrangian with respect to \mathbf{v} is \mathbf{P}, also positive definite.
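
The derivation can be checked numerically. The short Python/NumPy sketch below (toy design matrix, weights, and observations chosen purely for illustration, with the a priori variance of unit weight assumed to be one) solves the normal equations, verifies the orthogonality condition \mathbf{A}^T \mathbf{P} \mathbf{v} = \mathbf{0} obtained above, and evaluates the covariance matrix of the estimates.

```python
# Sketch verifying the normal-equation solution for a toy system (illustrative values).
import numpy as np

A = np.array([[1.0, 2.0], [1.0, -1.0], [2.0, 1.0]])   # design matrix
P = np.diag([4.0, 1.0, 1.0])                          # weight matrix
l = np.array([3.1, 0.9, 4.05])                        # observations

N = A.T @ P @ A                        # normal matrix
n_vec = A.T @ P @ l                    # right-hand side
x_hat = np.linalg.solve(N, n_vec)      # least-squares estimate
v = l - A @ x_hat                      # residuals (v = l - A x_hat in this section)

print(A.T @ P @ v)                     # ~0: the condition A^T P v = 0 from the derivation
sigma0_sq = 1.0                        # a priori variance of unit weight (assumed)
Q_x = sigma0_sq * np.linalg.inv(N)     # variance-covariance matrix of the estimates
print(x_hat, Q_x)
```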

Computational Implementation

Computational implementation of least-squares adjustment typically involves solving the normal equations \mathbf{N} \hat{\mathbf{x}} = \mathbf{n}, where \mathbf{N} is the symmetric positive-definite normal matrix and \mathbf{n} is the right-hand side vector, both derived from the observation equations. For small to medium-sized systems, direct methods are preferred due to their reliability and exactness within floating-point precision. The Cholesky decomposition factors \mathbf{N} = \mathbf{L} \mathbf{L}^T, where \mathbf{L} is lower triangular, allowing efficient forward and backward substitution to solve for the parameter estimates \hat{\mathbf{x}}. This approach exploits the symmetry and positive-definiteness of \mathbf{N}, reducing computational cost compared to general factorizations and offering good numerical stability for well-conditioned problems. For cases where \mathbf{N} may not be strictly positive-definite due to weighting or constraints, LU factorization can be applied to the augmented system, though it is less efficient for symmetric matrices.

In large-scale adjustments involving thousands of observations, such as extensive surveying networks, direct methods become prohibitive due to the O(n^3) complexity for an n \times n matrix, leading to high memory and time demands. Iterative methods are then essential, particularly for sparse or ill-conditioned systems. The conjugate gradient (CG) method is widely used for symmetric positive-definite \mathbf{N}, iteratively minimizing the quadratic form associated with the least-squares objective and converging in at most n steps in exact arithmetic, though practically much faster for well-structured problems. For ill-conditioned matrices, preconditioned variants of CG, or stationary methods such as Gauss-Seidel and successive over-relaxation, can accelerate convergence, making them suitable for networks with redundant observations numbering in the thousands. These methods scale to systems with millions of variables, solving sparse geodetic problems in minutes on standard hardware.

Practical implementation relies on established software libraries that integrate these solvers. In MATLAB, the backslash operator (\) or Optimization Toolbox functions like lsqnonlin employ Cholesky or QR-based decompositions for least-squares problems, with extensions for large sparse matrices via iterative solvers. Python's SciPy library provides scipy.optimize.least_squares for bounded and unbounded adjustments, supporting both dense and large sparse problems (e.g., via the Trust Region Reflective method), while specialized packages such as python-linz-adjustment handle survey-specific least-squares adjustment for quality control of observations. In geodesy, the Bernese GNSS Software performs least-squares adjustments for global navigation satellite systems, incorporating sequential processing for large datasets with up to millions of epochs. Since the 2010s, GPU acceleration has enhanced efficiency for massive systems; for instance, CUDA-based implementations of Cholesky or QR factorizations can achieve 10-100x speedups for least-squares problems in bundle adjustment or reconstruction tasks involving thousands of parameters.

Singularity in \mathbf{N}, often arising from datum deficiencies in free networks, is addressed through techniques like pseudo-observations, which introduce minimal constraints (e.g., fixing centroids or orientations) with high weights to stabilize the system without biasing results. Alternatively, free network adjustments use minimum inner constraints or pseudo-inverse methods, such as those based on the singular value decomposition (SVD), to project onto the estimable subspace and estimate only the adjustable parameters.
These approaches ensure solvability for rank-deficient systems in geodetic networks, where the number of observations exceeds the number of parameters but datum-related rank defects persist. For efficiency in large networks with thousands of observations, hybrid strategies combine sparse-matrix storage (e.g., compressed row or column formats) with iterative solvers, reducing solution times from hours to seconds on multi-core systems. Modern frameworks scale to 10^5-10^6 observations, as demonstrated in geostatistical modeling and large optimization problems, where sequential or recursive least-squares variants further mitigate computational bottlenecks. A sparse, iterative solution of the normal equations is sketched below.
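
The fragment below is a minimal sketch, assuming SciPy's sparse module, of how a large sparse normal system can be assembled and solved with conjugate gradients; the random design matrix, dimensions, noise level, and equal weighting are artificial stand-ins for a real network's observation equations.

```python
# Sketch: sparse assembly of the normal equations and a conjugate-gradient solve
# (synthetic design matrix; a real adjustment would supply A, P, and l from the network).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
m, n = 5000, 1000
# Identity block guarantees full column rank; the sparse random block adds "connections".
A = sp.vstack([sp.eye(n), sp.random(m - n, n, density=0.01, random_state=42)], format="csr")

x_true = rng.normal(size=n)
l = A @ x_true + rng.normal(scale=0.01, size=m)   # simulated noisy observations
P = sp.eye(m) * (1.0 / 0.01**2)                   # equal weights for simplicity

N = (A.T @ P @ A).tocsr()                         # sparse normal matrix
b = A.T @ (P @ l)
x_hat, info = cg(N, b)                            # conjugate gradients; info == 0 means converged
print(info, np.max(np.abs(x_hat - x_true)))       # recovery error at roughly the noise level
```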

Applications

Geodetic and Surveying Networks

In geodetic networks, least-squares adjustment is essential for integrating diverse measurements—distances in trilateration, angles in triangulation, and GPS baselines in combined networks—to determine precise station coordinates. Trilateration networks rely on measured distances between stations, forming condition equations that minimize the sum of squared residuals to adjust for observational errors and achieve a consistent network geometry. Triangulation, historically prominent, uses angular observations from theodolites, where least-squares methods adjust azimuths or angles to resolve redundancies and propagate control across large areas. Modern combined networks incorporate GPS-derived baselines alongside traditional measurements, enabling simultaneous adjustment of thousands of observations for improved accuracy in establishing horizontal control.

A practical application in surveying is the adjustment of traverse networks, where sequential measurements of lengths and directions form a closed loop or connect to fixed control points, and least-squares distributes the angular and linear misclosures to estimate refined coordinates for all stations. In a typical traverse, the functional model relates observed bearings and distances to coordinate differences, while the adjustment minimizes discrepancies by solving the normal equations, ensuring that coordinate uncertainties reflect the observation precisions. This approach outperforms compass-rule methods by fully utilizing redundancies, particularly in polygonal traverses spanning long distances or rugged terrain.

Least-squares adjustment in these networks provides key benefits, including blunder detection through analysis of post-adjustment residuals, where large standardized residuals flag gross errors for removal or investigation prior to finalization. Reliability is further assessed using chi-squared tests on the weighted sum of squared residuals, which evaluate whether the adjustment's goodness of fit is consistent with the assumed model, typically at a 95% confidence level (a short numerical sketch of such a global test is given at the end of this subsection).

A prominent example is the establishment of the North American Datum of 1983 (NAD 83), which involved a simultaneous least-squares adjustment of roughly 266,000 stations using historical triangulation, traverse, Doppler satellite, and very long baseline interferometry (VLBI) observations to redefine the continental horizontal reference frame. This adjustment reduced distortions inherited from the prior NAD 27 by incorporating weighted observations in a minimally constrained adjustment, resulting in coordinate accuracies of 1-2 meters for primary control points. Subsequent realizations, such as NAD 83 (2011), refined this through readjustments of 81,055 passive marks, enhancing alignment with global frames like the ITRF. More recent efforts, such as the North American-Pacific Geodetic Datum of 2022 (NAPGD2022), further integrate GNSS and other space-based data in plate-fixed adjustments.

In contemporary practice, least-squares adjustment underpins real-time kinematic (RTK) positioning, where carrier-phase observations from a base station and a rover are processed to resolve ambiguities and compute centimeter-level coordinates nearly instantaneously. RTK systems apply sequential least-squares filters to double-differenced measurements, incorporating satellite and atmospheric corrections to maintain solution integrity in dynamic tasks like infrastructure mapping. This extends traditional geodetic adjustments to mobile, high-frequency operations, with post-processing options for enhanced precision in control network densification.
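
The global (chi-squared) test mentioned above can be sketched in a few lines of Python; the weighted sum of squared residuals, the redundancy, and the a priori variance factor below are assumed example values rather than results from an actual network adjustment.

```python
# Sketch of a global chi-squared test of an adjustment (illustrative numbers).
from scipy.stats import chi2

vTPv = 8.4                 # weighted sum of squared residuals from an adjustment (assumed)
r = 6                      # redundancy: observations minus unknowns (assumed)
sigma0_sq_prior = 1.0      # a priori variance factor

test_statistic = vTPv / sigma0_sq_prior            # ~ chi-squared with r degrees of freedom
alpha = 0.05
lower = chi2.ppf(alpha / 2, r)
upper = chi2.ppf(1 - alpha / 2, r)
passed = lower < test_statistic < upper             # two-tailed test at 95% confidence
print(test_statistic, (lower, upper), passed)
```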

Engineering and Scientific Measurements

In engineering and scientific measurement, least-squares adjustment is employed to process overdetermined systems of equations from experiments and instruments, yielding optimal estimates by minimizing the sum of squared residuals weighted by observation precisions. This approach is particularly valuable when redundant observations arise from multiple trials or sensors, allowing for the integration of diverse data types while accounting for their uncertainties. Unlike ad-hoc fitting methods, it provides statistically rigorous results, including variance-covariance matrices for uncertainty propagation, which are essential for reliable inference in fields beyond the spatial sciences.

In aerospace engineering, least-squares adjustment is routinely applied to calibrate strain-gage balances used in wind-tunnel testing, where redundant load readings from multiple gages are fitted to derive prediction equations for aerodynamic forces. For instance, an improved weighted least-squares procedure assigns weighting factors based on the number of loaded gages, enhancing the influence of pure single-component loads and mitigating asymmetries in calibration schedules, thereby reducing residual errors compared to unweighted methods. The technique is similarly used in structural monitoring, such as fitting strain data from bridge monitoring sensors to model material responses under load, ensuring precise deformation estimates from overdetermined datasets.

In physics and chemistry, the method adjusts spectroscopic measurements from multiple trials to refine calibration curves and fundamental constants. For example, multidimensional spectra are fitted to line-shape models using least squares to extract parameters like relaxation times, achieving precision improvements of 8 to 50 times over classical methods by optimizing residuals in correlated spectral features. Similarly, the CODATA task group has historically used least-squares adjustments of spectroscopic and other experimental data to determine values for fundamental constants such as Planck's constant, incorporating redundancies from diverse laboratories to minimize inconsistencies and propagate uncertainties accurately.

Astronomy leverages least squares for orbit determination from telescopic observations, where angular positions over time are adjusted to minimize residuals in right ascension and declination. Batch least-squares filters process sequences of telescope images to estimate orbital elements, such as semi-major axis and eccentricity, from redundant sightings, enabling precise predictions for satellites or asteroids with positional accuracies below 1 arcsecond in simulated tests. This application is vital for tracking near-Earth objects, integrating observations from multiple ground-based telescopes to resolve ambiguities in preliminary orbits.

A representative example is bundle adjustment for image-based 3D reconstruction, where least-squares estimation aligns overlapping images to recover scene geometry, camera poses, and point clouds by minimizing reprojection errors. The method exploits geometric constraints like symmetries in architectural scenes, solving a unified system for all parameters and achieving sub-pixel precision in real-world datasets, such as urban facades reconstructed from a few views. This facilitates applications such as virtual heritage documentation, with covariance analysis providing reconstruction uncertainties.

One key advantage of least-squares adjustment in these contexts is its ability to handle correlated errors from instruments, such as those arising in multi-sensor arrays or sequential measurements, by incorporating the full variance-covariance matrix into the weighting scheme, which propagates variances rigorously and enhances overall estimate reliability compared to independent-error assumptions.
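
As a simple concrete illustration of weighted curve fitting of the kind used for calibration curves, the Python/NumPy sketch below fits a parabola to synthetic measurements whose assumed precisions differ between two ranges; the data, noise levels, and functional form are invented for demonstration.

```python
# Sketch: weighted least-squares fit of a parabola y = a x^2 + b x + c (synthetic data).
import numpy as np

x = np.linspace(0.0, 4.0, 9)
y_true = 0.5 * x**2 - 1.2 * x + 2.0
sigma = np.where(x < 2.0, 0.02, 0.10)            # assumed: early points measured more precisely
rng = np.random.default_rng(1)
y_obs = y_true + rng.normal(scale=sigma)         # simulated observations

A = np.column_stack([x**2, x, np.ones_like(x)])  # design matrix for the parabola
P = np.diag(1.0 / sigma**2)                      # weights from inverse variances
coef = np.linalg.solve(A.T @ P @ A, A.T @ P @ y_obs)
print(coef)                                      # estimates of (a, b, c)
```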

Extensions

Weighted and Constrained Adjustments

In least-squares adjustment, observations often have unequal precisions due to varying instrument accuracies, observing conditions, or environmental factors, necessitating a weighted approach to account for these differences. The weight matrix \mathbf{P} is introduced, where each diagonal element is the inverse of the variance of the corresponding observation, p_{ii} = 1/\sigma_i^2, and off-diagonal elements capture correlations if present. This matrix scales the residuals in the objective function, minimizing \mathbf{v}^T \mathbf{P} \mathbf{v} subject to the linear observation equations \mathbf{l} + \mathbf{v} = \mathbf{A} \mathbf{x}, where \mathbf{v} are the residuals, \mathbf{l} the observation vector, \mathbf{A} the design matrix, and \mathbf{x} the parameter vector. The solution yields the generalized normal equations \mathbf{N} \hat{\mathbf{x}} = \mathbf{n}, with \mathbf{N} = \mathbf{A}^T \mathbf{P} \mathbf{A} as the coefficient matrix and \mathbf{n} = \mathbf{A}^T \mathbf{P} \mathbf{l} as the right-hand side vector, providing unbiased estimates with minimum variance under Gaussian assumptions. This formulation extends the unweighted case by incorporating the stochastic model through \mathbf{P}, ensuring that higher-precision observations contribute more strongly to the adjustment. In geodetic networks, \mathbf{P} is typically derived from the cofactor matrix \mathbf{Q} = \mathbf{P}^{-1}, often estimated from repeated measurements or prior knowledge.

Constrained adjustments arise when additional conditions must be imposed, such as datum definitions, fixed control points, or geometric constraints in free networks to define position, orientation, and scale. These are incorporated using Lagrange multipliers, forming the Lagrangian \mathcal{L} = \mathbf{v}^T \mathbf{P} \mathbf{v} + 2 \boldsymbol{\lambda}^T (\mathbf{B} \mathbf{x} - \mathbf{w}), where \mathbf{B} \mathbf{x} = \mathbf{w} represents the c linear constraints (c < n, with n parameters) and \boldsymbol{\lambda} are the multipliers. Differentiating and setting the derivatives to zero gives the augmented system: \begin{bmatrix} \mathbf{A}^T \mathbf{P} \mathbf{A} & \mathbf{B}^T \\ \mathbf{B} & \mathbf{0} \end{bmatrix} \begin{bmatrix} \hat{\mathbf{x}} \\ \hat{\boldsymbol{\lambda}} \end{bmatrix} = \begin{bmatrix} \mathbf{A}^T \mathbf{P} \mathbf{l} \\ \mathbf{w} \end{bmatrix}, which solves the combined least-squares problem while satisfying the constraints exactly. This approach is essential for overparameterized systems, as it resolves singularities in \mathbf{N}. In practice, minimally constrained adjustments are used in geodetic networks to avoid the rank deficiency caused by datum freedom, applying the fewest constraints (typically 6 or 7 for 3D networks) needed to fix translation, rotation, and scale while preserving the internal network geometry. In free networks without fixed points, for example, inner constraints minimize artificial deformations such as spurious rotations or scale distortions. This method ensures a unique solution without biasing the relative positions derived from the observations.

Recent developments in weighting enhance robustness against outliers, which can distort classical least-squares results in real-world geodetic data affected by gross errors. M-estimation, introduced into geodesy in the 1980s, modifies the objective function to \sum \rho(r_i / \hat{\sigma}), where \rho is a bounded influence function (e.g., Huber's or the Danish method) that downweights large residuals, reducing outlier impact while retaining efficiency for clean data.
Building on works such as Yang (1991) on robust Bayesian estimation, M-estimators iteratively update weights via p_i = \psi(r_i / \hat{\sigma}) / (r_i / \hat{\sigma}), integrating them into the weighted least-squares framework for improved reliability in surveying and deformation analysis. These methods have become standard for handling contaminated datasets since their adoption in the late 20th century. A small numerical sketch of a constrained adjustment solved through the augmented system is given below.
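
The augmented (bordered) system for a constrained adjustment can be verified with a tiny Python/NumPy sketch; the design matrix, weights, observations, and the single constraint below are hypothetical values chosen only to show the mechanics.

```python
# Sketch: constrained adjustment via the augmented system [N B^T; B 0] (toy numbers).
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # design matrix
P = np.diag([1.0, 1.0, 4.0])                         # weights
l = np.array([2.01, 2.98, 5.02])                     # observations

B = np.array([[1.0, 1.0]])       # hypothetical constraint: x1 + x2 = 5 exactly
w = np.array([5.0])

N = A.T @ P @ A
n_vec = A.T @ P @ l
c = B.shape[0]                   # number of constraints
K = np.block([[N, B.T],
              [B, np.zeros((c, c))]])
rhs = np.concatenate([n_vec, w])
sol = np.linalg.solve(K, rhs)
x_hat, lam = sol[:2], sol[2:]    # parameter estimates and Lagrange multipliers
print(x_hat, B @ x_hat)          # the constraint is satisfied exactly
```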

Nonlinear Least-Squares Adjustment

In nonlinear least-squares adjustment, the functional model relating observations \mathbf{l} to unknown parameters \mathbf{x} is expressed as \mathbf{l} = \mathbf{f}(\mathbf{x}) + \mathbf{v}, where \mathbf{f} is a nonlinear function and \mathbf{v} represents residuals assumed to follow a normal distribution with zero mean and known covariance. This formulation extends the linear parametric model by accommodating relationships in which observations depend nonlinearly on the parameters, such as distances or angles in surveying networks. To solve it, the model is linearized using a first-order Taylor series expansion around an approximate value \mathbf{x}_k: \mathbf{f}(\mathbf{x}_k + \Delta \mathbf{x}) \approx \mathbf{f}(\mathbf{x}_k) + \mathbf{A}_k \Delta \mathbf{x}, leading to the iterative observation equation \mathbf{A}_k \Delta \mathbf{x} = \mathbf{l} - \mathbf{f}(\mathbf{x}_k) + \mathbf{v}_k, where \mathbf{A}_k = \partial \mathbf{f}/\partial \mathbf{x} evaluated at \mathbf{x}_k is the Jacobian matrix.

The solution proceeds iteratively using the Gauss-Newton method: begin with an initial approximation \mathbf{x}_0, compute the linearized system, and solve the weighted normal equations \mathbf{A}_k^T \mathbf{P} \mathbf{A}_k \Delta \mathbf{x}_k = \mathbf{A}_k^T \mathbf{P} (\mathbf{l} - \mathbf{f}(\mathbf{x}_k)), where \mathbf{P} is the weight matrix derived from the stochastic model. Update the parameters as \mathbf{x}_{k+1} = \mathbf{x}_k + \Delta \mathbf{x}_k and repeat until convergence, typically assessed by criteria such as \|\Delta \mathbf{x}_k\| < \epsilon (e.g., \epsilon = 10^{-6}) or a stabilized sum of squared residuals. For cases with poor initial guesses, convergence can be improved using damped least squares, such as the Levenberg-Marquardt algorithm, which modifies the normal equations to (\mathbf{A}_k^T \mathbf{P} \mathbf{A}_k + \mu \mathbf{I}) \Delta \mathbf{x}_k = \mathbf{A}_k^T \mathbf{P} (\mathbf{l} - \mathbf{f}(\mathbf{x}_k)), where \mu > 0 is a damping factor adjusted dynamically to balance between Gauss-Newton steps and gradient descent.

Applications include satellite orbit determination, where nonlinear orbital equations are fitted to tracking data, and nonlinear surveying tasks such as resection, in which instrument positions are estimated from measured angles and distances to known points. In resection, for instance, the distance and angle relations are linearized and refined iteratively, yielding coordinates with millimeter-level precision in well-designed geodetic networks. Challenges arise from the potential for multiple local minima in the objective function, which requires robust initial approximations to avoid divergence or convergence to an incorrect solution, and from the computational expense of Jacobian evaluation, often performed numerically when analytical derivatives are unavailable.
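
A compact Gauss-Newton iteration for a planar distance resection is sketched below in Python/NumPy; the control-point coordinates, measured distances, equal weighting, and starting value are illustrative assumptions, and the loop follows the update and convergence criterion described above.

```python
# Sketch: Gauss-Newton solution of a planar resection from distances (illustrative data).
import numpy as np

controls = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])  # known points
d_obs = np.array([70.72, 70.69, 70.73, 70.70])   # measured distances to the unknown point
P = np.eye(len(d_obs))                           # equal weights for simplicity

x = np.array([40.0, 40.0])                       # initial approximation x_0
for _ in range(20):
    diff = x - controls                          # vectors from control points to the estimate
    d_calc = np.linalg.norm(diff, axis=1)        # f(x_k): computed distances
    A_k = diff / d_calc[:, None]                 # Jacobian of the distances w.r.t. (x, y)
    dl = d_obs - d_calc                          # misclosure l - f(x_k)
    dx = np.linalg.solve(A_k.T @ P @ A_k, A_k.T @ P @ dl)
    x = x + dx
    if np.linalg.norm(dx) < 1e-8:                # convergence criterion
        break
print(x)                                         # adjusted coordinates, near (50, 50)
```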

Comparison to Other Estimation Techniques

Least-squares adjustment serves as a special case of maximum likelihood estimation (MLE) when the errors in the observations are assumed to follow a Gaussian distribution with zero mean and constant variance. Under this normality assumption, minimizing the sum of squared residuals in least squares is mathematically equivalent to maximizing the likelihood function, yielding identical parameter estimates. However, MLE offers greater generality, as it can accommodate non-Gaussian error distributions, such as heavy-tailed or skewed distributions, for which least squares may produce biased or inefficient estimates.

In contrast to ordinary least squares, which assumes the explanatory variables are known without error and attributes all uncertainty to the response variables, total least squares addresses errors-in-variables models by accounting for noise in both the observations and the model structure. This makes total least squares particularly suitable for scenarios where measurement inaccuracies affect the independent variables, such as calibration problems or geophysical data fitting, potentially reducing biases that ordinary least squares cannot mitigate.

Bayesian estimation methods differ fundamentally from least squares by incorporating prior knowledge about the parameters to derive a full posterior distribution, rather than producing a single point estimate like the least-squares solution. In Bayesian approaches, the posterior reflects updated beliefs after observing data, enabling quantification of uncertainty through credible intervals, whereas least squares relies solely on the data for its minimum-variance unbiased point estimates under linear assumptions. This incorporation of priors allows Bayesian methods to handle complex dependencies or sparse data more flexibly, though at the cost of increased computational demands compared to the closed-form least-squares solution.

Least-squares adjustment excels in computational efficiency for linear models, offering a closed-form solution via the normal equations or orthogonal factorizations, which can be computed using stable algorithms like Cholesky factorization in O(n^3) time for n parameters. It achieves optimality as the best linear unbiased estimator (BLUE) under the Gauss-Markov theorem's assumptions of linearity, full column rank (no perfect multicollinearity), homoscedasticity, and uncorrelated errors, providing minimum variance among linear unbiased estimators; it coincides with the maximum likelihood solution when the errors are normally distributed. Nonetheless, its performance degrades in the presence of non-Gaussian noise, heteroscedasticity, or outliers, where alternatives like robust estimation or MLE may yield more reliable results.

Historically, prior to the early 1800s, parameter estimation in fields like astronomy and geodesy predominantly involved graphical or ad-hoc methods, such as manually drawing lines through plotted data points, which were subjective and lacked a rigorous mathematical foundation. The introduction of least squares by Adrien-Marie Legendre in 1805 and Carl Friedrich Gauss's probabilistic justification around 1809 marked a pivotal shift, establishing it as the dominant technique for handling overdetermined systems due to its objectivity and statistical grounding. This transition supplanted earlier approaches, solidifying least squares' role in modern estimation practices.

Error Propagation and Analysis

In least-squares adjustment, the uncertainties associated with the estimated parameters are quantified through the variance-covariance matrix of the adjusted parameters, \hat{Q}_{\hat{x}} = \hat{\sigma}^2 N^{-1}, where N is the normal matrix formed from the design and weight matrices and \hat{\sigma}^2 is the a posteriori variance factor estimated from the residuals after adjustment. This matrix captures the joint variability and correlations among the parameter estimates, providing a complete description of their uncertainty under the assumption of normally distributed errors. The diagonal elements of \hat{Q}_{\hat{x}} yield the variances of individual parameters, while the off-diagonal elements indicate covariances that reflect dependencies introduced by the adjustment process.

The variance factor \hat{\sigma}^2 is computed as \hat{\sigma}^2 = \frac{\mathbf{v}^T P \mathbf{v}}{r}, where \mathbf{v} is the vector of residuals, P is the weight matrix of the observations, and r is the redundancy (number of observations minus number of parameters). This factor assesses the overall fit of the model to the observations and is used to scale the cofactor matrix, ensuring that the propagated uncertainties reflect the precision actually achieved rather than a priori assumptions. In practice, \hat{\sigma}^2 is tested for consistency with the a priori variance factor to validate the model.

For geodetic applications involving two- or three-dimensional coordinates, the variance-covariance matrix \hat{Q}_{\hat{x}} is often visualized using error ellipses (or error ellipsoids in three dimensions), which represent confidence regions at a specified probability level, such as 95%. These ellipses are derived from the eigenvalues and eigenvectors of the position submatrix of \hat{Q}_{\hat{x}}: the semi-axis lengths are proportional to the square roots of the eigenvalues, scaled by the chi-squared quantile for the desired confidence level and degrees of freedom, and the orientation of the ellipse follows the eigenvector directions. The resulting figure illustrates the directional uncertainty of the coordinate estimates, with near-circular shapes indicating isotropic precision and elongated ones highlighting anisotropic errors due to network geometry.

Reliability analysis in least-squares adjustment includes tests to detect model inconsistencies or blunders. The overall (global) model test compares the statistic \mathbf{v}^T P \mathbf{v} / \sigma_0^2 (equivalently r \hat{\sigma}^2 / \sigma_0^2, with \sigma_0^2 the a priori variance factor) against a central chi-squared distribution with r degrees of freedom under the null hypothesis of no gross errors; rejection at a chosen significance level prompts further investigation. For outlier detection, Baarda's data snooping sequentially tests each standardized residual w_i = \frac{v_i}{\hat{\sigma} \sqrt{r_{ii}}}, where r_{ii} is the redundancy number for observation i, against critical values adjusted for multiple testing; if an outlier is identified and removed, the adjustment is repeated. This method also quantifies internal reliability by assessing the minimal detectable bias for each observation.

Uncertainties in derived quantities, such as transformed coordinates or network-derived distances, are propagated linearly from the adjusted parameters using the law of propagation of variances: for a function \mathbf{g}(\hat{\mathbf{x}}), the approximate variance is \hat{\sigma}_g^2 \approx \left( \frac{\partial \mathbf{g}}{\partial \mathbf{x}} \right) \hat{Q}_{\hat{x}} \left( \frac{\partial \mathbf{g}}{\partial \mathbf{x}} \right)^T, evaluated at \hat{\mathbf{x}}. This first-order approximation assumes small errors and is particularly useful in geodetic networks for computing variances of functions like coordinate differences or azimuths. Sensitivity analysis evaluates the impact of individual observations on the overall adjustment reliability, in particular how removing an observation affects \hat{Q}_{\hat{x}}.
The covariance matrix after removal of the i-th observation follows from the Sherman-Morrison identity as \hat{Q}_{\hat{x}, -i} = \hat{Q}_{\hat{x}} + \frac{\hat{Q}_{\hat{x}} \mathbf{a}_i \mathbf{a}_i^T \hat{Q}_{\hat{x}}}{\sigma_i^2 - \mathbf{a}_i^T \hat{Q}_{\hat{x}} \mathbf{a}_i}, where \mathbf{a}_i is the i-th row of the design matrix (as a column vector) and \sigma_i^2 is the variance of that observation; observations with low redundancy (r_{ii} \approx 0) make the denominator small and lead to large increases in the parameter variances, indicating critical dependencies. This analysis highlights vulnerable observations and informs network design to enhance robustness.
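
The error-ellipse construction described above can be sketched in a few lines of Python; the 2 × 2 covariance block is a hypothetical example, and the 95% scaling uses the chi-squared quantile with two degrees of freedom.

```python
# Sketch: 95% error ellipse from a 2x2 position covariance block (hypothetical values).
import numpy as np
from scipy.stats import chi2

Q_xy = np.array([[4.0e-6, 1.5e-6],
                 [1.5e-6, 2.0e-6]])              # variance-covariance of (E, N) in m^2

eigvals, eigvecs = np.linalg.eigh(Q_xy)          # eigenvalues ascending, eigenvectors in columns
scale = np.sqrt(chi2.ppf(0.95, df=2))            # ~2.45 for 95% confidence in two dimensions
semi_minor, semi_major = scale * np.sqrt(eigvals)
theta = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))   # orientation of the major axis
print(semi_major, semi_minor, theta)
```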
