
Quantile regression

Quantile regression is a statistical method in econometrics and data analysis that estimates conditional quantiles of a response variable as linear functions of one or more predictor variables, extending classical least-squares regression, which focuses solely on the conditional mean, by providing insight into the full distributional impact of predictors across different points of the response distribution. Formally introduced by Roger Koenker and Gilbert Bassett in 1978, it solves an optimization problem that minimizes an asymmetrically weighted sum of absolute deviations, using a check function (a tilted absolute value) to target specific quantiles such as the median (0.5 quantile) or others like the 0.1 or 0.9 quantiles. This approach, computable via linear programming, yields robust estimates even in the presence of outliers or non-normal errors.

One key advantage of quantile regression over ordinary least squares is its ability to reveal heterogeneity in the effects of predictors at different parts of the outcome distribution, such as stronger impacts at lower quantiles in wage studies or varying tail exposures in financial models. It does not require assumptions of homoscedasticity or normality, making it particularly suitable for skewed data or scenarios with heavy-tailed distributions, and it can accommodate non-linear relationships through nonparametric and semiparametric extensions. For instance, in demand analysis, it has been used to model Engel curves in which food expenditure responses differ across income levels, highlighting how mean-focused models can obscure such variation.

Quantile regression finds broad application in labor economics, where it is used to analyze wage inequality, the effects of education on earnings, and human capital accumulation; in health sciences, for studying factors influencing infant birthweights and treatment outcomes across risk levels; and in finance, for value-at-risk calculations and risk management. Environmental research employs it to assess effects on different quantiles of ecological responses, while large-scale data contexts leverage computational advances for high-dimensional settings. Since its inception, the method has evolved alongside software implementations in languages such as R and Python, facilitating its adoption in interdisciplinary fields.

Fundamentals of Quantiles

Quantiles of Random Variables

In probability theory, the τ-quantile of a random variable Y, for \tau \in (0,1), is defined as the infimum of the set \{x \in \mathbb{R} : P(Y \leq x) \geq \tau\}. This value, often denoted Q_Y(\tau) or simply q_\tau, represents the threshold below which at least a proportion \tau of the distribution's probability mass lies. Key properties of quantiles include monotonicity, where Q_Y(\tau_1) \leq Q_Y(\tau_2) for 0 < \tau_1 < \tau_2 < 1, ensuring that higher-order quantiles are at least as large as lower-order ones. The median corresponds to the 0.5-quantile, Q_Y(0.5), which divides the distribution into two equal probability halves. Uniqueness holds when the cumulative distribution function (CDF) F_Y is strictly increasing, in which case the quantile is the unique solution to F_Y(x) = \tau; otherwise, for distributions with flat CDF segments (e.g., discrete cases), the quantile may form an interval. The quantile function Q_Y(\tau) = F_Y^{-1}(\tau) provides an intuitive inverse perspective on the CDF, mapping probabilities back to values on the real line.

For the uniform distribution on [a, b], Q_Y(\tau) = a + (b - a)\tau, a linear increase from a to b as \tau rises. In the standard normal distribution N(0,1), Q_Y(\tau) is symmetric around 0 with no closed-form expression, but values like the 0.975-quantile, approximately 1.96, reflect the concentration of probability near the mean. A concrete example is the exponential distribution with rate parameter \lambda > 0, whose CDF is F_Y(x) = 1 - e^{-\lambda x} for x \geq 0. The τ-quantile is Q_Y(\tau) = -\frac{1}{\lambda} \ln(1 - \tau), which starts at 0 when \tau = 0 and grows without bound as \tau approaches 1, illustrating the distribution's positive skew and the increasing spread of higher quantiles. This formula underscores how quantiles capture tail behavior: for instance, the median at \tau = 0.5 equals \frac{\ln 2}{\lambda} \approx \frac{0.693}{\lambda}.
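
To make the formula concrete, the closed-form expression can be checked against base R's built-in quantile function (the rate value is an arbitrary illustration):

# Verify Q(tau) = -log(1 - tau) / lambda for the exponential distribution
lambda <- 2                      # illustrative rate parameter
tau <- c(0.1, 0.5, 0.9)
cbind(formula = -log(1 - tau) / lambda,
      builtin = qexp(tau, rate = lambda))
# Both columns agree; the median is log(2) / lambda, about 0.347 here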

Sample Quantiles

Sample quantiles provide empirical estimates of population quantiles derived from observed data. For a sample of size n, the sample \tau-quantile is defined through the order statistics X_{(1)} \leq X_{(2)} \leq \cdots \leq X_{(n)}. A common estimator is the k-th order statistic with k = \lceil n \tau \rceil, selecting the value that positions a proportion \tau of the data below it. This approach ties the estimate directly to the empirical distribution without assuming an underlying model.

Several refined estimators address limitations of the basic method, particularly across varying sample sizes. The Harrell-Davis estimator computes the \tau-quantile as a weighted average of all order statistics, using weights derived from the incomplete beta function to emphasize observations near the target quantile. It offers superior efficiency for continuous distributions and small samples (n < 50), reducing mean squared error compared to single order statistics, but it lacks robustness to outliers due to its zero breakdown point. In contrast, the Hyndman-Fan framework catalogs nine order-statistic-based definitions, with Type 7 (the default in R, based on linear interpolation of the order statistics) recommended for general continuous data, as it performs well across sample sizes and has small bias in large samples (n > 100). Type 8, a slight variant, is preferred for discrete data to avoid overestimation at boundaries; differences among the types are negligible for large n but pronounced in small samples, where the choice of definition affects precision.

To illustrate computation using the basic order-statistic and interpolation approaches, consider a small dataset of n = 10 observations: 3, 1, 8, 5, 2, 7, 4, 9, 6, 10. First, sort the data: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. For the median (\tau = 0.5), k = \lceil 10 \times 0.5 \rceil = 5, yielding X_{(5)} = 5; alternatively, averaging the 5th and 6th values gives (5 + 6)/2 = 5.5 under Hyndman-Fan Type 7. For the first quartile (\tau = 0.25), the Type 7 position is h = (n-1) \times 0.25 + 1 = 3.25, so interpolate as X_{(3)} + 0.25 (X_{(4)} - X_{(3)}) = 3 + 0.25(4 - 3) = 3.25. The third quartile (\tau = 0.75) at position 7.75 yields X_{(7)} + 0.75 (X_{(8)} - X_{(7)}) = 7 + 0.75(8 - 7) = 7.75. These steps show how sorting enables direct selection or linear interpolation at non-integer positions, as in the code below.

Sample quantiles possess desirable statistical properties that underpin their reliability. They are consistent estimators, converging in probability to the true \tau-quantile as n \to \infty under mild conditions on the underlying distribution. For finite n, however, they exhibit bias, which is typically small for central quantiles like the median but increases toward the tails, depending on the underlying distribution; the variance is approximately \tau(1-\tau)/(n f(Q(\tau))^2), where f is the density at the quantile, and decreases with n. These properties ensure sample quantiles become increasingly accurate with larger datasets, though refined estimators like Harrell-Davis can mitigate finite-sample bias and variance in specific scenarios.
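
The hand computations above can be reproduced with base R's quantile() function, which implements the nine Hyndman-Fan definitions via its type argument:

x <- c(3, 1, 8, 5, 2, 7, 4, 9, 6, 10)
quantile(x, 0.5, type = 1)                 # basic order statistic: X_(5) = 5
quantile(x, c(0.25, 0.5, 0.75), type = 7)  # interpolation: 3.25, 5.5, 7.75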

The Quantile Regression Model

Conditional Quantiles

Conditional quantiles extend the concept of quantiles to the distribution of a response variable Y given a set of covariates X = x, providing a framework for analyzing how the distribution of Y varies across different values of the covariates. The conditional \tau-quantile, for \tau \in (0,1), is formally defined as Q_{Y|X}(\tau \mid x) = \inf \left\{ y : P(Y \leq y \mid X = x) \geq \tau \right\}, which identifies the smallest value y such that the conditional probability of Y being at most y given X = x is at least \tau. This definition captures the \tau-th percentile of the conditional distribution of Y at a specific point x in the covariate space.

Intuitively, conditioning on covariates X allows the distribution of Y to shift, spread, or change shape depending on the values of X, enabling researchers to examine different segments of the conditional distribution beyond just the center. In scenarios where the effect of X on Y is heterogeneous across the distribution, conditional quantiles reveal how extreme values or tails behave differently from the median or mean, which is particularly useful for understanding variability in subgroups defined by the covariates. This approach contrasts with unconditional quantiles, which arise as a special case when no covariates are present and thus describe the marginal distribution of Y.

An illustrative example is the relationship between height and weight in a population, with height serving as the covariate predicting weight. While the conditional median of weight might increase roughly linearly with height, the 0.9-quantile (90th percentile) of weight could rise more steeply for taller individuals, reflecting greater variability or risk of higher weights at the upper tail, as in studies of weight among young adults. This highlights how conditional quantiles can uncover differential impacts, such as stronger associations at the upper quantiles than along the median line.

The conditional quantile function Q_{Y|X}(\tau \mid x) is the generalized inverse of the conditional cumulative distribution function (CDF) F_{Y|X}(y \mid x) = P(Y \leq y \mid X = x), such that Q_{Y|X}(\tau \mid x) = F_{Y|X}^{-1}(\tau \mid x). This inverse relationship means that estimating the full set of conditional quantiles for varying \tau effectively reconstructs the entire conditional distribution of Y given X = x, offering a complete distributional perspective.
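
A short numerical sketch in R makes this concrete for a hypothetical location-scale model Y = 1 + 2x + (1 + 0.5x)\varepsilon with \varepsilon \sim N(0,1) (all constants are illustrative), in which the conditional quantiles are lines whose slopes depend on \tau:

# Q(tau | x) = 1 + 2x + (1 + 0.5x) * qnorm(tau), so the slope is 2 + 0.5 * qnorm(tau)
cond_q <- function(tau, x) 1 + 2 * x + (1 + 0.5 * x) * qnorm(tau)
x <- 0:4
rbind(tau_0.1 = cond_q(0.1, x),   # flatter line in the lower tail
      tau_0.5 = cond_q(0.5, x),   # the conditional median
      tau_0.9 = cond_q(0.9, x))   # steeper line in the upper tail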

Model Formulation

The standard linear quantile regression model specifies the conditional τ-quantile of the response variable Y given covariates X = x as Q_{Y|X}(\tau | x) = x^T \beta(\tau), where \beta(\tau) is a vector of coefficients that varies with the quantile level \tau \in (0,1). This parametric form assumes a linear relationship between the covariates and the conditional quantile, allowing an analysis of how different quantiles of the response shift with changes in the predictors.

Estimation proceeds by minimizing an objective function based on the check loss, or pinball loss, defined as \rho_\tau(u) = u (\tau - I(u < 0)), where I(\cdot) is the indicator function. For a sample of n independent observations \{(y_i, x_i)\}_{i=1}^n, the τ-specific coefficient vector is obtained as \hat{\beta}(\tau) = \arg\min_{\beta \in \mathbb{R}^p} \sum_{i=1}^n \rho_\tau(y_i - x_i^T \beta). This loss function is asymmetric: for τ > 0.5, positive residuals (observations above the fitted quantile) receive weight τ while negative residuals receive weight 1 - τ, pulling the fit toward the upper tail, and the weighting reverses for τ < 0.5, reflecting the quantile's position in the distribution.

This framework parallels ordinary least squares (OLS) regression, which estimates the conditional mean by minimizing the sum of squared residuals \sum_i (y_i - x_i^T \beta)^2. In quantile regression, the symmetric squared error is replaced by an asymmetric absolute error via the check loss, enabling estimation across the full conditional distribution rather than just the mean; when τ = 0.5, the problem reduces to median regression by least absolute deviations. The model assumes independence across observations but imposes no homoscedasticity (constant variance) or normality requirements on the errors, making it robust to heteroscedasticity and the non-normal distributions common in real data.
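
A minimal R sketch makes the asymmetric weighting explicit (the residual values are arbitrary):

# Check (pinball) loss: rho_tau(u) = u * (tau - I(u < 0))
rho <- function(u, tau) u * (tau - (u < 0))
u <- c(-2, -1, 0, 1, 2)
rbind(tau_0.25 = rho(u, 0.25),  # negative residuals weighted by 0.75, positive by 0.25
      tau_0.50 = rho(u, 0.50),  # symmetric: half the absolute error
      tau_0.75 = rho(u, 0.75))  # positive residuals weighted by 0.75, negative by 0.25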

Estimation Methods

Linear Programming Approach

The linear programming approach reformulates the quantile regression estimation problem, originally posed as minimizing the sum of check losses \rho_\tau(u) = u(\tau - I(u < 0)) over the sample, into an equivalent linear program that can be solved efficiently. The reformulation introduces non-negative slack variables u_i^+ and u_i^- representing the positive and negative parts of the residuals, respectively. The optimization problem is then to minimize \sum_{i=1}^n \left( \tau u_i^+ + (1 - \tau) u_i^- \right) subject to the constraints y_i = x_i^T \beta + u_i^+ - u_i^- for i = 1, \dots, n, and u_i^+ \geq 0, u_i^- \geq 0 for all i. This setup transforms the piecewise-linear objective into a standard-form linear program with k + 1 + 2n variables (where k is the number of regressors) and n equality constraints, allowing the use of established linear programming solvers.

The linear program can be solved with the simplex algorithm, adapted to this context by Barrodale and Roberts, which iteratively pivots through basic feasible solutions until optimality is reached. For larger datasets, interior-point methods, such as those developed by Portnoy and Koenker, offer superior scalability by traversing the interior of the feasible region via barrier functions and Newton steps, achieving computation times comparable to ordinary least squares for sample sizes exceeding 100,000 observations while maintaining polynomial-time complexity. In practice these methods scale roughly linearly with the number of observations after preprocessing to reduce dimensionality, making them suitable for datasets of up to millions of points. To illustrate, consider a toy dataset with n = 5 observations, an intercept, one regressor x, and \tau = 0.5 (median regression):
 i | x_i | y_i
 1 |  0  |  1
 2 |  0  |  2
 3 |  1  |  3
 4 |  1  |  3
 5 |  1  |  4
The linear program is to minimize \sum_{i=1}^5 (0.5 u_i^+ + 0.5 u_i^-) subject to:
  • 1 = \beta_0 + 0 \cdot \beta_1 + u_1^+ - u_1^-
  • 2 = \beta_0 + 0 \cdot \beta_1 + u_2^+ - u_2^-
  • 3 = \beta_0 + 1 \cdot \beta_1 + u_3^+ - u_3^-
  • 3 = \beta_0 + 1 \cdot \beta_1 + u_4^+ - u_4^-
  • 4 = \beta_0 + 1 \cdot \beta_1 + u_5^+ - u_5^-
with all u_i^+, u_i^- \geq 0. Applying the simplex method (or an equivalent solver) yields an optimal solution \beta_0 = 1, \beta_1 = 2, with u_1^+ = u_1^- = u_3^+ = u_3^- = u_4^+ = u_4^- = 0, u_2^+ = 1, u_2^- = 0, u_5^+ = 1, u_5^- = 0, and objective value 1 (corresponding to a sum of absolute residuals of 2). This approach provides an exact solution in finite samples, as linear programming guarantees global optimality at a vertex of the feasible polyhedron.
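
The same LP can be set up directly in R; the sketch below assumes the lpSolve package (any LP solver would do) and splits the free coefficients into nonnegative parts, since lpSolve requires nonnegative variables:

library(lpSolve)
x <- c(0, 0, 1, 1, 1)
y <- c(1, 2, 3, 3, 4)
n <- length(y); tau <- 0.5
# Columns: beta0+, beta0-, beta1+, beta1-, u_1..u_5 (+), u_1..u_5 (-)
obj <- c(0, 0, 0, 0, rep(tau, n), rep(1 - tau, n))
A <- cbind(1, -1, x, -x, diag(n), -diag(n))   # y_i = beta0 + beta1*x_i + u_i+ - u_i-
sol <- lp("min", obj, A, rep("=", n), y)
beta0 <- sol$solution[1] - sol$solution[2]
beta1 <- sol$solution[3] - sol$solution[4]
c(beta0 = beta0, beta1 = beta1, objective = sol$objval)
# One optimal vertex is beta0 = 1, beta1 = 2 with objective 1; the solver may
# return a different optimum, since the median fit is not unique for these data.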

Alternative Computational Techniques

Alternative approaches to the linear programming formulation include smoothed quantile regression, which approximates the non-differentiable check function with a smooth convex surrogate (e.g., a Huber-type loss), enabling the application of differentiable optimization methods such as gradient descent or quasi-Newton algorithms. Smoothing also facilitates computation of standard errors and is particularly useful for multiple-quantile estimation and nonparametric extensions. For large-scale and high-dimensional data, subgradient-based methods and stochastic approximation techniques, such as stochastic gradient descent (SGD) applied to the pinball loss, provide scalable approximate solutions with linear time complexity in the number of observations; they are especially prevalent in machine learning applications.
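
A toy stochastic-subgradient implementation in R illustrates the idea for a single quantile; the data-generating process and stepsize schedule are illustrative assumptions rather than a tuned algorithm:

set.seed(1)
tau <- 0.9; n <- 50000
X <- cbind(1, rnorm(n))
y <- X %*% c(1, 2) + rnorm(n)          # true beta(0.9) = (1 + qnorm(0.9), 2)
beta <- c(0, 0)
for (t in 1:n) {                       # one randomly drawn observation per step
  i <- sample.int(n, 1)
  u <- y[i] - sum(X[i, ] * beta)
  g <- -X[i, ] * (tau - (u < 0))       # subgradient of the pinball loss
  beta <- beta - (0.5 / sqrt(t)) * g   # decreasing stepsize
}
beta                                   # roughly c(2.28, 2) after one pass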

Asymptotic and Distributional Properties

Consistency and Asymptotic Normality

Under standard assumptions, the quantile regression estimator \hat{\beta}(\tau) is consistent for the true parameter \beta(\tau), meaning \hat{\beta}(\tau) \xrightarrow{p} \beta(\tau) as the sample size n \to \infty. This holds for independent and identically distributed (i.i.d.) errors with a distribution function F that is absolutely continuous and has a positive density at the conditional \tau-quantile Q(\tau \mid x), along with the design matrix satisfying full column rank conditions, such as n^{-1} X^T X \to_p \Omega with \Omega positive definite. Strong consistency, \hat{\beta}(\tau) \to \beta(\tau) almost surely, follows under similar i.i.d. conditions augmented by moment restrictions that prevent outliers from dominating the objective function.

The asymptotic normality of the estimator is given by \sqrt{n} \left( \hat{\beta}(\tau) - \beta(\tau) \right) \xrightarrow{d} N\left( 0, \tau(1-\tau) D^{-1} \Omega D^{-1} \right), where D = E\left[ f(Q(\tau \mid x)) \, x x^T \right], with f denoting the conditional density of the error term evaluated at its \tau-quantile, and \Omega = E\left[ x x^T \right]. This result requires the i.i.d. setup above, continuity of F with bounded and strictly positive density f in a neighborhood of Q(\tau \mid x), and the design condition ensuring \Omega > 0. The form parallels the asymptotic distribution of ordinary sample quantiles in the location model, but the density term entering D (the reciprocal of the so-called sparsity function) yields a sandwich covariance that accounts for potential heteroscedasticity. Joint normality extends to estimators at multiple quantiles \tau_1, \dots, \tau_M.

A Bahadur representation provides a linear approximation that holds uniformly over compact subsets of \tau \in (0,1), expressing \hat{\beta}(\tau) - \beta(\tau) = O_p(n^{-1/2}) uniformly in \tau, under conditions including i.i.d. observations, conditional densities bounded away from zero with f(Q(\tau \mid x)) \geq c > 0 on the relevant interval of \tau, and moment conditions on the covariates and errors to control the remainder term. Specifically, \sqrt{n} \left( \hat{\beta}(\tau) - \beta(\tau) \right) = D^{-1} \frac{1}{\sqrt{n}} \sum_{i=1}^n x_i \left( \tau - I(\epsilon_i < 0) \right) + o_p(1) uniformly in \tau, facilitating results such as uniform consistency and weak convergence of empirical processes based on regression quantiles. This representation treats the quantile estimator as an M-estimator and relies on strong approximation techniques for the score process.
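
The normal approximation is easy to check by simulation; the following R sketch (with arbitrary design and sample sizes) compares the Monte Carlo standard deviation of the median-regression slope against the asymptotic formula, which simplifies here because \Omega is the identity:

library(quantreg)
set.seed(11)
n <- 500; R <- 300; tau <- 0.5
slopes <- replicate(R, {
  x <- rnorm(n)
  y <- 1 + 2 * x + rnorm(n)             # i.i.d. N(0,1) errors, so f(Q(tau)) = dnorm(0)
  coef(rq(y ~ x, tau = tau))[2]
})
sd(slopes)                              # Monte Carlo standard deviation
sqrt(tau * (1 - tau) / dnorm(0)^2 / n)  # asymptotic value, about 0.056 here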

Equivariance Properties

Quantile regression estimators exhibit desirable equivariance properties under affine transformations of the data, ensuring that the estimated coefficients transform predictably when the response and predictor variables are rescaled or shifted. These properties include location-scale equivariance and reparameterization equivariance, which contribute to the robustness and interpretability of the method.

Location-scale equivariance describes the behavior of the estimator under shifts and scalings of the response variable Y. If the transformed response is Y^* = a + b Y with b > 0 and the design includes an intercept, the slope estimates scale by b and the intercept becomes a plus b times the original intercept, so the fitted quantiles satisfy Q_{Y^*}(\tau \mid X) = a + b \, Q_Y(\tau \mid X). For b < 0, the property adjusts to \hat{\beta}^*(\tau) = a e_1 + b \hat{\beta}(1 - \tau), where e_1 selects the intercept, reflecting the reversal of the quantile order under a sign change. This holds because quantiles are equivariant to increasing affine transformations. A proof sketch for the scale case with b > 0 uses the positive homogeneity of the check function: \sum_i \rho_\tau(b Y_i - X_i^T (b \beta)) = b \sum_i \rho_\tau(Y_i - X_i^T \beta), so scaling the candidate coefficients by b leaves the argmin unchanged up to that scaling. For a pure location shift (b = 1), adding a constant a to Y simply adds a to the intercept component of \beta, since the check-loss contributions are unchanged once the intercept absorbs the shift.

Reparameterization equivariance ensures invariance under nonsingular linear transformations of the predictors. For a nonsingular matrix A, if X^* = X A, then \hat{\beta}^*(\tau) = A^{-1} \hat{\beta}(\tau), preserving the fitted conditional quantiles X^{*T} \hat{\beta}^*(\tau) = X^T \hat{\beta}(\tau). This arises because the transformation reparameterizes the objective and constraints equivalently, maintaining the optimal solution up to the mapping; substituting the transformed design into the minimization shows that the subgradient condition for optimality transforms linearly, yielding the inverse relationship for \beta^*. These equivariances extend naturally to simultaneous transformations of Y and X, as in the location-scale case above.

As an illustrative example, consider scaling the response by 2, so Y^* = 2Y. The slope estimates double at every \tau, \hat{\beta}_s^*(\tau) = 2 \hat{\beta}_s(\tau), and the intercept doubles as well when no location shift is applied. In a model where Y is log-wages, for instance, a change of units in Y proportionally adjusts the covariate effects on the conditional quantiles, preserving relative interpretations.
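
The scale case is easy to verify numerically with the quantreg package's engel data:

library(quantreg)
data(engel)
f1 <- rq(foodexp ~ income, tau = 0.25, data = engel)
f2 <- rq(2 * foodexp ~ income, tau = 0.25, data = engel)  # Y* = 2Y
all.equal(coef(f2), 2 * coef(f1))                         # TRUE: both coefficients double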

Inference and Diagnostics

Coefficient Interpretation

In quantile regression, the slope coefficient \beta_j(\tau) associated with a covariate X_j at a specified quantile \tau \in (0,1) quantifies the change in the \tau-th conditional quantile of the response Y resulting from a one-unit increase in X_j, holding all other covariates fixed. This interpretation extends the classical regression framework by describing location shifts at particular points of the conditional distribution rather than shifts in the conditional mean, enabling the examination of heterogeneous marginal effects across the outcome distribution. The intercept term \beta_0(\tau) similarly denotes the \tau-th conditional quantile of Y when all covariates equal zero, serving as a baseline location parameter that varies with \tau.

A key feature of quantile regression coefficients is their dependence on \tau, which allows detection of varying covariate impacts along the conditional distribution of Y. For instance, in econometric models of wage determination, the coefficient \beta_{\text{education}}(\tau) at \tau = 0.9 measures the marginal return to an additional year of education for individuals at the upper end of the conditional wage distribution (high-wage earners), often revealing premium effects for skilled workers, whereas at \tau = 0.1 it captures the return for those at the lower end (low-wage workers), where returns may be smaller or even negative due to limited skill utilization. This \tau-varying structure provides policy-relevant insights, such as how education policies might disproportionately benefit certain segments of the labor market, highlighting inequalities in returns across socioeconomic groups.

However, the \tau-dependence of coefficients can produce non-monotonic effects, where the marginal impact of a covariate increases or decreases irregularly across quantiles, potentially resulting in crossing quantile functions that imply an invalid (non-monotonic) conditional distribution. Such crossings arise because the individual quantile regressions are estimated separately; while asymptotic theory supports inference on \beta(\tau) for each fixed \tau, coefficients should be interpreted in the context of the full distributional profile to avoid misleading conclusions about overall effects.
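
A fully synthetic wage-style simulation in R (all numbers invented for illustration) shows how a schooling coefficient can grow with \tau when education affects both location and dispersion:

library(quantreg)
set.seed(3)
n <- 2000
educ <- sample(8:20, n, replace = TRUE)
# Education shifts both location and scale, so its effect grows with tau:
logwage <- 0.5 + 0.05 * educ + (0.2 + 0.02 * educ) * rnorm(n)
fit <- rq(logwage ~ educ, tau = c(0.1, 0.5, 0.9))
coef(fit)  # educ coefficient rises from about 0.024 (tau = 0.1) to 0.076 (tau = 0.9)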

Goodness-of-Fit Measures

Evaluating goodness-of-fit in quantile regression requires measures adapted to the quantile loss function \rho_\tau(u) = u(\tau - \mathbb{I}(u < 0)) rather than to squared residuals as in ordinary least squares. A primary analog of the R^2 statistic is R^1(\tau) = 1 - \frac{\sum_{i=1}^n \rho_\tau(y_i - \hat{q}_\tau(x_i))}{\sum_{i=1}^n \rho_\tau(y_i - \hat{q}_\tau)}, where \hat{q}_\tau(x_i) is the predicted conditional \tau-quantile given the covariates and \hat{q}_\tau is the unconditional sample \tau-quantile (the median for \tau = 0.5). This local measure quantifies the relative reduction in quantile loss achieved by the model over the intercept-only benchmark, ranging from 0 (no improvement) to 1 (perfect fit). The raw loss reduction V_n^0(\tau) - V_n(\tau), where V_n(\tau) is the minimized objective for the fitted model and V_n^0(\tau) for the null model, provides a direct assessment of fit improvement at each \tau. Weighted versions extend this to global measures, such as the integrated statistic \int_0^1 R^1(\tau) \, d\tau, which averages fit across the distribution under uniform weighting, or density-weighted variants that emphasize central quantiles; these global metrics facilitate model comparison by summarizing distributional fit without assuming symmetry.

For specification testing, a Wald statistic W_n(\tau) evaluates joint hypotheses on the coefficients at specific \tau, using estimated covariance matrices to assess model adequacy against alternatives such as omitted variables. To detect serial dependence, the quantile F-test (QF test) examines autocorrelation in quantile residuals via an auxiliary regression of residuals on their lags, with statistic \mathrm{QF} = (T - p - k) \frac{\sum_{t=1}^T (\tilde{v}_t^2 - \hat{v}_t^2)}{\sum_{t=1}^T \hat{v}_t^2}, asymptotically \chi^2_p under the null of independence, where p is the number of lags, k the number of explanatory variables, \tilde{v}_t the restricted residuals, and \hat{v}_t the unrestricted residuals; it performs robustly across \tau without the size distortions common in Lagrange multiplier alternatives. The coefficient estimates feed these diagnostics through the residuals they generate.

In simulated examples, R^1(\tau) varies with \tau. For a Gaussian location-shift model (y_i = x_i + u_i, u_i \sim N(0,1), x_i \sim N(5,1), n = 100), R^1(\tau) is approximately constant around 0.5 across \tau, indicating uniform explanatory power. In contrast, a scale-shift model (y_i = (x_i + 0.25 x_i^2) u_i, u_i \sim N(0, 1/100), x_i \sim N(3,1)) yields R^1(0.5) \approx 0.1 (minimal at the median) but R^1(0.9) \approx 0.4 (stronger in the upper tail), reflecting the heteroscedasticity; the sketch below shows how R^1(\tau) is computed in practice.
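
Computing R^1(\tau) only requires fitting the model and the intercept-only benchmark at the same \tau; a brief sketch with the quantreg engel data:

library(quantreg)
data(engel)
rho <- function(u, tau) u * (tau - (u < 0))
R1 <- function(tau) {
  fit  <- rq(foodexp ~ income, tau = tau, data = engel)  # full model
  fit0 <- rq(foodexp ~ 1, tau = tau, data = engel)       # intercept-only benchmark
  1 - sum(rho(resid(fit), tau)) / sum(rho(resid(fit0), tau))
}
sapply(c(0.1, 0.25, 0.5, 0.75, 0.9), R1)                 # local fit at each tau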

Extensions and Variants

Censored Quantile Regression

Censored quantile regression addresses scenarios where the response variable is subject to censoring, such as left-censoring in economic data where values below a known threshold are not observed and are instead recorded at the threshold. In the fixed censoring framework, the observed outcome is Y = \max(Y^*, \gamma), where Y^* = X^\top \beta(\tau) + \varepsilon is the latent variable with the conditional \tau-quantile of \varepsilon given X equal to zero, and \gamma is a known censoring point. Powell's censored quantile regression (CQR) estimator exploits the equivariance of quantiles to monotone transformations and minimizes the check loss evaluated at the censored quantile function, \hat{\beta}(\tau) = \arg\min_{\beta} \sum_{i=1}^n \rho_\tau\left(Y_i - \max(\gamma, X_i^\top \beta)\right), where \rho_\tau(u) = u(\tau - I(u < 0)) is the check function; in effect, only observations for which the fitted quantile exceeds the censoring point carry information about \beta(\tau).

Key assumptions for the CQR estimator mirror those of standard quantile regression but are adapted to censoring: the errors \varepsilon are independent and identically distributed (i.i.d.) conditional on X, with a positive density around zero, and the design matrix satisfies a full rank condition on the subsample where the quantile lies above the censoring point. Identification relies on the monotonicity of the conditional quantile function and the condition that the \tau-quantile of the latent variable exceeds the censoring point with positive probability, ensuring P(Y^* > \gamma \mid X) > 1 - \tau on part of the covariate space, which prevents the estimator from collapsing to the censoring point.

An illustrative application appears in analyses of wage data, where observed wages are censored below a minimum threshold (e.g., zero or a minimum wage). For instance, Buchinsky applied quantile regression methods to U.S. Census data from 1963–1987 to estimate quantiles such as the 0.75 quantile of log wages, revealing heterogeneous returns to education and experience across the wage distribution, with higher quantiles showing stronger effects for skilled workers than mean-based models suggest.

Asymptotic properties of the CQR estimator include consistency and asymptotic normality specific to the censored setting: \sqrt{n} (\hat{\beta}(\tau) - \beta(\tau)) \xrightarrow{d} N(0, \tau(1-\tau) D^{-1} / f(0)^2), where D involves the expected outer product of X over observations with the fitted quantile above the censoring point, and f(0) is the density of \varepsilon at zero; the covariance matrix is consistently estimable under i.i.d. errors using that subsample. These properties hold under the identification condition and ensure valid inference for quantiles above the censoring threshold, distinguishing the censored case from the uncensored one by relying effectively on the non-censored region of the data.
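
Because the CQR objective is nonconvex, one simple (if crude) computational route is direct numerical minimization from one or more starting values; the R sketch below applies optim to simulated left-censored data, with all parameter values illustrative:

set.seed(1)
n <- 500; tau <- 0.75; gamma <- 2          # known censoring point
x <- runif(n, 0, 4)
ystar <- 1 + 1.5 * x + rnorm(n)            # latent outcome
y <- pmax(ystar, gamma)                    # observed: Y = max(Y*, gamma)
rho <- function(u, t) u * (t - (u < 0))
obj <- function(b) sum(rho(y - pmax(gamma, b[1] + b[2] * x), tau))
fit <- optim(c(0, 1), obj)                 # Nelder-Mead; use several starts in practice
fit$par                                    # roughly (1 + qnorm(0.75), 1.5) = (1.67, 1.5)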

Bayesian and Machine Learning Methods

Bayesian quantile regression extends the classical framework by placing prior distributions on the regression coefficients \beta(\tau) for a specified quantile \tau, often using independent normal priors to reflect uncertainty in the parameters. The likelihood is modeled using the asymmetric Laplace distribution (ALD), which naturally accommodates the quantile-specific check function and allows for asymmetric error mass around the \tau-th quantile. The posterior distribution of \beta(\tau) is then explored via Markov chain Monte Carlo (MCMC) methods, such as Gibbs sampling, which decomposes the sampling into efficient conditional updates for the parameters and the latent variables introduced by a mixture representation of the ALD. This approach, formalized in the seminal work of Yu and Moyeed, enables full posterior inference, including credible intervals that incorporate prior information and provide probabilistic statements about parameter locations. In small datasets, Bayesian credible intervals for \beta(\tau) often yield narrower and more stable uncertainty bounds than frequentist confidence intervals, particularly when priors regularize the estimates; this advantage is evident in applications like the Engel food-expenditure data, where credible bands can capture tail behavior without the excessive widening induced by resampling variability.

Machine learning methods have further advanced quantile regression by leveraging ensemble and neural architectures for flexible, non-parametric estimation. Quantile regression forests adapt random forests to predict conditional quantiles by growing an ensemble of trees and estimating the \tau-th quantile from the weighted empirical distribution of observations in the leaf nodes, offering robust handling of high-dimensional covariates and interactions without assuming linearity. Deep quantile regression networks train multi-output neural networks under the pinball (check) loss, which penalizes over- and under-predictions asymmetrically, allowing simultaneous estimation of multiple quantiles and capturing complex nonlinearities in large-scale data; such networks suit heteroscedastic settings because the pinball objective directly targets quantile coverage.

Recent developments integrate large language models (LLMs) into quantile regression for enhanced predictive distributions in domains like price forecasting. By adapting LLMs to structured inputs via token embeddings, these methods perform token-based quantile mapping to generate calibrated quantile predictions, outperforming traditional neural approaches in benchmarks. This fusion leverages LLMs' contextual understanding to produce distributional outputs, enabling applications in volatile markets where full distributional information is critical.
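
As a concrete machine-learning sketch, quantile regression forests can be fit with the quantregForest R package; the interface shown follows its documentation, but treat the argument names as assumptions if your installed version differs:

library(quantregForest)                    # assumes the package is installed
set.seed(5)
n <- 1000
X <- data.frame(x1 = runif(n), x2 = runif(n))
y <- 2 * X$x1 + (1 + X$x2) * rnorm(n)      # heteroscedastic response
qrf <- quantregForest(x = X, y = y, ntree = 500)
predict(qrf, newdata = X[1:5, ], what = c(0.1, 0.5, 0.9))  # conditional quantile estimates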

Models for Heteroscedasticity and Large-Scale Data

Quantile regression models can be extended to account for heteroscedasticity, where the scale of the response variable depends on covariates, by employing location-scale frameworks. In these models, the conditional quantile function is parameterized as Q_τ(y | x) = x^T β(τ) = x^T γ_0 + (x^T γ_1) z(τ), where γ_0 collects the location parameters, γ_1 the scale parameters, and z(τ) denotes the τ-quantile of a standardized error distribution, such as the standard normal or logistic. This approach allows the dispersion to vary with covariates while maintaining the interpretability of quantile estimates across the distribution; the error-distribution assumption induces a parameterization of the coefficients that is linear in z(τ), β(τ) = γ_0 + γ_1 z(τ), enabling efficient estimation via weighted quantile regression that adjusts for the τ-dependent scale. Such models improve upon homoscedastic assumptions by providing more accurate estimates in applications like financial risk assessment, where variance heterogeneity is prevalent.

To handle large-scale and streaming datasets, online composite quantile regression (CQR) methods have been developed, which estimate multiple quantile levels simultaneously for improved efficiency and robustness without requiring full dataset recomputation. These approaches update estimates incrementally as new data arrive, retaining only summary quantities such as sufficient statistics from historical batches, thus breaking storage barriers for data volumes exceeding 10^6 observations. A key advancement is the online updating CQR algorithm, which minimizes a composite check loss over a set of quantile levels {τ_1, ..., τ_K} and renews parameters via recursive formulas that preserve asymptotic normality and consistency. This is particularly suited to streaming environments, such as sensor networks or financial data feeds, where traditional batch methods become computationally infeasible. Stochastic optimization methods, such as stochastic gradient descent variants, further enable scalable implementations by approximating the updates in high dimensions.

Recent methodological advances address specific challenges in heteroscedastic and large-scale settings. Lp-quantile regression, for 1 < p ≤ 2, generalizes standard quantile regression by minimizing an Lp analog of the check loss that requires only finite 2(p-1)-th moments of the errors, offering robustness to heavy-tailed distributions without assuming finite variance. The composite Lp-quantile regression variant achieves higher efficiency than traditional CQR in simulations with infinite-variance errors, as measured by asymptotic relative efficiency exceeding 1 for p > 1, and supports high-dimensional settings via scalable algorithms. Complementing this, M-quantile regression for zero-inflated data extends the framework to datasets with excess zeros, such as zero-inflated counts, using influence functions that resist outliers and accommodate heteroscedasticity through robust weighting; this 2025 development applies to small area estimation, showing improved performance over standard QR in simulations and empirical studies.

For illustration, consider a streaming update for online CQR on a dataset with n = 10^6 observations arriving in batches. The pseudocode below outlines the renewable update process, initializing with a base estimate and iteratively refining it using subgradients of the composite check loss ρ_τ(u) = u(τ I(u ≥ 0) - (1 - τ) I(u < 0)) over the K quantile levels:
Initialize: β_0 from initial batch; store summary S_0 = (∑ x_i x_i^T, ∑ x_i ρ_τ(y_i - x_i^T β_0), n_0)
For each new batch t = 1,2,... with data {(x_{i,t}, y_{i,t})}_{i=1}^{n_t}:
    Compute subgradient g_t = -(1/n_t) ∑_{i=1}^{n_t} x_{i,t} ψ_τ(y_{i,t} - x_{i,t}^T β_{t-1}), where ψ_τ(u) = τ - I(u < 0) is a subgradient of ρ_τ
    Update stepsize η_t = c / √(∑_{k=1}^t n_k) for constant c > 0
    Renewable estimate: β_t = β_{t-1} - η_t g_t
    Update summary: S_t = S_{t-1} + (∑ x_{i,t} x_{i,t}^T, ∑ x_{i,t} ρ_τ(y_{i,t} - x_{i,t}^T β_t), n_t)
Output: β_T ≈ argmin_β ∑_{k=1}^K w_k ∑_i ρ_{τ_k}(y_i - x_i^T β) for weights w_k
This converges at rate O(1/√N) for total N = ∑ n_t, enabling processing of large-scale streams in O(d n_t) time per batch, where d is the covariate dimension.
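
A runnable single-quantile version of this loop (simplified from the composite case; the batch sizes, stepsize constant, and data stream are illustrative assumptions) can be written in R as:

set.seed(42)
tau <- 0.5; beta <- c(0, 0); N <- 0; cc <- 50    # cc is an illustrative tuning constant
psi <- function(u) tau - (u < 0)                 # subgradient of the check loss
for (t in 1:200) {                               # 200 arriving batches
  n_t <- 5000
  X <- cbind(1, rnorm(n_t))
  y <- X %*% c(1, 2) + rnorm(n_t)                # stream with true beta = (1, 2)
  g <- -crossprod(X, psi(y - X %*% beta)) / n_t  # batch subgradient
  N <- N + n_t
  beta <- beta - (cc / sqrt(N)) * as.vector(g)   # renewable update
}
beta                                             # approaches c(1, 2) as batches accumulate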

Applications and Advantages

Real-World Applications

Quantile regression has been extensively applied in labor economics to analyze wage inequality, particularly to examine how factors like education influence wages across different points of the wage distribution. In a study of male wages in Turkey from 1994 to 2002, quantile regression revealed that returns to education vary significantly across the wage distribution, with the highest returns reaching approximately 14% in 1994 and declining to 13.1% by 2002, and contributing substantially to within-group inequality, as the spread in returns between the 90th and 10th quantiles increased by 27 points. This approach highlighted how education premiums are higher at the upper quantiles, exacerbating inequality at the top end of the wage spectrum despite an overall decline in dispersion. Similarly, cross-country evidence from 16 nations demonstrated that education reduces wage inequality more effectively at lower quantiles, where the returns to schooling help narrow gaps for low-wage workers, though the effect diminishes at higher quantiles.

In finance, quantile regression underpins risk management practices, notably through the estimation of Value-at-Risk (VaR), which represents the α-quantile (e.g., 0.05 or 5%) of a portfolio's loss distribution. The Conditional Autoregressive Value at Risk (CAViaR) model, an autoregressive extension of quantile regression, directly forecasts time-varying VaR without assuming normality or independence of returns, using regression quantiles to capture tail dependencies. Empirical evaluations on stock indices like the S&P 500 from 1986 to 1999 showed that CAViaR variants, such as the asymmetric slope model, accurately predict 1% and 5% VaR levels, outperforming traditional GARCH models by better accounting for asymmetric tail behavior during market stress. This has made quantile-based VaR a standard tool for regulatory compliance and internal risk assessment in financial institutions.

Environmental science leverages quantile regression to assess impacts on extreme events, such as heavy rainfall, by focusing on upper quantiles that represent extreme risks rather than central tendencies. Analysis of subdaily rainfall from 133 stations using quantile regression demonstrated scaling relationships between precipitation intensity and temperature, revealing seasonal variations: positive scaling in winter (indicating intensification with warming) and negative scaling in summer, with coefficients unbiased by sample size, unlike binning methods. In some regions, these conditional quantile estimates were statistically significant at the 95% level, underscoring quantile regression's utility in projecting climate-driven extremes without restrictive distributional assumptions. Further applications to regional datasets have quantified trends in heavy-rainfall quantiles (e.g., the 0.98 quantile), such as increases in some catchments, informing adaptation strategies.

Recent advancements in quantile regression for unit-interval data (bounded between 0 and 1), such as proportions or rates, have found applications in fields like education and medicine for modeling outcomes like success rates. The sine unit Weibull quantile regression model, introduced in 2025, extends traditional approaches to handle flexible distribution shapes in proportion data, using an appropriate link function for the conditional quantiles and outperforming beta or Kumaraswamy regressions in fit metrics like AIC on datasets involving success proportions. In medical contexts, similar unit-interval quantile models, like the unit generalized half-normal, robustly estimate quantiles for skewed proportion data such as recovery rates or response probabilities, enabling analysis of covariate effects across the distribution while accommodating outliers common in clinical trials.
These methods provide nuanced insights into heterogeneous effects, for instance revealing how patient characteristics influence success rates at the lower quantiles of response in therapies.
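
Returning to the finance setting, a deliberately simpler specification than the CAViaR recursion can sketch a conditional VaR-style estimate in R by regressing returns on lagged absolute returns at τ = 0.05 (the returns here are simulated, not market data):

library(quantreg)
set.seed(7)
n <- 2000
eps <- rnorm(n)
r <- numeric(n); r[1] <- eps[1]
for (t in 2:n) r[t] <- sqrt(0.5 + 0.3 * r[t - 1]^2) * eps[t]  # crude volatility clustering
df <- data.frame(ret = r[-1], lag_abs = abs(r[-n]))
fit <- rq(ret ~ lag_abs, tau = 0.05, data = df)
coef(fit)  # fitted 5% conditional quantile of returns: a one-step VaR-style line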

Benefits Over Mean Regression

Quantile regression offers greater robustness to outliers than ordinary least squares (OLS) regression, as it minimizes a sum of (asymmetrically weighted) absolute deviations rather than the sum of squared deviations used in OLS. This L1-type criterion reduces the influence of extreme values, whereas a single outlier can disproportionately shift the OLS estimate toward it. For instance, in a sample of incomes, an extreme high value pulls the estimated OLS mean upward significantly, but the median (τ = 0.5 quantile) remains unaffected, providing a more stable central estimate.

A key advantage is its ability to capture heterogeneity in relationships across the response distribution, where the effects of predictors can vary by quantile. In OLS, a single set of coefficients assumes a uniform impact throughout the distribution, but quantile regression estimates τ-specific coefficients β(τ), revealing, for example, stronger effects at upper quantiles in skewed data like health expenditures. These varying β(τ) can be visualized using fan charts, which plot confidence bands for coefficients across τ from 0 to 1, highlighting how relationships "fan out" across the distribution.

Unlike OLS, quantile regression does not require normality of the error distribution or homoscedasticity (constant variance). OLS relies on these assumptions for efficient estimation and valid standard errors, and violations, common in real data like economic indicators, can distort estimates and invalidate inference. Quantile methods relax these assumptions, estimating conditional quantiles directly and accommodating heteroscedasticity by design; their equivariance to location and scale transformations further enhances robustness in non-i.i.d. settings.

However, quantile regression has limitations relative to OLS: it produces separate estimates for each quantile τ rather than a single coefficient vector, increasing computational demands and complicating joint inference across quantiles. While OLS provides a parsimonious summary of the conditional mean, multiple quantile fits may overparameterize simple models that lack distributional variation.
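
The fan-style display of β(τ) across quantiles can be produced directly with the quantreg package:

library(quantreg)
data(engel)
fit <- rq(foodexp ~ income, tau = seq(0.1, 0.9, by = 0.1), data = engel)
plot(summary(fit))  # coefficient paths across tau with confidence bands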

Historical Development

Origins and Key Contributions

The concept of quantile regression traces its roots to early statistical explorations of conditional distributions beyond the mean. In 1886, Francis Galton introduced the idea of regression towards mediocrity (the mean) in the context of hereditary stature, making extensive use of the median as a robust measure of location in bivariate relationships. Earlier precursors include Roger Joseph Boscovich's 18th-century use of least absolute deviations in curve fitting and Pierre-Simon Laplace's related work on weighted medians. Building on this, Francis Ysidro Edgeworth extended the framework in 1888 to general regression settings, proposing a "plural median" method for estimating conditional medians, which anticipated key aspects of modern quantile estimation.

The formal foundation of quantile regression was established in 1978 by Roger Koenker and Gilbert Bassett Jr., who defined regression quantiles as the solution to a minimization problem involving asymmetrically weighted absolute deviations, generalizing ordinary sample quantiles to linear models with covariates. This approach provided a robust alternative to least-squares regression, allowing estimation of the entire conditional distribution rather than just the mean.

Early extensions addressed specific challenges in data structures. In 1986, James L. Powell developed censored quantile regression, adapting the framework to handle censored observations by modifying the objective function to account for the censoring mechanism. Roger Koenker has remained a central figure in the field's development, authoring the seminal 1978 paper and advancing theoretical and computational aspects through subsequent work that solidified quantile regression's econometric applications.

Recent Advances

Recent advances in quantile regression (QR) since 2020 have increasingly integrated the method with machine learning techniques, particularly large language models (LLMs), to enhance forecasting in data-rich environments. A notable development is the application of QR with LLMs for probabilistic price prediction, where pretrained language models are adapted to generate quantile-based predictive distributions from token embeddings of textual inputs, such as product descriptions or market news. This approach outperforms traditional numerical regression baselines by capturing the multimodal distributions inherent in price-prediction tasks on datasets such as retail products and used vehicles.

In parallel, innovations in streaming and online QR algorithms have addressed the challenges of processing data that arrive continuously, enabling renewable estimation without storing the full history. For instance, online updating methods for composite QR on streaming datasets maintain efficiency by retaining only summary statistics from historical batches, improving computational and memory efficiency while preserving asymptotic consistency. These algorithms are particularly suited to applications like sensor networks or financial tick data, where data arrive continuously and storage is limited.

Specialized QR models have emerged for constrained response variables, including those bounded to the unit interval, which are common for proportions, rates, or probabilities. A 2025 model reparameterizes QR for unit-interval regressands using a trigonometric extension of the Weibull distribution with a suitable link function, allowing direct estimation of conditional quantiles while ensuring predictions remain within (0,1), and demonstrating superior fit over generalized linear alternatives in applications such as education, stackloss, and yield datasets. Similarly, extensions of censored QR with selection mechanisms incorporate sample-selection rules to handle non-random missingness in right-censored data, such as survival times or valuation surveys, via semiparametric estimators that improve bias correction in empirical studies compared to naive QR.

Theoretically, advances in L_p-QR have bolstered robustness against outliers and heteroscedasticity by generalizing the check function to L_p norms (1 < p < \infty), bridging the gap between the outlier resistance of standard QR (p = 1) and the efficiency of least squares (p = 2). Recent work in 2024 establishes Bayesian frameworks for L_p-QR, providing posterior inference that enhances small-sample performance and adaptability to heavy-tailed errors, with applications showing reduced estimation error in contaminated datasets.

Software Implementations

Packages and Libraries

In the R programming language, the quantreg package, authored by Roger Koenker, provides comprehensive tools for estimating and inferring conditional quantile functions through linear and nonlinear parametric and nonparametric methods, including support for linear programming solvers and bootstrapping procedures; it is widely used for its flexibility in handling quantile regression variants such as censored data models. Additionally, the tidymodels framework supports quantile regression through the parsnip package, offering tidy interfaces and integration with engines like quantreg for consistent modeling workflows, as of 2025.

Python users can access quantile regression via the statsmodels library's QuantReg class, which employs iteratively reweighted least squares for estimation, enabling straightforward fitting of linear quantile models with options for regularization. Complementing this, scikit-learn's GradientBoostingRegressor supports quantile regression by specifying a quantile loss function (loss='quantile') and an alpha parameter to target specific quantiles, facilitating tree-based approaches suitable for prediction intervals.

Commercial software includes Stata's qreg command, which fits quantile regression models, including median regression as a special case, using least-absolute-deviation minimization, and SAS's PROC QUANTREG procedure, which models the effects of covariates on conditional quantiles and supports multiple estimation algorithms. In Julia, the QuantileRegressions.jl package offers quantile regression routines ported from established implementations, such as iteratively reweighted least squares, for efficient computation in a high-performance environment. These packages generally implement core estimation methods such as linear programming and iteratively reweighted least squares, tailored to their respective language ecosystems.
Package | Language | Ease of Use | Supported Variants
quantreg | R | High; extensive vignettes and CRAN integration for quick setup and visualization. | Linear, nonlinear, nonparametric; censored quantile regression; bootstrapping.
statsmodels QuantReg | Python | High; integrates with pandas and Jupyter for interactive analysis. | Linear quantile models; regularized fits.
scikit-learn GradientBoostingRegressor | Python | Moderate; requires tuning hyperparameters for optimal performance in ensemble settings. | Tree-based quantile regression for prediction intervals.
qreg | Stata | High; command-line syntax with built-in postestimation tools for diagnostics. | Linear and median regression.
PROC QUANTREG | SAS | Moderate; procedure-based with options for advanced output but steeper learning for non-SAS users. | Conditional quantiles; multiple algorithms including sparsity-adjusted CIs.
QuantileRegressions.jl | Julia | Moderate; leverages Julia's speed but requires familiarity with the package manager for dependencies. | Linear quantile regression via iteratively reweighted least squares.

Practical Implementation Notes

When implementing quantile regression, selecting the quantile level \tau is crucial for capturing the full distributional effects of predictors on the response variable. While the median (\tau = 0.5) provides a robust central tendency analogous to least squares, estimating several values, such as \tau \in \{0.05, 0.25, 0.5, 0.75, 0.95\}, offers a comprehensive view of heterogeneity across the conditional distribution, revealing how relationships vary from the lower to the upper tail. Relying solely on the median may overlook tail behavior that is critical in applications like risk assessment or inequality analysis.

For inference, particularly in small samples where asymptotic standard errors may be unreliable, bootstrapping provides a dependable alternative for deriving confidence intervals and p-values. Wild-bootstrap or pairs-bootstrap schemes, implemented by resampling residuals or observations, account for the non-differentiability of the objective function and yield accurate coverage in finite samples; this is especially useful when sample sizes fall below about 100, where simulations show improved performance over naive normality assumptions.

Common pitfalls arise in nonlinear quantile regression, where the objective function becomes non-convex, potentially leading to multiple local minima and requiring careful initialization or multiple starting values to avoid suboptimal solutions. Additionally, with discrete data featuring ties, the objective function exhibits flat regions, causing non-uniqueness of estimates and step-wise constant quantile functions that violate continuity assumptions; model-based adjustments that incorporate the underlying discreteness mitigate these issues by smoothing or reparameterizing the quantiles.

Packages like quantreg in R serve as practical tools for these implementations, supporting linear and nonlinear fits with built-in inference and visualization options. A basic example using the engel dataset, regressing food expenditure on income at the median, follows:
library(quantreg)
data(engel)
fit <- rq(foodexp ~ income, tau = 0.5, data = engel)   # median regression (tau = 0.5)
plot(engel$income, engel$foodexp, main = "Median Regression",
     xlab = "Income", ylab = "Food Expenditure")
abline(fit, col = "blue", lwd = 2)                     # overlay the fitted median line
summary(fit)
This code fits the model, overlays the estimated line on a scatterplot, and summarizes coefficients with bootstrapped standard errors if specified (e.g., summary(fit, se = "boot")).

    He provided an effective algorithm for computing the estimator that anticipates modern simplex methods (for further details, see Koenker 2017). Edgeworth's ...