References
- [1] High-Dimensional Statistical Learning: Roots, Justifications, and … — review article; high-dimensional data generally refer to data in which the number of variables is larger than the sample size.
- [2] Bühlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer — comprehensive mathematical theory combined with methodology and algorithms.
- [3] High Dimensional Data Analysis, Department of Statistics — overview page; high-dimensional statistics focuses on data sets in which the number of features is of comparable size to, or larger than, the number of observations.
- [4] Fan, J. — invited review article (PDF) on variable selection for high-dimensional statistical modeling in the unified framework of penalized likelihood.
- [5] Wainwright, M. J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press — non-asymptotic results related to sparsity.
- [6]
- [7] Stein, C. (1956). Inadmissibility of the Usual Estimator for the Mean of a Multivariate Normal Distribution. Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability.
- [8] James, W. and Stein, C. (1961). Estimation with Quadratic Loss. Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability.
- [9] Hoerl, A. E. and Kennard, R. W. (1970). Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics — introduces the ridge trace and shows how augmenting X′X yields biased but more stable estimates.
- [10] Wishart, J. (1928). The Generalised Product Moment Distribution in Samples from a Normal Multivariate Population. Biometrika 20A(1–2).
- [11] Tibshirani, R. (1996). Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, Series B — the lasso minimizes the residual sum of squares subject to a bound on the sum of the absolute values of the coefficients.
- [12] Chernozhukov, V., et al. (2018). Double/Debiased Machine Learning for Treatment and Structural Parameters. The Econometrics Journal — inference on a low-dimensional parameter θ0 in the presence of high-dimensional nuisance parameters.
- [13] Zhao, P. and Yu, B. (2006). On Model Selection Consistency of Lasso (PDF). Journal of Machine Learning Research — under the Strong Irrepresentable Condition, the probability that the Lasso selects the true model approaches 1 at an exponential rate.
- [14] Zou, H. (2006). The Adaptive Lasso and Its Oracle Properties. Journal of the American Statistical Association — the adaptive lasso enjoys the oracle properties, performing as well as if the true underlying model were given in advance.
- [15] Fan, J. and Li, R. (2001). Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties. Journal of the American Statistical Association — selects variables and estimates coefficients simultaneously.
- [16] Zhang, C.-H. (2010). Nearly Unbiased Variable Selection under Minimax Concave Penalty. Annals of Statistics — the MCP preserves convexity of the penalized loss in sparse regions while enabling nearly unbiased selection.
- [17] Bickel, P. J. and Levina, E. (2008). Regularized Estimation of Large Covariance Matrices. Annals of Statistics — estimates covariance matrices by banding or tapering, with consistency when (log p)/n → 0.
- [18] Johnstone, I. M. (2001). On the Distribution of the Largest Eigenvalue in Principal Components Analysis. Annals of Statistics — some aspects of large-p multivariate distribution theory may be easier to apply in practice than their fixed-p counterparts.
- [19] Ledoit, O. and Wolf, M. (2004). A Well-Conditioned Estimator for Large-Dimensional Covariance Matrices. Journal of Multivariate Analysis — an estimator that is both well-conditioned and asymptotically more accurate than the sample covariance matrix.
- [20] Meinshausen, N. and Bühlmann, P. (2006). High-Dimensional Graphs and Variable Selection with the Lasso. Annals of Statistics — neighborhood selection with the Lasso as a computationally attractive alternative to standard covariance selection for sparse high-dimensional graphs.
- [21] Rohde, A. and Tsybakov, A. B. (2011). Estimation of High-Dimensional Low-Rank Matrices. Annals of Statistics — includes minimax lower bounds for collaborative sampling and USR matrix completion problems.
- [22] Benjamini, Y. and Hochberg, Y. (1995). Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society, Series B — a simple sequential Bonferroni-type procedure proved to control the false discovery rate for independent test statistics.
- [23] Lee, J. D., Sun, D. L., Sun, Y., and Taylor, J. E. (2016). Exact Post-Selection Inference, with Application to the Lasso. Annals of Statistics — a general approach to valid inference after model selection.
- [24] Tibshirani, R. (1996). Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society, Series B (same article as [11]).
- [25] High-Dimensional Regression (PDF) — lecture notes on settings where the number of predictors p rivals or exceeds the number of observations n; when p > n, the linear regression estimate is not well defined.
- [26] Zhao, P. and Yu, B. (2006). On Model Selection Consistency of Lasso. Journal of Machine Learning Research — the Irrepresentable Condition is shown to be almost necessary and sufficient for the Lasso to select the true model (same article as [13]).
- [27] Luss, R. and d'Aspremont, A. (2008). Clustering and Feature Selection Using Sparse Principal Component Analysis. arXiv, October 8, 2008.
- [28]
- [29] Koren, Y., Bell, R., and Volinsky, C. (2009). Matrix Factorization Techniques for Recommender Systems. IEEE Computer — input data placed in a matrix with one dimension representing users and the other items.
- [30] Double Debiased Machine Learning Nonparametric Inference with Continuous Treatments (PDF) — proposes a nonparametric DML method to estimate causal effects of continuous treatments.