Results 21 to 30 of about 4,557
We propose a computationally intensive method, the random lasso, for variable selection in linear models. The method consists of two major steps (an illustrative sketch of the general idea follows this entry).
Nan, Bin +3 more
core +1 more source
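The entry above only states that the random lasso has two major steps. The following Python sketch illustrates one plausible reading, assuming step 1 estimates variable importances from lasso fits on bootstrap samples and random variable subsets, and step 2 resamples again with candidate variables drawn in proportion to those importances; the function name `random_lasso_sketch` and all tuning values (B, q, alpha) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def random_lasso_sketch(X, y, B=200, q=10, alpha=0.1, seed=0):
    """Illustrative sketch of a random-lasso-style procedure (not the authors' code)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape

    # Step 1 (assumed): measure variable importance by averaging lasso
    # coefficients over bootstrap samples and random candidate subsets.
    coefs = np.zeros((B, p))
    for b in range(B):
        rows = rng.integers(0, n, size=n)             # bootstrap rows
        cols = rng.choice(p, size=q, replace=False)   # random candidate subset
        fit = Lasso(alpha=alpha).fit(X[np.ix_(rows, cols)], y[rows])
        coefs[b, cols] = fit.coef_
    importance = np.abs(coefs.mean(axis=0)) + 1e-12   # avoid all-zero weights

    # Step 2 (assumed): resample again, now drawing candidate variables with
    # probability proportional to their importance, and average the fits.
    prob = importance / importance.sum()
    coefs2 = np.zeros((B, p))
    for b in range(B):
        rows = rng.integers(0, n, size=n)
        cols = rng.choice(p, size=q, replace=False, p=prob)
        fit = Lasso(alpha=alpha).fit(X[np.ix_(rows, cols)], y[rows])
        coefs2[b, cols] = fit.coef_
    return coefs2.mean(axis=0)                         # final coefficient estimate
```

The repeated bootstrap and subset fits are where the computational intensity mentioned in the abstract comes from.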
Variable Selection in General Multinomial Logit Models [PDF]
The use of the multinomial logit model is typically restricted to applications with few predictors, because in high-dimensional settings maximum likelihood estimates tend to deteriorate.
Pößnecker, Wolfgang +2 more
core +2 more sources
Preliminary test and Stein-type shrinkage LASSO-based estimators [PDF]
Arashi, Mohammad, Norouzirad, Mina
core
Ridge Estimation for Multinomial Logit Models with Symmetric Side Constraints [PDF]
In multinomial logit models, the identifiability of parameter estimates is typically obtained by side constraints that specify one of the response categories as the reference category (the symmetric alternative is written out after this entry).
Tutz, Gerhard, Zahid, Faisal Maqbool
core +1 more source
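For orientation on the entry above: with K response categories and a category-specific coefficient vector \beta_r for category r (notation assumed here, not taken from the paper), identifiability requires a side constraint, and the symmetric variant treats all categories alike instead of singling out a reference category.

```latex
% Reference-category constraint versus a symmetric side constraint,
% with \beta_r the coefficient vector of response category r, r = 1, ..., K:
\[
  \text{reference category: } \beta_K = 0 ,
  \qquad
  \text{symmetric: } \sum_{r=1}^{K} \beta_r = 0 .
\]
```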
Adaptive Monotone Shrinkage for Regression [PDF]
We develop an adaptive monotone shrinkage estimator for regression models with the following characteristics: i) dense coefficients with small but important effects; ii) a priori ordering that indicates the probable predictive importance of the features.
Foster, Dean, Ma, Zhuang, Stine, Robert
core
Beyond Support in Two-Stage Variable Selection [PDF]
Numerous variable selection methods rely on a two-stage procedure, where a sparsity-inducing penalty is used in the first stage to predict the support, which is then conveyed to the second stage for estimation or inference purposes (see the sketch after this entry).
Ambroise, Christophe +3 more
core +4 more sources
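To make the two-stage idea above concrete, here is a minimal Python sketch of the common instantiation in which a cross-validated lasso predicts the support in stage one and ordinary least squares is refitted on that support in stage two; the simulated data and all settings are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [2.0, -1.5, 1.0, 0.8, -0.5]     # a few truly active variables
y = X @ beta + rng.standard_normal(n)

# Stage 1: a sparsity-inducing penalty (here a cross-validated lasso)
# predicts the support, i.e. the set of variables with nonzero coefficients.
support = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)

# Stage 2: the predicted support is conveyed to an unpenalized least-squares
# refit, which is then used for estimation or inference.
refit = LinearRegression().fit(X[:, support], y)
print(support, refit.coef_)
```

The title suggests the paper examines what, beyond this support alone, could usefully be passed between the two stages.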
Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry, and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each ...
Li, Lexin, Zhou, Hua
core +1 more source
The accurate estimation of correlation matrices is a foundational challenge in high-dimensional statistics. The sample correlation matrix, while unbiased, suffers from high variance when the number of variables p is large relative to the sample size n ...
Muath Awadalla, Yücel Tandoğdu
doaj +1 more source
Ridge Estimation of Inverse Covariance Matrices from High-Dimensional Data
We study ridge estimation of the precision matrix in the high-dimensional setting where the number of variables is large relative to the sample size. We first review two archetypal ridge estimators (one common archetype is written out after this entry) and note that the penalties they utilize do not coincide ...
Peeters, Carel F. W. +1 more
core +2 more sources
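For orientation on the entry above: one archetypal ridge-type estimator of a precision matrix simply adds a multiple of the identity to the sample covariance matrix S before inverting. It is shown here purely as an illustration, not necessarily as one of the specific estimators the paper reviews.

```latex
% Ad hoc ridge-type precision estimator: regularize S, then invert.
\[
  \hat{\Omega}(\lambda) = \bigl( S + \lambda I_p \bigr)^{-1} ,
  \qquad \lambda > 0 ,
\]
% which exists even when S is singular because p exceeds n.
```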
Lecture notes on ridge regression
The linear regression model cannot be fitted to high-dimensional data, as the high-dimensionality brings about empirical non-identifiability. Penalized regression overcomes this non-identifiability by augmentation of the loss function by a penalty (i.e ...
van Wieringen, Wessel N.
core
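To spell out the penalization idea in the lecture-notes entry above for the standard ridge case: the least-squares loss is augmented by a squared Euclidean-norm penalty, which restores identifiability and gives a closed-form estimator that exists even when the number of variables exceeds the sample size.

```latex
% Ridge regression: penalized least squares and its closed-form solution.
\[
  \hat{\beta}(\lambda)
  = \operatorname*{arg\,min}_{\beta}\;
    \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_2^2
  = \bigl( X^\top X + \lambda I_p \bigr)^{-1} X^\top y ,
  \qquad \lambda > 0 .
\]
```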

