Results 21 to 30 of about 392

The Accessible Lasso Models

open access: yes, 2016
A new line of research on the lasso exploits the beautiful geometric fact that the lasso fit is the residual from projecting the response vector $y$ onto a certain convex polytope.
Harris, Naftali, Sepehri, Amir
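That projection characterization can be checked numerically. A minimal sketch (synthetic data; sklearn's `Lasso` plus a generic SLSQP projection, not the authors' code), assuming the objective $\tfrac12\|y-X\beta\|_2^2+\lambda\|\beta\|_1$, under which the fit equals $y$ minus the Euclidean projection of $y$ onto the polytope $\{u:\|X^\top u\|_\infty\le\lambda\}$:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 30, 5
X = rng.standard_normal((n, p))
y = X @ np.array([2.0, -1.0, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(n)

lam = 5.0
# sklearn's Lasso minimizes (1/(2n))||y - Xb||^2 + alpha*||b||_1, so alpha = lam/n
fit = X @ Lasso(alpha=lam / n, fit_intercept=False,
                tol=1e-12, max_iter=10**6).fit(X, y).coef_

# Euclidean projection of y onto the polytope C = {u : ||X^T u||_inf <= lam},
# written as 2p linear inequality constraints
cons = [{"type": "ineq", "fun": lambda u, j=j, s=s: lam - s * (X[:, j] @ u)}
        for j in range(p) for s in (1.0, -1.0)]
proj = minimize(lambda u: 0.5 * np.sum((y - u) ** 2), np.zeros(n),
                constraints=cons, method="SLSQP",
                options={"maxiter": 1000}).x

gap = np.max(np.abs(fit - (y - proj)))  # numerically small if the identity holds
```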

Optimization of ridge parameters in multivariate generalized ridge regression by plug-in methods

open access: yes, 2012
Generalized ridge (GR) regression for a univariate linear model was proposed simultaneously with ridge regression by Hoerl and Kennard (1970). In this paper, we deal with a GR regression for a multivariate linear model, referred to as a multivariate GR ...
Isamu Nagai, H. Yanagihara, K. Satoh
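For context on the snippet above: ordinary ridge uses a single shrinkage parameter, while generalized ridge assigns one ridge parameter per coefficient. A minimal numpy sketch of both closed forms (synthetic data and illustrative $\theta_j$ values, not the plug-in method of the paper):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n, p = 50, 4
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.5, 0.0, -2.0]) + rng.standard_normal(n)

# Generalized ridge: one ridge parameter theta_j per regression coefficient
theta = np.array([0.1, 1.0, 10.0, 0.1])
beta_gr = np.linalg.solve(X.T @ X + np.diag(theta), X.T @ y)

# Taking all theta_j equal recovers ordinary ridge regression
beta_eq = np.linalg.solve(X.T @ X + 0.5 * np.eye(p), X.T @ y)
beta_sk = Ridge(alpha=0.5, fit_intercept=False).fit(X, y).coef_
```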

Orthogonalized smoothing for rescaled spike and slab models

open access: yes, 2008
Rescaled spike and slab models are a new Bayesian variable selection method for linear regression models. In high dimensional orthogonal settings such models have been shown to possess optimal model selection properties.
Ishwaran, Hemant, Papana, Ariadni

The Influence Function of Penalized Regression Estimators

open access: yes, 2014
To perform regression analysis in high dimensions, lasso and ridge estimation are common choices. However, it has been shown that these methods are not robust to outliers.
Alfons, Andreas   +2 more
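The sensitivity to outliers noted above is easy to exhibit: contaminate a single response value and compare the lasso coefficients before and after (a hedged illustration on synthetic data; the `alpha` value and outlier size are arbitrary choices, not taken from the paper):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p = 40, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, 0.0, -1.0]) + 0.1 * rng.standard_normal(n)

clean = Lasso(alpha=0.05, fit_intercept=False).fit(X, y).coef_

y_out = y.copy()
y_out[0] += 50.0  # a single gross outlier in the response
dirty = Lasso(alpha=0.05, fit_intercept=False).fit(X, y_out).coef_

# One contaminated observation moves the coefficients noticeably
shift = np.max(np.abs(clean - dirty))
```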

Normalized and standard Dantzig estimators: Two approaches

open access: yes, 2015
We reconsider the definition of the Dantzig estimator and show that, in contrast to the LASSO, standardizing the design matrix in general yields a different estimator than the one obtained from the original data.
J. Mielniczuk, Hubert Szymanowski
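For context, the Dantzig selector $\min\|\beta\|_1$ subject to $\|X^\top(y-X\beta)\|_\infty\le\lambda$ is a linear program. A sketch via `scipy.optimize.linprog` using the usual $\beta=\beta^+-\beta^-$ split (synthetic data and an arbitrary $\lambda$; this is the textbook estimator, not the normalized variants compared in the paper):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, p = 50, 5
X = rng.standard_normal((n, p))
y = X @ np.array([3.0, 0.0, 0.0, -2.0, 0.0]) + 0.1 * rng.standard_normal(n)

lam = 1.0
G, c = X.T @ X, X.T @ y

# Split beta = bp - bm with bp, bm >= 0; minimize sum(bp + bm) = ||beta||_1.
# The constraint |c - G beta| <= lam becomes two blocks of inequalities:
#   G beta <= lam + c   and   -G beta <= lam - c
A_ub = np.block([[G, -G], [-G, G]])
b_ub = np.concatenate([lam + c, lam - c])
res = linprog(np.ones(2 * p), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (2 * p))
beta = res.x[:p] - res.x[p:]
```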

Asymptotic oracle properties of SCAD-penalized least squares estimators

open access: yes, 2007
We study the asymptotic properties of the SCAD-penalized least squares estimator in sparse, high-dimensional, linear regression models when the number of covariates may increase with the sample size.
Huang, Jian, Xie, Huiliang
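The SCAD penalty itself (Fan and Li, 2001) is easy to write down: linear near zero like the lasso, then tapering to a constant so large coefficients are not over-shrunk. A small elementwise numpy sketch, with the conventional shape parameter $a=3.7$:

```python
import numpy as np

def scad(t, lam, a=3.7):
    """SCAD penalty, applied elementwise to |t|.

    lam * |t|                                for |t| <= lam
    (2*a*lam*|t| - t^2 - lam^2)/(2*(a-1))    for lam < |t| <= a*lam
    lam^2 * (a + 1) / 2                      for |t| > a*lam  (constant)
    """
    t = np.abs(np.asarray(t, dtype=float))
    return np.where(t <= lam, lam * t,
           np.where(t <= a * lam,
                    (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
                    lam**2 * (a + 1) / 2))
```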

Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness

open access: yes, 2013
We investigate the learning rate of multiple kernel learning (MKL) with $\ell_1$ and elastic-net regularizations. The elastic-net regularization is a composition of an $\ell_1$-regularizer for inducing the sparsity and an $\ell_2$-regularizer for ...
Sugiyama, Masashi, Suzuki, Taiji
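The elastic-net penalty described above is a blend of $\ell_1$ and $\ell_2$ terms. A minimal sklearn sketch on synthetic data (parameter values are illustrative, and this is plain linear regression rather than the MKL setting of the paper):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(4)
n, p = 60, 8
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

# l1_ratio blends the l1 (sparsity-inducing) and l2 (ridge) penalties:
#   (1/2n)||y - Xb||^2 + alpha*l1_ratio*||b||_1
#                      + (alpha/2)*(1 - l1_ratio)*||b||^2
enet = ElasticNet(alpha=0.1, l1_ratio=0.5, fit_intercept=False).fit(X, y)
```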

Integrating Ridge-type regularization in fuzzy nonlinear regression

open access: yes, 2012
In this paper, we deal with the ridge-type estimator for fuzzy nonlinear regression models using fuzzy numbers and Gaussian basis functions. Shrinkage regularization methods are used in linear and nonlinear regression models to yield consistent ...
R. Farnoosh, J. Ghasemian, O. S. Fard
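A crisp (non-fuzzy) analogue of the model above — ridge-regularized regression on Gaussian basis functions — fits in a few lines of numpy (synthetic data; the basis width, number of centers, and $\lambda$ are arbitrary choices, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)

# Design matrix of 10 Gaussian basis functions with width 0.1
centers = np.linspace(0.0, 1.0, 10)
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * 0.1 ** 2))

# Ridge (shrinkage) solution stabilizes the ill-conditioned basis expansion
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(10), Phi.T @ y)
rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
```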

Improving both domain and total area estimation by composition

open access: yes, 2004
In this article we propose small area estimators for both the small and large area parameters. When the objective is to estimate parameters at both levels, optimality is achieved by a sample design that combines fixed and proportional allocation. In such
Costa, Àlex, Satorra, A., Ventura, Eva

Piecewise linear regularized solution paths

open access: yes, 2007
We consider the generic regularized optimization problem $\hat{\beta}(\lambda) = \arg\min_{\beta} L(y, X\beta) + \lambda J(\beta)$. Efron, Hastie, Johnstone and Tibshirani [Ann. Statist.
Rosset, Saharon, Zhu, Ji
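Piecewise linearity of the lasso path can be verified directly: interpolate the LARS solution between two consecutive knots and compare with a lasso fit at the midpoint penalty (a sketch using sklearn's `lars_path` on synthetic data):

```python
import numpy as np
from sklearn.linear_model import Lasso, lars_path

rng = np.random.default_rng(6)
n, p = 50, 6
X = rng.standard_normal((n, p))
y = X @ np.array([3.0, -2.0, 1.0, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(n)

# alphas are the knots of the path; coefs[:, k] is the lasso solution at alphas[k]
alphas, _, coefs = lars_path(X, y, method="lasso")

# Linear interpolation between two consecutive knots ...
mid_alpha = 0.5 * (alphas[0] + alphas[1])
mid_coef = 0.5 * (coefs[:, 0] + coefs[:, 1])

# ... agrees with a direct lasso solve at the interpolated penalty
direct = Lasso(alpha=mid_alpha, fit_intercept=False,
               tol=1e-12, max_iter=10**6).fit(X, y).coef_
gap = np.max(np.abs(mid_coef - direct))
```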
