Results 21 to 30 of about 396
Priors constructed from scale mixtures of normal distributions have long played an important role in decision theory and shrinkage estimation. This paper demonstrates the equivalence between the maximum a posteriori estimator constructed under one such prior ...
R. Strawderman, M. Wells, E. Schifano
semanticscholar +1 more source
Orthogonalized smoothing for rescaled spike and slab models
Rescaled spike and slab models are a new Bayesian variable selection method for linear regression models. In high dimensional orthogonal settings such models have been shown to possess optimal model selection properties.
Ishwaran, Hemant, Papana, Ariadni
core +1 more source
The distribution of model averaging estimators and an impossibility result regarding its estimation [PDF]
The finite-sample as well as the asymptotic distribution of Leung and Barron's (2006) model averaging estimator are derived in the context of a linear regression model. An impossibility result regarding the estimation of the finite-sample distribution of ...
Benedikt M. Pötscher +1 more
core +3 more sources
Improving both domain and total area estimation by composition [PDF]
In this article we propose small area estimators for both the small and large area parameters. When the objective is to estimate parameters at both levels, optimality is achieved by a sample design that combines fixed and proportional allocation. In such ...
Costa, Àlex, Satorra, A., Ventura, Eva
core +3 more sources
This paper considers estimation of the predictive density for a normal linear model with unknown variance under alpha-divergence loss for -1 ...
Maruyama, Yuzo, Strawderman, William E.
core +1 more source
Optimization of ridge parameters in multivariate generalized ridge regression by plug-in methods
Generalized ridge (GR) regression for a univariate linear model was proposed simultaneously with ridge regression by Hoerl and Kennard (1970). In this paper, we deal with a GR regression for a multivariate linear model, referred to as a multivariate GR ...
Isamu Nagai, H. Yanagihara, K. Satoh
semanticscholar +1 more source
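The GR entry above extends the ordinary ridge regression of Hoerl and Kennard (1970). As a minimal sketch of that univariate-response base case (illustrative data and a fixed ridge parameter, not the paper's plug-in selection method):

```python
import numpy as np

# Illustrative data; beta_true and the noise level are arbitrary choices.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.standard_normal(50)

def ridge(X, y, lam):
    """Closed-form ridge estimator: (X'X + lam * I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_hat = ridge(X, y, lam=1.0)
```

Setting `lam=0` recovers ordinary least squares; the multivariate GR method of the paper instead assigns one ridge parameter per regressor and chooses them by plug-in.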
Asymptotic oracle properties of SCAD-penalized least squares estimators
We study the asymptotic properties of the SCAD-penalized least squares estimator in sparse, high-dimensional, linear regression models when the number of covariates may increase with the sample size.
Huang, Jian, Xie, Huiliang
core +1 more source
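For reference, a hedged sketch of the SCAD penalty itself (Fan and Li's three-piece definition with its conventional default a = 3.7; the entry above studies the penalized least-squares estimator built from it, which this snippet does not implement):

```python
import numpy as np

def scad_penalty(beta, lam, a=3.7):
    """Elementwise SCAD penalty: linear near zero, quadratic blend,
    then constant, so large coefficients are not shrunk."""
    b = np.abs(np.asarray(beta, dtype=float))
    linear = lam * b
    quad = (2 * a * lam * b - b**2 - lam**2) / (2 * (a - 1))
    const = lam**2 * (a + 1) / 2
    return np.where(b <= lam, linear, np.where(b <= a * lam, quad, const))
```

The penalty is continuous at the two knots b = lam and b = a * lam, which is what yields the oracle behavior studied in the paper.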
Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness [PDF]
We investigate the learning rate of multiple kernel learning (MKL) with $\ell_1$ and elastic-net regularizations. The elastic-net regularization is a composition of an $\ell_1$-regularizer for inducing the sparsity and an $\ell_2$-regularizer for ...
Sugiyama, Masashi, Suzuki, Taiji
core +2 more sources
Normalized and standard Dantzig estimators: Two approaches
We reconsider the definition of the Dantzig estimator and show that, in contrast to the LASSO, standardization of an experimental matrix leads in general to a different estimator than in the case when it is based on the original data.
J. Mielniczuk, Hubert Szymanowski
semanticscholar +1 more source
Piecewise linear regularized solution paths
We consider the generic regularized optimization problem $\hat{\beta}(\lambda) = \arg\min_{\beta} L(y, X\beta) + \lambda J(\beta)$. Efron, Hastie, Johnstone and Tibshirani [Ann. Statist. ...
Rosset, Saharon, Zhu, Ji
core +2 more sources
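A hedged illustration of the piecewise-linear-path phenomenon in its simplest case: with squared-error loss and an orthonormal design, the lasso solution is soft-thresholding of the least-squares coefficients, so each coordinate of the path is piecewise linear in the regularization parameter (this is the textbook special case, not the paper's general characterization):

```python
import numpy as np

def soft_threshold(z, lam):
    """Lasso solution per coordinate under an orthonormal design."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

z = np.array([3.0, -1.5, 0.2])  # illustrative least-squares coefficients
path = [soft_threshold(z, lam) for lam in (0.0, 1.0, 2.0)]
```

Each coefficient decreases linearly in `lam` until it hits zero and stays there, so the entire solution path can be computed from a finite set of breakpoints.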

