Results 41 to 50 of about 396
Selection of Tuning Parameters, Solution Paths and Standard Errors for Bayesian Lassos
Penalized regression methods such as the lasso and elastic net (EN) have become popular for simultaneous variable selection and coefficient estimation. Implementation of these methods requires selection of the penalty parameters.
Vivekananda Roy, S. Chakraborty
semanticscholar +1 more source
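As context for the penalty-parameter selection problem this entry discusses, here is a minimal sketch using scikit-learn's cross-validated elastic net (`ElasticNetCV`) on synthetic data; this is a generic frequentist illustration, not the Bayesian approach of the paper:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]          # only the first 3 variables matter
y = X @ beta + 0.5 * rng.standard_normal(n)

# Cross-validation selects both penalty parameters: the overall strength
# alpha and the lasso/ridge mixing weight l1_ratio.
model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8, 1.0], cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)  # indices kept by the fitted EN
```

The selected set should contain the three true signal variables; how many noise variables slip in depends on the cross-validated penalty.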
Quantile regression in high-dimension with breaking [PDF]
The paper considers a high-dimensional linear regression model in which the predictive variables' influence on the response can change at unknown times (called change-points).
Ciuperca, Gabriela
core +4 more sources
The Influence Function of Penalized Regression Estimators [PDF]
To perform regression analysis in high dimensions, lasso or ridge estimation is a common choice. However, these methods have been shown not to be robust to outliers.
Alfons, Andreas +2 more
core +2 more sources
Isotonic regression offers a flexible modeling approach under monotonicity assumptions, which are natural in many applications. Despite this attractive setting and extensive theoretical research, isotonic regression has enjoyed limited interest in ...
Ronny Luss, Saharon Rosset
semanticscholar +1 more source
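To make the monotonicity constraint in this entry concrete, a minimal sketch with scikit-learn's `IsotonicRegression` (which implements the standard pool-adjacent-violators fit; unrelated to the specific contribution of the paper):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

x = np.arange(10, dtype=float)
# Noisy observations of an increasing trend.
y = np.array([1.0, 0.5, 2.0, 2.5, 2.2, 3.0, 4.0, 3.8, 5.0, 5.5])

# Least-squares fit subject to the constraint that the fitted values
# are nondecreasing in x.
iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)
```

The fitted values pool adjacent violators (e.g. the dip at x=1) into flat segments, so the result is monotone by construction.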
Understanding efficiency in high-dimensional linear models is a longstanding problem of interest. Classical work on lower-dimensional problems, dating back to Huber and Bickel, has illustrated the clear benefits of efficient loss functions.
Jelena Bradic
semanticscholar +1 more source
Skellam shrinkage: Wavelet-based intensity estimation for inhomogeneous Poisson data
The ubiquity of integrating detectors in imaging and other applications implies that a variety of real-world data are well modeled as Poisson random variables whose means are in turn proportional to an underlying vector-valued signal of interest. In this …
Hirakawa, Keigo, Wolfe, Patrick J.
core +2 more sources
Majorization-Minimization algorithms for nonsmoothly penalized objective functions
The use of penalization, or regularization, has become common in high-dimensional statistical analysis, where an increasingly frequent goal is to simultaneously select important variables and estimate their effects.
E. Schifano, R. Strawderman, M. Wells
semanticscholar +1 more source
A popular approach to smooth models for longitudinal data is to express the model as a mixed model, since this often leads to immediate model fitting with standard procedures. This approach is particularly appealing when truncated polynomials are used as …
V. Djeundje, I. Currie
semanticscholar +1 more source
An extended class of minimax generalized Bayes estimators of regression coefficients
We derive minimax generalized Bayes estimators of regression coefficients in the general linear model with spherically symmetric errors under invariant quadratic loss for the case of unknown scale. The class of estimators generalizes the class considered …
Abramowitz +11 more
core +1 more source
Selecting the number of principal components: estimation of the true rank of a noisy matrix [PDF]
Principal component analysis (PCA) is a well-known tool in multivariate statistics. One significant challenge in using PCA is the choice of the number of components.
Choi, Yunjin +2 more
core
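The rank-selection problem in this last entry can be illustrated on synthetic data: generate a low-rank matrix plus noise and pick the rank at the largest gap between consecutive singular values. This gap heuristic is a common baseline, not the estimator proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, true_rank = 200, 30, 3
# Low-rank signal plus small Gaussian noise.
signal = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, p))
X = signal + 0.1 * rng.standard_normal((n, p))

# Singular values of the centered data matrix.
s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)

# Gap heuristic: estimate the rank where the ratio of consecutive
# singular values drops most sharply.
k = int(np.argmax(s[:-1] / s[1:])) + 1
```

With noise this small the gap between the third and fourth singular values is large, so the heuristic recovers the true rank; with heavier noise the gap blurs, which is precisely the regime the paper addresses.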

