Results 11 to 20 of about 314
Piecewise linear regularized solution paths
We consider the generic regularized optimization problem $\hat{\beta}(\lambda) = \arg\min_{\beta} L(y, X\beta) + \lambda J(\beta)$. Efron, Hastie, Johnstone and Tibshirani [Ann. Statist. ...
Rosset, Saharon, Zhu, Ji
core +2 more sources
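As a quick illustration of the setting in the entry above: with squared-error loss L and an L1 penalty J, the problem becomes the lasso, whose coefficient path in the regularization parameter is piecewise linear and can be traced by the LARS algorithm. The sketch below is my own, using scikit-learn's lars_path on synthetic data; the data-generating choices are assumptions, not the paper's.

```python
import numpy as np
from sklearn.linear_model import lars_path

# Synthetic regression data (illustrative choices, not from the paper).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.standard_normal(100)

# Trace the lasso solution path: squared-error loss + lambda * ||beta||_1.
# The returned coefficient path is piecewise linear in the penalty level.
alphas, active, coefs = lars_path(X, y, method="lasso")
print("knots (lambda values):", np.round(alphas, 3))
print("coefficients at the smallest lambda:", np.round(coefs[:, -1], 3))
```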
Breakdown points for maximum likelihood estimators of location-scale mixtures [PDF]
ML-estimation based on mixtures of Normal distributions is a widely used tool for cluster analysis. However, a single outlier can make the parameter estimation of at least one of the mixture components break down. Among others, the estimation of mixtures
Hennig, Christian
core +2 more sources
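For readers who want to see the basic ML fit that the entry above analyzes, the sketch below fits a two-component normal mixture by EM with scikit-learn and then refits after appending a single gross outlier. It only illustrates the estimation step, not the paper's breakdown-point analysis; the data and outlier value are assumptions of mine.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Two well-separated normal clusters (illustrative data, not from the paper).
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(6.0, 1.0, 200)]).reshape(-1, 1)

gm_clean = GaussianMixture(n_components=2, random_state=0).fit(x)
print("means without outlier:", np.sort(gm_clean.means_.ravel()))

# Append one extreme observation and refit: with unrestricted ML, a single
# gross outlier can capture or drag one of the mixture components.
x_out = np.vstack([x, [[1e4]]])
gm_out = GaussianMixture(n_components=2, random_state=0).fit(x_out)
print("means with outlier:   ", np.sort(gm_out.means_.ravel()))
```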
Recursive Parameter Estimation: Convergence
We consider estimation procedures which are recursive in the sense that each successive estimator is obtained from the previous one by a simple adjustment.
Sharia, Teo
core +2 more sources
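A minimal example of the kind of recursion described above is the stochastic-approximation estimate of a mean, where each new observation adjusts the previous estimate by a vanishing step; the step size 1/n below is my choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)

# Recursive estimator: theta_n = theta_{n-1} + (1/n) * (x_n - theta_{n-1}).
# Each successive estimate is a simple adjustment of the previous one.
theta = 0.0
for n, x in enumerate(data, start=1):
    theta += (x - theta) / n

print("recursive estimate:", round(theta, 4))
print("batch sample mean: ", round(float(data.mean()), 4))
```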
On adaptive estimation of linear functionals [PDF]
Adaptive estimation of linear functionals over a collection of parameter spaces is considered. A between-class modulus of continuity, a geometric quantity, is shown to be instrumental in characterizing the degree of adaptability over two parameter spaces
Cai, T. Tony, Low, Mark G.
core +3 more sources
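For context on the entry above, one standard formulation of a between-class modulus of continuity for a linear functional T over parameter spaces F1 and F2 is the display below; the exact norm and normalization are my assumption here, not quoted from the paper.

```latex
\omega(\varepsilon, \mathcal{F}_1, \mathcal{F}_2)
  = \sup\bigl\{\, T f_1 - T f_2 \;:\; \|f_1 - f_2\| \le \varepsilon,\;
      f_1 \in \mathcal{F}_1,\; f_2 \in \mathcal{F}_2 \,\bigr\}.
```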
Statistical inference based on robust low-rank data matrix approximation
The singular value decomposition is widely used to approximate data matrices with lower rank matrices. Feng and He [Ann. Appl. Stat. 3 (2009) 1634-1654] developed tests on dimensionality of the mean structure of a data matrix based on the singular value ...
Feng, Xingdong, He, Xuming
core +1 more source
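The basic operation referenced above, approximating a data matrix by a lower-rank one via the singular value decomposition, looks like the numpy sketch below. The rank and matrix sizes are arbitrary choices, and nothing here reproduces the robust tests developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Noisy data matrix whose mean structure is (approximately) rank 2.
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 20))
A += 0.1 * rng.standard_normal(A.shape)

# Best rank-k approximation in Frobenius norm: keep the top k singular triplets.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

print("rank of approximation:", np.linalg.matrix_rank(A_k))
print("relative error:", np.linalg.norm(A - A_k) / np.linalg.norm(A))
```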
Improving Point and Interval Estimates of Monotone Functions by Rearrangement [PDF]
Suppose that a target function is monotonic, namely, weakly increasing, and an available original estimate of this target function is not weakly increasing.
Chernozhukov, Victor +2 more
core +5 more sources
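The rearrangement operation discussed above has a very concrete form when an estimate is evaluated on an equally spaced grid: sorting the fitted values produces a weakly increasing curve that is no further from a monotone target than the original estimate. The toy estimate below is my own construction for illustration.

```python
import numpy as np

# A weakly increasing target and a noisy, non-monotone estimate of it on an
# equally spaced grid (both synthetic, purely for illustration).
grid = np.linspace(0.0, 1.0, 200)
target = np.sqrt(grid)
rng = np.random.default_rng(4)
estimate = target + 0.1 * rng.standard_normal(grid.size)

# Increasing rearrangement: sort the fitted values over the grid.
rearranged = np.sort(estimate)

def rmse(f):
    return float(np.sqrt(np.mean((f - target) ** 2)))

print("RMSE of original estimate:  ", round(rmse(estimate), 4))
print("RMSE of rearranged estimate:", round(rmse(rearranged), 4))
```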
The breakdown behavior of the maximum likelihood estimator in the logistic regression model. [PDF]
In this note we discuss the breakdown behavior of the Maximum Likelihood (ML) estimator in the logistic regression model. We formally prove that the ML-estimator never explodes to infinity, but rather breaks down to zero when adding severe outliers to ...
Croux, Christophe +2 more
core +3 more sources
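To see the qualitative phenomenon described above, one can fit a near-unpenalized logistic regression and then add grossly outlying, mislabeled points at extreme covariate values; the slope typically shrinks toward zero rather than diverging. The sketch below approximates plain ML with scikit-learn by making the default ridge penalty negligible (C=1e6); the data and outlier placement are my own choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 500
x = rng.normal(size=(n, 1))
p = 1.0 / (1.0 + np.exp(-2.0 * x[:, 0]))   # true slope 2, intercept 0
y = rng.binomial(1, p)

# Near-ML fit (huge C makes the default L2 penalty negligible).
clean_fit = LogisticRegression(C=1e6).fit(x, y)

# Add a few severe leverage outliers carrying the "wrong" label.
x_out = np.vstack([x, np.full((5, 1), 50.0)])
y_out = np.concatenate([y, np.zeros(5, dtype=int)])
out_fit = LogisticRegression(C=1e6).fit(x_out, y_out)

print("slope without outliers:", round(clean_fit.coef_[0, 0], 3))
print("slope with outliers:   ", round(out_fit.coef_[0, 0], 3))
```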
The Hough transform estimator
This article pursues a statistical study of the Hough transform, the celebrated computer vision algorithm used to detect the presence of lines in a noisy image.
Goldenshluger, Alexander, Zeevi, Assaf
core +1 more source
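The line-detection idea studied in that entry can be sketched directly: discretize the intercept/slope plane and, for each candidate line, count the data points falling inside a small band around it; the line with the most votes is the Hough-type estimate. The grid, band width, and data below are assumptions for illustration, not the paper's tuning.

```python
import numpy as np

rng = np.random.default_rng(6)
# Points scattered around the line y = 1 + 2x, plus uniform background clutter.
x_line = rng.uniform(-1, 1, 80)
y_line = 1.0 + 2.0 * x_line + rng.normal(0, 0.05, 80)
x_noise = rng.uniform(-1, 1, 120)
y_noise = rng.uniform(-3, 3, 120)
x = np.concatenate([x_line, x_noise])
y = np.concatenate([y_line, y_noise])

# Hough-style accumulator: for each (intercept a, slope b) on a grid,
# count points with |y_i - a - b*x_i| <= delta and keep the maximizer.
a_grid = np.linspace(-3, 3, 121)
b_grid = np.linspace(-3, 3, 121)
delta = 0.1
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
resid = np.abs(y[None, None, :] - A[..., None] - B[..., None] * x[None, None, :])
votes = (resid <= delta).sum(axis=-1)
i, j = np.unravel_index(votes.argmax(), votes.shape)
print("estimated line: intercept =", round(a_grid[i], 2), "slope =", round(b_grid[j], 2))
```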
Cross-validation in nonparametric regression with outliers [PDF]
A popular data-driven method for choosing the bandwidth in standard kernel regression is cross-validation. Even when there are outliers in the data, robust kernel regression can be used to estimate the unknown regression curve [Robust and Nonlinear Time ...
Leung, Denis Heng-Yan
core +3 more sources
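As background for the entry above, the sketch below implements plain leave-one-out cross-validation for the bandwidth of a Nadaraya–Watson kernel regression estimate. It is the standard (non-robust) version, so it does not reproduce the outlier-resistant procedures the paper studies, and the kernel and bandwidth grid are my own choices.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 1, 150))
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)

def loo_cv_score(h):
    """Leave-one-out squared-error score for a Gaussian-kernel NW estimator."""
    w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(w, 0.0)          # drop each point from its own fit
    fitted = (w @ y) / w.sum(axis=1)
    return np.mean((y - fitted) ** 2)

bandwidths = np.linspace(0.01, 0.3, 30)
scores = [loo_cv_score(h) for h in bandwidths]
print("CV-selected bandwidth:", round(bandwidths[int(np.argmin(scores))], 3))
```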
Depth weighted scatter estimators [PDF]
General depth weighted scatter estimators are introduced and investigated. For general depth functions, we find out that these affine equivariant scatter estimators are Fisher consistent and unbiased for a wide range of multivariate distributions, and ...
Cui, Hengjian, Zuo, Yijun
core +1 more source
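The general recipe behind the estimators in the last entry is to downweight observations that are shallow (far from the center) according to a depth function and then form a weighted scatter matrix. A minimal sketch follows; the particular depth (a Mahalanobis-type depth), the starting center and scale, and the weight function are stand-ins chosen for illustration, not the ones analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.multivariate_normal([0, 0], [[2.0, 0.8], [0.8, 1.0]], size=300)
X[:5] += 25.0                                   # a few gross outliers

# Mahalanobis-type depth D(x) = 1 / (1 + d^2(x)), computed from simple
# starting values (coordinatewise medians and the plain covariance here).
center = np.median(X, axis=0)
cov0 = np.cov(X, rowvar=False)
diff = X - center
d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov0), diff)
depth = 1.0 / (1.0 + d2)

# Depth-weighted scatter: weight each centered outer product by w(D(x_i)).
w = depth                                       # simple choice: w(D) = D
scatter = (w[:, None, None] * np.einsum("ij,ik->ijk", diff, diff)).sum(0) / w.sum()
print("depth-weighted scatter matrix:\n", np.round(scatter, 3))
```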

