Results 11 to 20 of about 2,762
A Commentary on Chatterjee et al. (2018): A Corrected Framework for Group Sparsity in Zero-Inflated Negative Binomial Models. [PDF]
ABSTRACT We reexamine GOOOGLE, the group‐regularized zero‐inflated negative binomial (ZINB) approach of Chatterjee et al. We show that in the released implementation, the tuning parameter is selected using a Bayesian information criterion (BIC) computed on a Gaussian surrogate.
Iqbal A, Mallick H, Ogundimu EO.
europepmc +2 more sources
Variable selection in modelling clustered data via within-cluster resampling. [PDF]
Abstract In many biomedical applications, there is a need to build risk‐adjustment models based on clustered data. However, methods for variable selection that are applicable to clustered discrete data settings with a large number of candidate variables and potentially large cluster sizes are lacking.
Ye S +5 more
europepmc +2 more sources
Approximate Message Passing Algorithm for Nonconvex Regularization
In this paper, we study the sparse signal reconstruction with nonconvex regularization, mainly focusing on two popular nonconvex regularization methods, minimax concave penalty (MCP) and smoothly clipped absolute deviation (SCAD).
Hui Zhang +4 more
doaj +1 more source
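The two penalties named in this abstract have standard closed forms; a minimal sketch of the (assumed textbook) MCP and SCAD penalty functions in Python, with the commonly used defaults γ = 3 and a = 3.7:

```python
def mcp(t, lam, gamma=3.0):
    """Minimax concave penalty (MCP); requires gamma > 1."""
    t = abs(t)
    if t <= gamma * lam:
        return lam * t - t * t / (2.0 * gamma)
    return gamma * lam * lam / 2.0  # flat beyond gamma * lam

def scad(t, lam, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty; requires a > 2."""
    t = abs(t)
    if t <= lam:
        return lam * t  # lasso-like near zero
    if t <= a * lam:
        return (2.0 * a * lam * t - t * t - lam * lam) / (2.0 * (a - 1.0))
    return lam * lam * (a + 1.0) / 2.0  # constant for large |t|
```

Both behave like the lasso penalty λ|t| near zero but flatten to a constant for large |t|, which is what makes them nonconvex and leaves large coefficients nearly unbiased.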
Linear Convergence of Adaptively Iterative Thresholding Algorithms for Compressed Sensing [PDF]
This paper studies the convergence of the adaptively iterative thresholding (AIT) algorithm for compressed sensing. We first introduce a generalized restricted isometry property (gRIP).
Chang, Xiangyu +4 more
core +1 more source
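The family of algorithms this paper analyses can be sketched in a few lines; below is plain iterative hard thresholding (a non-adaptive relative of the paper's AIT scheme, shown for illustration only — the matrix, sparsity level k, and step size are hypothetical):

```python
def hard_threshold(x, k):
    # Keep the k largest-magnitude entries of x, zero out the rest.
    keep = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:k]
    return [x[i] if i in keep else 0.0 for i in range(len(x))]

def iht(A, y, k, step=1.0, iters=50):
    # Gradient step on 0.5 * ||y - A x||^2, followed by hard thresholding.
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = hard_threshold([x[j] + step * g[j] for j in range(n)], k)
    return x
```

With A equal to the identity, a single iteration already recovers the k largest entries of y; the interesting convergence questions arise for general sensing matrices, which is where conditions such as the gRIP come in.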
A variety of statistical methods, such as admixture models, have been used to estimate genomic breed composition (GBC). These methods, however, tend to assign non-zero components to reference breeds that share some genomic similarity with a test animal.

Yangfan Wang +10 more
doaj +1 more source
Background Feature selection and prediction are the most important tasks for big data mining. The common strategies for feature selection in big data mining are L1, SCAD and MC+.
Zhenqiu Liu +2 more
doaj +1 more source
SIS: An R Package for Sure Independence Screening in Ultrahigh-Dimensional Statistical Models
We revisit sure independence screening procedures for variable selection in generalized linear models and the Cox proportional hazards model. Through the publicly available R package SIS, we provide a unified environment to carry out variable selection ...
Diego Franco Saldana, Yang Feng
doaj +1 more source
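The core screening step behind SIS is easy to sketch: rank predictors by absolute marginal correlation with the response and keep the top d. This is a minimal illustration of the idea only; the actual SIS package adds iterative variants (ISIS) and penalized refitting:

```python
def sis_screen(X, y, d):
    # Rank columns of X by |Pearson correlation with y|; keep the top d indices.
    n = len(y)
    my = sum(y) / n
    vy = sum((b - my) ** 2 for b in y)

    def abs_corr(col):
        mx = sum(col) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        vx = sum((a - mx) ** 2 for a in col)
        return abs(cov) / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else 0.0

    p = len(X[0])
    scores = [abs_corr([row[j] for row in X]) for j in range(p)]
    return sorted(range(p), key=lambda j: scores[j], reverse=True)[:d]
```

Screening reduces an ultrahigh-dimensional problem to a moderate one, after which a penalized method (lasso, SCAD, MCP) is fit on the surviving predictors.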
On the adaptive elastic-net with a diverging number of parameters [PDF]
We consider the problem of model selection and estimation in situations where the number of parameters diverges with the sample size. When the dimension is high, an ideal method should have the oracle property [J. Amer. Statist. Assoc.
Zhang, Hao Helen, Zou, Hui
core +2 more sources
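The penalty this paper studies combines adaptively weighted l1 shrinkage with a ridge term. A minimal sketch of the objective's penalty part (the weights w[j] are assumed to come from an initial elastic-net fit, as is typical for adaptive methods; names here are illustrative):

```python
def adaptive_enet_penalty(beta, w, lam1, lam2):
    # Adaptive l1 part: each coefficient gets its own data-driven weight w[j],
    # so large preliminary estimates are shrunk less. Plus a ridge (l2) part.
    l1 = sum(wj * abs(b) for wj, b in zip(w, beta))
    l2 = sum(b * b for b in beta)
    return lam1 * l1 + lam2 * l2
```

The adaptive weights are what restore the oracle property that the plain elastic-net lacks, while the ridge term stabilizes estimation when predictors are correlated.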
We report the comprehensive identification of periodic genes and their network inference, based on a gene co-expression analysis and an Auto-Regressive eXogenous (ARX) model with a group smoothly clipped absolute deviation (SCAD) method using a time ...
Satoru Koda +16 more
doaj +1 more source
We analyse a linear regression problem with nonconvex regularization called smoothly clipped absolute deviation (SCAD) under an overcomplete Gaussian basis for Gaussian random data.
Sakata, Ayaka, Xu, Yingying
core +1 more source

