
New Developments in Sparse PLS Regression

open access: yesFrontiers in Applied Mathematics and Statistics, 2021
Methods based on partial least squares (PLS) regression, which has recently gained much attention in the analysis of high-dimensional genomic datasets, have been developed since the early 2000s to perform variable selection.
Jérémy Magnanensi   +6 more
doaj   +1 more source

Greedy permanent magnet optimization

open access: yesNuclear Fusion, 2023
A number of scientific fields rely on placing permanent magnets in order to produce a desired magnetic field. We have shown in recent work that the placement process can be formulated as sparse regression.
Alan A. Kaptanoglu   +2 more
doaj   +1 more source
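
The sparse-regression formulation can be sketched generically (an illustration only, not the authors' code): if column j of a matrix A holds the field that a unit-strength magnet at candidate site j would produce at the measurement points, then matching a target field b with few magnets is sparse regression, and a natural greedy solver is orthogonal matching pursuit:

```python
import numpy as np

def omp(A, b, k):
    """Greedily pick k columns of A (candidate magnet sites) to match b."""
    residual, support = b.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # site most correlated with residual
        support.append(j)
        m_s, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)  # refit on support
        residual = b - A[:, support] @ m_s
    m = np.zeros(A.shape[1])
    m[support] = m_s
    return m
```

Because the residual is re-orthogonalized against the selected columns at each step, no site is picked twice.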

Principal component‐guided sparse regression [PDF]

open access: yesCanadian Journal of Statistics, 2021
We propose a new method for supervised learning, the “principal components lasso” (“pcLasso”). It combines the lasso (ℓ1) penalty with a quadratic penalty that shrinks the coefficient vector toward the feature matrix's leading principal components (PCs). pcLasso can be especially powerful if the features are preassigned to groups. In that case,
Jingyi K. Tay   +2 more
openaire   +2 more sources
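
A minimal sketch of a pcLasso-style objective, assuming a single feature group and plain proximal gradient descent (an illustration of the idea, not the authors' implementation; the function name and parameters are hypothetical):

```python
import numpy as np

def pclasso_sketch(X, y, lam=0.1, theta=1.0, n_iter=500):
    """Minimize 0.5*||y - Xb||^2 + lam*||b||_1 + (theta/2) * b'Qb, where
    Q = V diag(d1^2 - dj^2) V' (from the SVD X = U diag(d) V') vanishes
    along the leading principal component, shrinking b toward it."""
    _, d, Vt = np.linalg.svd(X, full_matrices=False)
    Q = Vt.T @ np.diag(d[0] ** 2 - d ** 2) @ Vt
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + theta * np.linalg.norm(Q, 2))
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) + theta * (Q @ b)        # smooth part
        z = b - step * grad
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return b
```

Setting theta=0 recovers plain ISTA for the lasso; larger theta pulls the fit toward the top PC direction.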

Sparse Multivariate Gaussian Mixture Regression [PDF]

open access: yesIEEE Transactions on Neural Networks and Learning Systems, 2015
Fitting a multivariate Gaussian mixture to data is an attractive as well as challenging problem, especially when sparsity in the solution is demanded. Achieving this objective requires the concurrent update of all parameters (weights, centers, and precisions) of all multivariate Gaussian functions during the learning process. Such is the focus
Luis Weruaga Prieto   +1 more
openaire   +3 more sources

Sparse hierarchical regression with polynomials [PDF]

open access: yesMachine Learning, 2020
We present a novel method for exact hierarchical sparse polynomial regression. Our regressor is the degree-$r$ polynomial that depends on at most $k$ inputs, contains at most $\ell$ monomial terms, and minimizes the sum of the squares of its prediction errors.
Dimitris Bertsimas, Bart Van Parys
openaire   +4 more sources
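
The stated objective can be made concrete with a brute-force sketch, exact only for tiny problems (the paper's contribution is an exact method that scales far beyond this; the function below is purely illustrative):

```python
import itertools
import numpy as np

def monomials(cols, r):
    """All monomial exponent patterns over `cols` with total degree 1..r."""
    terms = []
    for deg in range(1, r + 1):
        terms += list(itertools.combinations_with_replacement(cols, deg))
    return terms

def sparse_poly_fit(X, y, k, r, l):
    """Enumerate every choice of <= k inputs and <= l monomials of degree
    <= r; least-squares fit each and keep the smallest sum of squared errors."""
    best = (np.inf, None, None)
    for cols in itertools.combinations(range(X.shape[1]), k):
        for terms in itertools.combinations(monomials(cols, r), l):
            Z = np.column_stack([np.prod(X[:, list(t)], axis=1) for t in terms])
            coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
            sse = np.sum((y - Z @ coef) ** 2)
            if sse < best[0]:
                best = (sse, terms, coef)
    return best
```

Each term is a tuple of input indices, so (0, 1) denotes the monomial x0*x1 and (0, 0) denotes x0^2.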

Sparse regression for extreme values [PDF]

open access: yesElectronic Journal of Statistics, 2021
We study the problem of selecting features associated with extreme values in high dimensional linear regression. Normally, in linear modeling problems, the presence of abnormal extreme values or outliers is considered an anomaly which should either be removed from the data or remedied using robust regression methods.
Andersen Chang   +2 more
openaire   +3 more sources

Sparse Poisson regression via mixed-integer optimization.

open access: yesPLoS ONE, 2021
We present a mixed-integer optimization (MIO) approach to sparse Poisson regression. The MIO approach to sparse linear regression was first proposed in the 1970s, but has recently received renewed attention due to advances in optimization algorithms and ...
Hiroki Saishu, Kota Kudo, Yuichi Takano
doaj   +1 more source
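
For illustration, the cardinality-constrained problem can be written down and solved by brute-force subset enumeration (the MIO formulation solves the same problem exactly without enumerating supports; this sketch and its helper names are not from the paper):

```python
import itertools
import numpy as np

def poisson_nll(beta, X, y):
    """Poisson negative log-likelihood (up to the constant log(y!) term)."""
    eta = X @ beta
    return np.sum(np.exp(eta) - y * eta)

def fit_support(X, y, lr=0.01, n_iter=2000):
    """Fit a Poisson GLM on a fixed support by gradient descent."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta -= lr * X.T @ (np.exp(X @ beta) - y) / len(y)
    return beta

def sparse_poisson(X, y, k):
    """Best k-subset Poisson regression by exhaustive enumeration."""
    best = (np.inf, None, None)
    for S in itertools.combinations(range(X.shape[1]), k):
        b = fit_support(X[:, list(S)], y)
        nll = poisson_nll(b, X[:, list(S)], y)
        if nll < best[0]:
            best = (nll, S, b)
    return best
```

Enumeration costs C(p, k) fits, which is exactly the combinatorial explosion the MIO approach is designed to avoid.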

Uncertainty Analysis and Experimental Validation of Identifying the Governing Equation of an Oscillator Using Sparse Regression

open access: yesApplied Sciences, 2022
In recent years, the rapid growth of computing technology has made it possible to identify mathematical models of vibration systems from measurement data instead of domain knowledge.
Yaxiong Ren   +2 more
doaj   +1 more source
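
One widely used sparse-regression scheme for this kind of identification task is sequentially thresholded least squares (STLSQ, popularized by SINDy); the snippet does not state which variant the paper uses, so treat the following as a generic sketch. Theta is a library of candidate terms evaluated on the measurements and dXdt holds the measured derivatives:

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Alternate between zeroing small coefficients and refitting
    each equation by least squares on its surviving support."""
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dXdt.shape[1]):
            big = ~small[:, j]
            if big.any():
                Xi[big, j], *_ = np.linalg.lstsq(Theta[:, big], dXdt[:, j],
                                                 rcond=None)
    return Xi
```

For a simple harmonic oscillator (dx/dt = v, dv/dt = -x) with a polynomial library, the procedure recovers exactly one term per equation.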

On sparse optimal regression trees

open access: yesEuropean Journal of Operational Research, 2022
In this paper, we model an optimal regression tree through a continuous optimization problem, where a compromise between prediction accuracy and both types of sparsity, namely local and global, is sought. Our approach can accommodate important desirable properties for the regression task, such as cost-sensitivity and fairness.
Rafael Blanquero   +3 more
openaire   +5 more sources

Distributed Sparse Linear Regression

open access: yesIEEE Transactions on Signal Processing, 2010
The Lasso is a popular technique for joint estimation and continuous variable selection, especially well-suited for sparse and possibly under-determined linear regression problems. This paper develops algorithms to estimate the regression coefficients via Lasso when the training data are distributed across different agents, and their communication to a
Gonzalo Mateos   +2 more
openaire   +2 more sources
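
A standard template for solving the Lasso when the (X_i, y_i) blocks live on different agents is consensus ADMM, in which only local estimates and dual variables are exchanged; the paper develops its own distributed algorithms, so this is just the generic pattern:

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def distributed_lasso(parts, lam=0.1, rho=1.0, n_iter=200):
    """Consensus ADMM: each agent i holds (X_i, y_i); z is the shared estimate."""
    p = parts[0][0].shape[1]
    x = [np.zeros(p) for _ in parts]   # local estimates
    u = [np.zeros(p) for _ in parts]   # scaled dual variables
    z = np.zeros(p)                    # consensus estimate
    for _ in range(n_iter):
        for i, (Xi, yi) in enumerate(parts):
            # local ridge-like subproblem, solvable entirely with agent i's data
            A = Xi.T @ Xi + rho * np.eye(p)
            x[i] = np.linalg.solve(A, Xi.T @ yi + rho * (z - u[i]))
        # consensus update: soft-threshold the average of local estimates
        z = soft(np.mean([x[i] + u[i] for i in range(len(parts))], axis=0),
                 lam / (rho * len(parts)))
        for i in range(len(parts)):
            u[i] += x[i] - z           # dual ascent toward consensus
    return z
```

Only the p-dimensional vectors x_i + u_i cross the network; the raw training data never leave their agents.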
