Results 81 to 90 of about 6,969,735

Feature Redundancy Based on Interaction Information for Multi-Label Feature Selection

open access: yesIEEE Access, 2020
In recent years, multi-label feature selection has gradually attracted significant attention from the machine learning, statistical computing and related communities, and has been widely applied to diverse problems from music recognition to text mining, image ...
Wanfu Gao   +3 more
doaj   +1 more source
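The interaction-information criterion used in this paper is not spelled out in the snippet. As a loosely related illustration only, here is a minimal greedy relevance-minus-redundancy ranking (an mRMR-style heuristic, not necessarily the authors' formulation) built on scikit-learn's mutual-information estimators; the function name greedy_mrmr_multilabel and the toy data are invented for this sketch.

```python
# Sketch of an mRMR-style relevance-minus-redundancy ranking for
# multi-label data. Generic heuristic, not the interaction-information
# criterion of the paper above.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression


def greedy_mrmr_multilabel(X, Y, k):
    """Greedily pick k features: high relevance to the labels in Y,
    low redundancy with the features already selected."""
    n_features = X.shape[1]
    # Relevance: average mutual information between each feature and each label.
    relevance = np.mean(
        [mutual_info_classif(X, Y[:, j], random_state=0) for j in range(Y.shape[1])],
        axis=0,
    )
    selected, remaining = [], list(range(n_features))
    while len(selected) < k and remaining:
        scores = []
        for f in remaining:
            if selected:
                # Redundancy: average MI between candidate f and selected features.
                red = mutual_info_regression(X[:, selected], X[:, f], random_state=0).mean()
            else:
                red = 0.0
            scores.append(relevance[f] - red)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected


# Toy usage: 100 samples, 20 features, 3 binary labels driven by the first 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
Y = (X[:, :3] + 0.1 * rng.normal(size=(100, 3)) > 0).astype(int)
print(greedy_mrmr_multilabel(X, Y, k=5))
```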

Feature Selection by Reordering

open access: yes, 2005
Feature selection serves both to reduce the total amount of available data (removing valueless data) and to improve the overall behavior of a given induction algorithm (removing data that degrade its results). A method for properly selecting features for an inductive algorithm is discussed.
Jiřina, M. (Marcel), Jiřina jr., M.
openaire   +4 more sources
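The reordering procedure itself is not described in the snippet. As a generic illustration of filter-style selection only (not the authors' method), the sketch below scores features with an ANOVA F-test, reorders them by score, and keeps a prefix of the reordered list.

```python
# Generic filter-style selection: score features, reorder by score,
# keep the best prefix. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif

X, y = make_classification(n_samples=200, n_features=30, n_informative=5, random_state=0)

scores, _ = f_classif(X, y)          # ANOVA F-score per feature
order = np.argsort(scores)[::-1]     # reorder features, best first
top_k = order[:5]                    # keep a prefix of the reordered features
X_reduced = X[:, top_k]

print("selected feature indices:", top_k)
print("reduced shape:", X_reduced.shape)
```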

Insights into PI3K/AKT signaling in B cell development and chronic lymphocytic leukemia

open access: yesFEBS Letters, EarlyView.
This Review explores how the phosphoinositide 3‐kinase and protein kinase B pathway shapes B cell development and drives chronic lymphocytic leukemia, a common blood cancer. It examines how signaling levels affect disease progression, addresses treatment challenges, and introduces novel experimental strategies to improve therapies and patient outcomes.
Maike Buchner
wiley   +1 more source

Feature and Variable Selection in Classification [PDF]

open access: yes, 2014
The amount of information in the form of features and variables available to machine learning algorithms is ever increasing. This can lead to classifiers that are prone to overfitting in high dimensions; high-dimensional models do not lend themselves ...
Karper, Aaron
core  
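One common embedded remedy for overfitting in high dimensions, offered here only as a generic example rather than the survey's specific recommendation, is an L1-penalized linear model whose penalty drives most coefficients to exactly zero.

```python
# Embedded feature selection with an L1-penalized logistic regression:
# the penalty zeroes most coefficients, selecting a sparse feature subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Many more features than informative ones: a typical overfitting setting.
X, y = make_classification(n_samples=150, n_features=500, n_informative=10, random_state=0)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)

kept = np.flatnonzero(clf.coef_[0])
print(f"non-zero coefficients: {kept.size} of {X.shape[1]}")
```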

Labeling the Features Not the Samples: Efficient Video Classification with Minimal Supervision

open access: yes, 2015
Feature selection is essential for effective visual recognition. We propose an efficient joint classifier learning and feature selection method that discovers sparse, compact representations of input features from a vast sea of candidates, with an almost
Baluja, Shumeet   +3 more
core   +1 more source
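The specific joint learning-and-selection scheme of this paper is not reproduced here. As a rough stand-in, the sketch below lets an L1-penalized linear SVM learn a classifier while discarding most of a large candidate feature pool via scikit-learn's SelectFromModel.

```python
# Joint classifier learning and sparse feature selection, generic sketch:
# an L1-penalized linear SVM learns the classifier while zeroing out most
# of a large pool of candidate features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC

# A large pool of candidate features, few of them informative.
X, y = make_classification(n_samples=300, n_features=1000, n_informative=15, random_state=0)

svm = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000).fit(X, y)
selector = SelectFromModel(svm, prefit=True)
X_compact = selector.transform(X)

print("compact representation:", X_compact.shape)  # (300, number of kept features)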

Feature Selection for Portfolio Optimization [PDF]

open access: yesSSRN Electronic Journal, 2015
Most portfolio selection rules based on the sample mean and covariance matrix perform poorly out-of-sample. Moreover, there is a growing body of evidence that such optimization rules are not able to beat simple rules of thumb, such as 1/N. Parameter uncertainty has been identified as one major reason for these findings. A strand of literature addresses
Alex Weissensteiner   +4 more
openaire   +6 more sources
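The abstract contrasts rules built from the sample mean and covariance matrix with the 1/N rule. The sketch below only illustrates those two baseline rules on synthetic returns; it does not implement the paper's feature-selection approach.

```python
# Two portfolio rules mentioned in the abstract: the 1/N equal-weight rule
# and a minimum-variance rule built from the sample covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0005, scale=0.01, size=(500, 8))  # 500 days, 8 assets

n = returns.shape[1]
w_equal = np.full(n, 1.0 / n)                       # the 1/N rule

cov = np.cov(returns, rowvar=False)                 # sample covariance matrix
ones = np.ones(n)
w_minvar = np.linalg.solve(cov, ones)
w_minvar /= w_minvar.sum()                          # minimum-variance weights

print("1/N weights:         ", np.round(w_equal, 3))
print("min-variance weights:", np.round(w_minvar, 3))
```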

Making tau amyloid models in vitro: a crucial and underestimated challenge

open access: yesFEBS Letters, EarlyView.
This review highlights the challenges of producing in vitro amyloid assemblies of the tau protein. We review how accurately the existing protocols mimic tau deposits found in the brain of patients affected with tauopathies. We discuss the important properties that should be considered when forming amyloids and the benchmarks that should be used to ...
Julien Broc, Clara Piersson, Yann Fichou
wiley   +1 more source

Feature Selection in k-Median Clustering [PDF]

open access: yes, 2004
An effective method for selecting features in clustering unlabeled data is proposed based on changing the objective function of the standard k-median clustering algorithm.
Mangasarian, Olvi, Wild, Edward
core   +1 more source
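The modified objective proposed in the paper is not reproduced here; the sketch below implements only plain k-median clustering (1-norm assignments, coordinate-wise median centers) so the baseline being modified is concrete.

```python
# Plain k-median clustering: assign points by 1-norm distance, update each
# center as the coordinate-wise median of its cluster. The feature-selection
# modification from the paper above is not shown.
import numpy as np


def k_median(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Nearest center under the 1-norm.
        dists = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Coordinate-wise median of each cluster.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)
    return labels, centers


rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
labels, centers = k_median(X, k=2)
print("cluster sizes:", np.bincount(labels))
```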

Feature selection in bioinformatics [PDF]

open access: yesSPIE Proceedings, 2012
In bioinformatics, there are often a large number of input features. For example, there are millions of single nucleotide polymorphisms (SNPs) that are genetic variations which determine the difference between any two unrelated individuals. In microarrays, thousands of genes can be profiled in each test. It is important to find out which input features (e.g.,
openaire   +3 more sources
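As a generic example of filtering thousands of candidate features, and not the specific procedure from this proceedings paper, the sketch below ranks a microarray-sized synthetic matrix with a univariate F-test.

```python
# Univariate filter ranking for a high-dimensional bioinformatics-style
# matrix (samples x genes): score every feature with an ANOVA F-test and
# keep the strongest ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Stand-in for a microarray: 80 samples, 2000 gene-expression features.
X, y = make_classification(n_samples=80, n_features=2000, n_informative=20, random_state=0)

selector = SelectKBest(score_func=f_classif, k=50).fit(X, y)
X_top = selector.transform(X)                      # keep the 50 best features
top_genes = np.argsort(selector.scores_)[::-1][:10]
print("indices of the 10 highest-scoring features:", top_genes)
print("reduced shape:", X_top.shape)
```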

Pitfalls of supervised feature selection [PDF]

open access: yesBioinformatics, 2009
Pawel Smialowski, Dmitrij Frishman and Stefan Kramer. Department of Genome Oriented Bioinformatics, Technische Universität München, Wissenschaftszentrum Weihenstephan, Am Forum 1, 85350 Freising; Helmholtz Zentrum Munich, National Research Center for Environment and Health, Institute for Bioinformatics, ...
Smialowski, P., Frishman, D., Kramer, S.
openaire   +4 more sources
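A pitfall widely discussed in this literature, and presumably among those covered here, is running supervised feature selection on the full dataset before cross-validation, which leaks label information into the accuracy estimate. The sketch below shows the safe pattern of nesting the selector inside each fold via a scikit-learn Pipeline.

```python
# Safe pattern: put the supervised feature selector inside a Pipeline so it
# is refit on each training fold, keeping the CV accuracy estimate unbiased.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=100, n_features=1000, n_informative=10, random_state=0)

safe = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),   # fitted on each training fold only
    ("clf", LogisticRegression(max_iter=1000)),
])
scores = cross_val_score(safe, X, y, cv=5)
print("cross-validated accuracy: %.3f" % scores.mean())
```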
