Results 101 to 110 of about 129,562 (262)

Scaling Laws for Hyperparameter Optimization

open access: yes, 2023
Accepted at NeurIPS ...
Kadra, Arlind   +3 more
openaire   +2 more sources

Multi‐Site Transfer Classification of Major Depressive Disorder: An fMRI Study in 3335 Subjects

open access: yes, Advanced Science, EarlyView.
The study proposes a graph convolutional network with sparse pooling to learn hierarchical features of brain graphs for MDD classification. Experiments are conducted on multi‐site fMRI samples (3335 subjects, the largest functional MDD dataset to date) with transfer learning, achieving an average accuracy of 70.14%.
Jianpo Su   +14 more
wiley   +1 more source

Stochastic Hyperparameter Optimization through Hypernetworks

open access: yes, 2018
Machine learning models are often tuned by nesting optimization of model weights inside the optimization of hyperparameters. We give a method to collapse this nested optimization into joint stochastic optimization of weights and hyperparameters. Our process trains a neural network to output approximately optimal weights as a function of hyperparameters.
Lorraine, Jonathan, Duvenaud, David
openaire   +2 more sources
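
The core idea, a hypernetwork that maps hyperparameters to approximately optimal weights, jointly trained with the hyperparameters themselves, can be sketched on a toy problem. Everything below (the quadratic training and validation losses, the one-dimensional linear "hypernetwork", the learning rates) is invented for illustration and is not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (hypothetical, for illustration only):
#   training loss   L_T(w, lam) = (w - 1)^2 + lam * w^2
#   validation loss L_V(w)      = (w - 0.5)^2
# The exact best response is w*(lam) = 1 / (1 + lam), so the validation
# optimum sits at lam = 1, where w* = 0.5.

def train_grad_w(w, lam):
    """dL_T/dw for the toy training loss."""
    return 2.0 * (w - 1.0) + 2.0 * lam * w

# A minimal linear "hypernetwork": w(lam) = a*lam + b.
a, b = 0.0, 0.0
lam = 2.0                       # current hyperparameter guess
lr_h, lr_lam = 0.01, 0.1

for _ in range(5000):
    # Weight step: sample a hyperparameter and move the hypernetwork's
    # output toward the training-loss minimizer at that hyperparameter.
    lam_s = rng.uniform(0.5, 3.0)
    g = train_grad_w(a * lam_s + b, lam_s)
    a -= lr_h * g * lam_s       # chain rule through w = a*lam + b
    b -= lr_h * g
    # Hyperparameter step: descend the validation loss through the
    # hypernetwork's approximate best response.
    lam -= lr_lam * 2.0 * (a * lam + b - 0.5) * a
    lam = float(np.clip(lam, 0.5, 3.0))

w_final = a * lam + b
print(f"lam = {lam:.3f}, w(lam) = {w_final:.3f}")
```

The alternating updates collapse the usual nested loop (inner weight optimization, outer hyperparameter search) into a single stochastic optimization, which is the paper's central move; here the hypernetwork is a linear map only because the toy best response is nearly linear on the sampled range.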

Comprehensive Profiling of N6‐methyladenosine (m6A) Readouts Reveals Novel m6A Readers That Regulate Human Embryonic Stem Cell Differentiation

open access: yes, Advanced Science, EarlyView.
This research deciphers the m6A transcriptome by profiling its sites and their functional readouts, from mRNA stability and translation to alternative splicing, across five different cell types. A machine learning model identifies novel m6A‐binding proteins DDX6 and FXR2 and novel m6A reader proteins FUBP3 and L1TD1.
Zhou Huang   +11 more
wiley   +1 more source

Divergent Responses of Bacterial Communities to Permafrost Degradation and Their Associations With Carbon Across Vertical Profiles

open access: yes, Advanced Science, EarlyView.
Bacterial α‐diversity decreases, but stochasticity and community stability increase, across the 15 m‐deep vertical profiles and along the degradation gradient within the active layer. The abundance and interactions of core taxa mainly control community stability in the active and permafrost layers, respectively.
Shengyun Chen   +13 more
wiley   +1 more source

Heuristically Adaptive Diffusion‐Model Evolutionary Strategy

open access: yes, Advanced Science, EarlyView.
Building on the mathematical equivalence between diffusion models and evolutionary algorithms, researchers demonstrate unprecedented control over evolutionary optimization through conditional diffusion. By training diffusion models to associate parameters with specific traits, they can guide evolution toward solutions exhibiting desired behaviors ...
Benedikt Hartl   +3 more
wiley   +1 more source

ACHO: Adaptive Conformal Hyperparameter Optimization

open access: yes, 2022
Several novel frameworks for hyperparameter search have emerged in the last decade, but most rely on strict, often normal, distributional assumptions, limiting search model flexibility. This paper proposes a novel optimization framework based on upper confidence bound sampling of conformal confidence intervals, whose weaker assumption of ...
openaire   +2 more sources
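
The general mechanism the abstract describes, upper‐confidence‐bound sampling over conformal‐style intervals, can be sketched on an invented one‐dimensional problem. The surrogate model, objective, and quantile level below are illustrative assumptions only, and a proper split‐conformal procedure would calibrate residuals on held‐out points rather than on the fit itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Hypothetical validation score to maximize (unknown to the optimizer);
    # its true optimum is at x = 0.7.
    return -(x - 0.7) ** 2

# Initial random evaluations of the hyperparameter x in [0, 1].
X = list(rng.uniform(0, 1, 5))
y = [objective(x) for x in X]

for _ in range(20):
    # Surrogate: degree-2 polynomial fit (a stand-in for a richer model).
    coef = np.polyfit(X, y, 2)
    preds = np.polyval(coef, X)
    # Conformal-style interval width: a high quantile of absolute residuals.
    # (Simplification: residuals come from the training points themselves.)
    q = np.quantile(np.abs(np.array(y) - preds), 0.9)
    # Evaluate the candidate with the highest upper confidence bound.
    grid = np.linspace(0, 1, 101)
    ucb = np.polyval(coef, grid) + q
    x_next = grid[int(np.argmax(ucb))]
    X.append(x_next)
    y.append(objective(x_next))

best = X[int(np.argmax(y))]
print(f"best hyperparameter found: {best:.2f}")
```

The appeal of the conformal interval here is that it requires no normality assumption on the surrogate's errors, only exchangeability of the residuals, which is the flexibility the abstract contrasts against Gaussian-based acquisition functions.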

MGM as a Large‐Scale Pretrained Foundation Model for Microbiome Analyses in Diverse Contexts

open access: yes, Advanced Science, EarlyView.
We present the Microbial General Model (MGM), a transformer‐based foundation model pretrained on over 260,000 microbiome samples. MGM learns contextualized microbial representations via self‐supervised language modeling, enabling robust transfer learning, cross‐regional generalization, keystone taxa discovery, and prompt‐guided generation of realistic,
Haohong Zhang   +5 more
wiley   +1 more source

Overtuning in Hyperparameter Optimization

open access: yes
Accepted at the Fourth Conference on Automated Machine Learning (Methods Track).
Schneider, Lennart   +2 more
openaire   +2 more sources

Hyperparameters: Optimize, or Integrate Out? [PDF]

open access: yes, 1996
I examine two approximate methods for computational implementation of Bayesian hierarchical models, that is, models which include unknown hyperparameters such as regularization constants. In the ‘evidence framework’ the model parameters are integrated over, and the resulting evidence is maximized over the hyperparameters.
openaire   +1 more source
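
The two approaches the abstract contrasts can be written compactly in standard Bayesian notation (conventional symbols, not quoted from the paper): the evidence framework integrates over the parameters $w$ and maximizes over the hyperparameters $\alpha$, while the alternative integrates the hyperparameters out of the posterior.

```latex
% Evidence framework: integrate over parameters w, maximize over hyperparameters \alpha
p(\mathcal{D} \mid \alpha) = \int p(\mathcal{D} \mid w)\, p(w \mid \alpha)\, \mathrm{d}w,
\qquad
\hat{\alpha} = \arg\max_{\alpha}\, p(\mathcal{D} \mid \alpha)

% Alternative: integrate the hyperparameters out of the posterior over w
p(w \mid \mathcal{D}) = \int p(w \mid \mathcal{D}, \alpha)\, p(\alpha \mid \mathcal{D})\, \mathrm{d}\alpha
```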
