Results 81 to 90 of about 109,092

Hyperparameter Optimization with Differentiable Metafeatures

open access: yes, 2021
Metafeatures, or dataset characteristics, have been shown to improve the performance of hyperparameter optimization (HPO). Conventionally, metafeatures are precomputed and used to measure the similarity between datasets, leading to a better initialization of HPO models.
Jomaa, Hadi S.   +2 more
openaire   +2 more sources
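
The conventional warm-start scheme this abstract describes can be sketched in a few lines: precompute simple statistics (metafeatures) per dataset, find the most similar previously seen dataset, and reuse its best-known hyperparameters to initialize HPO. The metafeature choice and all names below are illustrative assumptions, not the paper's learned, differentiable metafeatures.

```python
import numpy as np

def metafeatures(X, y):
    """Hand-crafted dataset statistics (an illustrative choice, not the paper's)."""
    return np.array([
        np.log(X.shape[0]),                  # log number of instances
        np.log(X.shape[1]),                  # log number of features
        float(len(np.unique(y))),            # number of classes
        float(np.mean(np.std(X, axis=0))),   # mean feature spread
    ])

def warm_start_config(new_X, new_y, history):
    """history: list of (metafeature_vector, best_hyperparameter_dict) pairs."""
    mf = metafeatures(new_X, new_y)
    dists = [np.linalg.norm(mf - h_mf) for h_mf, _ in history]
    return history[int(np.argmin(dists))][1]  # config of the nearest past dataset

# Usage: seed an HPO run with the returned config instead of a random draw.
```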

Ultra‐Improved Interfacial Strength Between Metallic Surface and Polyurethane via Cost‐Effective Anodizing Process

open access: yes, Advanced Materials Interfaces, EarlyView.
SAA significantly enhanced Al/PU bonding, increasing SLSS by up to 920% and fracture energy by 15,100% through optimized micro‐nano porous surfaces. RSM identified the optimal anodizing conditions, while ML confirmed sulfuric acid concentration and roughness as dominant predictors of strength.
Umut Bakhbergen   +6 more
wiley   +1 more source
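
The "ML confirmed ... dominant predictors" step in studies like this usually amounts to fitting a regressor on the process variables and inspecting feature importances. A hedged sketch with scikit-learn's random forest on synthetic stand-in data; the feature names and model choice are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical process variables: [H2SO4 concentration, roughness, voltage, time]
X = rng.uniform(size=(200, 4))
y = 3 * X[:, 0] + 2 * X[:, 1] + 0.1 * rng.normal(size=200)  # synthetic strength

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["acid_conc", "roughness", "voltage", "time"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")  # acid_conc and roughness dominate by construction
```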

Comparative Study on Hyperparameter Tuning for Predicting Concrete Compressive Strength

open access: yes, Buildings
This study assesses the impact of hyperparameter optimization algorithms on the performance of machine learning-based concrete compressive strength prediction models.
Jeonghyun Kim, Donwoo Lee
doaj   +1 more source
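
A comparison of tuning algorithms like the one described here typically holds the model fixed and varies only the search strategy. A minimal sketch contrasting grid search and randomized search on a gradient-boosting regressor over synthetic data; the search space and model are assumptions, not the study's.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_regression(n_samples=300, n_features=8, random_state=0)
space = {"n_estimators": [50, 100, 200], "max_depth": [2, 3, 4]}

# Same model and space; only the hyperparameter search algorithm changes.
for search in (GridSearchCV(GradientBoostingRegressor(random_state=0), space, cv=3),
               RandomizedSearchCV(GradientBoostingRegressor(random_state=0), space,
                                  n_iter=5, cv=3, random_state=0)):
    search.fit(X, y)
    print(type(search).__name__, search.best_params_, round(search.best_score_, 3))
```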

HyperSpace: Distributed Bayesian Hyperparameter Optimization

open access: yes, 2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), 2018
As machine learning models continue to increase in complexity, so does the potential number of free model parameters, commonly known as hyperparameters. While there has been considerable progress toward finding optimal configurations of these hyperparameters, many optimization procedures are treated as black boxes. We believe optimization methods should ...
M. Todd Young   +3 more
openaire   +2 more sources
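
At its core, Bayesian HPO fits a surrogate model (commonly a Gaussian process) to observed (hyperparameter, loss) pairs and picks the next trial via an acquisition function; HyperSpace's contribution is partitioning the search space across parallel optimizers. A single-process sketch with scikit-optimize and a toy objective; this is not the HyperSpace API or its distributed scheme.

```python
from skopt import gp_minimize
from skopt.space import Real, Integer

def objective(params):
    lr, depth = params
    # Toy stand-in for a validation loss; a real run would train a model here.
    return (lr - 0.05) ** 2 + 0.01 * (depth - 6) ** 2

result = gp_minimize(objective,
                     dimensions=[Real(1e-4, 1.0, prior="log-uniform"),  # learning rate
                                 Integer(2, 10)],                       # tree depth
                     n_calls=20, random_state=0)
print("best params:", result.x, "best loss:", result.fun)
```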

Scaling Laws for Hyperparameter Optimization

open access: yes, 2023
Accepted at NeurIPS ...
Kadra, Arlind   +3 more
openaire   +2 more sources

Characterization of Droplet Formation in Ultrasonic Spray Coating: Influence of Ink Formulation Using Phase Doppler Anemometry and Machine Learning

open access: yes, Advanced Materials Technologies, EarlyView.
This study explores how machine learning models, trained on small experimental datasets obtained via Phase Doppler Anemometry (PDA), can accurately predict droplet size (D32) in ultrasonic spray coating (USSC). By capturing the influence of ink complexity (solvent, polymer, nanoparticles), power, and flow rate, the model enables precise droplet control.
Pieter Verding   +5 more
wiley   +1 more source
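
Predicting a continuous droplet diameter (D32) from formulation and process inputs on a small dataset is a standard small-data regression problem, where cross-validation matters more than model capacity. A hedged sketch with gradient boosting on synthetic stand-in data; the feature names are assumptions, not the study's PDA measurements.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Hypothetical inputs: [viscosity, surface_tension, power_W, flow_rate_mL_min]
X = rng.uniform(size=(60, 4))                     # small experimental dataset
d32 = 20 + 15 * X[:, 2] - 10 * X[:, 3] + rng.normal(scale=1.0, size=60)

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, d32, cv=5, scoring="r2")
print("cross-validated R^2: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```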

Research on parameter selection and optimization of C4.5 algorithm based on algorithm applicability knowledge base

open access: yes, Scientific Reports
Given that the decision tree C4.5 algorithm has outstanding performance in prediction accuracy on medical datasets and is highly interpretable, this paper carries out an optimization study on the selection of hyperparameters of the algorithm in order to ...
Yiyan Zhang, Yi Xin, Qin Li
doaj   +1 more source
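
scikit-learn does not implement C4.5 itself, but its CART-based DecisionTreeClassifier exposes analogous hyperparameters (tree depth, minimum leaf size, pruning strength), so a tuning sketch looks like the following; the grid values and dataset are assumptions, not the paper's knowledge-base recommendations.

```python
from sklearn.datasets import load_breast_cancer  # a stand-in medical dataset
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
grid = {"max_depth": [3, 5, 8, None],
        "min_samples_leaf": [1, 5, 10],
        "ccp_alpha": [0.0, 0.001, 0.01]}  # cost-complexity pruning, akin to C4.5 pruning

# criterion="entropy" mirrors C4.5's information-gain splitting criterion.
search = GridSearchCV(DecisionTreeClassifier(criterion="entropy", random_state=0),
                      grid, cv=5).fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```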

An Optimized Hyperparameter Tuning for Improved Hate Speech Detection with Multilayer Perceptron

open access: yes, Jurnal RESTI (Rekayasa Sistem dan Teknologi Informasi)
Hate speech classification is a critical task in the domain of natural language processing, aiming to mitigate the negative impacts of harmful content on digital platforms.
Muhamad Ridwan, Ema Utami
doaj   +1 more source
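
For text classification with a multilayer perceptron, tuning usually covers hidden-layer sizes and regularization on top of a fixed vectorizer. A minimal scikit-learn sketch on placeholder texts; the corpus and search space are assumptions, not the paper's setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

texts = ["you are awful", "have a nice day", "I hate this group", "great work"] * 10
labels = [1, 0, 1, 0] * 10  # 1 = hateful, 0 = not (toy placeholder data)

pipe = make_pipeline(TfidfVectorizer(), MLPClassifier(max_iter=500, random_state=0))
grid = {"mlpclassifier__hidden_layer_sizes": [(32,), (64, 32)],
        "mlpclassifier__alpha": [1e-4, 1e-2]}  # L2 regularization strength
search = GridSearchCV(pipe, grid, cv=3).fit(texts, labels)
print(search.best_params_, round(search.best_score_, 3))
```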

Long Short Term Memory Hyperparameter Optimization for a Neural Network Based Emotion Recognition Framework

open access: yes, IEEE Access, 2018
Recently, emotion recognition using low-cost wearable sensors based on electroencephalogram and blood volume pulse has received much attention. Long short-term memory (LSTM) networks, a special type of recurrent neural networks, have been applied ...
Bahareh Nakisa   +4 more
doaj   +1 more source
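
A hedged Keras sketch of the kind of LSTM whose hyperparameters (units, dropout, learning rate) such a framework would search over; the signal shape and values are placeholders, not the paper's EEG/BVP setup.

```python
import numpy as np
import tensorflow as tf

def build_model(units=64, dropout=0.3, lr=1e-3, n_classes=4):
    """One candidate in the LSTM hyperparameter search space."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128, 8)),          # (timesteps, channels), placeholder
        tf.keras.layers.LSTM(units, dropout=dropout),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

X = np.random.randn(32, 128, 8).astype("float32")  # toy physiological windows
y = np.random.randint(0, 4, size=32)
build_model().fit(X, y, epochs=1, verbose=0)  # an HPO loop would vary the arguments
```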

Stochastic Hyperparameter Optimization through Hypernetworks

open access: yes, 2018
Machine learning models are often tuned by nesting optimization of model weights inside the optimization of hyperparameters. We give a method to collapse this nested optimization into joint stochastic optimization of weights and hyperparameters. Our process trains a neural network to output approximately optimal weights as a function of hyperparameters.
Lorraine, Jonathan, Duvenaud, David
openaire   +2 more sources
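
The core idea the abstract describes: instead of retraining weights for each hyperparameter setting, train a hypernetwork w = h_phi(lambda) so that for any hyperparameter lambda it emits near-optimal weights, then optimize phi and lambda jointly by stochastic gradients. A minimal PyTorch sketch for a linear model with an L2 penalty as the hyperparameter; all shapes and names are illustrative, not the authors' code.

```python
import torch

torch.manual_seed(0)
n_features = 5
X = torch.randn(256, n_features)
y = X @ torch.randn(n_features, 1) + 0.1 * torch.randn(256, 1)

# Hypernetwork: maps a hyperparameter (log L2 strength) to the model's weights.
hyper = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, n_features))
opt = torch.optim.Adam(hyper.parameters(), lr=1e-2)

for step in range(500):
    log_lam = torch.rand(1, 1) * 6 - 5   # sample a hyperparameter each step
    w = hyper(log_lam).T                 # weights as a function of log_lam
    pred = X @ w
    # Train hyper so w approximates the regularized-loss minimizer for log_lam.
    loss = ((pred - y) ** 2).mean() + log_lam.exp().squeeze() * (w ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
# The outer step (not shown) would tune log_lam against a validation loss through hyper.
```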
