Results 11 to 20 of about 127,719

Hyperparameter optimization with approximate gradient

open access: yes, 2016
Most models in machine learning contain at least one hyperparameter to control model complexity. Choosing an appropriate set of hyperparameters is crucial for model accuracy yet computationally challenging.
Pedregosa, Fabian
core   +2 more sources
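
The idea behind gradient-based tuning is easiest to see on a toy problem. The sketch below tunes a ridge-regression regularizer by descending an approximate hypergradient of the validation loss; it uses a simple finite-difference approximation rather than the paper's algorithm, and the data, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X_tr, X_val = rng.normal(size=(80, 5)), rng.normal(size=(40, 5))
w_true = rng.normal(size=5)
y_tr = X_tr @ w_true + 0.1 * rng.normal(size=80)
y_val = X_val @ w_true + 0.1 * rng.normal(size=40)

def inner_solve(lam):
    # Inner problem: closed-form ridge-regression weights on the training set.
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(5), X_tr.T @ y_tr)

def val_loss(lam):
    # Outer objective: validation error as a function of the hyperparameter.
    w = inner_solve(lam)
    return np.mean((X_val @ w - y_val) ** 2)

# Descend log(lambda) using a finite-difference approximate hypergradient.
log_lam, eps, lr = 0.0, 1e-4, 0.5
for _ in range(50):
    lam = np.exp(log_lam)
    g = (val_loss(lam + eps) - val_loss(lam - eps)) / (2 * eps)  # dL/dlam
    log_lam -= lr * g * lam  # chain rule: dL/dlog(lam) = lam * dL/dlam
print("tuned lambda:", np.exp(log_lam), "val loss:", val_loss(np.exp(log_lam)))
```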

Hyperparameter Optimization for AST Differencing

open access: yes, IEEE Transactions on Software Engineering, 2023
Computing the differences between two versions of the same program is an essential task for software development and software evolution research. AST differencing is the most advanced way of doing so, and an active research area. Yet, AST differencing algorithms rely on configuration parameters that may have a strong impact on their effectiveness.
Matias Martinez   +2 more
openaire   +3 more sources
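
To make the tuning problem concrete, here is a minimal grid-search sketch over a differencer's configuration space. The parameter names and ranges are hypothetical (loosely modeled on GumTree-style matcher options), and the scoring function is a synthetic placeholder for running the differ on a benchmark and measuring edit-script size.

```python
import itertools

# Hypothetical configuration space; real AST differencers (e.g. GumTree)
# expose their own matcher parameters with tool-specific names and ranges.
config_space = {
    "min_height": [1, 2, 3],
    "max_subtree_size": [100, 500, 1000],
    "sim_threshold": [0.3, 0.5, 0.7],
}

def diff_quality(cfg):
    # Placeholder objective standing in for scoring the differ on a benchmark
    # of file pairs (e.g. by negative edit-script length).
    return -(abs(cfg["sim_threshold"] - 0.5)
             + cfg["min_height"] / 10
             + cfg["max_subtree_size"] / 5000)

keys = list(config_space)
best = max(
    (dict(zip(keys, vals)) for vals in itertools.product(*config_space.values())),
    key=diff_quality,
)
print("best configuration:", best, "score:", diff_quality(best))
```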

Optimizing microservices with hyperparameter optimization

open access: yes, 2021 17th International Conference on Mobility, Sensing and Networking (MSN), 2021
In the last few years, the cloudification of applications has required new concepts and techniques to fully reap the benefits of the new computing paradigm. Among them, the microservices architectural style, inspired by service-oriented architectures, has gained attention from both industry and academia.
Dinh-Tuan, Hai   +2 more
openaire   +2 more sources

Hyperparameter Optimization [PDF]

open access: yes, 2019
Recent interest in complex and computationally expensive machine learning models with many hyperparameters, such as automated machine learning (AutoML) frameworks and deep neural networks, has resulted in a resurgence of research on hyperparameter optimization (HPO). In this chapter, we give an overview of the most prominent approaches for HPO.
Feurer, Matthias, Hutter, Frank
openaire   +2 more sources
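
Among the approaches such an overview covers, random search is the simplest baseline: sample configurations independently from the search space and keep the best. A minimal sketch with an assumed mixed search space and a synthetic validation error standing in for actual model training:

```python
import math, random

random.seed(0)

def sample_config():
    # Log-uniform learning rate, uniform dropout, integer layer count:
    # a typical mixed search space from the HPO literature.
    return {
        "lr": 10 ** random.uniform(-5, -1),
        "dropout": random.uniform(0.0, 0.5),
        "layers": random.randint(1, 4),
    }

def validation_error(cfg):
    # Placeholder for "train a model with cfg and return validation error";
    # here a synthetic bowl so the example runs end to end.
    return ((math.log10(cfg["lr"]) + 3) ** 2
            + (cfg["dropout"] - 0.2) ** 2
            + 0.1 * cfg["layers"])

best = min((sample_config() for _ in range(100)), key=validation_error)
print(best, validation_error(best))
```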

Tuning of Bayesian optimization for materials synthesis: simulation of the one-dimensional case

open access: yes, Science and Technology of Advanced Materials: Methods, 2022
Materials exploration requires the optimization of a multidimensional space including the chemical composition and synthesis parameters such as temperature and pressure.
Ryo Nakayama   +8 more
doaj   +1 more source
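
For the one-dimensional case, the core Bayesian-optimization loop is short: fit a Gaussian-process surrogate to the observations and pick the next condition by expected improvement. Below is a minimal sketch with a synthetic response standing in for a real synthesis measurement; the kernel length scale, noise level, and budget are illustrative choices (tuning exactly such settings is what the paper studies).

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def target(t):
    # Stand-in for a 1-D synthesis response, e.g. yield vs. temperature;
    # the real objective would be a measurement or simulation.
    return np.exp(-(t - 0.6) ** 2 / 0.05) + 0.1 * np.sin(10 * t)

grid = np.linspace(0, 1, 200).reshape(-1, 1)
X = np.array([[0.1], [0.9]])          # initial observations
y = target(X).ravel()

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-6).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    # Expected improvement over the best observation so far (maximization).
    imp = mu - y.max()
    z = imp / np.maximum(sd, 1e-9)
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, target(x_next))

print("best condition:", X[np.argmax(y)].item(), "best value:", y.max())
```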

Is one hyperparameter optimizer enough? [PDF]

open access: yes, Proceedings of the 4th ACM SIGSOFT International Workshop on Software Analytics, 2018
Hyperparameter tuning is the black art of automatically finding a good combination of control parameters for a data miner. While such tuning is widely applied in empirical software engineering, there has been little discussion of which hyperparameter tuner is best for software analytics.
Tu, Huy, Nair, Vivek
openaire   +2 more sources

Metalearning for Hyperparameter Optimization [PDF]

open access: yes, 2022
This chapter describes various approaches to the hyperparameter optimization (HPO) and combined algorithm selection and hyperparameter optimization (CASH) problems. It starts by presenting some basic hyperparameter optimization methods, including grid search, random search, racing strategies, successive halving, and hyperband. Next, it discusses
Brazdil, Pavel   +3 more
openaire   +2 more sources
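
Of the methods listed, successive halving is easy to sketch: evaluate many configurations on a small budget, discard the worse half at each rung, and double the budget for the survivors. The training function below is a synthetic stand-in for partial training.

```python
import math, random

random.seed(1)

def train_for(cfg, budget):
    # Placeholder: return validation loss after `budget` epochs. Synthetic:
    # each config has a latent quality it converges toward as budget grows.
    return cfg["quality"] + 1.0 / math.sqrt(budget)

# Start with many random configurations on a small budget, keep the best
# half at each rung, and double the budget for the survivors.
configs = [{"id": i, "quality": random.random()} for i in range(16)]
budget = 1
while len(configs) > 1:
    scored = sorted(configs, key=lambda c: train_for(c, budget))
    configs = scored[: len(scored) // 2]
    budget *= 2
print("winner:", configs[0]["id"], "after final budget", budget // 2)
```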

PyHopper -- Hyperparameter optimization

open access: yes, 2022
Hyperparameter tuning is a fundamental aspect of machine learning research. Setting up the infrastructure for systematic optimization of hyperparameters can take a significant amount of time. Here, we present PyHopper, a black-box optimization platform designed to streamline the hyperparameter tuning workflow of machine learning researchers. PyHopper's
Lechner, Mathias   +4 more
openaire   +2 more sources
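
The workflow such a platform streamlines has three parts: a callable objective, a declarative search space, and a run-until-budget loop. The generic sketch below illustrates that shape only; it is not PyHopper's actual API (see the project's documentation for that).

```python
import random, time

def objective(params):
    # Placeholder for "train a model and return validation accuracy".
    return -(params["lr"] - 0.01) ** 2 - (params["batch_exp"] - 6) ** 2 * 1e-4

def sample():
    # Search space: log-uniform learning rate, integer batch-size exponent.
    return {"lr": 10 ** random.uniform(-4, -1), "batch_exp": random.randint(4, 9)}

def run(seconds):
    # Keep sampling and evaluating until the wall-clock budget is spent.
    best_p, best_s = None, float("-inf")
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        p = sample()
        s = objective(p)
        if s > best_s:
            best_p, best_s = p, s
    return best_p, best_s

print(run(1.0))
```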

Bayesian Optimized Echo State Network Applied to Short-Term Load Forecasting

open access: yes, Energies, 2020
Load forecasting directly impacts financial returns and planning decisions in electrical systems. A promising approach to load forecasting is the Echo State Network (ESN), a recurrent neural network suited to processing temporal dependencies.
Gabriel Trierweiler Ribeiro   +4 more
doaj   +1 more source
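
An ESN's hyperparameters (reservoir size, leak rate, spectral radius, readout regularization) are exactly what a Bayesian optimizer would tune in such a setup. Here is a minimal ESN sketch on a toy one-step-ahead prediction task, with a noisy sine wave standing in for a load signal and illustrative hyperparameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, leak, rho = 200, 0.3, 0.9   # reservoir size, leak rate, spectral radius

# Fixed random reservoir; only the linear readout is trained.
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(u):
    # Leaky state update: x <- (1-a)x + a*tanh(W_in*u + W x)
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine "load" signal.
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t) + 0.1 * rng.normal(size=t.size)
S = run_reservoir(u[:-1])
# Ridge-regression readout (its lambda would itself be tuned, e.g. by BO).
lam = 1e-6
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ u[1:])
pred = S @ W_out
print("train MSE:", np.mean((pred - u[1:]) ** 2))
```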

Enhanced Deep Deterministic Policy Gradient Algorithm Using Grey Wolf Optimizer for Continuous Control Tasks

open access: yes, IEEE Access, 2023
Deep Reinforcement Learning (DRL) allows agents to make decisions in a specific environment based on a reward function, without prior knowledge. The choice of hyperparameters significantly impacts the learning process and training time.
Ebrahim Hamid Hasan Sumiea   +6 more
doaj   +1 more source
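
GWO's update rule moves each candidate toward the three best solutions found so far (the alpha, beta, and delta wolves). Below is a minimal sketch minimizing a toy sphere function; in the paper's setting the objective would instead be the DDPG agent's return as a function of its hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Stand-in objective; replace with the metric the hyperparameters affect.
    return np.sum(x ** 2, axis=-1)

dim, n_wolves, iters = 4, 10, 100
lo, hi = -5.0, 5.0
wolves = rng.uniform(lo, hi, size=(n_wolves, dim))

for it in range(iters):
    fitness = sphere(wolves)
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]  # three leaders
    a = 2 * (1 - it / iters)  # control parameter decays from 2 to 0
    new = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        guided = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            guided.append(leader - A * np.abs(C * leader - x))
        new[i] = np.clip(np.mean(guided, axis=0), lo, hi)
    wolves = new

print("best found:", wolves[np.argmin(sphere(wolves))])
```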
