Results 91 to 100 of about 129,562 (262)

Improving the Robustness of Visual Teach‐and‐Repeat Navigation Using Drift Error Correction and Event‐Based Vision for Low‐Light Environments

open access: yes, Advanced Robotics Research, EarlyView.
Visual teach‐and‐repeat (VTR) navigation allows robots to learn and follow routes without building a full metric map. We show that navigation accuracy for VTR can be improved by integrating a topological map with error‐drift correction based on stereo vision.
Fuhai Ling, Ze Huang, Tony J. Prescott
wiley   +1 more source

Brain tumor classification from MRI scans: a framework of hybrid deep learning model with Bayesian optimization and quantum theory-based marine predator algorithm

open access: yes, Frontiers in Oncology
Brain tumor classification is one of the most difficult tasks in clinical diagnosis and treatment within medical image analysis, and errors made during the diagnosis process can shorten a patient's life span.
Muhammad Sami Ullah   +5 more
doaj   +1 more source

Hyperparameter Optimization: A Spectral Approach

open access: yes, 2017
We give a simple, fast algorithm for hyperparameter optimization inspired by techniques from the analysis of Boolean functions. We focus on the high-dimensional regime where the canonical example is training a neural network with a large number of hyperparameters.
Hazan, Elad, Klivans, Adam, Yuan, Yang
openaire   +2 more sources
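The "spectral approach" in this entry treats hyperparameter configurations as Boolean strings and recovers the objective's influential low-degree Fourier coefficients from random samples. Below is a minimal, illustrative sketch of that sampling estimator only (function names and the sample count are assumptions, not the authors' code); in the full method, variables appearing in large-magnitude coefficients would then define a restricted search space.

```python
import itertools
import random

def chi(S, x):
    """Fourier character (parity) of x in {-1, +1}^n restricted to subset S."""
    p = 1
    for i in S:
        p *= x[i]
    return p

def estimate_fourier(f, n, degree, num_samples=2000, seed=0):
    """Estimate f_hat(S) = E[f(x) * chi_S(x)] for all subsets S of size
    at most `degree`, by sampling x uniformly from {-1, +1}^n."""
    rng = random.Random(seed)
    subsets = [S for d in range(degree + 1)
               for S in itertools.combinations(range(n), d)]
    coeffs = {S: 0.0 for S in subsets}
    for _ in range(num_samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        fx = f(x)
        for S in subsets:
            coeffs[S] += fx * chi(S, x)
    return {S: c / num_samples for S, c in coeffs.items()}
```

For an objective like f(x) = x[0] * x[1], the estimator concentrates mass on the coefficient for {0, 1}, flagging those two hyperparameters as the influential ones.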

Bayesian Optimization with Unknown Constraints [PDF]

open access: yes, 2014
Recent work on Bayesian optimization has shown its effectiveness in global optimization of difficult black-box objective functions. Many real-world optimization problems of interest also have constraints which are unknown a priori.
Adams, Ryan P.   +2 more
core  
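A standard way to handle unknown constraints in Bayesian optimization, consistent with the line of work this entry belongs to, is to weight the expected-improvement acquisition by the probability that each constraint is satisfied under its own surrogate model. A small sketch under that assumption (the Gaussian predictive inputs and function names are illustrative, not the paper's implementation):

```python
import math

def _phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def _Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, best):
    """EI for maximization, given a Gaussian predictive (mu, sigma)."""
    if sigma <= 0.0:
        return max(mu - best, 0.0)
    z = (mu - best) / sigma
    return (mu - best) * _Phi(z) + sigma * _phi(z)

def constrained_acquisition(mu_f, sigma_f, best, constraints):
    """EI weighted by the probability that every constraint c(x) <= 0 holds.
    `constraints` is a list of (mu_c, sigma_c) Gaussian predictions, one
    per black-box constraint."""
    p_feasible = 1.0
    for mu_c, sigma_c in constraints:
        p_feasible *= _Phi(-mu_c / sigma_c)
    return expected_improvement(mu_f, sigma_f, best) * p_feasible
```

A candidate whose constraint surrogate predicts likely infeasibility gets its acquisition value driven toward zero, so the search is steered into the feasible region without ever knowing the constraints in closed form.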

Continual Learning for Multimodal Data Fusion of a Soft Gripper

open access: yes, Advanced Robotics Research, EarlyView.
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley   +1 more source

Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets

open access: yes, 2017
Bayesian optimization has become a successful tool for hyperparameter optimization of machine learning algorithms, such as support vector machines or deep neural networks.
Bartels, Simon   +4 more
core  
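The key idea behind making Bayesian optimization fast on large datasets is to evaluate candidate hyperparameters on small random subsets of the training data, treating the cheap, noisy subset score as a low-fidelity proxy for the full-data score. A minimal sketch of that evaluation wrapper (the helper name and interface are illustrative assumptions, not the paper's code):

```python
import random

def subset_evaluate(train_and_score, dataset, fraction, seed=0):
    """Score a hyperparameter configuration on a random data subset.

    `train_and_score` trains with the fixed configuration on the data it
    is given and returns a validation score; `fraction` controls the
    fidelity/cost trade-off (small fractions are cheap but noisy)."""
    rng = random.Random(seed)
    n = max(1, int(len(dataset) * fraction))
    subset = rng.sample(dataset, n)
    return train_and_score(subset)
```

In the full method, the optimizer models how the score varies with both the hyperparameters and the subset size, so it can spend most evaluations at low fidelity and extrapolate to the full dataset.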

Parsimonious Mahalanobis Kernel for the Classification of High Dimensional Data [PDF]

open access: yes, 2012
The classification of high dimensional data with kernel methods is considered in this article. Exploiting the emptiness property of high dimensional spaces, a kernel based on the Mahalanobis distance is proposed.
Benediktsson, J. A.   +3 more
core  
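The kernel this entry describes replaces the Euclidean distance inside an RBF-style kernel with the Mahalanobis distance, so the similarity measure adapts to the data's covariance structure. A pure-Python sketch of that construction (function names are illustrative; the paper's "parsimonious" contribution lies in how the inverse covariance is regularized, which is not reproduced here):

```python
import math

def mahalanobis_sq(x, y, inv_cov):
    """Squared Mahalanobis distance (x - y)^T inv_cov (x - y)."""
    d = [xi - yi for xi, yi in zip(x, y)]
    return sum(d[i] * inv_cov[i][j] * d[j]
               for i in range(len(d)) for j in range(len(d)))

def mahalanobis_kernel(x, y, inv_cov):
    """Gaussian-type kernel built on the Mahalanobis distance.
    With inv_cov = identity this reduces to the standard RBF kernel."""
    return math.exp(-0.5 * mahalanobis_sq(x, y, inv_cov))
```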

Hyperparameter Optimization with Differentiable Metafeatures

open access: yes, 2021
Metafeatures, or dataset characteristics, have been shown to improve the performance of hyperparameter optimization (HPO). Conventionally, metafeatures are precomputed and used to measure the similarity between datasets, leading to a better initialization of HPO models.
Jomaa, Hadi S.   +2 more
openaire   +2 more sources
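The conventional metafeature pipeline this snippet contrasts against can be sketched in a few lines: compute simple dataset characteristics, then warm-start HPO with the best configuration found on the most similar past dataset. The metafeature set and function names below are illustrative assumptions (the paper's contribution is learning such features differentiably, which this sketch does not do):

```python
import math

def metafeatures(X, y):
    """A small, assumed set of dataset characteristics:
    log sample count, log feature count, positive-class ratio."""
    n, d = len(X), len(X[0])
    pos = sum(1 for label in y if label == 1) / n
    return (math.log(n), math.log(d), pos)

def warm_start_config(target_meta, history):
    """history: list of (metafeature_tuple, best_config) from past datasets.
    Return the best config of the most similar past dataset (squared
    Euclidean distance in metafeature space)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(history, key=lambda item: dist(item[0], target_meta))[1]
```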

ChicGrasp: Imitation‐Learning‐Based Customized Dual‐Jaw Gripper Control for Manipulation of Delicate, Irregular Bio‐Products

open access: yes, Advanced Robotics Research, EarlyView.
Automated poultry processing lines still rely on humans to lift slippery, easily bruised carcasses onto a shackle conveyor. Deformability, anatomical variance, and hygiene rules make conventional suction and scripted motions unreliable. We present ChicGrasp, an end‐to‐end hardware‐software co‐designed imitation learning framework, to offer a ...
Amirreza Davar   +8 more
wiley   +1 more source

HyperSpace: Distributed Bayesian Hyperparameter Optimization

open access: yes, 2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), 2018
As machine learning models continue to increase in complexity, so does the potential number of free model parameters commonly known as hyperparameters. While there has been considerable progress toward finding optimal configurations of these hyperparameters, many optimization procedures are treated as black boxes. We believe optimization methods should ...
M. Todd Young   +3 more
openaire   +2 more sources
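Distributing Bayesian hyperparameter optimization in the style this entry describes starts by partitioning the search space so each worker optimizes its own region; overlapping the partitions lets neighboring workers share promising boundary regions. A one-dimensional sketch of such a partitioner (the function name and overlap fraction are illustrative assumptions, not the HyperSpace implementation):

```python
def partition_space(low, high, num_blocks, overlap=0.25):
    """Split [low, high] into num_blocks overlapping sub-intervals,
    one per worker. `overlap` is the fraction of a block's width
    padded onto each side so neighboring workers share territory."""
    width = (high - low) / num_blocks
    pad = width * overlap
    blocks = []
    for i in range(num_blocks):
        a = max(low, low + i * width - pad)
        b = min(high, low + (i + 1) * width + pad)
        blocks.append((a, b))
    return blocks
```

Each worker then runs an independent Bayesian optimization loop inside its block, and the best configuration across all blocks is reported at the end.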
