Results 51 to 60 of about 117,260
Dropout has recently emerged as a simple and powerful method for training neural networks: by stochastically omitting neurons during training, it prevents co-adaptation of feature detectors.
Herlau, Tue +2 more
core +1 more source
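For readers unfamiliar with the mechanism this snippet describes, a minimal NumPy sketch of inverted dropout follows; the function name and the batch-of-activations framing are illustrative, not taken from the paper.

```python
import numpy as np

def dropout_forward(x, p=0.5, training=True, rng=None):
    """Inverted dropout: stochastically zero units, rescale the survivors.

    p is the probability of *dropping* a unit. At test time the layer
    is the identity, so no rescaling is needed.
    """
    if not training or p == 0.0:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p       # keep each unit with probability 1 - p
    return x * mask / (1.0 - p)           # rescale so E[output] matches E[x]

# Example: on average half of the activations are zeroed.
h = np.ones((2, 8))
print(dropout_forward(h, p=0.5, rng=np.random.default_rng(0)))
```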
Spectral Adaptive Dropout: Frequency-Based Regularization for Improved Generalization
Deep neural networks are often susceptible to overfitting, necessitating effective regularization techniques. This paper introduces Spectral Adaptive Dropout, a novel frequency-based regularization technique that dynamically adjusts dropout rates based ...
Zhigao Huang, Musheng Chen, Shiyan Zheng
doaj +1 more source
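The snippet is truncated before the paper's exact rule, so the sketch below is only a hypothetical reading of "frequency-based dropout": transform activations with an FFT, drop low-energy frequency components more aggressively, and transform back. The weighting rule and all names are assumptions.

```python
import numpy as np

def spectral_dropout(x, base_p=0.5, rng=None):
    """Hypothetical frequency-based dropout (not the paper's exact rule):
    move activations to the frequency domain, drop weak components with
    higher probability, then return to the original domain.
    """
    rng = rng or np.random.default_rng()
    X = np.fft.rfft(x, axis=-1)                      # frequency-domain view
    mag = np.abs(X)
    # Per-component drop rate: higher for weak (plausibly noisy) frequencies.
    p = base_p * (1.0 - mag / (mag.max(axis=-1, keepdims=True) + 1e-8))
    mask = rng.random(X.shape) >= p
    return np.fft.irfft(X * mask, n=x.shape[-1], axis=-1)

h = np.random.default_rng(0).standard_normal((4, 32))
print(spectral_dropout(h).shape)   # (4, 32)
```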
Dataset distillation with stochastic neural networks
Dataset distillation aims to synthesize a tiny, high-fidelity dataset that retains the most important information of a given target dataset. Recent studies have primarily used gradient-matching methods to attain practical performance.
Zeyuan Wang +5 more
doaj +1 more source
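As context for the gradient-matching approach the snippet mentions (the generic recipe, not this paper's stochastic-network variant), a PyTorch sketch: the synthetic batch is optimized so that the gradient it induces matches the gradient induced by a real batch. `syn_x` and `syn_y` are placeholders for learnable synthetic data.

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, real_x, real_y, syn_x, syn_y):
    """One gradient-matching step: make the gradient induced by the tiny
    synthetic batch mimic the gradient induced by a real batch."""
    g_real = torch.autograd.grad(
        F.cross_entropy(model(real_x), real_y), model.parameters())
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(syn_x), syn_y), model.parameters(),
        create_graph=True)                 # keep the graph so syn_x is trainable
    # Negative cosine similarity, summed over parameter tensors.
    return sum(1 - F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
               for a, b in zip(g_real, g_syn))
```

In an outer loop, `syn_x` would be a leaf tensor with `requires_grad=True`, updated by its own optimizer on this loss.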
Additive Ensemble Neural Networks
Deep neural networks (DNNs) have advanced rapidly across many applications. They are typically used to model the complex nonlinearity of high-dimensional data in regression or classification problems.
Minyoung Park +3 more
doaj +1 more source
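The abstract does not spell out the architecture; one plausible reading of the title is a model whose prediction is the sum of several small subnetworks, sketched below in PyTorch. The class and its dimensions are illustrative assumptions, and the paper's exact design may differ.

```python
import torch.nn as nn

class AdditiveEnsemble(nn.Module):
    """Additive ensemble sketch: the prediction is the sum of the
    outputs of several small member networks trained jointly."""
    def __init__(self, in_dim, out_dim, n_members=4, hidden=32):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, out_dim))
            for _ in range(n_members))

    def forward(self, x):
        return sum(m(x) for m in self.members)
```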
Data Dropout in Arbitrary Basis for Deep Network Regularization
An important problem in training deep networks with high capacity is to ensure that the trained network works well when presented with new inputs outside the training dataset.
Atia, George; Rahmani, Mostafa
core +1 more source
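A hedged sketch of the idea named in the title, dropping data in a transformed basis rather than in the standard coordinate basis; the orthonormal-basis setup and the function name are assumptions, with Q equal to the identity recovering ordinary feature dropout.

```python
import numpy as np

def basis_dropout(x, Q, p=0.3, rng=None):
    """Dropout in an arbitrary orthonormal basis Q (columns are basis
    vectors): project, drop coefficients, reconstruct."""
    rng = rng or np.random.default_rng()
    coeffs = x @ Q                                # coordinates in the new basis
    mask = rng.random(coeffs.shape) >= p
    return (coeffs * mask / (1.0 - p)) @ Q.T      # back to the original basis

# Example with a random orthonormal basis from a QR decomposition.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))
x = rng.standard_normal((4, 16))
print(basis_dropout(x, Q, rng=rng).shape)         # (4, 16)
```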
This paper investigates uncertainty quantification (UQ) techniques in multi-class classification of chest X-ray images (COVID-19, Pneumonia, and Normal).
Albert Whata +3 more
doaj +1 more source
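The snippet does not name the UQ technique used; Monte Carlo dropout is one common choice for this setting, sketched below. The predictive-entropy score and the sample count are illustrative.

```python
import torch

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=30):
    """Monte Carlo dropout: keep dropout active at test time, average the
    softmax outputs over repeated stochastic forward passes, and use
    predictive entropy as the uncertainty score."""
    # train() keeps dropout stochastic; in practice one would enable only
    # the dropout layers to avoid perturbing batch-norm statistics.
    model.train()
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy          # class probabilities, per-input uncertainty
```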
Super-resolution and uncertainty estimation from sparse sensors of dynamical physical systems
The goal of this study is to leverage emerging machine learning (ML) techniques to develop a framework for the global reconstruction of system variables from potentially scarce and noisy observations and to explore the epistemic uncertainty of these ...
Adam M. Collins +5 more
doaj +1 more source
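One common design for reconstructing a full field from sparse sensors (not necessarily the architecture used in this study) is a shallow decoder network; adding dropout lets Monte Carlo sampling expose epistemic uncertainty. The sketch below is an illustration under those assumptions.

```python
import torch.nn as nn

class ShallowDecoder(nn.Module):
    """MLP mapping a handful of sensor readings to the full field on a
    grid; the Dropout layers support MC-sampled uncertainty estimates."""
    def __init__(self, n_sensors, n_grid, hidden=128, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sensors, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, n_grid))   # reconstructed field values

    def forward(self, sensors):
        return self.net(sensors)
```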
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
Catastrophic forgetting is a problem faced by many machine learning models and algorithms. When trained on one task, then trained on a second task, many machine learning models "forget" how to perform the first task.
Bengio, Yoshua +4 more
core
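The evaluation protocol the snippet describes can be made concrete with a short sketch: train on task A, record accuracy on A, train on task B with no rehearsal, then re-check A. All names below are placeholders.

```python
import torch

def sequential_training(model, task_a, task_b, eval_a, epochs=5, lr=1e-3):
    """Minimal protocol for observing catastrophic forgetting.
    task_a, task_b, eval_a are (inputs, labels) tensor pairs."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    def fit(x, y):
        for _ in range(epochs):
            opt.zero_grad()
            torch.nn.functional.cross_entropy(model(x), y).backward()
            opt.step()

    def acc(x, y):
        return (model(x).argmax(dim=-1) == y).float().mean().item()

    fit(*task_a)
    before = acc(*eval_a)
    fit(*task_b)              # no rehearsal of task A
    after = acc(*eval_a)      # typically drops sharply: forgetting
    return before, after
```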
Defensive Dropout for Hardening Deep Neural Networks under Adversarial Attacks
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks: adversarial examples, obtained by adding delicately crafted distortions to legitimate inputs, can mislead a DNN into classifying them as any target label.
Chin, Peter +6 more
core +1 more source
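The core idea, leaving dropout stochastic at test time so that the gradients an attacker relies on become noisy, can be sketched in a few lines; the paper additionally tunes the test-time dropout rate, which this illustration omits.

```python
import torch

def defended_predict(model, x):
    """Test-time dropout as an adversarial defense: each forward pass
    samples a different subnetwork, perturbing attack gradients."""
    model.train()                 # leave dropout layers on at inference
    with torch.no_grad():
        return model(x).argmax(dim=-1)
```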
Deep learning has recently been utilized with great success in a large number of diverse application domains, such as visual and face recognition, natural language processing, speech recognition, and handwriting identification.
Nebojsa Bacanin +6 more
doaj +1 more source