Results 71 to 80 of about 141,184 (170)

Adaptive Natural Gradient Method for Learning of Stochastic Neural Networks in Mini-Batch Mode

open access: yesApplied Sciences, 2019
The gradient descent method is an essential algorithm for training neural networks. Among the many variants of gradient descent developed to accelerate learning, natural gradient learning is based on the theory of ...
Hyeyoung Park, Kwanyong Lee
doaj   +1 more source
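Not this paper's algorithm, but a minimal sketch of the natural-gradient idea it builds on: precondition the mini-batch gradient with the inverse Fisher information. The toy linear-regression setup below (where the Fisher matrix for a Gaussian likelihood reduces to `X.T @ X / n`) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 3
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true  # noiseless targets for the toy problem

w = np.zeros(d)
fisher = X.T @ X / n  # Fisher information for a Gaussian likelihood
lr = 0.5
for _ in range(50):
    idx = rng.choice(n, size=10, replace=False)  # mini-batch
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
    w -= lr * np.linalg.solve(fisher, grad)  # natural-gradient step
```

Because the Fisher preconditioner rescales the gradient into the geometry of the model's distribution, the mini-batch updates converge much faster than plain SGD would with the same step size.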

Variance Reduced Stochastic Gradient Descent with Neighbors

open access: yes, 2015
Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its slow convergence can be a computational bottleneck. Variance reduction techniques such as SAG, SVRG and SAGA have been proposed to overcome this weakness, achieving linear ...
Hofmann, Thomas   +3 more
core   +1 more source
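As a rough illustration of the variance-reduction idea behind methods such as SVRG (a toy sketch on least squares, not this paper's contribution): keep a periodic full-gradient snapshot and correct each stochastic gradient against it, so the variance of the update vanishes as the iterate approaches the optimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true  # noiseless targets for the toy problem

def grad_i(w, i):
    # gradient of 0.5 * (x_i . w - y_i)^2 for a single sample
    return (X[i] @ w - y[i]) * X[i]

w = np.zeros(d)
lr = 0.01
for epoch in range(30):
    snapshot = w.copy()
    full_grad = X.T @ (X @ snapshot - y) / n  # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # variance-reduced stochastic gradient
        v = grad_i(w, i) - grad_i(snapshot, i) + full_grad
        w -= lr * v
```

The correction term `grad_i(snapshot, i) - full_grad` has zero mean, so the update is still unbiased, but its variance shrinks as `w` and the snapshot both approach the solution, which is what enables the linear convergence rates these methods achieve.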

A Stochastic Gradient Descent Approach for Stochastic Optimal Control

open access: yesEast Asian Journal on Applied Mathematics, 2020
Summary: In this work, we introduce a stochastic gradient descent approach for solving the stochastic optimal control problem through the stochastic maximum principle. Our method is motivated by the fact that the gradient of the cost functional in the stochastic optimal control problem is given as an expectation, and numerical calculation of such an expectation ...
Archibald, Richard   +2 more
openaire   +2 more sources

Semi-Cyclic Stochastic Gradient Descent

open access: yes, 2019
We consider convex SGD updates with a block-cyclic structure, i.e. where each cycle consists of a small number of blocks, each with many samples from a possibly different, block-specific, distribution. This situation arises, e.g., in Federated Learning where the mobile devices available for updates at different times during the day have different ...
Eichner, Hubert   +4 more
openaire   +2 more sources

Scaling of hardware-compatible perturbative training algorithms

open access: yesAPL Machine Learning
In this work, we explore the capabilities of multiplexed gradient descent (MGD), a scalable and efficient perturbative zeroth-order method for estimating the gradient of a loss function directly in hardware and using it to train networks via stochastic gradient descent.
B. G. Oripov   +3 more
doaj   +1 more source
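MGD itself is hardware-specific, but the underlying simultaneous-perturbation idea can be sketched in software: perturb all parameters at once with random signs and estimate the gradient from two loss evaluations. This is an SPSA-style toy on a quadratic loss, assumed for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def loss(theta):
    # simple quadratic loss with minimizer at the origin (toy assumption)
    return np.sum(theta ** 2)

theta = rng.normal(size=8)
lr, eps = 0.05, 1e-3
for _ in range(2000):
    delta = rng.choice([-1.0, 1.0], size=theta.size)  # simultaneous random-sign perturbation
    # two-sided loss difference; multiplying by delta inverts the +/-1 entries
    g_hat = (loss(theta + eps * delta) - loss(theta - eps * delta)) / (2 * eps) * delta
    theta -= lr * g_hat
```

Only two loss evaluations are needed per step regardless of the number of parameters, which is why perturbative schemes of this kind are attractive for hardware where per-parameter gradients are unavailable.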

Stochastic Gradient Descent Revisited

open access: yes
Stochastic gradient descent (SGD) has been a go-to algorithm for nonconvex stochastic optimization problems arising in machine learning. Its theory, however, often requires a strong framework to guarantee convergence properties. We hereby present a full-scope convergence study of biased nonconvex SGD, including weak convergence, function-value ...
openaire   +2 more sources

Stochastic gradient descent optimisation for convolutional neural network for medical image segmentation. [PDF]

open access: yesOpen Life Sci, 2023
Nagendram S   +7 more
europepmc   +1 more source

Random coordinate descent: A simple alternative for optimizing parameterized quantum circuits

open access: yesPhysical Review Research
Variational quantum algorithms rely on the optimization of parameterized quantum circuits in noisy settings. The commonly used back-propagation procedure in classical machine learning is not directly applicable in this setting due to the collapse of ...
Zhiyan Ding   +4 more
doaj   +1 more source
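A minimal software sketch of random coordinate descent on a classical toy objective (the paper applies the idea to parameterized quantum circuits; the quadratic objective and finite-difference partial derivative below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(theta):
    # toy objective with minimizer at all-ones
    return np.sum((theta - 1.0) ** 2)

def partial(theta, j, eps=1e-5):
    # central finite difference in coordinate j only
    e = np.zeros_like(theta)
    e[j] = eps
    return (loss(theta + e) - loss(theta - e)) / (2 * eps)

theta = rng.normal(size=4)
lr = 0.1
for _ in range(500):
    j = rng.integers(theta.size)   # pick one random coordinate
    theta[j] -= lr * partial(theta, j)  # update only that coordinate
```

Each step touches a single parameter, so only two objective evaluations are needed per update, which mirrors why coordinate-wise schemes are attractive when full gradients are expensive or noisy to obtain.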

Performance Analysis of an Optical System for FSO Communications Utilizing Combined Stochastic Gradient Descent Optimization Algorithm

open access: yesApplied System Innovation
Wavefront aberrations caused by thermal flows or arising from the quality of optical components can significantly impair wireless communication links. Such aberrations may result in an increased error rate in the received signal, leading to data loss in ...
Ilya Galaktionov, Vladimir Toporovsky
doaj   +1 more source
