Adaptive Natural Gradient Method for Learning of Stochastic Neural Networks in Mini-Batch Mode
The gradient descent method is an essential algorithm for training neural networks. Among the many variants of gradient descent developed to accelerate learning, natural gradient learning is based on the theory of ...
Hyeyoung Park, Kwanyong Lee
doaj +1 more source
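As an illustrative aside on the natural-gradient idea this abstract refers to: a common practical form preconditions the mean gradient by an (empirical) Fisher matrix built from per-sample gradient outer products. The sketch below is generic and not taken from the paper; the function name, damping term, and least-squares framing are my own assumptions.

```python
import numpy as np

def natural_gradient_step(grads, x, step=0.1, damping=1e-3):
    """One natural-gradient step: precondition the mean gradient by the
    inverse empirical Fisher matrix (outer products of per-sample grads).
    Damping keeps the Fisher estimate invertible."""
    G = np.stack(grads)                                   # (n, d) per-sample gradients
    fisher = G.T @ G / len(grads) + damping * np.eye(G.shape[1])
    nat_grad = np.linalg.solve(fisher, G.mean(axis=0))    # F^{-1} * mean gradient
    return x - step * nat_grad
```

Compared with a plain gradient step, the solve rescales the update according to the local curvature of the model's output distribution rather than the raw parameter geometry.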
Variance Reduced Stochastic Gradient Descent with Neighbors
Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its slow convergence can be a computational bottleneck. Variance reduction techniques such as SAG, SVRG and SAGA have been proposed to overcome this weakness, achieving linear ...
Hofmann, Thomas +3 more
core +1 more source
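The variance-reduction idea behind SVRG (one of the methods this abstract names) can be sketched in a few lines: a periodically recomputed full gradient serves as a control variate for the stochastic updates. This is a minimal generic sketch, not the paper's neighbor-based method; the function names and the setting are my own.

```python
import numpy as np

def svrg(grad_i, x0, n, step=0.1, epochs=5, inner=None, rng=None):
    """SVRG sketch: grad_i(x, i) returns the gradient of the i-th sample's
    loss at x. Each epoch takes a snapshot, computes its full gradient,
    and uses both as a control variate in the inner stochastic loop."""
    rng = rng or np.random.default_rng(0)
    inner = inner or n
    x = x0.astype(float)
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = np.mean([grad_i(x_snap, i) for i in range(n)], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            # variance-reduced estimate: unbiased, with vanishing variance
            # as x approaches the snapshot
            g = grad_i(x, i) - grad_i(x_snap, i) + full_grad
            x -= step * g
    return x
```

On a toy problem such as minimizing the mean of (x - b_i)^2 / 2, the correction term cancels the sampling noise exactly, which is what enables the linear convergence rates the abstract mentions.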
A Stochastic Gradient Descent Approach for Stochastic Optimal Control
Summary: In this work, we introduce a stochastic gradient descent approach to solving the stochastic optimal control problem via the stochastic maximum principle. The motivation driving our method is that the gradient of the cost functional in the stochastic optimal control problem is given as an expectation, and numerical calculation of such an expectation ...
Archibald, Richard +2 more
openaire +2 more sources
Semi-Cyclic Stochastic Gradient Descent
We consider convex SGD updates with a block-cyclic structure, i.e. where each cycle consists of a small number of blocks, each with many samples from a possibly different, block-specific, distribution. This situation arises, e.g., in Federated Learning where the mobile devices available for updates at different times during the day have different ...
Eichner, Hubert +4 more
openaire +2 more sources
Scaling of hardware-compatible perturbative training algorithms
In this work, we explore the capabilities of multiplexed gradient descent (MGD), a scalable and efficient perturbative zeroth-order training method that estimates the gradient of a loss function directly in hardware and trains via stochastic gradient descent.
B. G. Oripov +3 more
doaj +1 more source
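The perturbative zeroth-order estimation this abstract describes can be illustrated with a simultaneous-perturbation (SPSA-style) sketch: probe the loss along a random sign vector and scale the loss difference by the perturbation. This is a generic classical illustration, not the MGD hardware scheme itself; function names and step sizes are my own.

```python
import numpy as np

def spsa_gradient(f, x, eps=1e-2, rng=None):
    """Zeroth-order gradient estimate from two loss evaluations along a
    random Rademacher direction; unbiased in expectation for smooth f."""
    rng = rng or np.random.default_rng(0)
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    return (f(x + eps * delta) - f(x - eps * delta)) / (2 * eps) * delta

def zeroth_order_sgd(f, x0, step=0.05, iters=200, rng=None):
    """Plain SGD loop driven only by loss evaluations (no backprop)."""
    rng = rng or np.random.default_rng(1)
    x = x0.astype(float)
    for _ in range(iters):
        x -= step * spsa_gradient(f, x, rng=rng)
    return x
```

Only two loss evaluations are needed per step regardless of the parameter count, which is what makes this family of methods attractive for hardware where per-parameter gradients are unavailable.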
Stochastic Gradient Descent Revisited
Stochastic gradient descent (SGD) has been a go-to algorithm for nonconvex stochastic optimization problems arising in machine learning. Its theory, however, often requires a strong framework to guarantee convergence properties. We present a full-scope convergence study of biased nonconvex SGD, including weak convergence, function-value ...
openaire +2 more sources
Stochastic gradient descent optimisation for convolutional neural network for medical image segmentation. [PDF]
Nagendram S +7 more
europepmc +1 more source
Random coordinate descent: A simple alternative for optimizing parameterized quantum circuits
Variational quantum algorithms rely on the optimization of parameterized quantum circuits in noisy settings. The commonly used back-propagation procedure in classical machine learning is not directly applicable in this setting due to the collapse of ...
Zhiyan Ding +4 more
doaj +1 more source
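The coordinate-wise update this abstract describes can be sketched classically: at each iteration, pick one parameter at random, estimate its partial derivative from two shifted loss evaluations (the form the parameter-shift rule takes for trigonometric losses), and update only that coordinate. The toy cosine loss below stands in for a circuit expectation value; all names and constants are my own assumptions, not the paper's algorithm.

```python
import numpy as np

def random_coordinate_descent(f, x0, step=0.2, iters=300, shift=np.pi / 2, rng=None):
    """Each iteration updates one randomly chosen coordinate using a
    two-point shifted-evaluation estimate of its partial derivative."""
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float)
    for _ in range(iters):
        j = rng.integers(len(x))
        e = np.zeros_like(x)
        e[j] = shift
        # exact partial derivative for losses trigonometric in each parameter
        partial = (f(x + e) - f(x - e)) / (2 * np.sin(shift))
        x[j] -= step * partial
    return x
```

Because each step needs only two loss evaluations and touches one parameter, the method sidesteps the full-gradient measurements that make backpropagation-style training impractical on noisy quantum hardware.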
Wavefront aberrations caused by thermal flows or arising from the quality of optical components can significantly impair wireless communication links. Such aberrations may result in an increased error rate in the received signal, leading to data loss in ...
Ilya Galaktionov, Vladimir Toporovsky
doaj +1 more source
A SIEVE STOCHASTIC GRADIENT DESCENT ESTIMATOR FOR ONLINE NONPARAMETRIC REGRESSION IN SOBOLEV ELLIPSOIDS. [PDF]
Zhang T, Simon N.
europepmc +1 more source