
Damped Newton Stochastic Gradient Descent Method for Neural Networks Training

open access: yes; Mathematics, 2021
First-order methods such as stochastic gradient descent (SGD) have become popular for training deep neural networks (DNNs) because they generalize well; however, they require a long training time.
Jingcheng Zhou   +3 more
doaj   +1 more source
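
For orientation, the generic idea of combining a damped Newton direction with minibatch gradients can be sketched on a toy least-squares problem, as below. This is a minimal illustration under assumed hyperparameters (damping lam, step size lr), not the algorithm proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy least-squares problem: minimize f(w) = ||A w - b||^2 / (2 n).
    n, d = 200, 5
    A = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    b = A @ w_true + 0.1 * rng.normal(size=n)

    def minibatch_grad_and_hess(w, idx):
        # Stochastic gradient and Gauss-Newton Hessian on a minibatch.
        Ai, bi = A[idx], b[idx]
        r = Ai @ w - bi
        return Ai.T @ r / len(idx), Ai.T @ Ai / len(idx)

    w = np.zeros(d)
    lam, lr = 1e-1, 1.0  # damping and step size (assumed values)
    for step in range(200):
        idx = rng.choice(n, size=32, replace=False)
        g, H = minibatch_grad_and_hess(w, idx)
        # Damped Newton direction: solve (H + lam*I) p = g instead of using g.
        p = np.linalg.solve(H + lam * np.eye(d), g)
        w -= lr * p

    print("parameter error:", np.linalg.norm(w - w_true))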

Counterexamples for Noise Models of Stochastic Gradients

open access: yes; Examples and Counterexamples, 2023
Stochastic Gradient Descent (SGD) is a widely used, foundational algorithm in data science and machine learning. As a result, analyses of SGD abound, relying on a variety of assumptions, especially about the noise behavior of the stochastic gradients ...
Vivak Patel
doaj   +1 more source
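
For context, the conditions that SGD analyses commonly assume about a stochastic gradient oracle g(x) for an objective f, and whose limits counterexamples of this kind probe, are typically of the following standard forms (from the broader literature, not the paper's specific models):

    \mathbb{E}[g(x)] = \nabla f(x) \quad \text{(unbiasedness)},
    \mathbb{E}\big[\|g(x) - \nabla f(x)\|^2\big] \le \sigma^2 \quad \text{(bounded variance)},
    \mathbb{E}\big[\|g(x)\|^2\big] \le M + C\,\|\nabla f(x)\|^2 \quad \text{(growth condition)}.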

Adaptive Optical Closed-Loop Control Based on the Single-Dimensional Perturbation Descent Algorithm

open access: yes; Sensors, 2023
Modal-free optimization algorithms do not require a specific mathematical model and, among their other benefits, have great application potential in adaptive optics.
Bo Chen   +4 more
doaj   +1 more source
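
One plausible reading of a single-dimensional perturbation scheme is sketched below: perturb one control dimension at a time, estimate the slope of the measured quality metric by a finite difference, and step along it. The metric, gain, and perturbation size are placeholders, not the paper's closed-loop design.

    import numpy as np

    def metric(u):
        # Stand-in for a measured image-quality metric of an AO system
        # (e.g., focal-spot sharpness); here a smooth function with a
        # known optimum so convergence is easy to check.
        u_star = np.array([0.3, -0.7, 0.5])
        return -np.sum((u - u_star) ** 2)

    u = np.zeros(3)          # control voltages / modal coefficients
    delta, gain = 0.05, 0.5  # perturbation size and loop gain (assumed)
    for it in range(300):
        k = it % u.size      # perturb one dimension per iteration
        e = np.zeros(u.size)
        e[k] = delta
        slope = (metric(u + e) - metric(u - e)) / (2 * delta)
        u[k] += gain * slope  # step along the measured slope

    print("converged control:", u)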

Randomized Stochastic Gradient Descent Ascent

open access: yes, 2021
An increasing number of machine learning problems, such as robust or adversarial variants of existing algorithms, require minimizing a loss function that is itself defined as a maximum. Carrying out a loop of stochastic gradient ascent (SGA) steps on the (inner) maximization problem, followed by an SGD step on the (outer) minimization, is known as Epoch ...
Sebbouh, Othmane   +2 more
openaire   +3 more sources
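
A minimal sketch of the epoch-style loop described above, on a toy smooth saddle problem with a noisy gradient oracle; the randomized variant the paper studies (and its choice of loop lengths) is not reproduced, and all hyperparameters are assumed.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy saddle problem: min_x max_y f(x, y) = x*y + 0.25*x**2 - 0.25*y**2,
    # whose unique saddle point is (0, 0). Gradients carry small noise.
    def grad_x(x, y): return y + 0.5 * x + 0.01 * rng.normal()
    def grad_y(x, y): return x - 0.5 * y + 0.01 * rng.normal()

    x, y = 1.0, -1.0
    eta_x, eta_y, inner_steps = 0.05, 0.05, 10  # assumed hyperparameters
    for epoch in range(500):
        for _ in range(inner_steps):   # inner loop: SGA on the max problem
            y += eta_y * grad_y(x, y)
        x -= eta_x * grad_x(x, y)      # outer step: SGD on the min problem

    print(f"approx saddle point: x={x:.3f}, y={y:.3f}")  # near (0, 0)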

A Comparison of Classification Techniques in Data Mining for Bank Direct Marketing

open access: yes; Jurnal Teknologi Informasi dan Ilmu Komputer, 2018
Classification is a data mining technique for grouping data according to its affinity with sample data. In this study, we compare nine classification techniques for classifying customer responses in a bank direct-marketing dataset ...
Irvi Oktanisa, Ahmad Afif Supianto
doaj   +1 more source
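
The comparison workflow can be sketched with scikit-learn; the synthetic data below is a stand-in for the bank direct-marketing dataset, and only three illustrative models are shown rather than the study's nine techniques.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB

    # Synthetic stand-in: features -> did the customer subscribe?
    X, y = make_classification(n_samples=1000, n_features=16, random_state=0)

    models = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "naive bayes": GaussianNB(),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold accuracy
        print(f"{name}: {acc:.3f}")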

An efficient algorithm for data parallelism based on stochastic optimization

open access: yes; Alexandria Engineering Journal, 2022
Deep neural network models can achieve greater performance on numerous machine learning tasks by increasing model depth and the number of training samples.
Khalid Abdulaziz Alnowibet   +3 more
doaj   +1 more source
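
The gradient-averaging pattern at the heart of synchronous data parallelism can be simulated in a single process, as below: each simulated worker holds a data shard, and the average of the shard gradients equals one large-batch gradient. The paper's specific algorithm and communication scheme are not reproduced.

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy linear-regression loss; each "worker" owns one shard of the data.
    n, d, workers = 512, 8, 4
    A = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    b = A @ w_true + 0.05 * rng.normal(size=n)
    shards = np.array_split(np.arange(n), workers)

    def local_grad(w, idx):
        Ai, bi = A[idx], b[idx]
        return Ai.T @ (Ai @ w - bi) / len(idx)

    w = np.zeros(d)
    lr = 0.1  # assumed step size
    for step in range(300):
        # Each worker computes a gradient on its shard; averaging the
        # equally sized shard gradients reproduces the full-batch gradient.
        grads = [local_grad(w, idx) for idx in shards]
        w -= lr * np.mean(grads, axis=0)

    print("parameter error:", np.linalg.norm(w - w_true))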

A Fixed-Point of View on Gradient Methods for Big Data

open access: yes; Frontiers in Applied Mathematics and Statistics, 2017
Interpreting gradient methods as fixed-point iterations, we provide a detailed analysis of those methods for minimizing convex objective functions. Due to their conceptual and algorithmic simplicity, gradient methods are widely used in machine learning ...
Alexander Jung
doaj   +1 more source
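
The core identity behind this viewpoint is standard: a gradient step defines an operator whose fixed points are exactly the stationary points of the objective,

    T(x) := x - \alpha \nabla f(x), \qquad x^{(k+1)} = T\big(x^{(k)}\big),
    \qquad \nabla f(x^\star) = 0 \;\Longleftrightarrow\; T(x^\star) = x^\star .

For an L-smooth, mu-strongly convex f and step size 0 < \alpha < 2/L, T is a contraction with factor q = \max\{|1 - \alpha\mu|,\, |1 - \alpha L|\} < 1, so the Banach fixed-point theorem yields linear convergence of gradient descent.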

Distributed stochastic gradient descent for link prediction in signed social networks

open access: yes; EURASIP Journal on Advances in Signal Processing, 2019
This paper considers the link prediction problem defined over a signed social network, where the relationship between any two network users can be either positive (friends) or negative (foes).
Han Zhang, Gang Wu, Qing Ling
doaj   +1 more source
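
One standard way to set up such a problem is SGD on a factorization loss over signed edges, sketched below for a single machine; the paper's distributed execution and exact loss are not reproduced, and all sizes and hyperparameters are assumed.

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy signed network: edges (i, j, sign) with sign in {+1, -1}.
    n_users, dim = 50, 8
    U = 0.1 * rng.normal(size=(n_users, dim))  # latent user factors
    edges = [(rng.integers(n_users), rng.integers(n_users),
              rng.choice([-1, 1])) for _ in range(2000)]

    lr, reg = 0.05, 1e-3
    for epoch in range(20):
        rng.shuffle(edges)
        for i, j, s in edges:
            score = U[i] @ U[j]
            # Logistic loss on the edge sign: log(1 + exp(-s * score)).
            coef = -s / (1.0 + np.exp(s * score))  # d(loss)/d(score)
            gi = coef * U[j] + reg * U[i]
            gj = coef * U[i] + reg * U[j]
            U[i] -= lr * gi
            U[j] -= lr * gj

    # Predicted sign of a candidate link between users 0 and 1:
    print("+1" if U[0] @ U[1] > 0 else "-1")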

Recent Advances in Stochastic Gradient Descent in Deep Learning

open access: yes; Mathematics, 2023
In the age of artificial intelligence, finding the best approach to handling huge amounts of data is a tremendously motivating and hard problem. Among machine learning models, stochastic gradient descent (SGD) is not only simple but also very effective.
Yingjie Tian, Yuqi Zhang, Haibin Zhang
doaj   +1 more source
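
For reference, the family such surveys cover grows out of one update rule; below is a sketch of the vanilla step and two widely used variants. The Adam-style update is simplified, with bias correction omitted.

    import numpy as np

    # Three common SGD-family updates on a parameter vector w,
    # given a stochastic gradient g.

    def sgd(w, g, lr=0.01):
        return w - lr * g

    def sgd_momentum(w, g, v, lr=0.01, beta=0.9):
        v = beta * v + g                    # velocity accumulates gradients
        return w - lr * v, v

    def adam_like(w, g, m, s, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
        m = b1 * m + (1 - b1) * g           # first-moment estimate
        s = b2 * s + (1 - b2) * g * g       # second-moment estimate
        return w - lr * m / (np.sqrt(s) + eps), m, s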

Accelerated Backpressure Algorithm

open access: yes, 2013
We develop an Accelerated Back Pressure (ABP) algorithm using Accelerated Dual Descent (ADD), a distributed approximate Newton-like algorithm that uses only local information.
Jadbabaie, Ali   +2 more
core   +1 more source
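
For orientation, the classic backpressure rule that ABP builds on routes, on each link, the commodity with the largest queue-backlog differential. The sketch below shows only this base rule, not the accelerated Newton-like dual update the paper develops; the network, queues, and sizes are made up.

    import numpy as np

    rng = np.random.default_rng(5)

    n_nodes, n_commodities = 4, 3
    Q = rng.integers(0, 20, size=(n_nodes, n_commodities))  # queue backlogs
    links = [(0, 1), (0, 2), (1, 3), (2, 3)]

    for i, j in links:
        diff = Q[i] - Q[j]        # per-commodity backlog differential
        c = int(np.argmax(diff))  # commodity with maximum differential
        if diff[c] > 0:           # transmit only under positive pressure
            print(f"link {i}->{j}: route commodity {c} (pressure {diff[c]})")
        else:
            print(f"link {i}->{j}: idle")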
