Stochastic gradient descent for optimization for nuclear systems [PDF]
Gradient descent methods have proven useful for optimizing k-eigenvalue nuclear systems, but k-eigenvalue gradients have proved computationally challenging to use due to their stochastic nature.
Austin Williams et al.
Source: DOAJ (+2 more)
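As a point of reference for this entry, the core idea of descending along noisy gradient estimates fits in a few lines. A minimal sketch, assuming a toy quadratic objective and Gaussian gradient noise as stand-ins for the paper's Monte Carlo k-eigenvalue estimates:

```python
import random

def noisy_grad(x, true_grad, noise=0.5):
    # Stand-in for a stochastic gradient estimate (e.g., from a Monte Carlo
    # transport code): the true gradient plus zero-mean Gaussian noise.
    return true_grad(x) + random.gauss(0.0, noise)

def sgd(x0, true_grad, lr=0.1, steps=500):
    x = x0
    for _ in range(steps):
        x -= lr * noisy_grad(x, true_grad)
    return x

# Illustrative objective f(x) = (x - 3)^2, minimized at x = 3.
x_hat = sgd(x0=0.0, true_grad=lambda x: 2.0 * (x - 3.0))
print(f"estimate: {x_hat:.3f}")  # hovers near 3 despite the gradient noise
```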
Semi-Stochastic Gradient Descent Methods [PDF]
In this paper we study the problem of minimizing the average of a large number of smooth convex loss functions. We propose a new method, S2GD (Semi-Stochastic Gradient Descent), which runs for one or several epochs, in each of which a single full gradient is computed, followed by a random number of stochastic gradient steps.
Jakub Konečný, Peter Richtárik
Source: DOAJ (+4 more)
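The sentence above describes the S2GD scheme: each epoch computes one full gradient at an anchor point, then takes a number of cheap variance-reduced stochastic steps. A sketch on least squares, assuming illustrative data and hyperparameters (S2GD draws the inner-loop length from a specific distribution; a uniform draw is used here for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)

def grad_i(x, i):
    # Gradient of the i-th loss 0.5 * (a_i . x - b_i)^2.
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    return A.T @ (A @ x - b) / n

def s2gd(x, epochs=30, lr=0.05, max_inner=2 * n):
    for _ in range(epochs):
        y, mu = x.copy(), full_grad(x)      # one full gradient per epoch
        t = rng.integers(1, max_inner + 1)  # random inner-loop length
        for _ in range(t):
            i = rng.integers(n)
            # Variance-reduced semi-stochastic gradient.
            x = x - lr * (grad_i(x, i) - grad_i(y, i) + mu)
    return x

x_hat = s2gd(np.zeros(d))
print("error:", np.linalg.norm(x_hat - x_true))
```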
Byzantine Stochastic Gradient Descent
This paper studies the problem of distributed stochastic optimization in an adversarial setting where, out of the $m$ machines which allegedly compute stochastic gradients every iteration, an $\alpha$-fraction are Byzantine, and can behave arbitrarily ...
Dan-Adrian Alistarh et al.
Source: CORE (+2 more)
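A common defense in this adversarial setting, though not necessarily the rule of this particular paper, is to aggregate worker gradients with a robust statistic such as the coordinate-wise median instead of the mean. A toy sketch with 3 of 10 workers corrupt:

```python
import numpy as np

rng = np.random.default_rng(1)

def worker_grads(x, m=10, corrupt=3):
    # Honest workers return a noisy gradient of f(x) = 0.5 * ||x||^2;
    # Byzantine workers return arbitrary (here, huge random) vectors.
    grads = [x + 0.1 * rng.normal(size=x.shape) for _ in range(m - corrupt)]
    grads += [100.0 * rng.normal(size=x.shape) for _ in range(corrupt)]
    return np.stack(grads)

def robust_sgd(x, steps=100, lr=0.1):
    for _ in range(steps):
        g = np.median(worker_grads(x), axis=0)  # coordinate-wise median
        x = x - lr * g
    return x

print(robust_sgd(np.ones(4)))  # stays near the optimum 0 despite 3/10 corrupt
```

With fewer than half the inputs corrupt, the per-coordinate median always lands inside the range of honest values, which is why the iterates still converge.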
Stochastic Gradient Descent for Kernel-Based Maximum Correntropy Criterion [PDF]
Maximum correntropy criterion (MCC) has been an important method in the machine learning and signal processing communities since it was successfully applied in various non-Gaussian noise scenarios.
Tiankai Li et al.
Source: DOAJ (+2 more)
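Correntropy replaces the squared error with a Gaussian kernel of the residual, so large outlier errors are exponentially down-weighted. A minimal linear-regression sketch, assuming an illustrative kernel width, step size, and outlier model (not the paper's full kernel-based setting):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)
y[:50] += 20.0  # heavy, non-Gaussian outliers

def mcc_sgd(w, sigma=1.0, lr=0.1, epochs=20):
    # Ascend the correntropy exp(-e^2 / (2 sigma^2)) of the residual e;
    # the exp() factor shrinks the influence of outlier residuals toward 0.
    for _ in range(epochs):
        for i in rng.permutation(n):
            e = y[i] - X[i] @ w
            w = w + lr * np.exp(-e**2 / (2 * sigma**2)) * e * X[i] / sigma**2
    return w

print("MCC estimate :", mcc_sgd(np.zeros(d)))
print("least squares:", np.linalg.lstsq(X, y, rcond=None)[0])  # pulled off by outliers
```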
Implicit Stochastic Gradient Descent Method for Cross-Domain Recommendation System [PDF]
Previous recommendation systems applied the matrix factorization collaborative filtering (MFCF) technique only to single domains. Due to data sparsity, this approach is limited in overcoming the cold-start problem.
Nam D. Vo, Minsung Hong, Jason J. Jung
Source: DOAJ (+2 more)
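Implicit SGD evaluates the gradient at the updated iterate, which turns each step into a small linear solve and makes it stable for larger step sizes. A sketch of one such update for a single matrix-factorization rating, with dimensions, step size, and regularization as illustrative assumptions rather than the paper's cross-domain model:

```python
import numpy as np

def implicit_mf_step(p, q, r, lr=0.5, reg=0.1):
    # Implicit SGD update of user factor p for one observed rating r, with
    # item factor q held fixed: the update equation
    #   p' = p + lr * ((r - p'.q) * q - reg * p')
    # is linear in p', so solve ((1 + lr*reg) I + lr q q^T) p' = p + lr*r*q.
    A = (1.0 + lr * reg) * np.eye(len(p)) + lr * np.outer(q, q)
    return np.linalg.solve(A, p + lr * r * q)

p = np.zeros(3)
q = np.array([0.5, -1.0, 0.2])
for _ in range(50):              # repeatedly fit a single rating r = 2.0
    p = implicit_mf_step(p, q, r=2.0)
print("prediction:", p @ q)      # approaches 2.0, shrunk by regularization
```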
Data-Dependent Stability of Stochastic Gradient Descent
We establish a data-dependent notion of algorithmic stability for Stochastic Gradient Descent (SGD), and employ it to develop novel generalization bounds.
Ilja Kuzborskij, Christoph H. Lampert
Source: CORE (+2 more)
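For context, the classical notion that this paper refines reads as follows; the data-dependent variant replaces the worst-case constant with one that depends on the data-generating distribution. A sketch of the standard uniform-stability definition and the generalization bound it implies (Bousquet and Elisseeff):

```latex
% Uniform stability: A is \varepsilon-uniformly stable if, for all datasets
% S, S' differing in a single example and all points z,
%   | \ell(A(S), z) - \ell(A(S'), z) | \le \varepsilon .
% This controls the expected generalization gap of A:
\[
  \bigl|\, \mathbb{E}_{S}\bigl[ R(A(S)) - \widehat{R}_S(A(S)) \bigr] \bigr|
  \;\le\; \varepsilon ,
\]
% where R is the population risk and \widehat{R}_S the empirical risk on S.
```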
PAC–Bayes Guarantees for Data-Adaptive Pairwise Learning [PDF]
We study the generalization properties of stochastic optimization methods under adaptive data sampling schemes, focusing on the setting of pairwise learning, which is central to tasks like ranking, metric learning, and AUC maximization.
Sijia Zhou, Yunwen Lei, Ata Kabán
Source: DOAJ (+2 more)
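What makes pairwise learning distinctive is that the loss couples two examples, so each stochastic step samples a pair; for AUC maximization the update pushes positive scores above negative ones. A minimal sketch with a logistic pairwise surrogate, assuming synthetic Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
X_pos = rng.normal(loc=+1.0, size=(100, d))  # positive class
X_neg = rng.normal(loc=-1.0, size=(100, d))  # negative class

def pairwise_sgd(w, lr=0.1, steps=2000):
    # Minimize the logistic pairwise loss log(1 + exp(-w.(x_pos - x_neg)))
    # over random (positive, negative) pairs: a convex surrogate for 1 - AUC.
    for _ in range(steps):
        xp = X_pos[rng.integers(len(X_pos))]
        xn = X_neg[rng.integers(len(X_neg))]
        margin = w @ (xp - xn)
        w = w + lr * (xp - xn) / (1.0 + np.exp(margin))  # negative gradient
    return w

w = pairwise_sgd(np.zeros(d))
auc = ((X_pos @ w)[:, None] > (X_neg @ w)[None, :]).mean()
print("empirical AUC:", auc)
```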
Stochastic gradient descent for wind farm optimization [PDF]
It is important to optimize wind turbine positions to mitigate potential wake losses. To perform this optimization, atmospheric conditions, such as the inflow speed and direction, are assigned probability distributions according to measured data, which ...
J. Quick et al.
Source: DOAJ (+1 more)
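The recipe sketched in this abstract is generic: sample the uncertain atmospheric conditions from their measured distribution at every iteration and take an SGD step on that sample, so the iterates drift toward an optimum of the expected objective. A toy sketch, assuming a stand-in quadratic cost rather than a wake model:

```python
import random

def sample_condition():
    # Stand-in for drawing atmospheric conditions (inflow speed/direction)
    # from the probability distribution fitted to measured data.
    return random.gauss(2.0, 0.5)

def grad_loss(x, theta):
    # Gradient of a toy per-condition loss (x - theta)^2; a real study would
    # differentiate the farm's wake losses w.r.t. turbine coordinates.
    return 2.0 * (x - theta)

x = 0.0
for k in range(1, 2001):
    theta = sample_condition()            # one random inflow sample per step
    x -= (1.0 / k) * grad_loss(x, theta)  # SGD step with decaying step size
print(f"layout parameter: {x:.3f}")       # approaches E[theta] = 2.0
```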
Communication-Censored Distributed Stochastic Gradient Descent [PDF]
This paper develops a communication-efficient algorithm to solve the stochastic optimization problem defined over a distributed network, aiming at reducing the burdensome communication in applications such as distributed machine learning. Unlike existing works based on quantization and sparsification, we introduce a communication-censoring ...
Weiyu Li et al.
Source: OpenAIRE (+3 more)
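Censoring here means a worker transmits only when its new gradient differs enough from the last value it sent, and the server reuses cached gradients otherwise. A toy two-worker sketch, with the threshold schedule and local objectives as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
targets = [np.array([1.0, 2.0]), np.array([3.0, 0.0])]  # per-worker optima

def local_grad(w_id, x):
    # Noisy gradient of worker w_id's quadratic 0.5 * ||x - target||^2.
    return (x - targets[w_id]) + 0.05 * rng.normal(size=x.shape)

x, cached, sent = np.zeros(2), [np.zeros(2), np.zeros(2)], 0
for step in range(200):
    tau = 1.0 / (step + 1)                # decaying censoring threshold
    for w_id in range(2):
        g = local_grad(w_id, x)
        if np.linalg.norm(g - cached[w_id]) >= tau:
            cached[w_id] = g              # transmit only informative updates
            sent += 1
    x = x - 0.1 * sum(cached) / 2         # server averages cached gradients
print("x:", x, "| transmissions:", sent, "of", 2 * 200)
```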
Byzantine-Resilient Decentralized Stochastic Gradient Descent [PDF]
Decentralized learning has gained great popularity as a way to improve learning efficiency and preserve data privacy. Each computing node makes an equal contribution to collaboratively learning a Deep Learning model. Eliminating centralized Parameter Servers (PS) can effectively address many issues, such as privacy, performance bottlenecks, and single-point ...
Shangwei Guo et al.
Source: OpenAIRE (+3 more)
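With the parameter server gone, each node mixes model copies from its neighbors directly, and Byzantine resilience comes from a robust mixing rule that discards extreme neighbor values; the trimmed mean below is one standard choice, not necessarily this paper's algorithm. A toy ring-topology sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes, byzantine = 7, {6}                  # node 6 behaves arbitrarily

def trimmed_mean(vals, trim=1):
    # Coordinate-wise trimmed mean: drop the `trim` smallest/largest values.
    v = np.sort(np.stack(vals), axis=0)
    return v[trim:-trim].mean(axis=0)

x = [np.zeros(2) for _ in range(n_nodes)]
for _ in range(300):
    new = []
    for i in range(n_nodes):
        nbrs = [(i - 1) % n_nodes, i, (i + 1) % n_nodes]  # ring topology
        vals = [100.0 * rng.normal(size=2) if j in byzantine else x[j]
                for j in nbrs]
        mixed = trimmed_mean(vals)            # robust neighborhood averaging
        grad = mixed - np.array([1.0, -1.0])  # gradient of a local quadratic
        new.append(mixed - 0.1 * grad)
    x = new
print("honest node 0:", x[0])                 # stays near the optimum (1, -1)
```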

