Results 1 to 10 of about 324,225

Asynchronous Parallel Stochastic Quasi-Newton Methods. [PDF]

open access: yes (Parallel Comput, 2021)
Although first-order stochastic algorithms such as stochastic gradient descent have been the main force in scaling up machine learning models such as deep neural nets, second-order quasi-Newton methods have started to draw attention due to their effectiveness in dealing with ill-conditioned optimization problems.
Tong Q, Liang G, Cai X, Zhu C, Bi J.
europepmc   +7 more sources

New Results on Superlinear Convergence of Classical Quasi-Newton Methods. [PDF]

open access: yes (J Optim Theory Appl, 2021)
We present a new theoretical analysis of local superlinear convergence of classical quasi-Newton methods from the convex Broyden class. As a result, we obtain a significant improvement in the currently known estimates of the convergence rates for these ...
Rodomanov A, Nesterov Y.
europepmc   +3 more sources

On the Convergence Rate of Quasi-Newton Methods on Strongly Convex Functions with Lipschitz Gradient

open access: yes (Mathematics, 2023)
The main results of the study of the convergence rate of quasi-Newton minimization methods were obtained under the assumption that the method operates in the region of the extremum of the function, where there is a stable quadratic representation of the ...
Vladimir Krutikov   +3 more
doaj   +2 more sources

Faster Stochastic Quasi-Newton Methods [PDF]

open access: yes (IEEE Transactions on Neural Networks and Learning Systems, 2022)
Stochastic optimization methods have become a class of popular optimization tools in machine learning. Especially, stochastic gradient descent (SGD) has been widely used for machine learning problems such as training neural networks due to low per-iteration computational complexity.
Qingsong Zhang   +3 more
openaire   +5 more sources
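As background for the SGD baseline that these stochastic quasi-Newton papers improve on, here is a minimal sketch of plain SGD on a synthetic least-squares problem (the data, step size, and iteration count are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: minimize (1/n) * sum_i (a_i^T w - y_i)^2.
n, d = 200, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = A @ w_true                          # noiseless targets, so the minimum is 0

w = np.zeros(d)
lr = 0.01
for _ in range(2000):
    i = rng.integers(n)                 # sample one data point per step
    g = 2.0 * (A[i] @ w - y[i]) * A[i]  # stochastic gradient, O(d) per iteration
    w -= lr * g
# w now approximates w_true
```

The O(d) per-iteration cost is the "low per-iteration computational complexity" the abstract refers to; the quasi-Newton methods in these papers spend extra work per iteration to cope with ill-conditioning.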

Greedy Quasi-Newton Methods with Explicit Superlinear Convergence [PDF]

open access: yes (SIAM Journal on Optimization, 2021)
In this paper, we study greedy variants of quasi-Newton methods. They are based on the updating formulas from a certain subclass of the Broyden family. In particular, this subclass includes the well-known DFP, BFGS and SR1 updates. However, in contrast to the classical quasi-Newton methods, which use the difference of successive iterates for updating ...
Rodomanov, Anton, Nesterov, Yurii
openaire   +4 more sources
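For reference, the classical update these greedy variants modify builds the inverse-Hessian approximation from the difference of successive iterates s = x_{k+1} - x_k and gradients y = g_{k+1} - g_k. A minimal sketch of the classical BFGS update on a small quadratic (an illustration of the classical scheme only, not of the greedy variants; the test problem and Armijo line search are illustrative additions):

```python
import numpy as np

def bfgs_update(H, s, y):
    """Classical BFGS update of the inverse-Hessian approximation H,
    using the iterate difference s = x_{k+1} - x_k and the gradient
    difference y = g_{k+1} - g_k."""
    sy = s @ y
    if sy <= 1e-12:              # curvature condition failed: skip the update
        return H
    rho = 1.0 / sy
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def armijo(f, x, p, g, alpha=1.0, c=1e-4, tau=0.5):
    """Simple backtracking line search (sufficient-decrease condition)."""
    while f(x + alpha * p) > f(x) + c * alpha * (g @ p):
        alpha *= tau
    return alpha

# Quadratic test problem: f(x) = 0.5 x^T A x - b^T x, minimized where A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x, H = np.zeros(2), np.eye(2)
g = grad(x)
for _ in range(25):
    p = -H @ g                           # quasi-Newton search direction
    alpha = armijo(f, x, p, g)
    x_new = x + alpha * p
    g_new = grad(x_new)
    H = bfgs_update(H, x_new - x, g_new - g)
    x, g = x_new, g_new
# x converges to the solution of A x = b
```

The greedy variants studied in the paper replace the iterate-difference direction s with greedily chosen update directions; this sketch keeps the classical choice.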

Decentralized Quasi-Newton Methods [PDF]

open access: yes (IEEE Transactions on Signal Processing, 2017)
We introduce the decentralized Broyden-Fletcher-Goldfarb-Shanno (D-BFGS) method as a variation of the BFGS quasi-Newton method for solving decentralized optimization problems. The D-BFGS method is of interest in problems that are not well conditioned, making first order decentralized methods ineffective, and in which second order information is not ...
Eisen, Mark   +2 more
openaire   +4 more sources

Matrix Transformations and Quasi-Newton Methods [PDF]

open access: yes (International Journal of Mathematics and Mathematical Sciences, 2007)
We first recall some properties of infinite tridiagonal matrices considered as matrix transformations in sequence spaces of the form s_ξ, s_ξ^∘, s_ξ^(c), or ℓ_p(ξ).
Boubakeur Benahmed   +2 more
doaj   +4 more sources

Quasi-Newton Methods, Motivation and Theory [PDF]

open access: yes (SIAM Review, 1977)
This paper is an attempt to motivate and justify quasi-Newton methods as useful modifications of Newton's method for general and gradient nonlinear systems of equations. References are given to ample numerical justification; here we give an overview of many of the important theoretical results and each is accompanied by sufficient discussion to make ...
Dennis, J. E. Jr., Moré, Jorge J.
openaire   +4 more sources

Stochastic Quasi-Newton Methods

open access: yes (Proceedings of the IEEE, 2020)
Large-scale data science trains models for data sets containing massive numbers of samples. Training is often formulated as the solution of empirical risk minimization problems that are optimization programs whose complexity scales with the number of elements in the data set.
Aryan Mokhtari, Alejandro Ribeiro
openaire   +2 more sources
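A common building block that stochastic quasi-Newton methods adapt to this large-scale setting is the limited-memory BFGS two-loop recursion, which applies the inverse-Hessian approximation using only a short history of curvature pairs. A minimal sketch (illustrative, not the specific method surveyed in the paper):

```python
import numpy as np

def two_loop(g, s_list, y_list):
    """L-BFGS two-loop recursion: returns H @ g, where H is the
    inverse-Hessian approximation implied by the stored curvature
    pairs s_i = x_{i+1} - x_i and y_i = grad_{i+1} - grad_i."""
    q = g.copy()
    rhos = [1.0 / (s @ y) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                   # initial scaling H0 = (s^T y / y^T y) * I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q

# With a single stored pair, the recursion reproduces the explicit
# BFGS inverse-Hessian formula applied to g.
s = np.array([1.0, 0.0])
y = np.array([2.0, 1.0])
g = np.array([1.0, -2.0])
d = two_loop(g, [s], [y])        # the search direction is then -d
```

Because only the (s, y) pairs are stored, the per-iteration cost is linear in the problem dimension, which is what makes quasi-Newton ideas tractable at the data-set scales described in the abstract.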

Provably Convergent Plug-and-Play Quasi-Newton Methods [PDF]

open access: yes (SIAM Journal on Imaging Sciences, 2023)
Plug-and-Play (PnP) methods are a class of efficient iterative methods that aim to combine data fidelity terms and deep denoisers using classical optimization algorithms, such as ISTA or ADMM, with applications in inverse problems and imaging.
Hongwei Tan   +3 more
semanticscholar   +1 more source
