Results 221 to 230 of about 324,327
Some of the following articles may not be open access.

Quasi-Newton Methods

Numerical Optimization, 2006
J. Nocedal, S. J. Wright
semanticscholar

Quasi-Newton Methods for Saddle Point Problems and Beyond

Neural Information Processing Systems, 2021
This paper studies quasi-Newton methods for solving strongly-convex-strongly-concave saddle point problems (SPP). We propose greedy and random Broyden family updates for SPP, which have explicit local superlinear convergence rate of ${\mathcal O}\big ...
Chengchang Liu, Luo Luo
semanticscholar

Quasi-Newton Methods

2021
In Chap. 6, multidimensional optimization methods were considered in which the search for the minimizer is carried out by using a set of conjugate directions. An important feature of some of these methods (e.g., the Fletcher–Reeves and Powell’s methods) is that explicit expressions for the second derivatives of \(f(\mathbf{x})\) are not required ...
Andreas Antoniou, Wu-Sheng Lu
openaire

Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization

Mathematical Programming Computation, 2021
We consider unconstrained stochastic optimization problems with no available gradient information. Such problems arise in settings from derivative-free simulation optimization to reinforcement learning. We propose an adaptive sampling quasi-Newton method
Raghu Bollapragada, Stefan M. Wild
semanticscholar

Quasi-Newton Methods

2008
In this chapter we take another approach toward the development of methods lying somewhere intermediate to steepest descent and Newton’s method. Again working under the assumption that evaluation and use of the Hessian matrix is impractical or costly, the idea underlying quasi-Newton methods is to use an approximation to the inverse Hessian in place of
David G. Luenberger, Yinyu Ye
openaire
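The idea described in this abstract, replacing the inverse Hessian with a cheaply updated approximation, can be illustrated with a minimal BFGS iteration. This is a sketch only (Python with NumPy); the backtracking line search, tolerances, and the Rosenbrock test function are illustrative choices, not taken from the book:

```python
import numpy as np

def bfgs_minimize(f, grad, x0, tol=1e-8, max_iter=500):
    """Minimize f with a BFGS approximation H to the inverse Hessian."""
    n = x0.size
    H = np.eye(n)                  # inverse-Hessian approximation
    x = x0.astype(float)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                 # quasi-Newton search direction
        # simple backtracking line search (Armijo condition)
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p                  # step taken
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g              # change in gradient
        sy = s @ y
        if sy > 1e-12:             # curvature condition; skip update otherwise
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# usage: minimize the Rosenbrock function from the standard start point
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
x_star = bfgs_minimize(f, grad, np.array([-1.2, 1.0]))
```

Only gradient evaluations are used; the curvature test `sy > 0` keeps the approximation positive definite, so `p` remains a descent direction.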

Approximate quasi-Newton methods

Mathematical Programming, 1990
Newton-like iterative methods for nonlinear equations on Banach spaces are considered. It is proved how the local convergence behaviour of the quasi-Newton method in the infinite dimensional setting is affected by the refinement strategy. Applications to boundary value problems and integral equations are included.
C. T. Kelley, E. W. Sachs
openaire

Quasi-Newton parallel geometry optimization methods

The Journal of Chemical Physics, 2010
Algorithms for parallel unconstrained minimization of molecular systems are examined. The overall framework of minimization is the same except for the choice of directions for updating the quasi-Newton Hessian. Ideally these directions are chosen so the updated Hessian gives steps that are same as using the Newton method.
Steven K. Burger, Paul W. Ayers
openaire

Quasi-Newton Methods

2019
Quasi-Newton methods do not compute the Hessian of nonlinear functions directly; instead, a Hessian approximation is updated by analyzing successive gradient vectors. The first quasi-Newton algorithm was proposed in 1959 by William C. Davidon, a physicist working at Argonne National Laboratory in the United States.
Shashi Kant Mishra, Bhagwat Ram
openaire
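Davidon's idea of updating the approximation from successive gradient differences survives in the Davidon–Fletcher–Powell (DFP) formula. A minimal sketch of one DFP update (Python with NumPy; the function name and sample vectors are illustrative):

```python
import numpy as np

def dfp_update(H, s, y):
    """Davidon-Fletcher-Powell update of the inverse-Hessian
    approximation H, given the step s = x_new - x and the gradient
    change y = g_new - g. Requires the curvature condition s @ y > 0."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

# usage: one update starting from the identity approximation
s = np.array([0.3, 0.7])
y = np.array([1.0, 0.2])
H_new = dfp_update(np.eye(2), s, y)
```

By construction the updated matrix satisfies the secant equation `H_new @ y == s`, which encodes exactly the "analyzing successive gradient vectors" step the abstract describes.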

A Survey of Quasi-Newton Equations and Quasi-Newton Methods for Optimization

Annals of Operations Research, 2001
Chengxian Xu, Jianzhong Zhang
openaire
