A Three-Term Conjugate Gradient Method with Sufficient Descent Property for Unconstrained Optimization
Conjugate gradient methods are widely used for solving large-scale unconstrained optimization problems because they do not require storing matrices.
Hager W. W. +4 more
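A minimal sketch can make this entry concrete. The code below assumes a Zhang–Zhou–Li-style three-term direction d_k = -g_k + β_k d_{k-1} - θ_k y_{k-1} (the paper's exact formula may differ); with a PRP-style β_k and θ_k = g_kᵀd_{k-1}/‖g_{k-1}‖², the identity g_kᵀd_k = -‖g_k‖² holds by construction, i.e. sufficient descent independent of the line search. The function name, Armijo parameters, and model problem are illustrative assumptions.

```python
import numpy as np

def three_term_cg(f, grad, x, iters=500, tol=1e-8):
    """Three-term CG with Armijo backtracking.
    Direction: d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1}, which
    gives g_k @ d_k == -||g_k||**2 (sufficient descent) by construction."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        gnorm2 = g @ g
        if np.sqrt(gnorm2) < tol:
            break
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):  # Armijo condition
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        beta = (g_new @ y) / gnorm2    # PRP-style coefficient
        theta = (g_new @ d) / gnorm2   # third-term coefficient
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x, g

# Illustrative strictly convex quadratic: f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad_f = lambda x: A @ x - b
x_star, g_star = three_term_cg(f, grad_f, np.zeros(2))
```

On this quadratic the iteration drives the gradient to zero, so A x* ≈ b at the returned point.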
Optimal Conditioning of Quasi-Newton Methods
Quasi-Newton methods accelerate gradient methods for minimizing a function by approximating the inverse Hessian matrix of the function. Several papers in recent literature have dealt with the generation of classes of approximating matrices as a function of a scalar parameter.
Shanno, D. F., Kettler, P. C.
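The "classes of approximating matrices as a function of a scalar parameter" mentioned here are commonly written as the Broyden one-parameter family of inverse-Hessian updates. A minimal sketch, assuming the standard H-form (φ = 0 recovers DFP, φ = 1 recovers BFGS); every member satisfies the secant equation H_new y = s:

```python
import numpy as np

def broyden_inverse_update(H, s, y, phi):
    """One-parameter (Broyden) family of inverse-Hessian updates.
    phi = 0.0 gives DFP, phi = 1.0 gives BFGS; every member satisfies
    the secant equation H_new @ y == s."""
    sy = s @ y            # curvature; must be > 0 to preserve positive definiteness
    Hy = H @ y
    yHy = y @ Hy
    v = s / sy - Hy / yHy
    return (H + np.outer(s, s) / sy
              - np.outer(Hy, Hy) / yHy
              + phi * yHy * np.outer(v, v))

# A random SPD model Hessian yields a step/gradient-change pair with s @ y > 0.
rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
s = rng.standard_normal(n)
y = A @ s
H_bfgs = broyden_inverse_update(np.eye(n), s, y, phi=1.0)
H_dfp = broyden_inverse_update(np.eye(n), s, y, phi=0.0)
```

The secant property holds for any φ because v ⊥ y, so the φ-term contributes nothing to H_new y.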
A non-Secant quasi-Newton Method for Unconstrained Nonlinear Optimization
The secant equation has long been the foundation of quasi-Newton methods, since the updated Hessian approximation satisfies it at each iteration. Several publications have recently focused on modified versions of the secant relation, with promising ...
Issam A.R. Moghrabi
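For reference, the classical secant equation that this entry proposes to move away from requires the updated approximation B_{k+1} to satisfy B_{k+1} s_k = y_k, where s_k is the step and y_k the gradient change; for a quadratic objective the true Hessian satisfies it exactly. A small check (the model problem is an assumption; the paper's modified relation is not reproduced):

```python
import numpy as np

# Model quadratic f(x) = 0.5 x^T A x - b^T x with SPD Hessian A.
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)
grad = lambda x: A @ x - b

x0 = rng.standard_normal(n)
x1 = rng.standard_normal(n)
s = x1 - x0               # step s_k
y = grad(x1) - grad(x0)   # gradient change y_k
# For a quadratic, y = A s exactly: the true Hessian satisfies the secant equation.
```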
Machine Learning in Quasi-Newton Methods
In this article, we consider the correction of metric matrices in quasi-Newton methods (QNM) from the perspective of machine learning theory. Based on training information for estimating the matrix of the second derivatives of a function, we formulate a ...
Vladimir Krutikov +4 more
Quasi-Newton methods for atmospheric chemistry simulations: implementation in UKCA UM vn10.8
A key and expensive part of coupled atmospheric chemistry–climate model simulations is the integration of gas-phase chemistry, which involves dozens of species and hundreds of reactions.
E. Esentürk +12 more
The CG-BFGS method for unconstrained optimization problems
In this paper we present a new search direction, known as the CG-BFGS method, which incorporates the conjugate gradient search direction into the quasi-Newton framework.
Ibrahim, Mohd Asrul Hery +3 more
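One plausible reading of such a hybrid (an assumption on our part, not necessarily the paper's exact formula) is to add a CG-style momentum term to the BFGS quasi-Newton direction, falling back to the pure quasi-Newton step whenever the hybrid direction fails to be a descent direction:

```python
import numpy as np

def cg_bfgs(f, grad, x, iters=500, tol=1e-8):
    """Hybrid direction d_k = -H_k g_k + max(beta_k, 0) * d_{k-1}, with H_k the
    BFGS inverse-Hessian approximation and beta_k a PRP-style CG coefficient.
    Falls back to the pure quasi-Newton step -H_k g_k when the hybrid
    direction is not a descent direction."""
    n = x.size
    H = np.eye(n)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):  # Armijo backtracking
            t *= 0.5
        s = t * d
        x = x + s
        g_new = grad(x)
        y = g_new - g
        if s @ y > 1e-12:   # curvature condition keeps H positive definite
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)
        beta = max((g_new @ y) / (g @ g), 0.0)
        d_try = -H @ g_new + beta * d
        d = d_try if d_try @ g_new < 0 else -H @ g_new  # descent safeguard
        g = g_new
    return x, g

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad_f = lambda x: A @ x - b
x_star, g_star = cg_bfgs(f, grad_f, np.zeros(2))
```

The descent safeguard is what keeps the Armijo backtracking loop well defined for every iterate.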
Regularization of limited memory quasi-Newton methods for large-scale nonconvex minimization
This paper deals with regularized Newton methods, a flexible class of unconstrained optimization algorithms that is competitive with line search and trust region methods and potentially combines attractive elements of both.
D. Steck, C. Kanzow
Positive Definiteness of Symmetric Rank 1 (H-Version) Update for Unconstrained Optimization
Several attempts have been made to modify the quasi-Newton condition in order to obtain rapid convergence while preserving the full properties (symmetry and positive definiteness) of the inverse Hessian approximation (the matrix of second derivatives of the objective function).
Saad Shakir Mahmood +2 more
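The SR1 update in its H-version is the unique symmetric rank-one correction satisfying the secant equation, but its denominator can vanish or change sign, so positive definiteness is not automatic; that is exactly what modifications like the one above target. A standard sketch (the skipping tolerance and model problem are assumptions):

```python
import numpy as np

def sr1_inverse_update(H, s, y, eps=1e-8):
    """Symmetric rank-1 update of the inverse-Hessian approximation (H-version):
    H_new = H + r r^T / (r^T y) with r = s - H y.
    Satisfies the secant equation H_new @ y == s exactly, but unlike BFGS it
    does not guarantee positive definiteness; the update is skipped when the
    denominator is numerically tiny."""
    r = s - H @ y
    ry = r @ y
    if abs(ry) < eps * np.linalg.norm(r) * np.linalg.norm(y):
        return H
    return H + np.outer(r, r) / ry

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)   # SPD model Hessian
s = rng.standard_normal(n)
y = A @ s
H1 = sr1_inverse_update(np.eye(n), s, y)
```

When the update is applied, H_new y = H y + r = s, which is the secant property the abstract refers to.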
Structured Quasi-Newton Methods for Optimization with Orthogonality Constraints
In this paper, we study structured quasi-Newton methods for optimization problems with orthogonality constraints. Note that the Riemannian Hessian of the objective function requires both the Euclidean Hessian and the Euclidean gradient. In particular, we ...
Jiang Hu +4 more
Continual Learning With Quasi-Newton Methods
Catastrophic forgetting remains a major challenge when neural networks learn tasks sequentially. Elastic Weight Consolidation (EWC) attempts to address this problem by introducing a Bayesian-inspired regularization loss to preserve knowledge of ...
Steven Vander Eeckt, Hugo Van Hamme

