Results 261 to 270 of about 1,150,989
Some of the following articles may not be open access.
Block-conjugate-gradient method
Physical Review D, 1989
It is shown that by using the block-conjugate-gradient method several, say s, columns of the inverse Kogut-Susskind fermion matrix can be found simultaneously, in less time than it would take to run the standard conjugate-gradient algorithm s times. The method improves in efficiency relative to the standard conjugate-gradient algorithm as …
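As a rough sketch of the idea (not the paper's Kogut-Susskind implementation), the block recurrence replaces the scalar step sizes of CG with small s-by-s matrices, one solve per iteration serving all s right-hand sides. A minimal NumPy version with an illustrative test matrix; the names and tolerances are ours, and production codes also deflate converged columns:

```python
import numpy as np

def block_cg(A, B, tol=1e-8, max_iter=500):
    """O'Leary-style block CG: solve A X = B for s right-hand sides at once.
    A must be symmetric positive definite; B is n-by-s."""
    X = np.zeros_like(B)
    R = B - A @ X                 # block residual, n-by-s
    P = R.copy()                  # block search directions
    RtR = R.T @ R                 # s-by-s Gram matrix
    for _ in range(max_iter):
        Q = A @ P
        alpha = np.linalg.solve(P.T @ Q, RtR)   # s-by-s "step sizes"
        X += P @ alpha
        R -= Q @ alpha
        RtR_new = R.T @ R
        if np.sqrt(np.trace(RtR_new)) < tol:    # Frobenius norm of residual
            break
        beta = np.linalg.solve(RtR, RtR_new)
        P = R + P @ beta          # keep directions block-A-conjugate
        RtR = RtR_new
    return X

# illustrative test: a well-conditioned SPD matrix and 4 right-hand sides
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
B = rng.standard_normal((50, 4))
X = block_cg(A, B)
print(np.linalg.norm(A @ X - B))   # residual should be ~1e-8
```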
Adaptive Conditional Gradient Method
Journal of Optimization Theory and Applications, 2019
zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Numerische Mathematik, 1963
The CG-algorithm is an iterative method to solve linear systems $$Ax + b = 0$$ (1), where A is a symmetric and positive definite coefficient matrix of order n. The method was first described by Stiefel and Hestenes [1, 2], and additional information is contained in [3] and [4]. The notations used here coincide partially with those used in …
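For reference, the Hestenes-Stiefel recurrence described here fits in a few lines. The sketch below follows the abstract's $$Ax + b = 0$$ convention (i.e., it solves $$Ax = -b$$); the function name and tolerances are illustrative:

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    """Classical conjugate gradients for A x + b = 0, with A symmetric
    positive definite (b should be a float array)."""
    x = np.zeros_like(b)
    r = -(A @ x + b)              # residual of A x + b = 0
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)     # exact minimizer along p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p # next A-conjugate direction
        rr = rr_new
    return x
```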
2019
Our interest in the conjugate gradient methods is twofold. First, they are among the most useful techniques to solve a large system of linear equations. Second, they can be adapted to solve large nonlinear optimization problems. In the previous chapters, we studied two important methods for finding a minimum point of real-valued functions of n real …
Shashi Kant Mishra, Bhagwat Ram
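One common way CG is adapted to nonlinear minimization (presumably among the variants the chapter covers) is the Fletcher-Reeves method. A hedged sketch; the Armijo line-search constants and the restart rule are standard textbook choices of ours, not necessarily the authors':

```python
import numpy as np

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Fletcher-Reeves nonlinear CG with Armijo backtracking."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # safeguard: restart along steepest descent
            d = -g
        t, fx = 1.0, f(x)
        while t > 1e-12 and f(x + t * d) > fx + 1e-4 * t * (g @ d):
            t *= 0.5                   # backtracking (Armijo) line search
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x

# usage on an invented smooth test function; minimizer is (3, -2)
f = lambda v: (v[0] - 3) ** 2 + 10 * (v[1] + 2) ** 2
grad = lambda v: np.array([2 * (v[0] - 3), 20 * (v[1] + 2)])
print(nonlinear_cg(f, grad, np.array([0.0, 0.0])))
```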
2006
The endeavour to solve systems of linear algebraic equations is already two thousand years old. In this paper we consider the conjugate gradient method, which is (theoretically) finite but in practice is treated as an iterative method. We survey a known modification of the method, the preconditioned conjugate gradient method, which may converge …
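The preconditioned variant surveyed here inserts one application of a preconditioner M into each iteration. A minimal sketch assuming the simplest choice, a Jacobi (diagonal) preconditioner, which the paper itself does not prescribe:

```python
import numpy as np

def pcg(A, b, M_inv_diag=None, tol=1e-10, max_iter=1000):
    """Preconditioned CG for A x = b with a Jacobi (diagonal) preconditioner.
    M_inv_diag holds 1/diag(A); other preconditioners drop into the same slot."""
    if M_inv_diag is None:
        M_inv_diag = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r            # apply M^{-1}
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Any stronger preconditioner (incomplete Cholesky, multigrid, etc.) replaces only the `z = M_inv_diag * r` application; the rest of the recurrence is unchanged.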
2011
A policy gradient method is a reinforcement learning approach that directly optimizes a parametrized control policy by gradient descent. It belongs to the class of policy search techniques, which maximize the expected return of a policy within a fixed policy class, whereas traditional value-function approximation approaches derive policies from a value function.
Peters, J., Bagnell, J.
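A toy illustration of the score-function (REINFORCE) flavor of policy gradient that the abstract alludes to, on an invented three-armed bandit; the softmax policy, learning rate, and running-average baseline are our choices, not the authors':

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_bandit(reward, n_actions=3, lr=0.1, episodes=2000, rng=None):
    """Score-function (REINFORCE) ascent for a softmax policy on a bandit:
    theta += lr * (r - baseline) * grad log pi(a | theta)."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta = np.zeros(n_actions)          # policy parameters
    baseline = 0.0                       # running-average variance reducer
    for _ in range(episodes):
        pi = softmax(theta)
        a = rng.choice(n_actions, p=pi)  # sample an action from the policy
        r = reward(a, rng)
        grad_log = -pi                   # grad log softmax = one-hot(a) - pi
        grad_log[a] += 1.0
        theta += lr * (r - baseline) * grad_log
        baseline += 0.05 * (r - baseline)
    return softmax(theta)

# hypothetical bandit where arm 2 pays best; its probability should dominate
probs = reinforce_bandit(lambda a, rng: (0.1, 0.5, 1.0)[a] + 0.1 * rng.standard_normal())
print(probs)
```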
2014
One of the newest approaches in general nonsmooth optimization is to use the gradient sampling algorithms developed by Burke, Lewis, and Overton. The gradient sampling method minimizes an objective function that is locally Lipschitz continuous and smooth on an open dense subset of \(\mathbb{R}^n\).
Adil Bagirov et al.
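A simplified single step of the idea, assuming SciPy for the small quadratic program; the full Burke-Lewis-Overton method adds a differentiability check and a schedule for shrinking the sampling radius, both omitted here:

```python
import numpy as np
from scipy.optimize import minimize

def gradient_sampling_step(f, grad, x, eps=1e-2, m=20, rng=None):
    """One simplified gradient-sampling step: sample gradients in an
    eps-ball around x, then descend along the minimum-norm element of
    their convex hull, found by a small QP."""
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    pts = np.vstack([x] + [x + eps * rng.uniform(-1.0, 1.0, n) for _ in range(m)])
    G = np.array([grad(p) for p in pts])      # m + 1 sampled gradients
    # min ||w @ G||^2  s.t.  w >= 0, sum(w) = 1
    w0 = np.full(len(G), 1.0 / len(G))
    res = minimize(lambda w: (w @ G) @ (w @ G), w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * len(G),
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    g = res.x @ G
    # Armijo backtracking along the generalized descent direction -g
    t, fx = 1.0, f(x)
    while t > 1e-12 and f(x - t * g) > fx - 1e-4 * t * (g @ g):
        t *= 0.5
    return x - t * g
```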
2017
The gradient method belongs to the direct optimization methods, characterized by the fact that the extremum is found without any prior use of the necessary existence conditions. Several results concerning the first-order approximation of the gradient method are presented in this chapter. The complexity of the treatment is gradually increased, …
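The basic first-order gradient method the chapter starts from can be sketched as follows; the Armijo backtracking step-size rule is a standard choice of ours, not necessarily the book's:

```python
import numpy as np

def gradient_method(f, grad, x0, tol=1e-6, max_iter=10000):
    """First-order gradient (steepest-descent) method with Armijo backtracking."""
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stop when the gradient is nearly zero
            break
        t, fx = 1.0, f(x)
        while t > 1e-12 and f(x - t * g) > fx - 1e-4 * t * (g @ g):
            t *= 0.5                  # backtrack until sufficient decrease
        x = x - t * g
    return x
```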
2014
In this chapter, we introduce two discrete gradient methods that can be considered semi-derivative-free, in the sense that they do not use subgradient information and approximate the subgradient only at the end of the solution process (i.e., near the optimal point). The introduced methods are the original discrete gradient method for …
Adil Bagirov et al.
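Bagirov's discrete gradients are built from carefully structured finite differences; the crude stand-in below only conveys the derivative-free flavor (difference quotients at a perturbed point, no subgradient calls) and omits the method's actual construction and convergence safeguards:

```python
import numpy as np

def discrete_gradient(f, x, g, lam=1e-4):
    """Very rough stand-in for a discrete gradient: coordinate-wise
    difference quotients at a point perturbed along direction g,
    using function values only (no subgradients)."""
    y = x + lam * g
    fy = f(y)
    v = np.empty(x.size)
    for i in range(x.size):
        e = np.zeros(x.size)
        e[i] = lam
        v[i] = (f(y + e) - fy) / lam
    return v

def dg_descent(f, x0, tol=1e-6, lam=1e-4, max_iter=5000, rng=None):
    """Derivative-free descent driven by the discrete gradients above."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        g = rng.standard_normal(x.size)
        g /= np.linalg.norm(g)                # random search direction
        v = discrete_gradient(f, x, g, lam)
        if np.linalg.norm(v) < tol:
            break
        t, fx = 1.0, f(x)
        while t > 1e-12 and f(x - t * v) > fx - 1e-4 * t * (v @ v):
            t *= 0.5
        if t <= 1e-12:
            lam *= 0.5                        # no decrease: refine differences
        else:
            x -= t * v
    return x
```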

