Results 11 to 20 of about 311
Derivative-free optimization and filter methods to solve nonlinear constrained problems [PDF]
In real optimization problems, the analytical expression of the objective function is usually unknown, as are its derivatives, or they are too complex to evaluate. In these cases it becomes essential to use optimization methods in which the calculation of the derivatives ...
Correia, Aldina +3 more
core +1 more source
This article provides a modified Noor iterative scheme, called the MN-iteration, to approximate the fixed points of generalized nonexpansive mappings with property (E).
Paimsang Papinwich, Thianwan Tanakit
doaj +1 more source
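For orientation, the classical three-step Noor iteration that the MN-iteration modifies can be sketched in a few lines; the step sizes `a`, `b`, `c` and the test map below are illustrative choices, not taken from the cited article.

```python
import math

# Sketch of the classical three-step Noor iteration for a nonexpansive
# map T. The MN-iteration in the article is a modification of this
# scheme; the constant step sizes a, b, c here are illustrative.
def noor_iteration(T, x0, a=0.5, b=0.5, c=0.5, n_iters=100):
    x = x0
    for _ in range(n_iters):
        z = (1 - c) * x + c * T(x)   # first auxiliary step
        y = (1 - b) * x + b * T(z)   # second auxiliary step
        x = (1 - a) * x + a * T(y)   # main update
    return x

# Example: T = cos is nonexpansive on R; the iterates approach its
# unique fixed point (the Dottie number, approx. 0.7390851).
x_fix = noor_iteration(math.cos, 1.0)
```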
In a Hilbert framework, we introduce continuous and discrete dynamical systems which aim at solving inclusions governed by structured monotone operators $A=\partial\Phi+B$, where $\partial\Phi$ is the subdifferential of a convex lower semicontinuous ...
Abbas, Boushra, Attouch, Hedy
core +1 more source
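In the common special case where $B=\nabla g$ for a smooth $g$, inclusions $0\in\partial\Phi(x)+B(x)$ of this structured form can be solved by forward-backward splitting; the sketch below uses an illustrative lasso-type problem (the matrix `A`, vector `b`, and weight `lam` are made up for the example, not from the cited work).

```python
import numpy as np

# Forward-backward splitting sketch for 0 in dPhi(x) + grad_g(x):
# a forward (gradient) step on g followed by a backward (proximal)
# step on Phi. All problem data below are illustrative.
def soft_threshold(v, t):
    # Proximal operator of Phi(x) = t * ||x||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(grad_g, prox_phi, x0, step, n_iters=500):
    x = x0
    for _ in range(n_iters):
        x = prox_phi(x - step * grad_g(x), step)
    return x

# Illustrative problem: g(x) = 0.5 * ||A x - b||^2, Phi = lam * ||x||_1.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 0.0])
lam = 0.1
grad_g = lambda x: A.T @ (A @ x - b)
prox = lambda v, s: soft_threshold(v, s * lam)
x_star = forward_backward(grad_g, prox, np.zeros(2), step=0.2)
```

The step size 0.2 satisfies the usual requirement step < 2/L, where L = 4 is the Lipschitz constant of `grad_g` here.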
The conjugate gradient (CG) method is widely used for solving unconstrained optimization problems because of its efficiency, robustness, and minimal memory demands.
Masmali Sultanah +4 more
doaj +1 more source
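The CG recursion behind entries like this one is compact enough to sketch. Below is linear CG with exact line search for a strictly convex quadratic (equivalently, an SPD system $Qx=b$); the matrix `Q` and vector `b` are illustrative, not from the cited work.

```python
import numpy as np

# Linear conjugate gradient sketch: minimizes 0.5 x^T Q x - b^T x for
# an SPD matrix Q, i.e. solves Q x = b. Only the previous direction is
# stored, which is the minimal memory demand the snippet refers to.
def linear_cg(Q, b, x0, tol=1e-10):
    x = x0.copy()
    r = b - Q @ x                 # residual = negative gradient
    d = r.copy()
    for _ in range(len(b)):       # at most n steps on a quadratic
        Qd = Q @ d
        alpha = (r @ r) / (d @ Qd)        # exact line-search step
        x = x + alpha * d
        r_new = r - alpha * Qd
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)  # CG direction update
        d = r_new + beta * d
        r = r_new
    return x

# Illustrative 2x2 problem; the unique minimizer is Q^{-1} b = [0.2, 0.4].
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_sol = linear_cg(Q, b, np.zeros(2))
```

Nonlinear CG methods, as studied in the cited papers, replace the exact step `alpha` with a line search and use coefficients such as Fletcher-Reeves or Polak-Ribiere in place of `beta`.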
An augmented Lagrangian interior point method using directions of negative curvature [PDF]
We describe an efficient implementation of an interior-point algorithm for non-convex problems that uses directions of negative curvature. These directions should ensure convergence to second-order KKT points and improve the computational efficiency of ...
Moguerza, Javier M. +1 more
core +1 more source
Splitting methods with variable metric for KL functions
We study the convergence of general abstract descent methods applied to a lower semicontinuous nonconvex function f that satisfies the Kurdyka-Lojasiewicz inequality in a Hilbert space.
Frankel, Pierre +2 more
core +1 more source
The conjugate gradient (CG) method is widely employed for solving unconstrained optimization problems because it requires neither second derivatives nor their approximations. It has found extensive applications in fields such as image restoration and neural ...
Ahmad Alhawarat +4 more
doaj +1 more source
Convergence rate for a Gauss collocation method applied to constrained optimal control
A local convergence rate is established for a Gauss orthogonal collocation method applied to optimal control problems with control constraints. If the Hamiltonian possesses a strong convexity property, then the theory yields convergence for problems ...
Hager, William W. +4 more
core +1 more source
Adjoint-based predictor-corrector sequential convex programming for parametric nonlinear optimization [PDF]
This paper proposes an algorithmic framework for solving parametric optimization problems which we call adjoint-based predictor-corrector sequential convex programming.
Diehl, M., Dinh, Q. Tran, Savorgnan, C.
core
Uniform Convergence of the Newton Method for Aubin Continuous Maps [PDF]
In this paper we prove that the Newton method applied to the generalized equation y ∈ f(x) + F(x), with a C^1 function f and a set-valued map F acting in Banach spaces, is locally ...
Dontchev, Asen
core

