In this work, a new class of spectral conjugate gradient (CG) methods is proposed for solving unconstrained optimization models. The search direction of the new method uses the ZPRP and JYJLL CG coefficients.
Fevi Novkaniza+3 more
The exactness of the ℓ1 penalty function for a class of mathematical programs with generalized complementarity constraints
In a mathematical program with generalized complementarity constraints (MPGCC), a complementarity relation is imposed between each pair of variable blocks.
Yukuan Hu, Xin Liu
The Bregman Proximal Average
We provide a proximal average with respect to a 1-coercive Legendre function. In the sense of Bregman distance, the Bregman envelope of the proximal average is a convex combination of Bregman envelopes of individual functions. The Bregman proximal mapping ...
Xianfu Wang, Heinz H. Bauschke
In this paper, a new class of nonconvex nonsmooth multiobjective programming problems with both inequality and equality constraints defined in a real Banach space is considered.
T. Antczak, R. Verma
KT-E-invexity in E-differentiable vector optimization problems
In this paper, a new concept of generalized convexity is introduced for E-differentiable vector optimization problems. Namely, the concept of KT-E-invexity is defined for (not necessarily) differentiable vector optimization problems in which the ...
Najeeb Abdulaleem
An improved Jaya optimization algorithm with ring topology and population size reduction
An improved variant of the Jaya optimization algorithm, called Jaya2, is proposed to enhance the performance of the original Jaya without sacrificing its algorithmic design. The proposed approach arranges the solutions in a ring topology to reduce the likelihood ...
Omran Mahamed G. H., Iacca Giovanni
Robust Optimization (RO) arises as a two-stage optimization: a first level maximizing over the uncertain data and a second level minimizing over the feasible set.
M. Barro+3 more
Solving polyhedral d.c. optimization problems via concave minimization
The problem of minimizing the difference of two convex functions is called a polyhedral d.c. optimization problem if at least one of the two component functions is polyhedral.
Dahl, Simeon vom, Löhne, Andreas
A new conjugate gradient method for acceleration of gradient descent algorithms
An accelerated version of the steepest descent method for solving unconstrained optimization problems is presented, which proposes a fundamentally different conjugate gradient method in which the well-known parameter βk is computed by a new formula.
Rahali Noureddine+2 more
Outcome space range reduction method for global optimization of sum of affine ratios problem
Many algorithms for globally solving the sum of affine ratios problem (SAR) are based on an equivalent problem and a branch-and-bound framework. Since the exhaustiveness of the branching rule leads to a significant increase in the computational burden for solving the ...
Jiao Hongwei+3 more