Results 11 to 20 of about 7,060
A family of global convergent inexact secant methods for nonconvex constrained optimization
We present a family of new inexact secant methods, combined with an Armijo line-search technique, for solving nonconvex constrained optimization problems. Unlike existing inexact secant methods, the algorithms proposed in this paper need not compute ...
Zhujun Wang, Li Cai, Zheng Peng
doaj +1 more source
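The Armijo technique named in the abstract above is a standard backtracking line search: start from a trial step and shrink it until a sufficient-decrease condition holds. A minimal generic sketch (illustrative only, not the authors' method; `armijo_step` and its default constants are hypothetical choices):

```python
def armijo_step(f, grad_f, x, d, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Backtracking (Armijo) line search along a descent direction d:
    shrink alpha until f(x + alpha*d) <= f(x) + sigma*alpha*<grad f(x), d>."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad_f(x), d))  # directional derivative
    assert slope < 0, "d must be a descent direction"
    alpha = alpha0
    while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + sigma * alpha * slope:
        alpha *= beta  # sufficient decrease failed: backtrack
    return alpha
```

For f(x) = x² at x = 1 with d = -2, the full step alpha = 1 overshoots to f(-1) = f(1), so the search halves it once and accepts alpha = 0.5.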
Nonconvex Penalized Regularization for Robust Sparse Recovery in the Presence of Impulsive Noise
Nonconvex penalties have recently received considerable attention in sparse recovery based on Gaussian assumptions. However, many sparse recovery problems occur in the presence of impulsive noise. This paper is concerned with the analysis and comparison ...
Yunyi Li +5 more
doaj +1 more source
A Two-stage Method for Inverse Medium Scattering [PDF]
We present a novel numerical method for the time-harmonic inverse medium scattering problem of recovering the refractive index from near-field scattered data.
Bakushinsky +30 more
core +1 more source
A UV-Method for a Class of Constrained Minimized Problems of Maximum Eigenvalue Functions
In this paper, we apply the UV-algorithm to solve the constrained minimization problem of a maximum eigenvalue function which is the composite function of an affine matrix-valued mapping and its maximum eigenvalue.
Wei Wang +3 more
doaj +1 more source
In this paper, the convex nonsmooth optimization problem with a fuzzy objective function and both inequality and equality constraints is considered. The Karush–Kuhn–Tucker necessary optimality conditions are proved for such a nonsmooth extremum problem. Further, the exact $l_1$ ...
openaire +1 more source
Lagrange optimality system for a class of nonsmooth convex optimization [PDF]
In this paper, we revisit the augmented Lagrangian method for a class of nonsmooth convex optimization. We present the Lagrange optimality system of the augmented Lagrangian associated with the problems, and establish its connections with the standard ...
Jin, Bangti, Takeuchi, Tomoya
core +2 more sources
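For context, the classical augmented Lagrangian (method of multipliers) that this entry revisits alternates an inner minimization of L(x, lam) = f(x) + lam·c(x) + (rho/2)·c(x)² with the multiplier update lam ← lam + rho·c(x). A minimal smooth scalar sketch under those textbook assumptions (not the paper's nonsmooth variant; function names, step sizes, and iteration counts are hypothetical):

```python
def augmented_lagrangian(f_grad, c, c_grad, x, lam=0.0, rho=10.0,
                         outer=50, inner=200, lr=0.01):
    """Method of multipliers for min f(x) s.t. c(x) = 0 (scalar x).
    Inner loop: gradient descent on L(x) = f(x) + lam*c(x) + rho/2*c(x)^2.
    Outer loop: multiplier update lam <- lam + rho*c(x)."""
    for _ in range(outer):
        for _ in range(inner):
            gL = f_grad(x) + (lam + rho * c(x)) * c_grad(x)  # dL/dx
            x -= lr * gL
        lam += rho * c(x)  # dual ascent step on the multiplier
    return x, lam
```

On min x² s.t. x = 1 this converges to the constrained minimizer x* = 1 with multiplier lam* = -2, without sending rho to infinity.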
We introduce Bella, a locally superlinearly convergent Bregman forward-backward splitting method for minimizing the sum of two nonconvex functions, one of which satisfies a relative smoothness condition while the other may be nonsmooth.
Ahookhosh, Masoud +2 more
core +1 more source
A unifying theory of exactness of linear penalty functions II: parametric penalty functions
In this article we develop a general theory of exact parametric penalty functions for constrained optimization problems. The main advantage of the method of parametric penalty functions is the fact that a parametric penalty function can be both smooth ...
Dolgopolik, M. V.
core +1 more source
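The exactness property discussed in this entry means that, for a sufficiently large but finite penalty parameter, minimizing the penalized objective recovers the constrained minimizer exactly. A toy illustration with a linear (l1-type) penalty, under hypothetical names and constants:

```python
def exact_penalty(x, c=10.0):
    """Exact nonsmooth penalty for min x^2 s.t. x >= 1, i.e. g(x) = 1 - x <= 0:
    P(x) = x^2 + c * max(0, 1 - x).
    For c larger than the optimal multiplier (here lambda* = 2), the
    unconstrained minimizer of P coincides with the constrained one, x* = 1."""
    return x ** 2 + c * max(0.0, 1.0 - x)
```

Because the penalty term is nonsmooth, exactness holds for any finite c > lambda* = 2, unlike the quadratic penalty, which recovers the solution only in the limit c → ∞.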
Theoretical and practical convergence of a self-adaptive penalty algorithm for constrained global optimization [PDF]
This paper proposes a self-adaptive penalty function and presents a penalty-based algorithm for solving nonsmooth and nonconvex constrained optimization problems.
A Baykasoğlu +34 more
core +1 more source
Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed l1/l2 Regularization [PDF]
The l1/l2 ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works in the context of blind deconvolution. Indeed, it benefits from a scale-invariance property that is highly desirable in the blind context.
Chouzenoux, Emilie +4 more
core +3 more sources
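The l1/l2 ratio is scale invariant because scaling x multiplies numerator and denominator alike; smoothing makes the ratio differentiable at the origin at the cost of only approximate invariance. A sketch of one common smoothing (the constants and exact form here are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def smoothed_l1_over_l2(x, alpha=1e-3, beta=1e-3):
    """Smoothed l1/l2 ratio: sum_i sqrt(x_i^2 + alpha^2) over
    sqrt(||x||_2^2 + beta^2). Scale invariance holds approximately
    when alpha, beta are small relative to the entries of x."""
    num = np.sum(np.sqrt(x ** 2 + alpha ** 2))   # smoothed l1 norm
    den = np.sqrt(np.sum(x ** 2) + beta ** 2)    # smoothed l2 norm
    return num / den
```

Sparse vectors attain a smaller ratio than dense vectors of equal energy (l1/l2 ranges from 1 for a 1-sparse vector to sqrt(n) for a flat one), which is why minimizing this functional promotes sparsity.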