Results 1 to 10 of about 7,060
On Shor's r-Algorithm for Problems with Constraints
Nonsmooth optimization problems arise in a wide range of applications, including engineering, finance, and deep learning, where activation functions such as ReLU have discontinuous derivatives.
Vladimir Norkin, Anton Kozyriev
doaj +1 more source
Quaternions appear in many practical fields, such as image processing and data mining. This paper focuses on designing an efficient quaternion-valued neurodynamic approach (QNA) based on multi-agent systems to solve nonsmooth convex ...
Guocheng Li +3 more
doaj +1 more source
Use of the Shor’s r-Algorithm in Linear Robust Optimization Problems
The paper is devoted to the description of a new approach to the construction of algorithms for solving linear programming problems (LP-problems), in which the number of constraints is much greater than the number of variables.
P. Stetsyuk +3 more
doaj +1 more source
GNU Octave and Python Implementation of Shor's r-Algorithm with Adaptive Step Control
r-algorithms, subgradient methods with dilation of space in the direction of the difference of two successive subgradients, were proposed by N. Z. Shor in 1970 in his doctoral thesis. The respective software implementations proved competitive with the
Petro Stetsyuk +2 more
doaj +1 more source
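The space-dilation update described in this entry can be sketched in a few lines. The following is a minimal illustrative Python sketch, not the paper's adaptive step control: the step size `h`, decay factor `q`, and dilation coefficient `alpha` are assumed tuning choices.

```python
import numpy as np

def r_algorithm(f, subgrad, x0, h=1.0, alpha=3.0, q=0.95, iters=200):
    """Sketch of Shor's r-algorithm: a subgradient method with space
    dilation in the direction of the difference of two successive
    subgradients. h, q, alpha are illustrative tuning choices."""
    n = len(x0)
    B = np.eye(n)                  # space-transformation matrix
    x = np.array(x0, dtype=float)
    g = subgrad(x)
    for _ in range(iters):
        p = B.T @ g                # subgradient in the transformed space
        nrm = np.linalg.norm(p)
        if nrm < 1e-12:
            break
        x = x - h * (B @ p) / nrm  # step along the transformed direction
        g_new = subgrad(x)
        r = B.T @ (g_new - g)      # dilation direction: subgradient difference
        rn = np.linalg.norm(r)
        if rn > 1e-12:
            xi = r / rn
            # dilate the space with coefficient alpha along xi
            B = B @ (np.eye(n) + (1.0 / alpha - 1.0) * np.outer(xi, xi))
        g = g_new
        h *= q                     # simple geometric step decay
    return x

# toy nonsmooth problem: f(x) = |x1 - 1| + 2|x2 + 0.5|
f = lambda x: abs(x[0] - 1) + 2 * abs(x[1] + 0.5)
sg = lambda x: np.array([np.sign(x[0] - 1), 2 * np.sign(x[1] + 0.5)])
x_star = r_algorithm(f, sg, [5.0, 5.0])
```

Because dilation only triggers when the subgradient changes, the method behaves like plain normalized subgradient descent on smooth stretches and contracts the metric across kinks, which is what makes it effective on ravine-like nonsmooth objectives.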
Gradient-based Regularization Parameter Selection for Problems With Nonsmooth Penalty Functions [PDF]
In high-dimensional and/or non-parametric regression problems, regularization (or penalization) is used to control model complexity and induce desired structure. Each penalty has a weight parameter that indicates how strongly the structure corresponding to that penalty should be enforced.
Feng, Jean, Simon, Noah
openaire +2 more sources
An Infeasible Bundle Method for Nonsmooth Convex Constrained Optimization without a Penalty Function or a Filter [PDF]
From the abstract: In this paper, we demonstrate that the use of a parametrized penalty function in nonsmooth convex optimization can be avoided without using the relatively complex filter methods. We propose an approach which appears to be more direct and easier to implement, in the sense that it is closer in the spirit and structure to the well ...
Sagastizábal, Claudia, Solodov, Mikhail
openaire +2 more sources
Generalized Nonconvex Nonsmooth Low-Rank Minimization [PDF]
As surrogate functions of $L_0$-norm, many nonconvex penalty functions have been proposed to enhance the sparse vector recovery. It is easy to extend these nonconvex penalty functions on singular values of a matrix to enhance low-rank matrix recovery ...
Lin, Zhouchen +3 more
core +2 more sources
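For context on the convex baseline these nonconvex penalties generalize: soft-thresholding the singular values is the proximal operator of the nuclear norm, the matrix analogue of the L1 penalty. A minimal sketch (the threshold `tau` is assumed given):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: prox of the nuclear norm.
    Shrinks each singular value by tau and clips at zero."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# a rank-2 matrix whose smaller singular value is killed by the threshold
M_low = svt(np.diag([3.0, 1.0]), 2.0)
```

Nonconvex surrogates replace the uniform shrinkage `s - tau` with a value-dependent shrinkage that penalizes large singular values less, reducing the bias of the nuclear norm.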
Penalty-free method for nonsmooth constrained optimization via radial basis functions
Summary: We consider a general class of nonlinear constrained optimization problems in which derivatives of the objective function and constraints are unavailable. This property can often impede the performance of optimization algorithms. Most algorithms determine a quasi-Newton direction and then use line-search techniques.
Rahmanpour, Fardin +2 more
openaire +1 more source
A New Unified Path to Smoothing Nonsmooth Exact Penalty Function for the Constrained Optimization
In this paper we propose a new unified path to approximately smooth the nonsmooth exact penalty function. Based on the new smooth penalty function, we give a penalty algorithm for the constrained optimization problem and discuss its convergence under mild conditions.
openaire +2 more sources
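The nonsmoothness of an exact penalty comes from the plus function max(0, t) applied to constraint violations. One standard smoothing of it is shown below; this is an illustrative choice, not necessarily the paper's unified path, and the names `rho` and `eps` are assumptions.

```python
import numpy as np

def smoothed_plus(t, eps=1e-2):
    # smooth approximation of max(0, t); pointwise error is at most eps/2
    return 0.5 * (t + np.sqrt(t * t + eps * eps))

def smoothed_exact_penalty(f, g, rho, eps=1e-2):
    """Smoothed exact penalty for min f(x) s.t. g(x) <= 0.
    rho is the penalty weight; eps the smoothing parameter."""
    return lambda x: f(x) + rho * smoothed_plus(g(x), eps)

# feasible point: penalty term is nearly zero, P(x) ~ f(x)
P = smoothed_exact_penalty(lambda x: x * x, lambda x: 1.0 - x, rho=10.0)
val = P(2.0)
```

Driving `eps` to zero recovers the exact penalty; the convergence analysis in such papers typically couples the smoothing parameter to the penalty weight.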
Shape Parameter Estimation [PDF]
Performance of machine learning approaches depends strongly on the choice of misfit penalty, and correct choice of penalty parameters, such as the threshold of the Huber function.
Aravkin, Aleksandr Y. +2 more
core +3 more sources
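The Huber misfit referenced here is quadratic for small residuals and linear beyond the threshold; the threshold is the shape parameter whose estimation the paper addresses. A minimal sketch (the parameter name `delta` is a common convention, assumed here):

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber misfit: 0.5*r^2 for |r| <= delta, linear growth beyond.
    delta is the threshold (shape) parameter."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r * r, delta * (a - 0.5 * delta))
```

The two branches meet with matching value and slope at |r| = delta, so the penalty is C1, which is what makes gradient-based selection of delta feasible.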

