Results 41 to 50 of about 118,678

Multiobjective Codesign Optimization of a Planar Pneumatic Artificial Muscle‐Based Snake‐Like Robot for Enhanced Agility and Energy Efficiency

open access: yes, Advanced Robotics Research, EarlyView.
A codesign multiobjective optimization framework was developed to enhance the morphology and controller of a snake‐like robot driven by artificial muscles. It improved planar locomotion, agility, and power efficiency. The approach optimized link geometry and controller gains, revealing that shorter muscles near joints and longer linkages maximize ...
Ayla Valles, Mahdi Haghshenas‐Jaryani
wiley   +1 more source

Some diagonal preconditioners for limited memory quasi-Newton method for large scale optimization [PDF]

open access: yes, 2013
One of the well-known methods in solving large scale unconstrained optimization is limited memory quasi-Newton (LMQN) method. This method is derived from a subproblem in low dimension so that the storage requirement as well as the computation cost can be ...
Abu Hassan, Malik   +3 more
core  

IMRO: a proximal quasi-Newton method for solving $l_1$-regularized least square problem

open access: yes, 2017
We present a proximal quasi-Newton method in which the approximation of the Hessian has the special format of "identity minus rank one" (IMRO) in each iteration. The proposed structure enables us to effectively recover the proximal point.
Karimi, Sahar, Vavasis, Stephen
core   +1 more source

Do optimization methods in deep learning applications matter? [PDF]

open access: yes, 2020
With advances in deep learning, exponential data growth and increasing model complexity, developing efficient optimization methods is attracting much research attention.
Kiran, Mariam, Ozyildirim, Buse Melis
core   +1 more source

Fractal basins of convergence of libration points in the planar Copenhagen problem with a repulsive quasi-homogeneous Manev-type potential

open access: yes, 2018
The Newton-Raphson basins of convergence, corresponding to the coplanar libration points (which act as attractors), are unveiled in the Copenhagen problem, where instead of the Newtonian potential and forces, a quasi-homogeneous potential created by two ...
Aggarwal, Rajiv   +4 more
core   +1 more source

Some results on affine Deligne-Lusztig varieties

open access: yes, 2018
The study of affine Deligne-Lusztig varieties originally arose from arithmetic geometry, but many problems on affine Deligne-Lusztig varieties are purely Lie-theoretic in nature.
He, Xuhua
core   +1 more source

A quasi-Newton proximal splitting method [PDF]

open access: yes, 2012
A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piece-wise ...
Becker, Stephen, Fadili, M. Jalal
core   +4 more sources

Improved New Two-Spectral Conjugate Gradient Iterative Technique for Large Scale Optimization

open access: yes, Mağallaẗ Al-kūfaẗ Al-handasiyyaẗ
Numerous strategies have been proposed in the field of unconstrained optimization to address various optimization challenges, particularly those associated with large-scale systems. Among the classical methods, Newton and quasi-Newton approaches are well-known ...
Radhwan Basem Thanoon   +1 more
doaj   +1 more source

Multipoint secant and interpolation methods with nonmonotone line search for solving systems of nonlinear equations

open access: yes, 2017
Multipoint secant and interpolation methods are effective tools for solving systems of nonlinear equations. They use quasi-Newton updates for approximating the Jacobian matrix.
Burdakov, Oleg, Kamandi, Ahmad
core   +1 more source

A Combined Quasi-Newton-Type Method Using 4th-Order Directional Derivatives for Solving Degenerate Problems of Unconstrained Optimization

open access: yes, Cybernetics and Computer Technologies
Methods of unconstrained optimization play a significant role in machine learning [1–6]. When solving practical machine learning problems, such as tuning nonlinear regression models, the extremum point of the chosen optimality criterion is often degenerate, which significantly complicates the search for it.
openaire   +2 more sources
