Results 1 to 10 of about 147,801
A New Sparse Quasi-Newton Update Method
Based on the idea of maximum determinant positive definite matrix completion, Yamashita proposed a sparse quasi-Newton update, called MCQN, for unconstrained optimization problems with sparse Hessian structures.
Minghou Cheng, Yu-Hong Dai, Rui Diao
doaj +2 more sources
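For orientation, the standard setting behind this line of work, in textbook quasi-Newton notation (a sketch only; the symbols and the completion step below are the generic formulation, not copied from the paper):

```latex
% With s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k),
% the updated approximation must satisfy the secant condition
\[
  B_{k+1}\, s_k = y_k .
\]
% Matrix-completion idea (sketch): prescribe entries of B_{k+1} only on the
% known Hessian sparsity pattern E, and fill in the remaining entries via
\[
  \max_{B \succ 0} \ \det B
  \quad \text{s.t.} \quad B_{ij} = (B_{k+1})_{ij} \ \text{for } (i,j) \in E .
\]
```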
Faster Stochastic Quasi-Newton Methods [PDF]
Stochastic optimization methods have become a popular class of optimization tools in machine learning. In particular, stochastic gradient descent (SGD) has been widely used for machine learning problems such as training neural networks due to its low per-iteration computational complexity.
Qingsong Zhang +3 more
openaire +3 more sources
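Since the snippet turns on SGD's low per-iteration cost, a minimal self-contained SGD loop in Python may help; the toy objective, data, and step size are illustrative assumptions, not from the paper:

```python
import numpy as np

def sgd(grad_fn, x0, data, lr=0.01, epochs=50, seed=0):
    """Minimal SGD: one sample's gradient per step, so the per-iteration
    cost is independent of the dataset size."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            x -= lr * grad_fn(x, data[i])  # cheap stochastic gradient step
    return x

# Toy least-squares problem: minimize the average of (a^T x - b)^2.
data = [(np.array([1.0, 2.0]), 3.0), (np.array([2.0, 1.0]), 3.0)]
grad = lambda x, sample: 2.0 * (sample[0] @ x - sample[1]) * sample[0]
print(sgd(grad, np.zeros(2), data))  # approaches x = (1, 1)
```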
Asynchronous parallel stochastic Quasi-Newton methods [PDF]
Although first-order stochastic algorithms, such as stochastic gradient descent, have been the main force behind scaling up machine learning models such as deep neural nets, second-order quasi-Newton methods are starting to draw attention due to their effectiveness in dealing with ill-conditioned optimization problems.
Tong, Qianqian +4 more
openaire +3 more sources
Studies on modified limited-memory BFGS method in full waveform inversion
Full waveform inversion (FWI) is a non-linear optimization problem based on full-wavefield modeling that obtains quantitative information about subsurface structure by minimizing the difference between the observed seismic data and the predicted wavefield. ...
Meng-Xue Dai +3 more
doaj +1 more source
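For context, the standard L-BFGS two-loop recursion that limited-memory BFGS variants build on (generic textbook form; whatever modification the paper makes for FWI is not reproduced here):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: returns the search direction -H_k @ grad using only
    the m most recent curvature pairs (s_i, y_i) instead of a dense Hessian."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:  # initial scaling H_0 = gamma * I, gamma = s^T y / y^T y
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

# Sanity check: with a single pair consistent with the identity Hessian,
# the direction reduces to plain steepest descent, -grad.
e1 = np.array([1.0, 0.0])
print(lbfgs_direction(np.array([1.0, -2.0]), [e1], [e1]))  # [-1.  2.]
```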
Partial Davidon, Fletcher and Powell (DFP) of quasi newton method for unconstrained optimization
Nonlinear quasi-Newton methods are widely used in unconstrained optimization. In this paper, we develop a new quasi-Newton method for solving unconstrained optimization problems.
Basheer M. Salih +2 more
doaj +1 more source
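For reference, the classical (full) DFP update that a "partial" variant would modify, in its standard inverse-Hessian form (textbook formula; the paper's partial version is not reproduced here):

```latex
% DFP update of the inverse-Hessian approximation H_k, with
% s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k):
\[
  H_{k+1}^{\mathrm{DFP}}
  = H_k
  - \frac{H_k y_k y_k^{\top} H_k}{y_k^{\top} H_k y_k}
  + \frac{s_k s_k^{\top}}{y_k^{\top} s_k} .
\]
```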
A Combined Conjugate Gradient Quasi-Newton Method with Modification BFGS Formula
The conjugate gradient and quasi-Newton methods each have advantages and drawbacks: although quasi-Newton algorithms converge more rapidly than conjugate gradient methods, they require more storage than conjugate gradient algorithms.
Mardeen Sh. Taher, Salah G. Shareef
doaj +1 more source
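The classical BFGS formula that such modifications start from, in its direct (Hessian-approximation) form (textbook version; the paper's modified formula is not shown here):

```latex
% BFGS update of the Hessian approximation B_k:
\[
  B_{k+1}^{\mathrm{BFGS}}
  = B_k
  - \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
  + \frac{y_k y_k^{\top}}{y_k^{\top} s_k} .
\]
```

Storing B_k costs O(n^2) memory, which is the storage drawback the snippet contrasts with conjugate gradient methods, whose iterations need only a handful of n-vectors.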
Partial Pearson-two (PP2) of quasi newton method for unconstrained optimization
In this paper, we develop a new quasi-Newton method for solving unconstrained optimization problems. Nonlinear quasi-Newton methods are widely used in unconstrained optimization [1].
Basheer M. Salih +2 more
doaj +1 more source
Two Modified QN-Algorithms for Solving Unconstrained Optimization Problems [PDF]
This paper presents two modified quasi-Newton algorithms designed for solving nonlinear unconstrained optimization problems. These algorithms are based on different techniques, namely quasi-Newton conditions on quadratic and non-quadratic ...
Abbas Al-Bayati, Basim Hassan
doaj +1 more source
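The quasi-Newton condition the snippet refers to follows from a quadratic model of the objective; a one-line derivation in standard notation (generic, not specific to this paper):

```latex
% Quadratic model around x_{k+1}:
%   m(x) = f(x_{k+1}) + g_{k+1}^T (x - x_{k+1})
%        + (1/2)(x - x_{k+1})^T B_{k+1} (x - x_{k+1}).
% Requiring \nabla m(x_k) = g_k yields the quasi-Newton (secant) condition
\[
  B_{k+1}(x_{k+1} - x_k) = g_{k+1} - g_k
  \quad\Longleftrightarrow\quad
  B_{k+1} s_k = y_k .
\]
```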
Quasi-Newton-based nonlinear finite element methods were extensively studied in the 1970s and 1980s. However, they have almost disappeared because their convergence is poorer than that of the Newton-Raphson method.
Yasunori YUSA +2 more
doaj +1 more source
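To make the comparison concrete, a minimal sketch of the two iteration styles for a nonlinear system R(u) = 0 (a generic illustration, not the paper's formulation): Newton-Raphson assembles and factorizes a fresh tangent each iteration, while the cheapest quasi-Newton flavor freezes the initial tangent and trades convergence rate for per-iteration cost.

```python
import numpy as np

def newton_raphson(residual, tangent, u, iters=20, tol=1e-10):
    """Newton-Raphson: solve with a freshly assembled tangent K(u) each step."""
    for _ in range(iters):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        u = u - np.linalg.solve(tangent(u), r)
    return u

def frozen_tangent(residual, tangent, u, iters=50, tol=1e-10):
    """Simplest quasi-Newton flavor: reuse the initial tangent K(u_0).
    Each iteration is cheaper, but convergence drops from quadratic to linear."""
    K0 = tangent(u)
    for _ in range(iters):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        u = u - np.linalg.solve(K0, r)
    return u

# Toy system: R(u) = u + 0.1*u**3 - 1 componentwise, K = dR/du.
R = lambda u: u + 0.1 * u**3 - 1.0
K = lambda u: np.diag(1.0 + 0.3 * u**2)
print(newton_raphson(R, K, np.zeros(2)))
print(frozen_tangent(R, K, np.zeros(2)))
```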
Continual Learning with Quasi-Newton Methods [PDF]
In this paper, we propose CSQN, a new Continual Learning (CL) method that uses Quasi-Newton methods, more specifically Sampled Quasi-Newton methods, to extend EWC. EWC uses a Bayesian framework to estimate which parameters are important to previous tasks, and it penalizes changes made to those parameters.
Steven Vander Eeckt, Hugo Van Hamme
openaire +3 more sources
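For context, the EWC penalty that CSQN extends is usually written as a diagonal quadratic regularizer (the well-known EWC form; how CSQN replaces this curvature estimate with a sampled quasi-Newton one is the paper's contribution and is not reproduced here):

```latex
% EWC loss on a new task B, anchored at the previous task's optimum \theta^*_A:
\[
  \mathcal{L}(\theta)
  = \mathcal{L}_B(\theta)
  + \frac{\lambda}{2} \sum_i F_i \,\bigl(\theta_i - \theta^*_{A,i}\bigr)^2 ,
\]
% where F_i is the diagonal Fisher information, estimating how important
% parameter i was to the previous task.
```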

