Results 11 to 20 of about 39,793 (262)
Matrix Transformations and Quasi-Newton Methods [PDF]
We first recall some properties of infinite tridiagonal matrices considered as matrix transformations in sequence spaces of the forms sξ, sξ∘, sξ(c), or lp(ξ).
Boubakeur Benahmed +2 more
doaj +4 more sources
Asynchronous Parallel Stochastic Quasi-Newton Methods. [PDF]
Although first-order stochastic algorithms, such as stochastic gradient descent, have been the main force to scale up machine learning models, such as deep neural nets, the second-order quasi-Newton methods start to draw attention due to their effectiveness in dealing with ill-conditioned optimization problems.
Tong Q, Liang G, Cai X, Zhu C, Bi J.
europepmc +5 more sources
Faster Stochastic Quasi-Newton Methods [PDF]
Stochastic optimization methods have become a class of popular optimization tools in machine learning. Especially, stochastic gradient descent (SGD) has been widely used for machine learning problems such as training neural networks due to low per-iteration computational complexity.
Qingsong Zhang +3 more
openaire +3 more sources
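The low per-iteration cost of SGD that this abstract refers to can be illustrated with a minimal sketch (an illustrative toy example with an assumed quadratic loss, not the algorithm from the cited paper): each step evaluates the gradient of only one sampled data term, so the cost per iteration is O(1) in the dataset size.

```python
import random

def sgd(data, x0=0.0, n_steps=500, seed=0):
    """Minimal SGD for f(x) = (1/2n) * sum_i (x - d_i)^2.

    Each step uses the gradient of a single sampled point,
    so the per-iteration cost is O(1) in len(data).
    """
    rng = random.Random(seed)
    x = x0
    for k in range(n_steps):
        d = rng.choice(data)   # one stochastic sample
        lr = 1.0 / (k + 1)     # standard decaying step size
        x -= lr * (x - d)      # per-sample gradient: x - d_i
    return x

est = sgd([1.0, 2.0, 3.0, 6.0])  # minimizer of f is the mean, 3.0
```

With the 1/(k+1) step size this scalar recursion is exactly a running average of the sampled points, so the iterate drifts toward the mean with noise shrinking as more samples are seen.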
Self-Scaling Variable Metric in Constrained Optimization [PDF]
In this paper, we investigate a new self-scaling technique that combines a quasi-Newton method with a conjugate gradient method. The new algorithm satisfies a quasi-Newton condition and mutual conjugacy, and its efficiency is demonstrated in practice when compared with the well ...
Eman Hamed, Marwa Hamad
doaj +1 more source
A New Globally Convergent Self-Scaling Vm Algorithm for Convex and Nonconvex Optimization [PDF]
In unconstrained optimization, the original quasi-Newton condition is $B_{k+1}s_k = y_k$, where $y_k = g_{k+1} - g_k$ is the difference of the gradients at two successive iterations. Li and Fukushima proposed a modified BFGS method based on a new quasi-Newton equation $B_{k+1}s_k = \bar{y}_k$, where $\bar{y}_k$ is a ...
Abbas Y. AL-Bayati, Basim A. Hassan
doaj +1 more source
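For reference, the secant (quasi-Newton) condition mentioned in the abstract above is enforced by the textbook BFGS update (standard notation, not taken from the cited paper):

```latex
B_{k+1} \;=\; B_k \;-\; \frac{B_k s_k s_k^{\top} B_k}{s_k^{\top} B_k s_k}
\;+\; \frac{y_k y_k^{\top}}{y_k^{\top} s_k},
\qquad s_k = x_{k+1} - x_k,\quad y_k = g_{k+1} - g_k.
```

Multiplying through by $s_k$, the second term contributes $-B_k s_k$ and the third contributes $y_k$, so one checks directly that $B_{k+1} s_k = y_k$. Modified methods of the Li–Fukushima type replace $y_k$ by a corrected vector $\bar{y}_k$ in both the condition and the update.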
A New Theoretical Result for Quasi-Newton Formulae for Unconstrained Optimization [PDF]
The recent measure function of Byrd and Nocedal [3] is considered and simple proofs of some of its properties are given. It is then shown that the AL-Bayati (1991) formulae satisfy a least-change property with respect to this new measure. The new formula ...
Basim Hassan
doaj +1 more source
Quasi-Newton particle Metropolis-Hastings [PDF]
Particle Metropolis-Hastings enables Bayesian parameter inference in general nonlinear state space models (SSMs). However, in many implementations a random walk proposal is used and this can result in poor mixing if not tuned correctly using tedious ...
Dahlin, Johan +2 more
core +1 more source
Deep Neural Networks Training by Stochastic Quasi-Newton Trust-Region Methods
While first-order methods are popular for solving optimization problems arising in deep learning, they come with some acute deficiencies. To overcome these shortcomings, there has been recent interest in introducing second-order information through quasi-Newton ...
Mahsa Yousefi, Ángeles Martínez
doaj +1 more source
Continual Learning with Quasi-Newton Methods [PDF]
In this paper, we propose CSQN, a new Continual Learning (CL) method which considers Quasi-Newton methods, more specifically, Sampled Quasi-Newton methods, to extend EWC. EWC uses a Bayesian framework to estimate which parameters are important to previous tasks, and it penalizes changes made to these parameters.
Steven Vander Eeckt, Hugo Van Hamme
openaire +3 more sources
On optimal solution error covariances in variational data assimilation problems [PDF]
The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find unknown parameters such as distributed model coefficients or boundary conditions. The equation for the optimal solution error
Scottish Funding Council via GRPE (Funder) +3 more
core +3 more sources

