Stability, convergence, and robustness of deterministic multivariable self-tuning control
Self-tuning control is an important approach to intelligent control system design because it uses online parameter estimation (or learning) to derive a model of the plant, and as a result of model parameter estimation (or ...
ZHAO Li, ZHANG Wei-cun, CHU Tian-guang
doaj +1 more source
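As a loose illustration of the online parameter estimation that a self-tuning controller relies on, and assuming a simple linear regression model with made-up parameter values rather than anything from this paper, a recursive least squares sketch in Python might look like:

# Loose illustration only (not the paper's method): recursive least squares,
# the kind of online parameter estimation a self-tuning controller uses to
# identify a plant model from input/output data.
import numpy as np

rng = np.random.default_rng(1)
theta_true = np.array([1.5, -0.7])      # hypothetical plant parameters
theta_hat = np.zeros(2)                 # online estimate
P = 1000.0 * np.eye(2)                  # large initial covariance
lam = 0.99                              # forgetting factor

for t in range(500):
    phi = rng.normal(size=2)            # regressor built from past I/O data
    y = phi @ theta_true + 0.01 * rng.normal()   # noisy measurement
    K = P @ phi / (lam + phi @ P @ phi)          # update gain
    theta_hat = theta_hat + K * (y - phi @ theta_hat)
    P = (P - np.outer(K, phi @ P)) / lam         # covariance update

print(theta_hat)                        # approaches [1.5, -0.7]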
Characterizing Stability Properties in Games with Strategic Substitutes [PDF]
In games with strategic substitutes (GSS), convergence of the best response dynamic starting from the inf (or sup) of the strategy space is equivalent to global stability (convergence of every adaptive dynamic to the same pure strategy Nash equilibrium).
Sunanda Roy, Tarun Sabarwal
core
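A minimal sketch of a best response dynamic, assuming a textbook Cournot duopoly (a standard game with strategic substitutes) with made-up demand and cost numbers, not an example taken from the paper:

# Illustrative sketch only: simultaneous best response dynamic in a Cournot
# duopoly with linear demand p = a - (q1 + q2) and unit cost c. Starting from
# the inf of the strategy space (0, 0), the even and odd iterates bracket and
# converge to the Nash equilibrium quantity (a - c) / 3.
a, c = 10.0, 1.0                       # hypothetical demand intercept and unit cost

def best_response(q_other):
    # BR_i(q_j) = argmax over q_i of q_i * (a - q_i - q_j) - c * q_i
    return max(0.0, (a - c - q_other) / 2.0)

q1, q2 = 0.0, 0.0                      # start from the inf of the strategy space
for _ in range(50):
    q1, q2 = best_response(q2), best_response(q1)

print(q1, q2, (a - c) / 3.0)           # both quantities approach 3.0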
This paper proposes two projector‐based Hopfield neural network (HNN) estimators for online, constrained parameter estimation under time‐varying data, additive disturbances, and slowly drifting physical parameters. The first is a constraint‐aware HNN that enforces linear equalities and inequalities (via slack neurons) and continuously tracks the ...
Miguel Pedro Silva
wiley +1 more source
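The paper's HNN estimators are not reproduced here; as a rough analogue only, a projected gradient (LMS-style) update that keeps a parameter estimate inside assumed box constraints could look like:

# Rough analogue only (not the paper's projector-based HNN): online
# least-squares parameter estimation with a projection step that enforces
# simple box constraints on the estimate.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -0.5])                 # hypothetical physical parameters
theta_hat = np.zeros(2)
lower, upper = np.array([0.0, -1.0]), np.array([3.0, 0.0])   # assumed constraint box
lr = 0.05

for t in range(2000):
    phi = rng.normal(size=2)                        # time-varying regressor
    y = phi @ theta_true + 0.01 * rng.normal()      # measurement with small disturbance
    err = phi @ theta_hat - y
    theta_hat -= lr * err * phi                     # gradient step on 0.5 * err**2
    theta_hat = np.clip(theta_hat, lower, upper)    # projection onto the constraint set

print(theta_hat)                                    # approaches [2.0, -0.5] within the box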
Generic Stability and Modes of Convergence
We expand the study of generic stability in three different directions. Generic stability is best understood as a property of types in NIP theories in classical logic. In this article, we attempt to generalize our understanding to Keisler measures instead of types, arbitrary theories instead of ...
openaire +2 more sources
Current Tracking Adaptive Control of Brushless DC Motors
In this paper, current tracking for brushless direct current (BLDC) motors is addressed under uncertainty in the parameters of the motor's model. An adaptive control scheme that compensates for electrical parameter uncertainty is proposed without requiring any knowledge of the mechanical parameters.
Fernanda Ramos‐García +3 more
wiley +1 more source
On the Stability and Convergence of Stochastic Gradient Descent with Momentum
While momentum-based methods, in conjunction with stochastic gradient descent, are widely used when training machine learning models, there is little theoretical understanding of the generalization error of such methods.
Khisti, Ashish +2 more
core
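For reference, a minimal heavy-ball (momentum) SGD loop on a toy one-dimensional quadratic; the step size, momentum coefficient, and noise level below are arbitrary choices, not values from the paper:

# Minimal sketch of SGD with heavy-ball momentum on f(w) = w**2.
import random

w, v = 5.0, 0.0          # parameter and velocity
lr, mu = 0.1, 0.9        # step size and momentum coefficient

for _ in range(200):
    grad = 2.0 * w + random.gauss(0.0, 0.1)   # noisy gradient of f(w) = w**2
    v = mu * v - lr * grad                    # momentum accumulates past gradients
    w = w + v                                 # parameter update

print(w)                 # w hovers near the minimizer 0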
Stability and convergence of monotonic algorithms
Monotonic (numerical) algorithms in a partially ordered metric space are considered. Using a termination criterion that is appropriate to the monotonicity, one finds that consistency and local stability imply convergence. Applications to interval analysis are presented. The numerical differentiation of a function is treated as an example.
openaire +1 more source
On Convergence and Stability of GANs
Analysis of convergence and mode collapse by studying the GAN training process as regret minimization.
Kodali, Naveen +3 more
openaire +2 more sources
This work introduces an adaptive human pilot model that captures pilot time‐delay effects in adaptive control systems. The model enables the prediction of pilot–controller interactions, facilitating safer integration and improved design of adaptive controllers for piloted applications.
Abdullah Habboush, Yildiray Yildiz
wiley +1 more source
A new multi-step BDF energy stable technique for the extended Fisher-Kolmogorov equation
Multi-step backward difference formulas of order k (BDF-k) for 3 ≤ k ≤ 5 are proposed for solving the extended Fisher–Kolmogorov equation. Based on the careful discrete gradient structures of the BDF-k formulas, the suggested numerical schemes are ...
Qihang Sun +4 more
doaj +1 more source
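For orientation only (the paper's k = 3, 4, 5 schemes and their discrete gradient structures are not reproduced here), the familiar two-step BDF-2 discretization of an evolution equation u_t = N(u) with time step τ reads

\[
\frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\tau} = N(u^{n+1}),
\]

where for the extended Fisher–Kolmogorov equation one commonly takes N(u) = -\gamma\,\Delta^{2}u + \Delta u + u - u^{3} with \gamma > 0.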

