Deep Q-Learning-Based Fast and Smooth Control Method for Traffic Signal Transition in Urban Arterial Tidal Lanes. [PDF]
Dong L, Xie X, Lu J, Feng L, Zhang L.
europepmc +1 more source
Some of the following articles may not be open access.
IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2023
A permutation flow-shop scheduling problem (PFSP) has been studied for a long time due to its significance in real-life applications. This work proposes an improved artificial bee colony (ABC) algorithm with $Q$-learning, named QABC, for solving it ...
Hanxiao Li +4 more
semanticscholar +1 more source
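The snippet cuts off before describing how Q-learning steers the search, so the sketch below illustrates the general pattern only: a tabular Q-learner choosing among neighborhood operators inside a simple improvement loop. The operators, the reward (cost improvement), and the epsilon-greedy schedule are illustrative assumptions, not the paper's actual QABC design.

```python
import random

# Hypothetical neighborhood operators on a permutation (assumed, not from the paper).
def swap(p):
    p = p[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def shift(p):
    p = p[:]
    i, j = random.sample(range(len(p)), 2)
    p.insert(j, p.pop(i))
    return p

OPERATORS = [swap, shift]

def q_guided_search(cost, perm, iters=1000, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning over a single state: which operator to apply next.

    Reward is the cost improvement produced by the chosen operator
    (an assumed reward shaping; QABC's actual scheme may differ).
    """
    q = [0.0] * len(OPERATORS)
    best, best_cost = perm, cost(perm)
    for _ in range(iters):
        # epsilon-greedy choice among operators
        a = (random.randrange(len(OPERATORS)) if random.random() < eps
             else max(range(len(OPERATORS)), key=lambda i: q[i]))
        cand = OPERATORS[a](best)
        cand_cost = cost(cand)
        r = best_cost - cand_cost  # positive if the move improved the solution
        q[a] += alpha * (r + gamma * max(q) - q[a])
        if cand_cost < best_cost:
            best, best_cost = cand, cand_cost
    return best, best_cost
```

For a PFSP, cost would be the makespan of the permutation under a given processing-time matrix.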
IEEE Transactions on Industrial Informatics, 2021
The rapid growth of electric vehicles (EVs) can potentially lead power grids to face new challenges due to load-profile changes. To this end, a new method is presented to forecast EV charging station loads with machine learning techniques.
M. Dabbaghjamanesh +2 more
semanticscholar +1 more source
IEEE Transactions on Neural Networks, 2000
This paper develops the theory of quad-Q-learning, a new learning algorithm that evolved from Q-learning. Quad-Q-learning is applicable to problems that can be solved by "divide and conquer" techniques. It concerns an autonomous agent that learns without supervision to act optimally to achieve specified goals.
C. Clausen, H. Wechsler
openaire +2 more sources
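A rough sketch of the divide-and-conquer idea the abstract describes, under the assumption that the agent chooses between solving a subproblem directly and splitting it into four children whose values are combined into the backup target; the combination rule and reward here are illustrative guesses, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Actions: solve the current subproblem directly, or split it into four parts.
SOLVE, SPLIT = 0, 1

Q = defaultdict(lambda: [0.0, 0.0])  # state -> Q-values for [SOLVE, SPLIT]
ALPHA = 0.1

def learn(state, solve_reward, children, eps=0.1):
    """One quad-Q-style backup (illustrative, not Clausen & Wechsler's exact rule).

    Unlike standard Q-learning, a SPLIT action produces *four* successor
    states; here their best values are summed to form the backup target.
    """
    a = (random.randrange(2) if random.random() < eps
         else int(Q[state][SPLIT] > Q[state][SOLVE]))
    if a == SOLVE:
        target = solve_reward(state)
    else:
        target = sum(max(Q[c]) for c in children(state))  # combine the four children
    Q[state][a] += ALPHA * (target - Q[state][a])
    return a
```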
Q-Learning: Theory and Applications
Annual Review of Statistics and Its Application, 2020
Q-learning, originally an incremental algorithm for estimating an optimal decision strategy in an infinite-horizon decision problem, now refers to a general class of reinforcement learning methods widely used in statistics and artificial intelligence ...
Jesse Clifton, Eric B. Laber
semanticscholar +1 more source
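For reference, the incremental algorithm the review refers to is the standard one-step Q-learning update. A minimal tabular sketch, assuming a Gym-style environment with discrete actions, a 4-tuple step() return, and hashable observations:

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """One-step Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    n = env.action_space.n
    Q = defaultdict(lambda: [0.0] * n)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            a = (random.randrange(n) if random.random() < eps
                 else max(range(n), key=lambda i: Q[s][i]))
            s2, r, done, _ = env.step(a)
            # bootstrap from the best next-state value unless the episode ended
            Q[s][a] += alpha * (r + gamma * (0.0 if done else max(Q[s2])) - Q[s][a])
            s = s2
    return Q
```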
A Q-Learning-Based Topology-Aware Routing Protocol for Flying Ad Hoc Networks
IEEE Internet of Things Journal, 2021
Flying ad hoc networks (FANETs) have emerged over the last few years for numerous civil and military applications. Owing to underlying attributes, such as a dynamic topology, node mobility in 3-D space, and the limited energy of unmanned aerial vehicles ...
Muhammad Yeasir Arafat, S. Moh
semanticscholar +1 more source
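Q-learning-based routing protocols in this family typically keep, at each node, a Q-value per (destination, neighbor) pair and update it from the neighbor's own best estimate, in the spirit of classic Q-routing. The sketch below shows only that generic shape; the reward terms a FANET protocol would actually use (link lifetime, residual energy, mobility) are omitted, and the parameter values are assumptions.

```python
from collections import defaultdict

class QRouter:
    """Per-node table: q[dest][neighbor] ~ quality of forwarding via that neighbor."""

    def __init__(self, alpha=0.3, gamma=0.9):
        self.q = defaultdict(lambda: defaultdict(float))
        self.alpha, self.gamma = alpha, gamma

    def next_hop(self, dest, neighbors):
        # Greedy forwarding: pick the neighbor with the highest learned value.
        return max(neighbors, key=lambda n: self.q[dest][n])

    def update(self, dest, neighbor, reward, neighbor_best):
        # neighbor_best is the neighbor's own best Q-value toward dest,
        # typically piggybacked on hello/ack packets in Q-routing schemes.
        old = self.q[dest][neighbor]
        self.q[dest][neighbor] = old + self.alpha * (
            reward + self.gamma * neighbor_best - old)
```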
Analysis of Q-Learning Like Algorithms Through Evolutionary Game Dynamics
IEEE Transactions on Circuits and Systems II: Express Briefs, 2022
Based on two-player two-action and three-action game models, this brief studies the dynamics of Q-learning and frequency adjusted Q- (FAQ-) learning algorithms in multi-agent systems, and discloses the underlying mechanisms of these algorithms ...
Yiming Shi, Zhihai Rong
semanticscholar +1 more source
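For context, frequency adjusted Q- (FAQ-) learning rescales the update by how rarely an action is currently played, so every action's Q-value moves at an effectively uniform rate. The stateless form below uses the commonly cited min(beta/x_a, 1) scaling with a Boltzmann policy; treating the discount as zero for a repeated matrix game is an assumption about the exact variant this brief analyzes.

```python
import math

def boltzmann(q, tau=0.1):
    """Softmax policy over Q-values with temperature tau, as used in such dynamics analyses."""
    m = max(q)
    w = [math.exp((v - m) / tau) for v in q]
    s = sum(w)
    return [x / s for x in w]

def faq_update(q, a, reward, alpha=0.05, beta=0.01, tau=0.1):
    """Frequency adjusted Q-learning step for a stateless repeated game (discount = 0).

    The effective learning rate is scaled by min(beta / x_a, 1), where x_a is
    the current probability of playing action a, so rarely played actions are
    not under-updated relative to frequent ones.
    """
    x = boltzmann(q, tau)
    q[a] += min(beta / max(x[a], 1e-12), 1.0) * alpha * (reward - q[a])
```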
Optimal Tracking Control of Nonlinear Multiagent Systems Using Internal Reinforce Q-Learning
IEEE Transactions on Neural Networks and Learning Systems, 2021
In this article, a novel reinforcement learning (RL) method is developed to solve the optimal tracking control problem of unknown nonlinear multiagent systems (MASs).
Zhinan Peng +5 more
semanticscholar +1 more source

