Results 21 to 30 of about 1,629,210 (315)

Weakly Coupled Deep Q-Networks

open access: yes, 2023
We propose weakly coupled deep Q-networks (WCDQN), a novel deep reinforcement learning algorithm that enhances performance in a class of structured problems called weakly coupled Markov decision processes (WCMDP). WCMDPs consist of multiple independent subproblems connected by an action space constraint, which is a structural property that frequently ...
Shar, Ibrahim El, Jiang, Daniel R.
openaire   +2 more sources

Autonomous Penetration Testing Based on Improved Deep Q-Network

open access: yes, Applied Sciences, 2021
Penetration testing is an effective way to test and evaluate cybersecurity by simulating a cyberattack. However, the traditional methods deeply rely on domain expert knowledge, which requires prohibitive labor and time costs.
Shicheng Zhou   +4 more
semanticscholar   +1 more source

Deep Q‐network implementation for simulated autonomous vehicle control

open access: yes, IET Intelligent Transport Systems, 2021
Deep reinforcement learning is poised to be a revolutionary step towards new possibilities in solving navigation and autonomous vehicle control tasks.
Yang Thee Quek   +4 more
doaj   +1 more source

Learning Negotiating Behavior Between Cars in Intersections using Deep Q-Learning [PDF]

open access: yes, 2018
This paper concerns automated vehicles negotiating with other vehicles, typically human-driven, in crossings, with the goal of finding a decision algorithm by learning the typical behaviors of other vehicles.
Ali, Mohammad   +4 more
core   +2 more sources

The use of adversaries for optimal neural network training [PDF]

open access: yes, 2018
B-decay data from the Belle experiment at the KEKB collider have a substantial background from $e^{+}e^{-}\to q \bar{q}$ events. To suppress this we employ deep neural network algorithms.
Hawthorne-Gonzalvez, Anton   +1 more
core   +2 more sources

B-APFDQN: A UAV Path Planning Algorithm Based on Deep Q-Network and Artificial Potential Field

open access: yes, IEEE Access, 2023
Deep Q-network (DQN) is one of the standard methods for solving the Unmanned Aerial Vehicle (UAV) path planning problem. However, the way the agent deepens its cognition of the environment through frequent random trial and error leads to slow convergence.
Fuchen Kong   +3 more
semanticscholar   +1 more source

Visual servoing with deep reinforcement learning for rotor unmanned helicopter

open access: yes, International Journal of Advanced Robotic Systems, 2022
Visual servoing is a key approach to achieving visual control for the rotor unmanned helicopter. Inaccurate matrix estimation and target loss restrict the performance of visual servoing control systems.
Chunyang Hu, Wenping Cao, Bin Ning
doaj   +1 more source

Deep Reinforcement Learning- based load balancing strategy for multiple controllers in SDN

open access: yes, e-Prime: Advances in Electrical Engineering, Electronics and Energy, 2022
In a Software-Defined Network (SDN) with multiple controllers, a static mapping between switches and controllers may cause some controllers to be overloaded while other controller resources are underutilized.
Min Xiang   +3 more
doaj   +1 more source

Distribution Network Reconfiguration Based on NoisyNet Deep Q-Learning Network

open access: yes, IEEE Access, 2021
Distribution network reconfiguration (DNR) aims to minimize power losses and improve the voltage profile. Traditional model-based methods require exact network parameters to derive the optimal configuration of the distribution network ...
Beibei Wang   +4 more
doaj   +1 more source

Ad Hoc-Obstacle Avoidance-Based Navigation System Using Deep Reinforcement Learning for Self-Driving Vehicles

open access: yes, IEEE Access, 2023
In this research, a novel navigation algorithm for self-driving vehicles that avoids collisions with pedestrians and ad hoc obstacles is described. The proposed algorithm predicts the locations of ad hoc obstacles and wandering pedestrians by using an ...
N. S. Manikandan   +2 more
doaj   +1 more source
