Results 41 to 50 of about 792,416
Residual Policy Learning
We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning ...
Silver, Tom +3 more
openaire +2 more sources
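The core idea described in the abstract above can be sketched in a few lines: the final action is the hand-designed controller's action plus a learned residual correction. The functions below are illustrative stand-ins, not the paper's actual controllers or networks.

```python
import numpy as np

def base_controller(state):
    # Hypothetical imperfect hand-designed controller: a proportional term.
    return -0.5 * state

def residual_policy(state, theta):
    # Hypothetical learned residual (here a linear policy with parameter theta).
    return theta * state

def rpl_action(state, theta):
    # Residual policy idea: superpose the learned residual on the base action.
    return base_controller(state) + residual_policy(state, theta)

state = np.array([1.0, -2.0])
action = rpl_action(state, theta=0.1)  # base action plus a small learned correction
```

Because the residual starts near zero, the combined policy initially behaves like the base controller and only gradually deviates as learning proceeds.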
Residual Reinforcement Learning from Demonstrations
Residual reinforcement learning (RL) has been proposed as a way to solve challenging robotic tasks by adapting control actions from a conventional feedback controller to maximize a reward signal. We extend the residual formulation to learn from visual inputs and sparse rewards using demonstrations.
Alakuijala, Minttu +4 more
openaire +2 more sources
Periodic residual learning for crowd flow forecasting
Crowd flow forecasting, which aims to predict the crowds entering or leaving certain regions, is a fundamental task in smart cities. One of the key properties of crowd flow data is periodicity: a pattern that occurs at regular time intervals, such as a weekly pattern.
Wang, Chengxin, Liang, Yuxuan, Tan, Gary
openaire +2 more sources
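The periodicity property mentioned above suggests modeling the deviation of the current flow from its value one period earlier, rather than the raw flow. A minimal sketch, with all names illustrative and assuming hourly data with a weekly period:

```python
import numpy as np

PERIOD = 7 * 24  # weekly period in hours (assumed granularity)

def periodic_residuals(flow):
    """Residual between each step and the step one period earlier."""
    return flow[PERIOD:] - flow[:-PERIOD]

def forecast(flow_history, predicted_residual):
    """Forecast for the next step = value one period ago + predicted residual."""
    return flow_history[-PERIOD] + predicted_residual

hours = np.arange(3 * PERIOD)
flow = 100 + 20 * np.sin(2 * np.pi * hours / PERIOD)  # perfectly periodic toy flow
res = periodic_residuals(flow)  # near zero for a periodic signal
```

For strongly periodic data the residual signal is much smaller and smoother than the raw flow, which is what makes it an easier learning target.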
Dog Nose-Print Identification Using Deep Neural Networks
Recently, there has been rapid growth in the number of people who own companion pets (cats and dogs) due to low birth rates, an aging population, and a rising number of single-person households.
Han Byeol Bae, Daehyun Pak, Sangyoun Lee
doaj +1 more source
Cascade Residual Learning: A Two-stage Convolutional Neural Network for Stereo Matching
Leveraging recent developments in convolutional neural networks (CNNs), matching dense correspondence from a stereo pair has been cast as a learning problem, with performance exceeding traditional approaches.
Pang, Jiahao +4 more
core +1 more source
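The two-stage cascade named in the title above can be sketched as: a first stage produces an initial disparity estimate, and a second stage predicts a residual correction that is added to it. The "stages" below are placeholder functions, not the paper's CNNs.

```python
import numpy as np

def stage1(left, right):
    # Placeholder coarse estimate (a crude per-pixel difference proxy).
    return np.abs(left - right)

def stage2(left, right, init_disp):
    # Placeholder residual refinement; a real network would learn this.
    return 0.1 * (left - right) - 0.05 * init_disp

def cascade(left, right):
    init = stage1(left, right)
    return init + stage2(left, right, init)  # refined = initial + residual

left = np.random.rand(4, 4)
right = np.random.rand(4, 4)
refined = cascade(left, right)  # refined disparity map, same shape as inputs
```

The design point is that the second stage only has to model the (small) error of the first, which is typically easier than regressing disparity from scratch.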
Compnet: A New Scheme for Single Image Super Resolution Based on Deep Convolutional Neural Network
The features produced by the layers of a neural network become increasingly sparse as the network gets deeper; consequently, the learning capability of the network is not further enhanced as the number of layers increases.
Alireza Esmaeilzehi +2 more
doaj +1 more source
MRU-NET: A U-Shaped Network for Retinal Vessel Segmentation
Fundus blood vessel image segmentation plays an important role in the diagnosis and treatment of diseases and is the basis of computer-aided diagnosis.
Hongwei Ding +3 more
doaj +1 more source
Learning Residual Images for Face Attribute Manipulation
Face attributes are interesting due to their detailed description of human faces. Unlike prior research on attribute prediction, we address an inverse and more challenging problem called face attribute manipulation, which aims at modifying a ...
Liu, Rujie, Shen, Wei
core +1 more source
Layer decomposition to separate an input image into base and detail layers has been steadily used for image restoration. Existing residual networks based on an additive model require residual layers with a small output range for fast convergence and ...
Chang-Hwan Son
doaj +1 more source
Learning Residual Finite-State Automata Using Observation Tables
We define a two-step learner for RFSAs based on an observation table by using an algorithm for minimal DFAs to build a table for the reversal of the language in question and showing that we can derive the minimal RFSA from it after some simple ...
Anna Kasprzik +2 more
core +2 more sources