Results 21 to 30 of about 7,675,115 (371)

Convolutional Neural Networks for Sentence Classification [PDF]

open access: yes · Conference on Empirical Methods in Natural Language Processing, 2014
We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks.
Yoon Kim
semanticscholar   +1 more source

The World as a Neural Network [PDF]

open access: yes · Entropy, 2020
We discuss a possibility that the entire universe on its most fundamental level is a neural network. We identify two different types of dynamical degrees of freedom: “trainable” variables (e.g., bias vector or weight matrix) and “hidden” variables (e.g., state vector of neurons).
openaire   +6 more sources

Aggregated Residual Transformations for Deep Neural Networks [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2016
We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology.
Saining Xie   +4 more
semanticscholar   +1 more source

Neural network approximation [PDF]

open access: yes · Acta Numerica, 2021
Neural networks (NNs) are the method of choice for building learning algorithms. They are now being investigated for other numerical tasks such as solving high-dimensional partial differential equations. Their popularity stems from their empirical success on several challenging learning problems (computer chess/Go, autonomous navigation, face ...
Ronald A. DeVore   +2 more
openaire   +2 more sources

The pedunculopontine tegmental nucleus as a motor and cognitive interface between the cerebellum and basal ganglia

open access: yes · Frontiers in Neuroanatomy, 2016
As an important component of ascending activating systems, brainstem cholinergic neurons in the pedunculopontine tegmental nucleus (PPTg) are involved in the regulation of motor control (locomotion, posture and gaze) and cognitive processes (attention ...
Fumika Mori   +7 more
doaj   +1 more source

Reward and behavioral factors contributing to the tonic activity of monkey pedunculopontine tegmental nucleus neurons during saccade tasks

open access: yes · Frontiers in Systems Neuroscience, 2016
The pedunculopontine tegmental nucleus (PPTg) in the brainstem plays a role in controlling reinforcement learning and executing conditioned behavior. We previously examined activity of PPTg neurons in monkeys during a reward-conditioned, visually guided ...
Ken-ichi Okada   +4 more
doaj   +1 more source

Enhanced academic motivation in university students following a 2-week online gratitude journal intervention

open access: yes · BMC Psychology, 2021
Background: Past studies have associated gratitude interventions with a host of positive outcomes. However, there is a dearth of research regarding the impact such interventions have on the academic motivation of university students, thought to be a ...
Norberto Eiji Nawa, Noriko Yamagishi
doaj   +1 more source

Graph Convolutional Neural Networks for Web-Scale Recommender Systems [PDF]

open access: yes · Knowledge Discovery and Data Mining, 2018
Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items
Rex Ying   +5 more
semanticscholar   +1 more source

Adaptive Learning Gabor Filter for Finger-Vein Recognition

open access: yes · IEEE Access, 2019
Presently, finger-vein recognition is a new research direction in the field of biometric recognition. The Gabor filter has been extensively used for finger-vein recognition; however, its parameters are difficult to adjust.
Yakun Zhang   +5 more
doaj   +1 more source

Speech recognition with deep recurrent neural networks [PDF]

open access: yes · IEEE International Conference on Acoustics, Speech, and Signal Processing, 2013
Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is ...
Alex Graves   +2 more
semanticscholar   +1 more source
