Results 1 to 10 of about 37,616

Modeling Interval Timing by Recurrent Neural Nets [PDF]

open access: yes (Frontiers in Integrative Neuroscience, 2019)
The purpose of this study was to take a new approach in showing how the central nervous system might encode time at the supra-second level using recurrent neural nets (RNNs).
Theodore Raphan   +5 more
doaj   +5 more sources

Bayesian error propagation for neural-net based parameter inference

open access: yes (The Open Journal of Astrophysics, 2022)
Neural nets have become popular for accelerating parameter inference, especially for the upcoming generation of galaxy surveys in cosmology. As neural nets are approximate by nature, a recurring question has been how to propagate the neural net's ...
Daniela Grandón, Elena Sellentin
doaj   +1 more source

Sea Fog Dissipation Prediction in Incheon Port and Haeundae Beach Using Machine Learning and Deep Learning

open access: yes (Sensors, 2021)
Sea fog is a natural phenomenon that reduces the visibility of manned vehicles and vessels that rely on the visual interpretation of traffic. Fog clearance, also known as fog dissipation, is a relatively under-researched area when compared with fog ...
Jin Hyun Han   +5 more
doaj   +1 more source

Novel multi‐domain attention for abstractive summarisation

open access: yes (CAAI Transactions on Intelligence Technology, 2023)
Existing abstractive text summarisation models consider only the word-sequence correlations between the source document and the reference summary, and the summaries they generate fail to cover the subject of the source document because the models ...
Chunxia Qu   +4 more
doaj   +1 more source

Mack-Net model: Blending Mack’s model with Recurrent Neural Networks

open access: yes (Expert Systems with Applications, 2022)
In general insurance companies, a correct estimation of liabilities plays a key role due to its impact on management and investment decisions. Since the Financial Crisis of 2007-2008 and the strengthening of regulation, the focus is not only on the total reserve but also on its variability, which is an indicator of the risk assumed by the company. Thus, ...
Eduardo Ramos-Pérez   +2 more
openaire   +3 more sources

A deep LSTM‐CNN based on self‐attention mechanism with input data reduction for short‐term load forecasting

open access: yes (IET Generation, Transmission & Distribution, 2023)
Numerous studies on short‐term load forecasting (STLF) have used feature extraction methods to increase the model's accuracy by incorporating multidimensional features containing time, weather and distance information.
Shiyan Yi   +4 more
doaj   +1 more source

Brain inspired neuronal silencing mechanism to enable reliable sequence identification

open access: yes (Scientific Reports, 2022)
Real-time sequence identification is a core use-case of artificial neural networks (ANNs), ranging from recognizing temporal events to identifying verification codes.
Shiri Hodassman   +7 more
doaj   +1 more source

The Context-Dependent Additive Recurrent Neural Net [PDF]

open access: yes (Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2018)
Contextual sequence mapping is one of the fundamental problems in Natural Language Processing (NLP). Here, instead of relying solely on the information presented in the text, the learning agents have access to a strong external signal given to assist the learning process.
Quan Hung Tran   +5 more
openaire   +1 more source

A multivariate natural gas load forecasting method based on residual recurrent neural network

open access: yes (Electronics Letters, 2023)
Current natural gas load forecasting suffers from unsatisfactory accuracy and interpretability. To address this challenge, a multivariate forecasting method is proposed that comprises three phases: first, an integrated history-climate ...
Xueqing Ni   +3 more
doaj   +1 more source

On Learning Interpreted Languages with Recurrent Models

open access: yes (Computational Linguistics, 2022)
Can recurrent neural nets, inspired by human sequential data processing, learn to understand language? We construct simplified data sets reflecting core properties of natural language as modeled in formal syntax and semantics: recursive syntactic ...
Denis Paperno
doaj   +1 more source
