Information Entropy of Biometric Data in a Recurrent Neural Network with Low Connectivity. [PDF]
Dominguez-Carreta D +4 more
europepmc +1 more source
Learning in Probabilistic Boolean Networks via Structural Policy Gradients. [PDF]
Rivera Torres PJ.
europepmc +1 more source
Primate-informed neural network for visual decision-making. [PDF]
Su J +6 more
europepmc +1 more source
An open problem: Why are motif-avoidant attractors so rare in asynchronous Boolean networks? [PDF]
Pastva S +4 more
europepmc +1 more source
Some of the following articles may not be open access.
Neural Computation, 2001
Attractor networks, which map an input space to a discrete output space, are useful for pattern completion—cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins.
Zemel, Richard S., Mozer, Michael C.
openaire +2 more sources
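As a rough illustration of the pattern-completion behavior this abstract describes, here is a minimal Hopfield-style attractor network in Python. This is a generic textbook sketch (Hebbian weights, synchronous sign updates), not the specific training procedure or construction studied in the paper:

```python
import numpy as np

# Minimal Hopfield-style attractor network: Hebbian weights store
# binary (+1/-1) patterns; iterating the update rule pulls a noisy
# input toward the nearest stored attractor (pattern completion).
# Generic sketch only -- not the paper's construction.

def train(patterns):
    """Hebbian outer-product rule; zero diagonal to avoid self-coupling."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=20):
    """Synchronous sign updates until the state settles (or steps run out)."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1          # break ties deterministically
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))   # three stored attractors
W = train(patterns)

noisy = patterns[0].copy()
flip = rng.choice(64, size=8, replace=False)   # corrupt 8 of 64 bits
noisy[flip] *= -1

print(np.array_equal(recall(W, noisy), patterns[0]))  # usually True
```

With too many or strongly correlated stored patterns, recall can settle into spurious mixture states instead, which is exactly the failure mode the abstract attributes to conventional training procedures.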
WIREs Cognitive Science, 2009
Abstract: An attractor network is a network of neurons with excitatory interconnections that can settle into a stable pattern of firing. This article shows how attractor networks in the cerebral cortex are important for long‐term memory, short‐term memory, attention, and decision making.
openaire +2 more sources
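The "settling" the abstract refers to can be shown with a toy firing-rate simulation under the standard dynamics dr/dt = -r + f(Wr + input); a brief cue ignites a recurrently coupled assembly whose activity then persists. All parameter values below are illustrative, not drawn from the article:

```python
import numpy as np

# Toy firing-rate network settling into a stable firing pattern:
# dr/dt = -r + f(W r + input). Symmetric excitatory coupling within a
# stored assembly plus a saturating nonlinearity lets a transient cue
# leave behind a persistent attractor state. Parameters are illustrative.

n = 50
pattern = np.zeros(n)
pattern[:10] = 1.0                                     # the stored assembly

W = 2.0 * np.outer(pattern, pattern) / pattern.sum()   # excitatory loop
f = lambda x: np.tanh(np.clip(x, 0, None))             # saturating rate

r = np.zeros(n)
dt = 0.1
for t in range(400):
    cue = 0.5 * pattern if t < 50 else 0.0   # external input, then removed
    r += dt * (-r + f(W @ r + cue))

print(r[:10].mean(), r[10:].mean())   # assembly stays active; rest silent
```

After the cue is removed, the assembly's firing is sustained purely by its own recurrent excitation, which is the mechanism the article links to short-term memory.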
Journal of Cosmology and Astroparticle Physics, 2022
Abstract: Inflationary α-attractor models can be naturally implemented in supergravity with hyperbolic geometry. They have stable predictions for observables, such as n_s = 1 - 2/N_e, assuming that the potential in terms of the original geometric variables, as well as its derivatives, are not ...
Kallosh, Renata, Linde, Andrei
openaire +1 more source
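For context, the stable predictions the abstract mentions are the standard universal α-attractor formulas; the abstract quotes the spectral-index result, and the tensor-to-scalar ratio is its usual companion (N_e is the number of e-folds before the end of inflation, α the hyperbolic-geometry parameter):

```latex
% Universal alpha-attractor predictions (standard results from the
% literature; N_e = number of e-folds, alpha = geometry parameter).
\[
  n_s \simeq 1 - \frac{2}{N_e}, \qquad r \simeq \frac{12\alpha}{N_e^{2}}
\]
% Worked example for N_e = 60:
%   n_s ~ 1 - 2/60 ~ 0.967,   r ~ 12*alpha/3600 = alpha/300.
```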

