
Contractive De-noising Auto-encoder [PDF]

open access: yes, 2014
An auto-encoder is a special kind of neural network based on reconstruction. The de-noising auto-encoder (DAE) is an improved auto-encoder that is made robust to its input by first corrupting the original data and then reconstructing the original input by ...
C.C. Chang   +6 more
core   +2 more sources
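The corruption-then-reconstruction idea described in this entry can be sketched in a few lines. Below is a hypothetical minimal example (tied-weight network, masking noise, plain NumPy, toy data), not the paper's actual code: the network encodes a *corrupted* input but is penalised against the *clean* input.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data lying near a 2-D manifold inside [0, 1]^8 (hypothetical data,
# chosen only so a small auto-encoder has structure to learn).
Z = rng.random((64, 2))
A = rng.normal(0.0, 1.0, (2, 8))
X = sigmoid(Z @ A)

n_in, n_hidden = 8, 4
W = rng.normal(0.0, 0.1, (n_in, n_hidden))  # tied weights: decoder uses W.T
b_h = np.zeros(n_hidden)
b_o = np.zeros(n_in)

def reconstruct(V):
    return sigmoid(sigmoid(V @ W + b_h) @ W.T + b_o)

mse_before = np.mean((reconstruct(X) - X) ** 2)

lr = 0.5
for _ in range(300):
    # Corruption step: randomly zero out ~30% of the input entries.
    X_tilde = X * (rng.random(X.shape) > 0.3)
    H = sigmoid(X_tilde @ W + b_h)   # encode the CORRUPTED input
    X_hat = sigmoid(H @ W.T + b_o)   # decode back to input space
    err = X_hat - X                  # error measured against the CLEAN input
    # Backprop through the tied-weight network (sum of encoder/decoder grads).
    d_out = err * X_hat * (1.0 - X_hat)
    d_hid = (d_out @ W) * H * (1.0 - H)
    W -= lr * (X_tilde.T @ d_hid + (H.T @ d_out).T) / len(X)
    b_h -= lr * d_hid.mean(axis=0)
    b_o -= lr * d_out.mean(axis=0)

mse_after = np.mean((reconstruct(X) - X) ** 2)
```

Because the target is the uncorrupted input, the learned features cannot simply copy the data and must capture structure that survives the noise.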

Self-Supervised Variational Auto-Encoders [PDF]

open access: yes, Entropy, 2021
Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoder (selfVAE), which utilizes deterministic and discrete ...
Ioannis Gatopoulos, Jakub M. Tomczak
openaire   +5 more sources
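As background for this and the other VAE entries below, the core VAE machinery — the reparameterisation trick and the KL term of the evidence lower bound — can be sketched as follows. This is a generic illustration with placeholder encoder outputs, not the selfVAE model itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu, log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# A real encoder network would output mu and log_var per input;
# here they are placeholders for a batch of 2 with a 3-D latent space.
mu = np.zeros((2, 3))
log_var = np.zeros((2, 3))
z = reparameterize(mu, log_var)          # latent samples, shape (2, 3)
kl = kl_to_standard_normal(mu, log_var)  # zero when q(z|x) equals the prior
```

The total VAE loss is then the reconstruction error of the decoder applied to `z`, plus `kl` averaged over the batch.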

AVAE: Adversarial Variational Auto Encoder [PDF]

open access: yes, 2020 25th International Conference on Pattern Recognition (ICPR), 2021
Among the wide variety of image generative models, two models stand out: Variational Auto Encoders (VAE) and Generative Adversarial Networks (GAN). GANs can produce realistic images, but they suffer from mode collapse and do not provide simple ways to get the latent representation of an image.
Plumerault, Antoine   +2 more
openaire   +3 more sources

Radon–Sobolev Variational Auto-Encoders [PDF]

open access: yes, Neural Networks, 2021
The quality of generative models (such as Generative Adversarial Networks and Variational Auto-Encoders) depends heavily on the choice of a good probability distance. However, some popular metrics, such as the Wasserstein and Sliced Wasserstein distances, the Jensen-Shannon divergence, and the Kullback-Leibler divergence, lack convenient properties such as ...
openaire   +3 more sources

Transform invariant auto-encoder [PDF]

open access: yes, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017
6 pages, 17 figures, to be published in IROS ...
Matsuo, Tadashi   +2 more
openaire   +2 more sources

Ornstein Auto-Encoders [PDF]

open access: yes, Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019
We propose the Ornstein auto-encoder (OAE), a representation learning model for correlated data. In many interesting applications, data have nested structures. Examples include the VGGFace and MNIST datasets. We view such data as consisting of i.i.d. copies of a stationary random process, and seek a latent space representation of the observed sequences. This
Youngwon Choi, Joong-Ho Won
openaire   +1 more source

Keystroke Dynamics using Auto Encoders [PDF]

open access: yes, 2019 International Conference on Cyber Security and Protection of Digital Services (Cyber Security), 2019
In the modern day and age, credential-based authentication systems no longer provide the level of security that many organisations and their services require. The level of trust in passwords has plummeted in recent years, with waves of cyber attacks predicated on compromised and stolen credentials.
Patel, Yogesh   +4 more
openaire   +1 more source

A Novel Framework Using Deep Auto-Encoders Based Linear Model for Data Classification

open access: yes, Sensors, 2020
This paper proposes a novel data classification framework that combines sparse auto-encoders (SAEs) with a post-processing system consisting of a linear system model relying on the Particle Swarm Optimization (PSO) algorithm.
Ahmad M. Karim   +5 more
doaj   +1 more source

Surrogate‐based model using auto‐encoder for optimising multi‐band antennas

open access: yes, IET Microwaves, Antennas & Propagation, 2022
This paper suggests an optimisation method for designing multi-band antennas using artificial neural networks. The proposed network, a surrogate-based model using an auto-encoder (SBM-AE), is composed of two parts: an ordinary neural network and an auto-encoder.
Kwi Seob Um, Nam Jik Kim, Seo Weon Heo
doaj   +1 more source

Information Theoretic-Learning auto-encoder [PDF]

open access: yes, 2016 International Joint Conference on Neural Networks (IJCNN), 2016
We propose Information Theoretic-Learning (ITL) divergence measures for variational regularization of neural networks. We also explore ITL-regularized auto-encoders as an alternative to variational auto-encoding Bayes, adversarial auto-encoders, and generative adversarial networks for randomly generating sample data without explicitly defining a partition ...
Santana, Eder   +2 more
openaire   +2 more sources
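One concrete ITL divergence that can be estimated directly from samples with Parzen (Gaussian-kernel) windows is the Cauchy-Schwarz divergence. The sketch below is a generic illustration of that estimator, not the authors' code; the data and kernel width are arbitrary:

```python
import numpy as np

def gaussian_kernel_mean(A, B, sigma=1.0):
    # Mean Gaussian kernel over all pairs: a Parzen-window estimate of the
    # inner product between the densities underlying the two sample sets.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2)).mean()

def cauchy_schwarz_divergence(X, Y, sigma=1.0):
    # D_CS(p, q) = -log( <p, q>^2 / (<p, p> <q, q>) ): zero iff p == q,
    # non-negative by the Cauchy-Schwarz inequality.
    pq = gaussian_kernel_mean(X, Y, sigma)
    pp = gaussian_kernel_mean(X, X, sigma)
    qq = gaussian_kernel_mean(Y, Y, sigma)
    return -np.log(pq**2 / (pp * qq))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (40, 2))          # samples from p
Y = rng.normal(2.0, 1.0, (40, 2))          # samples from a shifted q
d_same = cauchy_schwarz_divergence(X, X)   # identical samples -> 0
d_diff = cauchy_schwarz_divergence(X, Y)   # separated samples -> positive
```

In an ITL-regularized auto-encoder, a divergence of this kind between the latent codes and samples from a chosen prior can replace the analytic KL term of a VAE.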
