Results 31 to 40 of about 13,859
ViT-AE++: Improving Vision Transformer Autoencoder for Self-supervised Medical Image Representations
Self-supervised learning has attracted increasing attention because it learns data-driven representations without annotations. The vision transformer-based autoencoder (ViT-AE) of He et al. (2021) is a recent self-supervised learning technique that employs a patch-masking strategy to learn a meaningful latent space. In this paper, we focus on improving
Prabhakar, Chinmay +5 more
openaire +2 more sources
Missing-Insensitive Short-Term Load Forecasting Leveraging Autoencoder and LSTM
Most deep learning-based load forecasting requires an intact dataset. Since many real-world datasets contain missing values for various reasons, missing-value imputation using deep learning is actively studied.
Kyungnam Park +3 more
doaj +1 more source
Robust, Deep and Inductive Anomaly Detection
PCA is a classical statistical technique whose simplicity and maturity have seen it find widespread use as an anomaly detection technique. However, it is limited in this regard by being sensitive to gross perturbations of the input, and by seeking a ...
Chalapathy, Raghavendra +2 more
core +1 more source
Neural network models such as BP and LSTM accept only numerical inputs, so categorical variables must be preprocessed to convert them into numerical data.
Yiying Wang +4 more
doaj +1 more source
Denoising Autoencoders for fast Combinatorial Black Box Optimization
Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Autoencoders (AE) are generative stochastic networks with these desired properties.
Bengio Y. +6 more
core +1 more source
Efficient modeling of high-dimensional data requires extracting only the relevant dimensions through feature learning. Unsupervised feature learning has gained tremendous attention for its unbiased approach, requiring no prior knowledge or expensive manual
Chathurika S. Wickramasinghe +2 more
doaj +1 more source
VIGAN: Missing View Imputation with Generative Adversarial Networks
In an era when big data are becoming the norm, there is less concern with the quantity but more with the quality and completeness of the data. In many disciplines, data are collected from heterogeneous sources, resulting in multi-view or multi-modal ...
Bi, Jinbo +5 more
core +1 more source
Solving Data Overlapping Problem Using A Class‐Separable Extreme Learning Machine Auto‐Encoder
Overlapping and imbalanced data present key challenges in classification. A class‐separable extreme learning machine auto‐encoder (CS‐ELM‐AE) is proposed as an enhancement of ELM‐AE that handles overlapping data better by clustering points from the same class together; oversampling is applied to address the class imbalance.
Ekkarat Boonchieng, Wanchaloem Nadda
wiley +1 more source
Health Prognostics Classification with Autoencoders for Predictive Maintenance of HVAC Systems
Buildings’ heating, ventilation, and air-conditioning (HVAC) systems account for significant global energy use. Proper maintenance can minimize their environmental footprint and enhance the quality of the indoor environment.
Ruiqi Tian +2 more
doaj +1 more source
Zero-bias autoencoders and the benefits of co-adapting features [PDF]
Regularized training of an autoencoder typically results in hidden unit biases that take on large negative values. We show that negative biases are a natural result of using a hidden layer whose responsibility is to both represent the input data and act ...
Konda, Kishore +2 more
core

