Results 101 to 110 of about 119,888
This study generates high‐fidelity synthetic longitudinal records for a million‐patient diabetes cohort, successfully replicating clinical predictive performance. However, deeper analysis reveals algorithmic biases and trajectory inconsistencies that escape standard quality metrics. These findings challenge current validation norms, demonstrating why a ...
Francisco Ortuño +5 more
wiley +1 more source
Zero-bias autoencoders and the benefits of co-adapting features [PDF]
Regularized training of an autoencoder typically results in hidden unit biases that take on large negative values. We show that negative biases are a natural result of using a hidden layer whose responsibility is to both represent the input data and act ...
Konda, Kishore +2 more
core
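As an aside on the claim in this abstract: the tendency of regularized autoencoder biases to drift negative is easy to probe in a toy setup. The sketch below is my own illustration, not the paper's experiments; it trains a small ReLU autoencoder with an assumed L1 sparsity penalty in PyTorch, inspects the learned encoder biases, and ends with a zero-bias thresholded-linear unit in the spirit of the paper's proposal (threshold value chosen arbitrarily).

```python
import torch

torch.manual_seed(0)
X = torch.randn(512, 20)                      # synthetic stand-in data

enc = torch.nn.Linear(20, 50)
dec = torch.nn.Linear(50, 20)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for _ in range(2000):
    h = torch.relu(enc(X))                    # hidden representation
    loss = ((dec(h) - X) ** 2).mean() + 1e-2 * h.abs().mean()  # L1 sparsity term
    opt.zero_grad()
    loss.backward()
    opt.step()

# In toy runs like this, sparsity pressure typically drags the biases negative
print("mean encoder bias:", enc.bias.mean().item())

# Zero-bias alternative in the spirit of the paper: no learned bias at all;
# a fixed threshold gates the linear response instead ("TRec"-style unit)
enc0 = torch.nn.Linear(20, 50, bias=False)
u = enc0(X)
h0 = u * (u > 1.0)                            # h = u if u > theta else 0
```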
Processing Simple Geometric Attributes with Autoencoders
Image synthesis is a core problem in modern deep learning, and many recent architectures such as autoencoders and Generative Adversarial Networks (GANs) produce spectacular results on highly complex data, such as images of faces or landscapes.
Almansa, Andrés +3 more
core +3 more sources
Solid Harmonic Wavelet Bispectrum for Image Analysis
The Solid Harmonic Wavelet Bispectrum (SHWB), a rotation‐ and translation‐invariant descriptor that captures higher‐order (phase) correlations in signals, is introduced. Combining wavelet scattering, bispectral analysis, and group theory, SHWB achieves interpretable, data‐efficient representations and demonstrates competitive performance across texture, ...
Alex Brown +3 more
wiley +1 more source
Structural health monitoring (SHM) in fiber-reinforced polymer (FRP) composites is essential to ensure safety and reliability during service, particularly in critical industries such as aerospace and wind energy. Traditional methods of analyzing Acoustic ...
Serafeim Moustakidis +6 more
doaj +1 more source
Gaussian Process Prior Variational Autoencoders [PDF]
Variational autoencoders (VAE) are a powerful and widely-used class of models to learn complex data distributions in an unsupervised fashion. One important limitation of VAEs is the prior assumption that latent sample representations are independent and ...
Casale, Francesco Paolo +4 more
core +1 more source
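A quick way to see what a GP prior buys over the standard VAE prior (a sketch under assumed kernel settings, not the paper's model): draw latent trajectories from an RBF-kernel Gaussian process indexed by an auxiliary variable such as time, and compare them with i.i.d. draws from N(0, I). The GP samples are smooth across nearby timestamps; the i.i.d. samples are not.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 100)                   # auxiliary variable, e.g. time

# RBF kernel over timestamps: nearby times get correlated latent values
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 1.0 ** 2)
K += 1e-6 * np.eye(len(t))                    # jitter for numerical stability
L = np.linalg.cholesky(K)

z_gp = L @ rng.normal(size=(len(t), 2))       # GP prior: smooth latent trajectories
z_iid = rng.normal(size=(len(t), 2))          # standard VAE prior: independent draws
```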
Enforcing distributions of latent variables in neural networks is an active research topic. It is vital in all kinds of generative models, where we want to be able to interpolate between points in the latent space, or sample from it. Modern generative autoencoders (AEs) such as WAE, SWAE, and CWAE add a regularizer to the standard (deterministic) AE, which allows one to ...
Maciej Mikulski, Jaroslaw Duda
openaire +2 more sources
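For concreteness, one such latent regularizer is SWAE's sliced-Wasserstein term. Below is a minimal NumPy sketch of it, my own illustration with assumed shapes (both batches must be the same size): project latents and prior samples onto random unit directions and compare the sorted projections, which is the one-dimensional 2-Wasserstein distance along each slice.

```python
import numpy as np

def sliced_wasserstein_sq(z, z_prior, n_proj=64, rng=None):
    """Monte-Carlo sliced squared 2-Wasserstein distance between two batches."""
    rng = np.random.default_rng() if rng is None else rng
    d = z.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)        # random direction on the unit sphere
        total += np.mean((np.sort(z @ theta) - np.sort(z_prior @ theta)) ** 2)
    return total / n_proj

rng = np.random.default_rng(0)
z = rng.normal(loc=2.0, size=(256, 8))        # latents far from the prior
z_prior = rng.normal(size=(256, 8))           # N(0, I) target samples
print(sliced_wasserstein_sq(z, z_prior, rng=rng))
# In training one would add: loss = reconstruction_mse + lam * sliced_wasserstein_sq(...)
```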
Decoding Naturalistic Episodic Memory with Artificial Intelligence and Brain‐Machine Interface
Episodic memory weaves together what, where, and when of experience into a personal narrative. Cutting‐edge AI models may decode this intricate process in real‐life settings, revealing how neural activity encodes naturalistic memories. By merging AI with brain–machine interfaces, researchers are edging closer to mapping and even engineering memory ...
Dong Song
wiley +1 more source
Autoencoders are dimensionality reduction models in machine learning that can be thought of as a neural network counterpart of principal component analysis (PCA).
Roy Cerqueti +3 more
doaj +1 more source
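The PCA analogy is exact in the linear case: a linear autoencoder trained with squared error has its optimum spanning the top principal subspace. A minimal NumPy sketch of this (synthetic data, hand-derived gradients; exact numbers vary by run) compares the rank-k PCA reconstruction error with that of a gradient-trained linear autoencoder:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated data
X -= X.mean(axis=0)                                      # center, as PCA assumes

# PCA: best rank-k reconstruction via the top right singular vectors
k = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = X @ Vt[:k].T @ Vt[:k]

# Linear autoencoder trained by gradient descent on squared error
W_enc = rng.normal(scale=0.1, size=(5, k))
W_dec = rng.normal(scale=0.1, size=(k, 5))
lr = 1e-3
for _ in range(5000):
    Z = X @ W_enc
    R = Z @ W_dec - X                   # reconstruction residual
    gW_dec = Z.T @ R / len(X)
    gW_enc = X.T @ (R @ W_dec.T) / len(X)
    W_enc -= lr * gW_enc
    W_dec -= lr * gW_dec

print(np.mean((X_pca - X) ** 2))                # PCA reconstruction error
print(np.mean((X @ W_enc @ W_dec - X) ** 2))    # AE error approaches the PCA optimum
```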
Autoencoding any Data through Kernel Autoencoders
This paper investigates a novel algorithmic approach to data representation based on kernel methods. Assuming that the observations lie in a Hilbert space X, the introduced Kernel Autoencoder (KAE) is the composition of mappings from vector-valued Reproducing Kernel Hilbert Spaces (vv-RKHSs) that minimizes the expected reconstruction error.
Pierre Laforgue +2 more
openaire +3 more sources
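The vv-RKHS composition in the paper is involved; as a loose and explicitly hypothetical analogue (not the paper's KAE), one can compose two kernel-ridge maps: an encoder into low-dimensional codes, here bootstrapped from PCA purely for illustration, and a decoder from codes back to the inputs.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))

# Toy stand-in for the internal representation: 3-dimensional PCA codes
Z = PCA(n_components=3).fit_transform(X)

# Encoder and decoder as kernel-ridge maps with RBF kernels
encoder = KernelRidge(alpha=1e-2, kernel="rbf", gamma=0.1).fit(X, Z)
decoder = KernelRidge(alpha=1e-2, kernel="rbf", gamma=0.5).fit(Z, X)

X_hat = decoder.predict(encoder.predict(X))
print("reconstruction MSE:", np.mean((X_hat - X) ** 2))
```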