Results 131 to 140 of about 103,799
Variational Autoencoder + Deep Deterministic Policy Gradient addresses low‐light failures of infrared depth sensing for indoor robot navigation. Stage 1 pretrains an attention‐enhanced Variational Autoencoder (Convolutional Block Attention Module + Feature Pyramid Network) to map dark depth frames to a well‐lit reconstruction, yielding a 128‐D latent code ...
Uiseok Lee +7 more
wiley +1 more source
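The 128‐D latent code mentioned in the snippet above comes from a VAE encoder. A minimal sketch of the latent-sampling step, assuming hypothetical encoder outputs (in the paper, `mu` and `log_var` would come from the CBAM+FPN convolutional encoder over a dark depth frame; here they are drawn at random just to make the step concrete):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the attention-enhanced encoder's outputs.
latent_dim = 128
mu = rng.normal(size=latent_dim)       # predicted latent mean
log_var = rng.normal(size=latent_dim)  # predicted latent log-variance

# Reparameterization trick: z = mu + sigma * eps keeps sampling
# differentiable, so the encoder can be trained end-to-end.
eps = rng.normal(size=latent_dim)
z = mu + np.exp(0.5 * log_var) * eps   # 128-D latent code for the policy
```

In the two-stage setup described, a code like `z` would then serve as the state input to the Deep Deterministic Policy Gradient agent.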
Content-aware robust semantic transmission of images over wireless channels with GANs
Semantic Communication (SemCom) can significantly reduce the transmitted data volume while maintaining robustness. Task-oriented SemCom of images aims to convey the implicit meaning of source messages correctly, rather than achieving precise bit-by-bit ...
Xuyang Chen +5 more
doaj +1 more source
Large‐scale Hopfield neural networks (HNNs) for associative computing are implemented using vertical NAND (VNAND) flash memory. The proposed VNAND HNN with the asynchronous update scenario achieves robust image restoration performance despite fabrication variations, while significantly reducing chip area (≈117× smaller than resistive random‐access ...
Jin Ho Chang +4 more
wiley +1 more source
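The asynchronous update scenario the snippet above refers to is the classical Hopfield dynamics: neurons are visited one at a time and each immediately takes the sign of its local field. A minimal software sketch of that restoration behavior (one stored pattern, Hebbian weights; in the hardware each weight would be held in a VNAND cell):

```python
import numpy as np

rng = np.random.default_rng(0)

# One stored bipolar pattern (+1/-1).
N = 64
pattern = rng.choice([-1, 1], size=N)

# Hebbian outer-product weights with zero diagonal.
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Corrupt 10 random bits to simulate a degraded image.
state = pattern.copy()
flip = rng.choice(N, size=10, replace=False)
state[flip] *= -1

# Asynchronous update: visit neurons in random order, each one
# switching to the sign of its local field immediately.
for _ in range(5):  # a few sweeps suffice for a single stored pattern
    for i in rng.permutation(N):
        h = W[i] @ state
        state[i] = 1 if h >= 0 else -1

restored = np.array_equal(state, pattern)  # True: pattern recovered
```

With a single stored pattern and a minority of flipped bits, every local field points back toward the stored pattern, so the asynchronous sweep converges to the original image-like state.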
This work discusses the intelligent design of network multimedia using BD and virtual AI technology. The authors first give a brief overview of the relevant research background, then comprehensively analyse the advantages and disadvantages of prior scholarship on network multimedia.
Xin Zhang
wiley +1 more source
Stochastic quantization associated with the $\exp(\Phi)_2$-quantum field model driven by space-time white noise on the torus [PDF]
Masato Hoshino +2 more
openalex +1 more source
Calibration‐Free Electromyography Motor Intent Decoding Using Large‐Scale Supervised Pretraining
Calibration‐free electromyography motor intent decoding is enabled through large‐scale supervised pretraining across heterogeneous datasets. A Spatially Aware Feature‐learning Transformer processes variable channel counts and electrode geometries, allowing transfer across users and recording setups. On a held‐out benchmark, fine‐tuned cross‐user models ...
Alexander E. Olsson +3 more
wiley +1 more source
Quantization‐aware training creates resource‐efficient structured state space sequential S4(D) models for ultra‐long sequence processing in edge AI hardware. Including quantization during training leads to efficiency gains compared to pure post‐training quantization.
Sebastian Siegel +5 more
wiley +1 more source
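The efficiency gain the snippet above attributes to quantization-aware training comes from simulating quantization in the forward pass while keeping full-precision weights for the gradient step (the straight-through estimator). A toy sketch on a linear model, assuming a uniform symmetric 4-bit quantizer (the S4(D) specifics are beyond a snippet):

```python
import numpy as np

def fake_quant(w, bits=4):
    """Quantize-dequantize to a uniform symmetric grid (forward pass only)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    if scale == 0.0:
        return w
    return np.round(w / scale) * scale

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = X @ w_true

w = np.zeros(8)   # full-precision "shadow" weights
lr = 0.05
for _ in range(300):
    wq = fake_quant(w)          # forward pass sees quantized weights
    err = X @ wq - y
    grad = X.T @ err / len(X)   # straight-through: gradient w.r.t. wq ...
    w -= lr * grad              # ... is applied to the full-precision w

# The deployed (quantized) model now fits despite the 4-bit grid.
mse = float(np.mean((X @ fake_quant(w) - y) ** 2))
```

Because training already sees the quantization error, the weights settle into grid points that work well at inference, which is the advantage over pure post-training quantization noted in the abstract.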
Deep Learning Models in Speech Recognition: Measuring GPU Energy Consumption, Impact of Noise and Model Quantization for Edge Deployment [PDF]
Aditya Chakravarty
openalex +1 more source