Results 71 to 80 of about 178,091 (333)
A snapshot of parallelism in distributed deep learning training
The accelerated development of artificial-intelligence applications has driven the creation of increasingly complex neural network models with enormous numbers of parameters, currently reaching into the trillions.
Hairol Romero-Sandí+2 more
doaj +1 more source
A Kinematically Bifurcated Metamaterial for Integrated Logic Operation and Computing
A family of 2n‐side kinematic polygonal modules with n decoupled inputs and 2n extreme configurations via kinematic bifurcation is proposed. It allows integrating seven basic logic gates on a quadrilateral module. Moreover, a minimized Parallel Computing Sum of Products function is developed, enabling all 2‐bit arithmetic (including division ...
Kaili Xi+6 more
wiley +1 more source
Pipeline Parallelism With Elastic Averaging
To accelerate the training speed of massive DNN models on large-scale datasets, distributed training techniques, including data parallelism and model parallelism, have been extensively studied.
Bongwon Jang, In-Chul Yoo, Dongsuk Yook
doaj +1 more source
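The data parallelism mentioned in the snippet above can be illustrated with a minimal sketch (plain Python, all names hypothetical): each worker computes a gradient on its own shard of a batch, and the per-shard gradients are averaged before one shared parameter update, as in synchronous data-parallel training.

```python
# Minimal synchronous data-parallelism sketch (illustrative, not any paper's method).
# A shared scalar parameter w is trained on a quadratic loss; each "worker"
# handles one shard of the batch, and the gradients are averaged (the role
# an all-reduce plays in real distributed training).

def grad_on_shard(w, shard):
    # Gradient of mean((w*x - y)^2) over the shard's (x, y) pairs.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, n_workers, lr=0.01):
    shard_size = len(batch) // n_workers
    shards = [batch[i * shard_size:(i + 1) * shard_size]
              for i in range(n_workers)]
    grads = [grad_on_shard(w, s) for s in shards]  # run concurrently in practice
    avg_grad = sum(grads) / n_workers              # "all-reduce" averaging step
    return w - lr * avg_grad

# Toy data generated from y = 3x, so training should move w toward 3.
batch = [(x, 3.0 * x) for x in range(1, 9)]
w = 0.0
for _ in range(50):
    w = data_parallel_step(w, batch, n_workers=4)
print(round(w, 2))  # → 3.0
```

Model parallelism, by contrast, would split the parameters themselves across workers rather than the batch; the snippet above covers only the data-parallel case.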
In this paper, a novel floating gate transistor (BP/POx/WSe2) is developed, which enables rich synaptic functionality under optoelectronic conditions and can mimic human visual memory. By introducing a two‐path convolutional neural network that synergistically fuses optical and electronic inputs, it can achieve efficient feature extraction and weight ...
Yuxuan Zeng+13 more
wiley +1 more source
How to Test Triboelectric Nanogenerators: Key Factors for Standardized Performance Evaluation
This paper is a guide on how to test triboelectric nanogenerators (TENGs) so that results can be reliably compared across different laboratories. It explores the many factors (fabrication, mechanical, electrical, and environmental) that can affect TENG testing results and provides recommendations for best practice in testing and performance evaluation.
Daniel M. Mulvihill+10 more
wiley +1 more source
A Logical Model and Data Placement Strategies for MEMS Storage Devices
MEMS storage devices are emerging non-volatile secondary storage devices with outstanding advantages over magnetic disks. However, they differ substantially from magnetic disks in both structure and access characteristics.
Kim, Min-Soo+3 more
core +1 more source
Parallelization of a wave propagation application using a data parallel compiler [PDF]
The paper presents the parallelization process of a wave propagation application using the PANDORE environment. The PANDORE environment has been designed to facilitate the programming of data distributed applications for distributed memory computers or clusters of workstations.
André, Françoise+3 more
openaire +3 more sources
Designing Memristive Materials for Artificial Dynamic Intelligence
Key characteristics required of memristors for realizing next‐generation computing, along with modeling approaches employed to analyze their underlying mechanisms. These modeling techniques span from the atomic scale to the array scale and cover temporal scales ranging from picoseconds to microseconds. Hardware architectures inspired by neural networks
Youngmin Kim, Ho Won Jang
wiley +1 more source
Massively-Parallel Lossless Data Decompression
Today's exponentially increasing data volumes and the high cost of storage make compression essential for the Big Data industry. Although research has concentrated on efficient compression, fast decompression is critical for analytics queries that ...
Kaldewey, Tim+4 more
core +1 more source
This article presents the artificial synapse based on strontium titanate thin films via spin‐coating followed by forming gas annealing to introduce oxygen vacancies. Characterizations (X‐ray photoelectron spectroscopy, electron paramagnetic resonance, Ultraviolet photoelectron spectroscopy (UPS)) confirm increased oxygen vacancies and downward energy ...
Fandi Chen+16 more
wiley +1 more source