Results 71 to 80 of about 158,911
Strategies and Principles of Distributed Machine Learning on Big Data
The rise of big data has led to new demands for machine learning (ML) systems to learn complex models, with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as ...
Eric P. Xing +3 more
doaj +1 more source
Designing Memristive Materials for Artificial Dynamic Intelligence
Key characteristics required of memristors for realizing next‐generation computing, along with modeling approaches employed to analyze their underlying mechanisms. These modeling techniques span from the atomic scale to the array scale and cover temporal scales ranging from picoseconds to microseconds. Hardware architectures inspired by neural networks
Youngmin Kim, Ho Won Jang
wiley +1 more source
BISMO: A Scalable Bit-Serial Matrix Multiplication Overlay for Reconfigurable Computing
Matrix-matrix multiplication is a key computational kernel for numerous applications in science and engineering, with ample parallelism and data locality that lends itself well to high-performance implementations.
Rasnayake, Lahiru +2 more
core +1 more source
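For readers unfamiliar with the technique in the title: bit-serial arithmetic decomposes integer matrices into binary bit planes and sums weighted binary matrix products. The NumPy sketch below illustrates that decomposition only; it is not BISMO's overlay or API, and the bit widths are illustrative assumptions.

```python
import numpy as np

def bit_serial_matmul(A, B, a_bits=4, b_bits=4):
    """Multiply unsigned-integer matrices by accumulating weighted
    products of their binary bit planes, the decomposition that
    bit-serial hardware (e.g., BISMO-style overlays) exploits."""
    acc = np.zeros((A.shape[0], B.shape[1]), dtype=np.int64)
    for i in range(a_bits):
        A_i = (A >> i) & 1          # i-th bit plane of A (binary matrix)
        for j in range(b_bits):
            B_j = (B >> j) & 1      # j-th bit plane of B
            # binary matmul; hardware would use AND gates + popcount
            acc += (A_i @ B_j) << (i + j)
    return acc

# sanity check: the decomposition is exact for unsigned integers
rng = np.random.default_rng(0)
A = rng.integers(0, 16, size=(8, 8))
B = rng.integers(0, 16, size=(8, 8))
assert np.array_equal(bit_serial_matmul(A, B), A @ B)
```

Because A = Σ 2^i A_i and B = Σ 2^j B_j, the product A·B equals Σ_{i,j} 2^(i+j) (A_i · B_j), so precision can be traded for throughput one bit plane at a time.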
A snapshot of parallelism in distributed deep learning training
The accelerated development of artificial-intelligence applications has driven the creation of increasingly complex neural network models, with parameter counts now reaching into the trillions.
Hairol Romero-Sandí +2 more
doaj +1 more source
This article presents an artificial synapse based on strontium titanate thin films deposited by spin‐coating, followed by forming-gas annealing to introduce oxygen vacancies. Characterizations (X‐ray photoelectron spectroscopy, electron paramagnetic resonance, ultraviolet photoelectron spectroscopy (UPS)) confirm increased oxygen vacancies and downward energy ...
Fandi Chen +16 more
wiley +1 more source
Benefits of Open Quantum Systems for Quantum Machine Learning
Quantum machine learning (QML), poised to transform data processing, faces challenges from environmental noise and dissipation. While traditional efforts seek to combat these hindrances, this perspective proposes harnessing them for potential advantages. Surprisingly, under certain conditions, noise and dissipation can benefit QML.
María Laura Olivera‐Atencio +2 more
wiley +1 more source
Pipeline Parallelism With Elastic Averaging
To accelerate the training of massive DNN models on large-scale datasets, distributed training techniques, including data parallelism and model parallelism, have been studied extensively.
Bongwon Jang, In-Chul Yoo, Dongsuk Yook
doaj +1 more source
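The title pairs pipeline parallelism with elastic averaging. As background, here is a minimal sketch of the elastic-averaging update popularized by EASGD (Zhang et al., 2015), not the authors' pipeline scheme; the learning rate `lr` and elastic coefficient `rho` are illustrative values.

```python
import numpy as np

def easgd_step(workers, center, grads, lr=0.01, rho=0.1):
    """One synchronous elastic-averaging step: each worker's
    parameters are pulled toward a shared center variable by an
    elastic force, while the center drifts toward the workers."""
    elastic = [w - center for w in workers]
    new_workers = [w - lr * (g + rho * e)
                   for w, g, e in zip(workers, grads, elastic)]
    new_center = center + lr * rho * sum(elastic)
    return new_workers, new_center
```

The elastic term lets workers explore away from the center without diverging, which is what makes the scheme tolerant of the staleness that pipelined execution introduces.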
Capacitive, charge‐domain compute‐in‐memory (CIM) stores weights as capacitance, eliminating DC sneak paths and IR‐drop and yielding near‐zero standby power. In this perspective, we present a device‐to‐system‐level performance analysis of the most promising architectures and predict a pathway for upscaling capacitive CIM for sustainable edge computing ...
Kapil Bhardwaj +2 more
wiley +1 more source
A Logical Model and Data Placement Strategies for MEMS Storage Devices
MEMS storage devices are new non-volatile secondary storage devices with outstanding advantages over magnetic disks. They differ substantially from magnetic disks, however, in both structure and access characteristics.
Kim, Min-Soo +3 more
core +1 more source
Parallelization of a wave propagation application using a data parallel compiler [PDF]
The paper presents the parallelization of a wave propagation application using the PANDORE environment, which was designed to facilitate the programming of data-distributed applications for distributed-memory computers or clusters of workstations.
André, Françoise +3 more
openaire +3 more sources

