Results 1 to 10 of about 158,911

Algebraic data parallelism implementation in HPF

open access: diamond, Lietuvos Matematikos Rinkinys, 2002
No abstract is available.
Valdona Judickaitė   +2 more
doaj   +6 more sources

An efficient algorithm for data parallelism based on stochastic optimization

open access: gold, Alexandria Engineering Journal, 2022
Deep neural network models can achieve greater performance on numerous machine learning tasks by increasing model depth and the number of training samples.
Khalid A. Alnowibet   +3 more
openalex   +3 more sources
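For orientation, the entry above concerns data-parallel DNN training; below is a minimal, hedged sketch of plain synchronous data-parallel SGD with gradient averaging on a toy linear model. It is not the paper's stochastic-optimization algorithm, and all names in it (loss_grad, data_parallel_sgd_step, the shard count) are illustrative.

```python
# Toy synchronous data-parallel SGD: each simulated worker computes a
# gradient on its own data shard; gradients are averaged and applied once.
import numpy as np

def loss_grad(w, X, y):
    # Gradient of mean squared error for a linear model y ~ X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def data_parallel_sgd_step(w, shards, lr=0.01):
    grads = [loss_grad(w, X, y) for X, y in shards]  # one per "worker"
    return w - lr * np.mean(grads, axis=0)           # averaged update

rng = np.random.default_rng(0)
X, y = rng.normal(size=(128, 4)), rng.normal(size=128)
shards = [(X[i::4], y[i::4]) for i in range(4)]      # 4 simulated workers
w = np.zeros(4)
for _ in range(100):
    w = data_parallel_sgd_step(w, shards)
```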

Skeletons for data parallelism in p3l

open access: bronze, 1997
This paper addresses the application of a skeleton/template compiling strategy to structured data parallel computations. In particular, we discuss how data parallelism is expressed and implemented in p3l, a structured parallel language based on skeletons.
Danelutto, Marco   +2 more
openaire   +4 more sources
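To make the skeleton idea above concrete, here is a rough sketch of a "map" data-parallel skeleton: the programmer supplies only the per-element function, and the skeleton supplies the parallel structure. This is plain Python with multiprocessing, not P3L syntax; map_skeleton and square are invented names.

```python
from multiprocessing import Pool

def map_skeleton(f, data, workers=4):
    # The skeleton hides partitioning, scheduling, and result collection.
    with Pool(workers) as pool:
        return pool.map(f, data)

def square(x):
    return x * x

if __name__ == "__main__":
    print(map_skeleton(square, range(10)))
```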

Model Parallelism With Subnetwork Data Parallelism [PDF]

open access: green
Distributed pre-training of large models at scale often imposes heavy memory demands on individual nodes and incurs significant intra-node communication costs. We propose a novel alternative approach that reduces the memory requirements by training small, structured subnetworks of the model on separate workers.
Singh, Vaibhav   +3 more
openaire   +3 more sources
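As a hedged illustration of the general idea described above (not the authors' actual method), the sketch below treats a disjoint slice of a hidden layer's units as one "structured subnetwork": each worker updates only its own slice, and the slices are stitched back together. The slicing scheme and worker_update are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))               # full hidden-layer weights
slices = np.array_split(np.arange(16), 4)  # 4 "structured subnetworks"

def worker_update(W_slice, lr=0.1):
    # Stand-in for a local training step on the subnetwork; a real worker
    # would compute gradients from its own mini-batches.
    fake_grad = np.ones_like(W_slice)
    return W_slice - lr * fake_grad

for cols in slices:                        # in practice these run in parallel
    W[:, cols] = worker_update(W[:, cols])
```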

Quantum data parallelism in quantum neural networks

open access: gold, Physical Review Research
Quantum neural networks hold promise for achieving lower generalization error bounds and enhanced computational efficiency in processing certain datasets.
Sixuan Wu, Yue Zhang, Jian Li
openalex   +2 more sources

Accelerating Distributed SGD With Group Hybrid Parallelism

open access: yes, IEEE Access, 2021
The scale of model parameters and datasets is growing rapidly in the pursuit of high accuracy in many areas. Training a large-scale deep neural network (DNN) model requires a huge amount of computation and memory; therefore, a parallelization technique for ...
Kyung-No Joo, Chan-Hyun Youn
doaj   +1 more source
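The grouping idea above can be pictured with a two-level gradient reduction: gradients are first averaged within each group of workers, then across groups. The sketch below shows only this general flavor of hierarchical averaging, not the paper's exact Group Hybrid Parallelism scheme; group sizes and shapes are illustrative.

```python
import numpy as np

def hierarchical_average(grads_by_group):
    # Stage 1: intra-group reduction (e.g., over fast local links).
    group_means = [np.mean(g, axis=0) for g in grads_by_group]
    # Stage 2: inter-group reduction (e.g., over slower links).
    # Equal group sizes keep this equal to the global mean.
    return np.mean(group_means, axis=0)

rng = np.random.default_rng(0)
grads_by_group = [rng.normal(size=(4, 8)) for _ in range(2)]  # 2 groups x 4 workers
global_grad = hierarchical_average(grads_by_group)
```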

SHAT: A Novel Asynchronous Training Algorithm That Provides Fast Model Convergence in Distributed Deep Learning

open access: yes, Applied Sciences, 2021
The recent unprecedented success of deep learning (DL) in various fields is underpinned by its use of large-scale data and models. Training a large-scale deep neural network (DNN) model with large-scale data, however, is time-consuming.
Yunyong Ko, Sang-Wook Kim
doaj   +1 more source
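For context on the asynchronous training the entry above mentions, here is a generic asynchronous-SGD sketch in which workers apply updates to shared parameters without a global barrier. It illustrates asynchronous training in general, not the SHAT algorithm itself; Python threads under the GIL give only nominal parallelism, so this shows the update pattern rather than real speedup.

```python
import threading
import numpy as np

w = np.zeros(4)
lock = threading.Lock()

def worker(X, y, steps=50, lr=0.01):
    global w
    for _ in range(steps):
        with lock:
            w_local = w.copy()           # read a possibly stale snapshot
        grad = 2.0 * X.T @ (X @ w_local - y) / len(y)
        with lock:
            w -= lr * grad               # apply update without waiting for peers

rng = np.random.default_rng(0)
threads = []
for _ in range(4):
    X, y = rng.normal(size=(64, 4)), rng.normal(size=64)
    threads.append(threading.Thread(target=worker, args=(X, y)))
for t in threads:
    t.start()
for t in threads:
    t.join()
```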

Relating data-parallelism and (and-) parallelism in logic programs [PDF]

open access: yes, Computer Languages, 1995
Much work has been done in the areas of and-parallelism and data parallelism in logic programs, but such work has proceeded largely independently. Both types of parallelism offer advantages and disadvantages. Traditional (and-)parallel models offer generality, being able to exploit parallelism in a large class of programs ...
Hermenegildo, Manuel V.   +1 more
openaire   +4 more sources

Fiber Clustering Acceleration With a Modified Kmeans++ Algorithm Using Data Parallelism

open access: yes, Frontiers in Neuroinformatics, 2021
Fiber clustering methods are typically used in brain research to study the organization of white matter bundles from large diffusion MRI tractography datasets.
Isaac Goicovich   +8 more
doaj   +1 more source
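The data-parallel core of k-means-style clustering, as in the entry above, is the point-to-centroid assignment step, which is embarrassingly parallel over chunks of points. Below is an illustrative sketch of only that step; the paper's modified kmeans++ seeding and its acceleration details are not reproduced here.

```python
from multiprocessing import Pool
import numpy as np

def assign_chunk(args):
    points, centroids = args
    # Distance from every point in the chunk to every centroid,
    # then the index of the nearest centroid per point.
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    points = rng.normal(size=(10_000, 3))
    centroids = rng.normal(size=(8, 3))
    chunks = [(c, centroids) for c in np.array_split(points, 4)]
    with Pool(4) as pool:
        labels = np.concatenate(pool.map(assign_chunk, chunks))
```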

Achieving new SQL query performance levels through parallel execution in SQL Server [PDF]

open access: yes, E3S Web of Conferences, 2023
This article provides an in-depth look at implementing parallel SQL query processing using the Microsoft SQL Server database management system. It examines how parallelism can significantly accelerate query execution by leveraging multi-core processors ...
Nuriev Marat   +3 more
doaj   +1 more source
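One concrete way to steer SQL Server's parallel query execution, in the spirit of the article above, is the OPTION (MAXDOP n) query hint, which caps the degree of parallelism for a single statement. The sketch below issues such a query from Python via pyodbc; the connection string, database, table, and column names are placeholders.

```python
import pyodbc

# Placeholder connection details; adjust driver, server, and database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=SalesDb;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute(
    """
    SELECT region, SUM(amount) AS total
    FROM dbo.Orders
    GROUP BY region
    OPTION (MAXDOP 4);   -- allow the optimizer to use up to 4 cores
    """
)
for row in cursor.fetchall():
    print(row.region, row.total)
```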
