Results 1 to 10 of about 178,091

Fiber Clustering Acceleration With a Modified Kmeans++ Algorithm Using Data Parallelism [PDF]

open access: yes. Frontiers in Neuroinformatics, 2021
Fiber clustering methods are typically used in brain research to study the organization of white matter bundles from large diffusion MRI tractography datasets.
Isaac Goicovich   +8 more
doaj   +2 more sources
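As a rough illustration of the technique this entry names, here is a minimal sketch of k-means++ seeding (D² sampling) with the distance computation parallelized over data chunks. It is generic Python/NumPy, not the paper's modified algorithm, and the names `parallel_kmeanspp_seed` and `chunk_min_sq_dist` are invented for the example.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def chunk_min_sq_dist(args):
    chunk, centers = args
    # Squared distance from each point in the chunk to its nearest center.
    d = ((chunk[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1)

def parallel_kmeanspp_seed(X, k, n_workers=4, seed=0):
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]            # first center: uniform pick
    chunks = np.array_split(X, n_workers)
    with ProcessPoolExecutor(n_workers) as pool:
        for _ in range(k - 1):
            C = np.asarray(centers)
            parts = pool.map(chunk_min_sq_dist, [(c, C) for c in chunks])
            d2 = np.concatenate(list(parts))       # D^2 to nearest center
            # Next center sampled with probability proportional to D^2.
            centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.asarray(centers)

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(10_000, 3))
    print(parallel_kmeanspp_seed(X, k=5).shape)    # (5, 3)
```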

Algebraic data parallelism implementation in HPF

open access: diamond. Lietuvos Matematikos Rinkinys, 2002
There is no abstract.
Valdona Judickaitė   +2 more
doaj   +5 more sources

An efficient algorithm for data parallelism based on stochastic optimization

open access: gold. Alexandria Engineering Journal, 2022
Deep neural network models can achieve greater performance on numerous machine learning tasks by increasing the depth of the model and the number of training samples.
Khalid Abdulaziz Alnowibet   +3 more
doaj   +2 more sources
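A minimal sketch of the baseline idea behind such work, assuming plain synchronous data-parallel SGD (not the paper's algorithm): each simulated worker computes a stochastic gradient on its own data shard, and the shard gradients are averaged before a single shared model update.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8000, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=8000)

n_workers = 4
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

w = np.zeros(10)
for step in range(200):
    grads = []
    for Xs, ys in shards:                       # one gradient per "worker"
        i = rng.integers(len(Xs), size=64)      # local minibatch
        err = Xs[i] @ w - ys[i]
        grads.append(2 * Xs[i].T @ err / len(i))
    w -= 0.05 * np.mean(grads, axis=0)          # averaged synchronous update
```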

Skeletons for data parallelism in p3l

open access: bronze, 1997
This paper addresses the application of a skeleton/template compiling strategy to structured data parallel computations. In particular, we discuss how data parallelism is expressed and implemented in p3l, a structured parallel language based on skeletons.
Marco Danelutto   +2 more
openalex   +4 more sources
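The skeleton idea can be sketched in a few lines: the programmer supplies only sequential worker code, and the skeleton owns the decomposition and parallel execution strategy. The hypothetical `map_skeleton` below stands in for p3l's data-parallel constructs; it is not p3l itself.

```python
from concurrent.futures import ProcessPoolExecutor

def map_skeleton(worker, data, n_workers=4):
    # The skeleton hides all parallel detail behind one interface.
    with ProcessPoolExecutor(n_workers) as pool:
        return list(pool.map(worker, data))

def square(x):          # purely sequential code, no parallel logic inside
    return x * x

if __name__ == "__main__":
    print(map_skeleton(square, range(8)))   # [0, 1, 4, 9, 16, 25, 36, 49]
```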

Quantum data parallelism in quantum neural networks

open access: gold. Physical Review Research
Quantum neural networks hold promise for achieving lower generalization error bounds and enhanced computational efficiency in processing certain datasets.
Sixuan Wu, Yue Zhang, Jian Li
doaj   +2 more sources
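As a toy illustration only (not the paper's construction), quantum data parallelism can be pictured as applying one parameterized unitary to a whole batch of encoded input states in a single batched operation:

```python
import numpy as np

def ry(theta):                      # parameterized single-qubit rotation
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Three input states, processed by the same circuit in one batched matmul.
batch = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [2**-0.5, 2**-0.5]])
out = batch @ ry(0.3).T             # |psi'> = U|psi> for every row at once
print(np.abs(out) ** 2)             # per-input measurement probabilities
```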

Adaptive memory reservation strategy for heavy workloads in the Spark environment [PDF]

open access: yes. PeerJ Computer Science
The rise of the Internet of Things (IoT) and Industry 4.0 has spurred a growing need for large-scale data computing, and Spark has emerged as a promising Big Data platform thanks to its distributed in-memory computing capabilities.
Bohan Li   +6 more
doaj   +3 more sources
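The adaptive reservation policy is the paper's contribution; shown below, for orientation only, are the standard Spark memory knobs such a strategy would tune, in a minimal PySpark sketch (the values are placeholders):

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("memory-reservation-sketch")
         .config("spark.executor.memory", "6g")          # per-executor heap
         .config("spark.memory.fraction", "0.7")         # execution + storage share
         .config("spark.memory.storageFraction", "0.4")  # storage floor within it
         .getOrCreate())
```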

Accelerating Distributed SGD With Group Hybrid Parallelism

open access: yes. IEEE Access, 2021
The scale of model parameters and datasets is rapidly growing for high accuracy in various areas. To train a large-scale deep neural network (DNN) model, a huge amount of computation and memory is required; therefore, a parallelization technique for ...
Kyung-No Joo, Chan-Hyun Youn
doaj   +1 more source
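A generic illustration of the grouping idea (the paper's actual scheme may differ): workers average gradients synchronously inside their group every step, while groups reconcile their model replicas only periodically, cutting global communication.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_groups, per_group, sync_every = 10, 2, 2, 5
targets = rng.normal(size=(n_groups, per_group, dim))   # one data shard per worker
models = [np.zeros(dim) for _ in range(n_groups)]       # one model replica per group

for step in range(1, 101):
    for g in range(n_groups):
        # Synchronous data parallelism inside a group: average the workers'
        # gradients (here, gradients of 0.5 * ||w - target||^2) every step.
        grads = [models[g] - t for t in targets[g]]
        models[g] -= 0.1 * np.mean(grads, axis=0)
    if step % sync_every == 0:
        # Cheap, periodic reconciliation across groups.
        avg = np.mean(models, axis=0)
        models = [avg.copy() for _ in range(n_groups)]

print(np.allclose(models[0], targets.mean(axis=(0, 1)), atol=1e-3))  # True
```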

SHAT: A Novel Asynchronous Training Algorithm That Provides Fast Model Convergence in Distributed Deep Learning

open access: yes. Applied Sciences, 2021
The recent unprecedented success of deep learning (DL) in various fields is underpinned by its use of large-scale data and models. Training a large-scale deep neural network (DNN) model with large-scale data, however, is time-consuming.
Yunyong Ko, Sang-Wook Kim
doaj   +1 more source
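For orientation, a generic asynchronous-SGD sketch (not the SHAT algorithm itself): each worker thread pulls the shared model, computes a minibatch gradient on its own shard, and pushes an update without waiting on its peers, so stale reads are tolerated.

```python
import threading
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 10))
y = X @ rng.normal(size=10)

w = np.zeros(10)                       # shared model (parameter-server role)
lock = threading.Lock()

def worker(shard):
    global w
    Xs, ys = shard
    for _ in range(100):
        with lock:
            w_local = w.copy()         # possibly stale snapshot
        i = np.random.randint(len(Xs), size=32)
        g = 2 * Xs[i].T @ (Xs[i] @ w_local - ys[i]) / 32
        with lock:
            w -= 0.01 * g              # apply update asynchronously

shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
threads = [threading.Thread(target=worker, args=(s,)) for s in shards]
for t in threads: t.start()
for t in threads: t.join()
```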

Relating data-parallelism and (and-) parallelism in logic programs [PDF]

open access: yes. Computer Languages, 1995
Much work has been done in the areas of and-parallelism and data parallelism in logic programs. Such work has proceeded to a certain extent in an independent fashion. Both types of parallelism offer advantages and disadvantages. Traditional (and-) parallel models offer generality, being able to exploit parallelism in a large class of programs ...
Hermenegildo, Manuel V.   +1 more
openaire   +4 more sources
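A toy contrast of the two forms, in a hypothetical Python encoding: and-parallelism runs the distinct, independent goals of one clause body concurrently, while data parallelism runs the same goal over many data items.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x): return x * x     # stand-ins for independent subgoals
def double(x): return x + x

with ThreadPoolExecutor() as ex:
    # and-parallelism: different goals, same bindings, run concurrently
    a, b = ex.submit(square, 3), ex.submit(double, 3)
    # data parallelism: one goal mapped over a list of items
    squares = list(ex.map(square, range(5)))

print(a.result(), b.result(), squares)   # 9 6 [0, 1, 4, 9, 16]
```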

Achieving new SQL query performance levels through parallel execution in SQL Server [PDF]

open access: yes. E3S Web of Conferences, 2023
This article provides an in-depth look at implementing parallel SQL query processing using the Microsoft SQL Server database management system. It examines how parallelism can significantly accelerate query execution by leveraging multi-core processors ...
Nuriev Marat   +3 more
doaj   +1 more source
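A small sketch of steering this from client code, assuming a pyodbc connection (the DSN and table are placeholders): the T-SQL `OPTION (MAXDOP 4)` hint caps the degree of parallelism for one query's plan.

```python
import pyodbc

conn = pyodbc.connect("DSN=mydsn")      # hypothetical data source
rows = conn.cursor().execute(
    """
    SELECT category, SUM(amount) AS total
    FROM dbo.Sales                       -- hypothetical table
    GROUP BY category
    OPTION (MAXDOP 4);                   -- let up to 4 cores run this plan
    """
).fetchall()
```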
