Results 1 to 10 of about 21,831,698

Fiber Clustering Acceleration With a Modified Kmeans++ Algorithm Using Data Parallelism [PDF]

open access: yes; Frontiers in Neuroinformatics, 2021
Fiber clustering methods are typically used in brain research to study the organization of white matter bundles from large diffusion MRI tractography datasets.
Isaac Goicovich   +8 more
doaj   +3 more sources
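
The snippet does not show the authors' modified k-means++ itself; purely as an illustration of the data-parallel pattern the title refers to, here is a minimal Python sketch of the k-means assignment step split across worker processes. The function names and the chunking scheme are assumptions for illustration, not the paper's implementation.

import numpy as np
from multiprocessing import Pool

def assign_chunk(args):
    # Nearest-centroid assignment for one chunk of points.
    points, centroids = args
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def parallel_assign(points, centroids, n_workers=4):
    # Data parallelism: split the point set across workers and
    # stitch the per-chunk label arrays back together.
    chunks = np.array_split(points, n_workers)
    with Pool(n_workers) as pool:
        labels = pool.map(assign_chunk, [(c, centroids) for c in chunks])
    return np.concatenate(labels)

if __name__ == "__main__":
    pts = np.random.rand(10_000, 3)  # stand-in for per-fiber feature vectors
    ctrs = pts[np.random.choice(len(pts), 5, replace=False)]
    print(parallel_assign(pts, ctrs)[:10])

Because the assignment of each point is independent, this step scales with the number of workers; the centroid-update step then needs only the per-chunk label counts and sums.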

Relating data-parallelism and (and-) parallelism in logic programs [PDF]

open access: gold; Computer Languages, 1996
Much work has been done in the areas of and-parallelism and data parallelism in logic programs. Such work has proceeded, to a certain extent, in an independent fashion. Both types of parallelism offer advantages and disadvantages.
Ali   +62 more
core   +5 more sources

Algebraic data parallelism implementation in HPF

open access: diamond; Lietuvos Matematikos Rinkinys, 2002
There is no abstract.
Valdona Judickaitė   +2 more
doaj   +4 more sources

The Parallelism Motifs of Genomic Data Analysis [PDF]

open access: yes; Philosophical Transactions of the Royal Society A, 2020
Genomic data sets are growing dramatically as the cost of sequencing continues to decline and small sequencing devices become available. Enormous community databases store and share this data with the research community, but some of these genomic data ...
Muaaz Awan   +13 more
core   +4 more sources

Parallelism Strategies for Big Data Delayed Transfer Entropy Evaluation [PDF]

open access: gold; Algorithms, 2019
The volume of generated and collected data has been rising with the popularization of technologies such as the Internet of Things, social media, and smartphones, giving rise to the term "big data". One class of hidden information in big data is causality.
Jonas R. Dourado   +2 more
doaj   +2 more sources
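
The snippet names delayed transfer entropy without defining it. A common formulation, following Schreiber's transfer entropy with a source delay d (this notation is an assumption, not necessarily the paper's), is:

TE_{X \to Y}(d) = \sum p\!\left(y_{t+1},\, y_t^{(k)},\, x_{t-d}^{(l)}\right) \log_2 \frac{p\!\left(y_{t+1} \mid y_t^{(k)},\, x_{t-d}^{(l)}\right)}{p\!\left(y_{t+1} \mid y_t^{(k)}\right)}

where y_t^{(k)} and x_{t-d}^{(l)} are length-k and length-l histories of the target and the delayed source, and the sum runs over all observed state tuples. Evaluating this over many candidate delays and long series is what makes parallelization strategies attractive for big data.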

A note on data-parallelism and (and-parallel) prolog

open access: green, 1994
Abstract is not available.
Manuel V. Hermenegildo   +1 more
openaire   +3 more sources

Accelerating Distributed SGD With Group Hybrid Parallelism

open access: yes; IEEE Access, 2021
The scale of model parameters and datasets is rapidly growing to achieve high accuracy in various areas. To train a large-scale deep neural network (DNN) model, a huge amount of computation and memory is required; therefore, a parallelization technique for ...
Kyung-No Joo, Chan-Hyun Youn
doaj   +1 more source
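
The group hybrid scheme itself is not described in the snippet; as background for the data-parallel half of such schemes, here is a minimal NumPy sketch of one synchronous data-parallel SGD step on a least-squares model. The model and all names are illustrative assumptions, not the paper's method.

import numpy as np

def worker_grad(w, X, y):
    # Each worker computes the gradient of a least-squares loss
    # on its own shard of the mini-batch.
    err = X @ w - y
    return X.T @ err / len(y)

def data_parallel_step(w, X, y, lr=0.1, n_workers=4):
    # Shard the batch, compute per-worker gradients, then average
    # them (the all-reduce step in a real distributed setting).
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [worker_grad(w, Xs, ys) for Xs, ys in shards]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))
y = X @ rng.normal(size=8)
w = np.zeros(8)
for _ in range(100):
    w = data_parallel_step(w, X, y)

With equal-sized shards, the average of per-shard gradients equals the full-batch gradient, so the replicas stay in lockstep; hybrid schemes additionally split the model itself across devices.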

SHAT: A Novel Asynchronous Training Algorithm That Provides Fast Model Convergence in Distributed Deep Learning

open access: yes; Applied Sciences, 2021
The recent unprecedented success of deep learning (DL) in various fields is underpinned by its use of large-scale data and models. Training a large-scale deep neural network (DNN) model with large-scale data, however, is time-consuming.
Yunyong Ko, Sang-Wook Kim
doaj   +1 more source

Communication Optimization Schemes for Accelerating Distributed Deep Learning Systems

open access: yes; Applied Sciences, 2020
In a distributed deep learning system, a parameter server and workers must communicate to exchange gradients and parameters, and the communication cost increases as the number of workers increases.
Jaehwan Lee   +4 more
doaj   +1 more source
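
The snippet describes the classic parameter-server pattern: workers push gradients, the server aggregates them and returns updated parameters, and traffic scales with the number of workers. A minimal single-process sketch of one synchronization round follows; all class and function names are illustrative assumptions.

import numpy as np

class ParameterServer:
    def __init__(self, dim, lr=0.05):
        self.params = np.zeros(dim)
        self.lr = lr

    def update(self, grads):
        # Aggregate worker gradients and apply one SGD step. Each
        # element of `grads` is one worker-to-server message, so
        # communication grows linearly with the worker count.
        self.params -= self.lr * np.mean(grads, axis=0)
        return self.params  # broadcast back to all workers

def worker_gradient(params, rng):
    # Stand-in for a real backward pass on a worker's local data.
    return params - rng.normal(size=params.shape)

rng = np.random.default_rng(1)
server = ParameterServer(dim=4)
for step in range(3):
    grads = [worker_gradient(server.params, rng) for _ in range(8)]
    new_params = server.update(grads)

The optimization schemes such papers study (compression, scheduling, overlap of computation and communication) all target the two message flows visible here: gradients in, parameters out.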

A Relay-Assisted Communication Scheme for Collaborative On-Device CNN Execution Considering Hybrid Parallelism

open access: yes; IEEE Access, 2023
Deep learning (DL) has gained increasing prominence in latency-critical artificial intelligence (AI) applications. Due to the intensive computational requirements of these applications, cloud-centric approaches have been attempted to address this issue ...
Emre Kilcioglu   +2 more
doaj   +1 more source
