Results 21 to 30 of about 21,441,590

Novel VLSI Architectures and Micro-Cell Libraries for Subscalar Computations

open access: yesIEEE Access, 2022
Parallelism is the key to enhancing the throughput of computing structures. However, it is well established that the presence of data-flow dependencies adversely impacts the exploitation of such parallelism. This paper presents a case for a new computing ...
Kumar Sambhav Pandey, Hitesh Shrimali
doaj   +1 more source

Integrating parallelism and asynchrony for high-performance software development [PDF]

open access: yesE3S Web of Conferences, 2023
This article delves into the crucial roles of parallelism and asynchrony in the development of high-performance software programs. It provides an insightful exploration into how these methodologies enhance computing systems' efficiency and performance ...
Zaripova Rimma   +2 more
doaj   +1 more source
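
The combination the abstract points to can be illustrated with a small, generic sketch (not taken from the article): asyncio coordinates asynchronous I/O while a process pool supplies CPU parallelism. All names below are illustrative placeholders.

```python
# A minimal sketch (assumption, not the article's code) of combining
# asynchrony with parallelism: async I/O gathers inputs, a process pool
# runs CPU-bound work in parallel.
import asyncio
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n: int) -> int:
    # Stand-in for a CPU-bound kernel that benefits from true parallelism.
    return sum(i * i for i in range(n))

async def fetch_size(delay: float) -> int:
    # Stand-in for an asynchronous I/O operation (e.g., a network request).
    await asyncio.sleep(delay)
    return 100_000

async def main() -> None:
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        sizes = await asyncio.gather(*(fetch_size(0.1) for _ in range(4)))
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, cpu_heavy, n) for n in sizes)
        )
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
```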

Distributed Machine Learning Using Data Parallelism on Mobile Platform

open access: yesJ. Mobile Multimedia, 2020
Machine learning faces many challenges, and one of them is dealing with large datasets, whose size grows continuously year after year. One solution to this problem is data parallelism. This paper investigates the expansion of data parallelism to ...
M. Szabó
semanticscholar   +1 more source
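
As a rough illustration of the data-parallel idea (a generic sketch, not the paper's mobile implementation), each worker computes gradients on its own shard of the data and the per-worker gradients are averaged before the shared model is updated:

```python
# Minimal data-parallelism sketch (assumed illustration): shard the batch,
# compute per-shard gradients, average them, then apply one shared update.
import numpy as np

def grad_linear(w, X, y):
    # Gradient of mean squared error for a linear model y ~ X @ w.
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1024, 8)), rng.normal(size=1024)
w = np.zeros(8)

num_workers = 4
X_shards = np.array_split(X, num_workers)   # each worker gets one shard
y_shards = np.array_split(y, num_workers)

for _ in range(100):
    # In a real system the shards live on different devices or nodes; here
    # the "workers" run sequentially to keep the sketch self-contained.
    grads = [grad_linear(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
    w -= 0.01 * np.mean(grads, axis=0)       # all-reduce (average) + update
```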

SingleCaffe: An Efficient Framework for Deep Learning on a Single Node

open access: yesIEEE Access, 2018
Deep learning (DL) is currently the most promising approach for complex applications such as computer vision and natural language processing. It thrives on large neural networks and large datasets.
Chenxu Wang   +5 more
doaj   +1 more source

Towards accelerating model parallelism in distributed deep learning systems.

open access: yesPLoS ONE, 2023
Modern deep neural networks often cannot be trained on a single GPU due to large model size and large data size. Model parallelism splits a model across multiple GPUs, but making it scalable and seamless is challenging due to different information sharing ...
Hyeonseong Choi   +3 more
doaj   +1 more source
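
A minimal illustration of the basic idea, assuming a PyTorch setup with two GPUs (a generic sketch, not the system proposed in the paper):

```python
# Model parallelism sketch (assumed illustration): the model's layers are
# split across two GPUs, and activations move between devices in the forward pass.
import torch
import torch.nn as nn

class TwoGPUNet(nn.Module):
    def __init__(self):
        super().__init__()
        # First half of the model lives on GPU 0, second half on GPU 1.
        self.part1 = nn.Sequential(nn.Linear(1024, 512), nn.ReLU()).to("cuda:0")
        self.part2 = nn.Sequential(nn.Linear(512, 10)).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        # The activation tensor is copied between devices; this transfer is
        # part of the information-sharing cost the abstract alludes to.
        return self.part2(x.to("cuda:1"))

if torch.cuda.device_count() >= 2:          # only runs with two visible GPUs
    model = TwoGPUNet()
    out = model(torch.randn(32, 1024))
    print(out.shape)                         # torch.Size([32, 10])
```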

TAPP: DNN Training for Task Allocation through Pipeline Parallelism Based on Distributed Deep Reinforcement Learning

open access: yesApplied Sciences, 2021
The rapid development of artificial intelligence technology has made deep neural networks (DNNs) widely used in various fields. DNNs have been growing continuously in order to improve model accuracy and quality.
Yingchi Mao   +4 more
doaj   +1 more source
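
Pipeline parallelism itself can be summarized with a small scheduling sketch (generic, not TAPP's allocation policy): the network is cut into stages, the batch into micro-batches, and stage s starts micro-batch m as soon as stage s-1 has finished it.

```python
# Generic GPipe-style pipeline schedule (assumed illustration, not TAPP).
num_stages, num_microbatches = 3, 4

for step in range(num_stages + num_microbatches - 1):
    for stage in range(num_stages):
        mb = step - stage
        if 0 <= mb < num_microbatches:
            print(f"t={step}: stage {stage} processes micro-batch {mb}")

# With one device per stage the pipeline finishes in
# num_stages + num_microbatches - 1 time steps instead of the
# num_stages * num_microbatches stage-steps of fully sequential execution.
```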

Exploiting path parallelism in logic programming [PDF]

open access: yes, 1995
This paper presents a novel parallel implementation of Prolog. The system is based on Multipath, an execution model for Prolog that performs a partial breadth-first search of the SLD-tree.
González Colás, Antonio María   +1 more
core   +1 more source

Parallelization and Locality Optimization for Red-Black Gauss-Seidel Stencil [PDF]

open access: yesJisuanji kexue, 2022
Stencil is a common loop-nest computing pattern that is widely used in scientific and engineering simulation applications such as computational electromagnetics, weather simulation, geophysics, and ocean simulation. With the development of ...
JI Ying-rui, YUAN Liang, ZHANG Yun-quan
doaj   +1 more source
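
The red-black variant the title refers to can be sketched as follows (a generic NumPy illustration of the pattern, not the paper's optimized code): grid points are colored like a checkerboard so that all points of one color can be relaxed in parallel, since they only read neighbors of the other color.

```python
# Red-black Gauss-Seidel sweep for the 2-D Laplace stencil (assumed sketch).
import numpy as np

def red_black_gauss_seidel(u, iterations=100):
    # u is a 2-D grid with fixed boundary values; interior points are relaxed.
    i, j = np.meshgrid(np.arange(1, u.shape[0] - 1),
                       np.arange(1, u.shape[1] - 1), indexing="ij")
    for color in (0, 1) * iterations:
        mask = ((i + j) % 2) == color              # red points, then black
        ii, jj = i[mask], j[mask]
        # Points of one color have no data dependence on each other,
        # so this vectorized update is a valid parallel Gauss-Seidel step.
        u[ii, jj] = 0.25 * (u[ii - 1, jj] + u[ii + 1, jj] +
                            u[ii, jj - 1] + u[ii, jj + 1])
    return u

grid = np.zeros((64, 64))
grid[0, :] = 1.0                                   # heated top boundary
print(red_black_gauss_seidel(grid)[32, 32])
```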

Modular design of data-parallel graph algorithms [PDF]

open access: yes, 2013
Amorphous Data Parallelism has proven to be a suitable vehicle for implementing concurrent graph algorithms effectively on multi-core architectures.
Christianson, B.   +2 more
core   +1 more source
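
In the worklist style that amorphous data parallelism generalizes, active graph nodes form a frontier whose elements can be processed independently; a tiny level-by-level BFS sketch (generic, not the paper's framework) looks like this:

```python
# Frontier-based BFS sketch (assumed illustration): every node in the current
# frontier is an independent unit of work, and processing a node may activate
# new nodes, which form the next frontier.
def bfs_levels(adj, source):
    level = {source: 0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for node in frontier:                 # conceptually a parallel loop
            for nbr in adj.get(node, ()):
                if nbr not in level:          # would need an atomic check when parallel
                    level[nbr] = level[node] + 1
                    next_frontier.append(nbr)
        frontier = next_frontier
    return level

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_levels(graph, 0))                   # {0: 0, 1: 1, 2: 1, 3: 2}
```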

Relating data-parallelism and (and-) parallelism in logic programs [PDF]

open access: yes, 1996
Much work has been done in the areas of and-parallelism and data parallelism in logic programs. To a certain extent, such work has proceeded independently. Both types of parallelism offer advantages and disadvantages.
Ali   +62 more
core   +2 more sources
