Results 1 to 10 of about 3,075,967

torcpy: Supporting task parallelism in Python

open access: yes · SoftwareX, 2020
Task-based parallelism has been established as one of the main forms of code parallelization, where asynchronous tasks are launched and distributed across the processing units of a local machine, a cluster or a supercomputer.
P.E. Hadjidoukas   +5 more
doaj   +6 more sources
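The abstract above describes the core pattern of task-based parallelism: asynchronous tasks are submitted and distributed across processing units, then their results are gathered. A minimal sketch of that pattern using Python's standard `concurrent.futures` module (not torcpy's actual API, which the snippet does not show):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # A stand-in for an independent unit of computation.
    return x * x

def run_tasks(inputs):
    # Launch asynchronous tasks across the available workers and
    # collect their results once all tasks complete.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(work, x) for x in inputs]
        return [f.result() for f in futures]

print(run_tasks(range(5)))  # [0, 1, 4, 9, 16]
```

On a cluster or supercomputer, the same submit/gather structure is typically backed by MPI processes rather than local threads.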

Task Parallelism-Aware Deep Neural Network Scheduling on Multiple Hybrid Memory Cube-Based Processing-in-Memory

open access: yes · IEEE Access, 2021
Processing-in-memory (PIM) places computational logic in the memory domain. It is among the most promising solutions to the memory-bandwidth problem in deep neural network (DNN) processing.
Young Sik Lee, Tae Hee Han
doaj   +2 more sources
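Scheduling DNN work across multiple PIM units is, at its simplest, a load-balancing problem. A toy greedy sketch under assumed per-layer costs (real PIM schedulers, including the one this paper proposes, also model inter-cube data movement, which is ignored here):

```python
def schedule_layers(layer_costs, num_cubes):
    """Greedily assign each layer to the currently least-loaded cube.

    layer_costs: dict mapping layer name -> estimated compute cost.
    Placing heavier layers first improves the greedy balance.
    """
    loads = [0.0] * num_cubes
    placement = {}
    for layer, cost in sorted(layer_costs.items(), key=lambda kv: -kv[1]):
        cube = loads.index(min(loads))  # least-loaded cube wins
        placement[layer] = cube
        loads[cube] += cost
    return placement, loads

placement, loads = schedule_layers(
    {"conv1": 4, "conv2": 3, "fc": 2, "pool": 1}, num_cubes=2)
print(loads)  # [5.0, 5.0]
```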

Automatic Inference of Task Parallelism in Task-Graph-Based Actor Models

open access: yes · IEEE Access, 2018
Automatic inference of task level parallelism is fundamental for ensuring many kinds of safety and liveness properties of parallel applications. For example, two tasks running in parallel may be involved in data races when they have conflicting memory ...
Abu Naser Masud   +2 more
doaj   +2 more sources
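The "conflicting memory accesses" criterion mentioned in the snippet is classically captured by Bernstein's conditions: two tasks may race if one writes a location the other reads or writes. A minimal sketch, assuming each task's read and write sets are already known (inferring those sets is the hard part the paper addresses):

```python
def may_race(task_a, task_b):
    """Return True if two parallel tasks have conflicting accesses.

    Each task is a dict with 'reads' and 'writes' sets of locations.
    Conflict: writes of one intersect the reads-or-writes of the other.
    """
    wa, ra = task_a["writes"], task_a["reads"]
    wb, rb = task_b["writes"], task_b["reads"]
    return bool(wa & (rb | wb)) or bool(wb & ra)

t1 = {"reads": {"x"}, "writes": {"y"}}
t2 = {"reads": {"y"}, "writes": set()}
print(may_race(t1, t2))  # True: t1 writes y, t2 reads y
```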

Adaptive memory reservation strategy for heavy workloads in the Spark environment [PDF]

open access: yes · PeerJ Computer Science
The rise of the Internet of Things (IoT) and Industry 4.0 has spurred a growing need for large-scale data computing, and Spark has emerged as a promising Big Data platform owing to its distributed in-memory computing capabilities.
Bohan Li   +6 more
doaj   +3 more sources

Generalized Task Parallelism [PDF]

open access: yes · ACM Transactions on Architecture and Code Optimization, 2015
Existing approaches to automatic parallelization produce good results in specific domains. Yet, it is unclear how to integrate their individual strengths to match the demands and opportunities of complex software. This lack of integration has both practical reasons, as integrating those largely differing approaches into one compiler would impose an ...
Kevin Streit   +4 more
semanticscholar   +3 more sources

Extracting task-level parallelism [PDF]

open access: bronze · ACM Transactions on Programming Languages and Systems, 1995
Automatic detection of task-level parallelism (also referred to as functional, DAG, unstructured, or thread parallelism) at various levels of program granularity is becoming increasingly important for parallelizing and back-end compilers.
Milind Girkar   +1 more
openalex   +3 more sources
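Extracting task-level parallelism from a program amounts to analyzing its task dependence DAG: tasks with no path between them may run concurrently. A minimal sketch that groups a DAG into levels of mutually independent tasks (a simplification of what a parallelizing compiler computes, using an assumed `deps` map from task to its prerequisites):

```python
from collections import defaultdict

def parallel_levels(deps):
    """Group tasks of a dependence DAG into parallel-executable levels.

    deps maps each task to the set of tasks it depends on. All tasks
    within one returned level are mutually independent.
    """
    indeg = {t: len(d) for t, d in deps.items()}
    children = defaultdict(list)
    for t, d in deps.items():
        for parent in d:
            children[parent].append(t)
    level = [t for t, n in indeg.items() if n == 0]
    levels = []
    while level:
        levels.append(sorted(level))
        nxt = []
        for t in level:
            for c in children[t]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    nxt.append(c)
        level = nxt
    return levels

deps = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}}
print(parallel_levels(deps))  # [['a', 'b'], ['c'], ['d']]
```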

Task parallel assembly language for uncompromising parallelism

open access: yes · Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation, 2021
Achieving parallel performance and scalability involves making compromises between parallel and sequential computation. If not contained, the overheads of parallelism can easily outweigh its benefits, sometimes by orders of magnitude.
Mike Rainey   +4 more
semanticscholar   +3 more sources
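The trade-off the abstract describes is commonly contained with a granularity cutoff: spawn parallel tasks only while subproblems are large enough to amortize the spawn overhead, otherwise fall back to plain sequential code. A hedged sketch of that pattern (not this paper's technique, which works at the assembly level) on the usual recursive Fibonacci example:

```python
from concurrent.futures import ThreadPoolExecutor

def fib_seq(n):
    # Plain sequential recursion, free of scheduling overhead.
    return n if n < 2 else fib_seq(n - 1) + fib_seq(n - 2)

def fib_par(n, cutoff=10):
    """Spawn a parallel task only above `cutoff`.

    Below the cutoff, task-creation overhead would dwarf the useful
    work, so the computation stays sequential. (Creating a pool per
    call is itself overhead; a real runtime reuses its workers.)
    """
    if n < cutoff:
        return fib_seq(n)
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(fib_par, n - 1, cutoff)
        right = fib_par(n - 2, cutoff)  # run one branch inline
        return left.result() + right

print(fib_par(12))  # 144
```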

Multi-Satellite Task Parallelism via Priority-Aware Decomposition and Dynamic Resource Mapping

open access: gold · Mathematics
Multi-satellite collaborative computing achieves task decomposition and collaborative execution through inter-satellite links (ISLs), significantly improving task-execution efficiency and system responsiveness.
Shangpeng Wang   +4 more
doaj   +2 more sources

Elastic Tasks: Unifying Task Parallelism and SPMD Parallelism with an Adaptive Runtime

open access: yes · European Conference on Parallel Processing, 2015
In this paper, we introduce elastic tasks, a new high-level parallel programming primitive that can be used to unify task parallelism and SPMD parallelism in a common adaptive scheduling framework. Elastic tasks are internally parallel tasks and can run on a single worker or expand to take over multiple workers.
A. Sbîrlea, Kunal Agrawal, Vivek Sarkar
semanticscholar   +3 more sources
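The defining property of an elastic task is that the same code runs correctly whether the scheduler grants it one worker or many. A minimal sketch of that interface (the worker-grant parameter is an assumption for illustration; the paper's runtime decides it adaptively):

```python
from concurrent.futures import ThreadPoolExecutor

def elastic_map(fn, data, granted_workers):
    """Apply `fn` over `data` with however many workers were granted.

    With one worker the task degenerates to sequential execution;
    with several it partitions the work SPMD-style across them.
    """
    if granted_workers <= 1:
        return [fn(x) for x in data]
    with ThreadPoolExecutor(max_workers=granted_workers) as pool:
        return list(pool.map(fn, data))

# Identical results regardless of the grant:
print(elastic_map(lambda x: x + 1, [1, 2, 3], 1))  # [2, 3, 4]
print(elastic_map(lambda x: x + 1, [1, 2, 3], 4))  # [2, 3, 4]
```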

Adaptive Parallelism for OpenMP Task Parallel Programs [PDF]

open access: green, 2000
We present a system that allows task parallel OpenMP programs to execute on a network of workstations (NOW) with a variable number of nodes. Such adaptivity, generally called adaptive parallelism, is important in a multi-user NOW environment, enabling the system to expand the computation onto idle nodes or withdraw from otherwise occupied nodes.
A. Scherer   +2 more
openalex   +3 more sources
