Results 1 to 10 of about 269,484

Worksharing Tasks: An Efficient Way to Exploit Irregular and Fine-Grained Loop Parallelism [PDF]

open access: green · 2019 IEEE 26th International Conference on High Performance Computing, Data, and Analytics (HiPC), Hyderabad, India, 2019, pp. 383-394, 2020
Shared memory programming models usually provide worksharing and task constructs. The former relies on the efficient fork-join execution model to exploit structured parallelism, while the latter relies on fine-grained synchronization among tasks and a flexible data-flow execution model to exploit dynamic, irregular, and nested parallelism.
Marcos Maroñas   +4 more
arxiv   +3 more sources
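The worksharing-versus-tasking distinction this abstract draws can be sketched with Python's standard `concurrent.futures` module (a hypothetical illustration, not the paper's code, which targets OpenMP-style constructs):

```python
# Hypothetical sketch: structured worksharing vs. dynamic tasking,
# using Python's standard concurrent.futures in place of OpenMP directives.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

data = list(range(8))

# Worksharing style: the runtime divides the loop's iteration space
# among a fixed pool of workers (a fork-join over a structured loop).
with ThreadPoolExecutor(max_workers=4) as pool:
    worksharing_result = list(pool.map(square, data))

# Task style: each unit of work is an independently scheduled task,
# which suits irregular or dynamically discovered parallelism.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(square, x) for x in data]
    task_result = [f.result() for f in futures]

print(worksharing_result == task_result)  # True: both compute the squares
```

Both variants compute the same result here; the difference the paper targets is scheduling flexibility when iterations are irregular or nested.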

Adaptive memory reservation strategy for heavy workloads in the Spark environment [PDF]

open access: yes · PeerJ Computer Science
The rise of the Internet of Things (IoT) and Industry 4.0 has spurred a growing need for large-scale data computing, and Spark has emerged as a promising Big Data platform owing to its distributed in-memory computing capabilities.
Bohan Li   +6 more
doaj   +3 more sources

torcpy: Supporting task parallelism in Python

open access: yes · SoftwareX, 2020
Task-based parallelism has been established as one of the main forms of code parallelization, where asynchronous tasks are launched and distributed across the processing units of a local machine, a cluster or a supercomputer.
P.E. Hadjidoukas   +5 more
doaj   +5 more sources
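The general pattern this abstract describes, launching asynchronous tasks and gathering their results as they complete, can be sketched with Python's standard library (torcpy's own API is not shown here; this is an assumed stand-in):

```python
# Hypothetical sketch of the task-parallel pattern described above:
# asynchronous tasks are launched and results are collected as they finish.
# Uses Python's standard concurrent.futures, not torcpy's API.
from concurrent.futures import ThreadPoolExecutor, as_completed

def work(n):
    # Stand-in for a real computation distributed across processing units.
    return sum(i * i for i in range(n))

inputs = [10, 100, 1000]
results = {}
with ThreadPoolExecutor() as pool:
    # Launch all tasks asynchronously ...
    futures = {pool.submit(work, n): n for n in inputs}
    # ... then collect each result as soon as its task completes.
    for fut in as_completed(futures):
        results[futures[fut]] = fut.result()

print(sorted(results))  # [10, 100, 1000]
```

On a cluster, a task-parallel runtime like the one the paper describes would schedule these tasks across nodes rather than local threads, but the submit-and-collect structure is the same.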

CppSs -- a C++ Library for Efficient Task Parallelism [PDF]

open access: green · arXiv, 2015
We present the C++ library CppSs (C++ super-scalar), which provides efficient task-parallelism without the need for special compilers or other software. Any C++ compiler that supports C++11 is sufficient. CppSs features different directionality clauses for defining data dependencies.
Steffen Brinkmann, José Gracia
arxiv   +3 more sources

Extracting task-level parallelism [PDF]

open access: bronze · ACM Transactions on Programming Languages and Systems, 1995
Automatic detection of task-level parallelism (also referred to as functional, DAG, unstructured, or thread parallelism) at various levels of program granularity is becoming increasingly important for parallelizing and back-end compilers.
Milind Girkar   +1 more
openalex   +3 more sources

Multi-Satellite Task Parallelism via Priority-Aware Decomposition and Dynamic Resource Mapping

open access: gold · Mathematics
Multi-satellite collaborative computing achieves task decomposition and collaborative execution through inter-satellite links (ISLs), significantly improving task-execution efficiency and system responsiveness.
Shangpeng Wang   +4 more
doaj   +2 more sources

Adaptive Parallelism for OpenMP Task Parallel Programs [PDF]

open access: green · 2000
We present a system that allows task parallel OpenMP programs to execute on a network of workstations (NOW) with a variable number of nodes. Such adaptivity, generally called adaptive parallelism, is important in a multi-user NOW environment, enabling the system to expand the computation onto idle nodes or withdraw from otherwise occupied nodes.
A. Scherer   +2 more
openalex   +3 more sources

Task parallelism and high-performance languages [PDF]

open access: green · IEEE Parallel & Distributed Technology: Systems & Applications, 1996
The definition of High Performance Fortran (HPF) is a significant event in the maturation of parallel computing: it represents the first parallel language that has gained widespread support from vendors and users. This paper examines how to incorporate support for task parallelism into HPF.
Ian Foster
openalex   +4 more sources

Integrating task and data parallelism [PDF]

open access: bronze · Proceedings of the 1993 ACM/IEEE conference on Supercomputing - Supercomputing '93, 1993
Many models of concurrency and concurrent programming have been proposed; most can be categorized as either task-parallel (based on functional decomposition) or data-parallel (based on data decomposition). Task-parallel models are most effective for expressing irregular computations; data-parallel models are most effective for expressing regular computations.
Ian Foster, Carl Kesselman
  +7 more sources

Jet: Fast quantum circuit simulations with parallel task-based tensor-network contraction [PDF]

open access: yes · Quantum, 2022
We introduce a new open-source software library, Jet, which uses task-based parallelism to obtain speed-ups in classical tensor-network simulations of quantum circuits.
Trevor Vincent   +6 more
doaj   +1 more source
