Results 1 to 10 of about 110,838 (332)
TAPP: DNN Training for Task Allocation through Pipeline Parallelism Based on Distributed Deep Reinforcement Learning [PDF]
The rapid development of artificial intelligence technology has made deep neural networks (DNNs) widely used in many fields. DNN models have been growing continuously in size in order to improve their accuracy and quality.
Yingchi Mao +4 more
doaj +2 more sources
Task parallel assembly language for uncompromising parallelism [PDF]
Achieving parallel performance and scalability involves making compromises between parallel and sequential computation. If not contained, the overheads of parallelism can easily outweigh its benefits, sometimes by orders of magnitude. Today, we expect programmers to implement this compromise by optimizing their code manually.
Mike Rainey +6 more
openalex +2 more sources
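The compromise this abstract refers to is usually realized as a hand-tuned sequential cutoff: below some problem size the code stops spawning parallel work and runs sequentially, because the spawning overhead would dominate. A minimal Python sketch of that manual tuning, not TPAL's own mechanism; the cutoff value and the use of `ProcessPoolExecutor` are illustrative assumptions:

```python
from concurrent.futures import ProcessPoolExecutor

# Below this size, parallel overhead (process startup, pickling) would
# outweigh the benefit, so we fall back to sequential code.
SEQ_CUTOFF = 100_000  # illustrative threshold, tuned by hand in practice

def seq_sum(xs):
    return sum(xs)

def par_sum(xs, workers=4):
    if len(xs) <= SEQ_CUTOFF:
        return seq_sum(xs)                    # sequential fallback
    chunk = len(xs) // workers
    parts = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(seq_sum, parts))    # parallel path

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert par_sum(data) == sum(data)
```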
Generalized Task Parallelism [PDF]
Existing approaches to automatic parallelization produce good results in specific domains. Yet, it is unclear how to integrate their individual strengths to match the demands and opportunities of complex software. This lack of integration has both practical reasons, as integrating those largely differing approaches into one compiler would impose an ...
Kevin Streit +4 more
openalex +3 more sources
Extracting task-level parallelism [PDF]
Automatic detection of task-level parallelism (also referred to as functional, DAG, unstructured, or thread parallelism) at various levels of program granularity is becoming increasingly important for parallelizing and back-end compilers.
Milind Girkar +1 more
openalex +2 more sources
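For illustration, the task-level (DAG) parallelism such detection targets arises whenever two calls share no data dependence; below is a small hand-annotated Python sketch (the placeholder functions `stats` and `checksum` are assumptions, not taken from the paper):

```python
from concurrent.futures import ThreadPoolExecutor

def stats(xs):          # placeholder task A
    return min(xs), max(xs)

def checksum(xs):       # placeholder task B, independent of stats()
    return sum(xs) % 255

def process(xs):
    # No data flows between the two calls, so they form independent
    # nodes of the task DAG and may run concurrently.
    with ThreadPoolExecutor(max_workers=2) as ex:
        a = ex.submit(stats, xs)
        b = ex.submit(checksum, xs)
        return a.result(), b.result()   # join point

print(process(list(range(10))))
```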
Adaptive memory reservation strategy for heavy workloads in the Spark environment [PDF]
The rise of the Internet of Things (IoT) and Industry 4.0 has spurred a growing need for large-scale data computing, and Spark emerged as a promising Big Data platform owing to its distributed in-memory computing capabilities.
Bohan Li +6 more
doaj +3 more sources
torcpy: Supporting task parallelism in Python
Task-based parallelism has been established as one of the main forms of code parallelization, where asynchronous tasks are launched and distributed across the processing units of a local machine, a cluster or a supercomputer.
P.E. Hadjidoukas +5 more
doaj +2 more sources
Increasing the degree of parallelism using speculative execution in task-based runtime systems [PDF]
Task-based programming models have demonstrated their efficiency in the development of scientific applications on modern high-performance platforms. They allow delegation of the management of parallelization to the runtime system (RS), which is in charge ...
Bérenger Bramas
doaj +3 more sources
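One common form of speculation is to start a task before it is known whether its result will be needed, discarding the result on a misprediction. A toy Python sketch of that idea, assuming placeholder functions; it is not the runtime-system mechanism studied in the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_predicate():
    # decides (slowly) whether the expensive branch is actually needed
    return True

def expensive_branch():
    return "branch result"

with ThreadPoolExecutor(max_workers=2) as ex:
    # Speculatively start the branch before the predicate is known,
    # so both run concurrently instead of back to back.
    speculative = ex.submit(expensive_branch)
    needed = ex.submit(slow_predicate).result()
    # On a misprediction the speculative work is simply discarded.
    result = speculative.result() if needed else None

print(result)
```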
The Eureka Programming Model for Speculative Task Parallelism [PDF]
In this paper, we describe the Eureka Programming Model (EuPM) that simplifies the expression of speculative parallel tasks, and is especially well suited for parallel search and optimization applications.
Shams Imam, Vivek Sarkar
core +4 more sources
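The speculative search pattern the EuPM targets can be approximated as several tasks scanning disjoint chunks, with the first successful one "winning". A rough Python sketch using `concurrent.futures`, not the EuPM API:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def search(chunk, target):
    # Look for target inside one chunk; None means "not found here".
    for i, value in enumerate(chunk):
        if value == target:
            return i
    return None

data = list(range(1000))
offsets = range(0, len(data), 250)
chunks = [data[off:off + 250] for off in offsets]

with ThreadPoolExecutor(max_workers=4) as ex:
    futures = {ex.submit(search, chunk, 42): off
               for off, chunk in zip(offsets, chunks)}
    answer = None
    for fut in as_completed(futures):
        hit = fut.result()
        if hit is not None:
            # First task to succeed "wins"; in a real eureka construct the
            # remaining tasks would be terminated, here they simply finish
            # and their results are ignored.
            answer = futures[fut] + hit
            break

print(answer)  # 42
```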
Elastic Tasks: Unifying Task Parallelism and SPMD Parallelism with an Adaptive Runtime [PDF]
In this paper, we introduce elastic tasks, a new high-level parallel programming primitive that can be used to unify task parallelism and SPMD parallelism in a common adaptive scheduling framework. Elastic tasks are internally parallel tasks and can run on a single worker or expand to take over multiple workers.
Alina Sbîrlea +2 more
openalex +2 more sources
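A loose approximation of an elastic task in plain Python: the task runs sequentially when granted a single worker and splits its inner work when granted more. Here `width` is passed in by hand, whereas the paper's adaptive runtime would choose it:

```python
from concurrent.futures import ThreadPoolExecutor

def elastic_task(data, width):
    # Internally parallel task: sequential when width == 1, otherwise the
    # work is split over `width` workers. `width` is an assumption here,
    # standing in for the scheduler's adaptive decision.
    if width <= 1:
        return sum(x * x for x in data)
    chunk = max(1, len(data) // width)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=width) as ex:
        return sum(ex.map(lambda p: sum(x * x for x in p), parts))

data = list(range(10_000))
assert elastic_task(data, 1) == elastic_task(data, 4)
```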
Runtime-adaptive generalized task parallelism
Part of the work presented in this thesis was performed in the context of the SoftwareCluster project EMERGENT (http://www.software-cluster.org). It was funded by the German Federal Ministry of Education and Research (BMBF) under grant no. “01IC10S01”.
Kevin Streit
openalex +2 more sources

