Results 291 to 300 of about 3,076,045 (348)
Some of the following articles may not be open access.
Numerical Algorithms and Parallel Tasking.
1983. Abstract: The long-term goals of this research activity are derived from concurrent computing, with emphasis on numerical algorithms that support a variety of scientific applications. Among the applications of immediate interest are signal processing and statistical methods such as the bootstrap.
E. R. Ducot, V. C. Klema
openaire +1 more source
On Scheduling Parallel Tasks at Twilight [PDF]
We consider the problem of processing a given number of tasks on a given number of processors as quickly as possible when only vague information about the processing time of a task is available before it is completed. Whenever a processor is idle, it can be assigned, at the price of a certain overhead, a portion, called a chunk, of the unassigned tasks.
openaire +1 more source
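The chunk-based scheme described above can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: idle workers repeatedly claim a fixed-size chunk of unassigned tasks from a shared queue (the claim being the point where per-assignment overhead would be paid). The function name, chunk size, and task set are all assumptions for the example.

```python
import threading
import queue

def run_chunked(tasks, n_workers, chunk_size):
    """Process `tasks` (callables) with `n_workers` threads, handing out
    work in chunks; returns the results in completion order."""
    pending = queue.Queue()
    for i in range(0, len(tasks), chunk_size):
        pending.put(tasks[i:i + chunk_size])   # one chunk per queue entry
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                chunk = pending.get_nowait()   # an idle worker claims a chunk
            except queue.Empty:
                return                         # no unassigned work left
            for task in chunk:
                r = task()
                with lock:
                    results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Example: 10 trivial tasks, 3 workers, chunks of 4.
out = run_chunked([lambda i=i: i * i for i in range(10)], 3, 4)
print(sorted(out))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The trade-off the paper studies is visible here: larger chunks amortize the per-claim overhead, smaller chunks reduce the risk that one worker is stuck finishing a long chunk while others sit idle.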
Scalable computing with parallel tasks [PDF]
Recent and future parallel clusters and supercomputers use SMPs and multi-core processors as basic nodes, providing a huge amount of parallel resources. These systems often have hierarchically structured interconnection networks combining computing resources at different levels, starting with the interconnect within multi-core processors up to the ...
Gudula Rünger +2 more
openaire +1 more source
Ada tasking and parallel processors
Proceedings of the conference on Tri-Ada '89, Ada technology in context: application, development, and deployment (TRI-Ada '89), 1989. This paper describes the creation of a shared-memory multiprocessor runtime that uses the Ada language tasking model as the basis for CPU allocation. The modifications to an Ada runtime targeted to a single MIL-STD-1750A processor to create a multiprocessor Ada runtime are described. Some performance information is given.
M. Linnig, D. Forinash
openaire +2 more sources
Pygion: Flexible, Scalable Task-Based Parallelism with Python
2019 IEEE/ACM Parallel Applications Workshop, Alternatives To MPI (PAW-ATM), 2019. Dynamic languages provide the flexibility needed to implement expressive support for task-based parallel programming constructs. We present Pygion, a Python interface for the Legion task-based programming system, and show that it can provide features ...
Elliott Slaughter, A. Aiken
semanticscholar +1 more source
1992
The theory of parallel processes that cooperate and compete with one another by exchanging signals (semaphores) was founded by Dijkstra in 1968. Terms such as TASK, ENTRY, CALL, PRIORITY, and time-sharing come from systems programming.
openaire +2 more sources
List scheduling of parallel tasks
Information Processing Letters, 1991. We are concerned with nonreconfigurable, nonpreemptive scheduling policies, i.e., once a task has been started, its scheduled parallelism cannot be changed, nor can the task be preempted.
Qingzhou Wang, Kam-Hoi Cheng
openaire +2 more sources
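The policy described in the abstract above can be illustrated with a small simulation. This is a generic greedy list-scheduling sketch under stated assumptions (each task is a (processors, duration) pair with processors ≤ m), not the paper's specific algorithm: tasks are started in list order whenever enough processors are free, and once started a task holds its processors until it finishes.

```python
import heapq

def list_schedule(tasks, m):
    """Greedy nonpreemptive list scheduling of parallel tasks.
    `tasks` is a list of (size, duration) pairs; `m` is the number of
    identical processors. Each task holds `size` processors from start
    to finish (no reconfiguration, no preemption). Returns the makespan."""
    free = m
    running = []          # min-heap of (finish_time, size)
    t = 0.0
    pending = list(tasks)
    while pending or running:
        # start every pending task (in list order) that currently fits
        i = 0
        while i < len(pending):
            size, dur = pending[i]
            if size <= free:
                free -= size
                heapq.heappush(running, (t + dur, size))
                pending.pop(i)
            else:
                i += 1
        # advance time to the next completion and reclaim its processors
        if running:
            t, size = heapq.heappop(running)
            free += size
    return t

# Example: 4 tasks on 3 processors; the 3-processor task must wait
# until the machine is completely empty.
print(list_schedule([(2, 3.0), (1, 2.0), (3, 1.0), (1, 4.0)], 3))  # 7.0
```

The "nonreconfigurable" restriction is what makes the heap entry a fixed (finish_time, size) pair: a task's processor allotment is decided once, at start time.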
Integrating task and data parallelism with task HPF
2000. An abstract is not available.
S. Ciarpaglini +4 more
openaire +6 more sources
FunctionFlow: coordinating parallel tasks
Frontiers of Computer Science, 2018. Despite the growing popularity of task-based parallel programming, task-parallel programming libraries and languages still offer limited support for coordinating parallel tasks. This limitation forces programmers to use additional independent components to coordinate the parallel tasks; these components can be third-party libraries or ...
Xuepeng Fan, Hai Jin, Xiaofei Liao
openaire +2 more sources
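The coordination problem the abstract describes can be seen even in a standard-library setting. The sketch below uses Python's concurrent.futures, not FunctionFlow's own interface (which the source does not show): a fan-out of parallel tasks followed by a join step that waits on all branches, which is exactly the kind of coordination logic programmers otherwise assemble from separate components.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    """A trivial stand-in for a real parallel task."""
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    branches = [pool.submit(square, i) for i in range(4)]   # fan-out
    total = sum(f.result() for f in branches)               # join / fan-in

print(total)  # 0 + 1 + 4 + 9 = 14
```

Even this tiny example mixes two concerns, computation (square) and coordination (submit/result); a coordination-focused library aims to express the second part declaratively rather than by hand.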
Scheduling interval ordered tasks in parallel
Journal of Algorithms, 1993. We present the first NC algorithm for scheduling n unit-length tasks on m identical processors for the case where the precedence constraint is an interval order. Our algorithm runs on a priority concurrent-read, concurrent-write parallel random access machine in O(log^2 n) time with O(n^5) processors, or in O(log^3 n) time with O(n^4) processors. The algorithm constructs
Sivaprakasam Sunder, Xin He
openaire +3 more sources