Results 201 to 210 of about 3,115,284
Some of the following articles may not be open access.

Task-Parallel Programming with Constrained Parallelism

IEEE Conference on High Performance Extreme Computing, 2022
The task graph programming model (TGPM) has become central to a wide range of scientific computing applications because it enables top-down optimization of the parallelism that governs macro-scale performance.
Tsung-Wei Huang, L. Hwang
semanticscholar   +1 more source
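
The entry above concerns task-graph programming; purely to illustrate what a task-graph program looks like, here is a minimal sketch using the open-source C++ Taskflow library (an assumption chosen for the example; the paper's own interface and its constrained-parallelism features may differ).

    // Minimal task-graph sketch (assumes the open-source Taskflow headers are available).
    #include <taskflow/taskflow.hpp>
    #include <iostream>

    int main() {
      tf::Executor executor;   // worker threads that run the graph
      tf::Taskflow taskflow;   // the task graph being described

      // Four tasks; B and C depend on A, and D depends on both B and C.
      auto [A, B, C, D] = taskflow.emplace(
        [] { std::cout << "A\n"; },
        [] { std::cout << "B\n"; },
        [] { std::cout << "C\n"; },
        [] { std::cout << "D\n"; }
      );
      A.precede(B, C);   // A runs before B and C
      D.succeed(B, C);   // D runs after both B and C

      executor.run(taskflow).wait();  // submit the whole graph and wait for completion
      return 0;
    }

The point of the sketch is the top-down structure: the programmer declares the whole dependency graph first, and the runtime decides how to map it onto workers.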

Efficiently Supporting Dynamic Task Parallelism on Heterogeneous Cache-Coherent Systems

International Symposium on Computer Architecture, 2020
Manycore processors, with tens to hundreds of tiny cores but no hardware-based cache coherence, can offer tremendous peak throughput on highly parallel programs while being complexity and energy efficient.
Moyang Wang, T. Ta, Lin Cheng, C. Batten
semanticscholar   +1 more source

On Scheduling Parallel Tasks at Twilight [PDF]

Theory of Computing Systems, 2000 (open access: possible)
openaire   +2 more sources

GPU-based Collaborative Filtering Recommendation System using Task parallelism approach

2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), 2018
Collaborative filtering is among the most widely used techniques for implementing recommendation systems. In recent times, interest has turned towards parallel GPU-based implementations of collaborative filtering algorithms.
N. Sivaramakrishnan, V. Subramaniyaswamy
semanticscholar   +1 more source

Parallelization using task parallel library with task-based programming model

2014 IEEE 5th International Conference on Software Engineering and Service Science, 2014
In order to reduce the complexity of traditional multithreaded parallel programming, this paper explores a new task-based parallel programming model using the Microsoft .NET Task Parallel Library (TPL). First, the paper proposes a custom data partitioning optimization method to achieve efficient data parallelism and applies it to the matrix ...
Nasser Giacaman   +4 more
openaire   +2 more sources
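
The paper above builds on the .NET Task Parallel Library, so its code is C#; the following is only a rough C++ sketch of the general idea of chunk-based data partitioning for a matrix-style workload, with all names invented for illustration rather than taken from the paper.

    // Rough C++ sketch of chunk-based data partitioning (illustrative only, not the paper's code).
    #include <algorithm>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // Scale every row of a dense matrix (stored as a vector of rows),
    // giving each worker thread a contiguous chunk of rows.
    void scale_rows(std::vector<std::vector<double>>& rows, double factor) {
      const std::size_t workers =
          std::max<std::size_t>(1, std::thread::hardware_concurrency());
      const std::size_t chunk = (rows.size() + workers - 1) / workers;

      std::vector<std::thread> pool;
      for (std::size_t begin = 0; begin < rows.size(); begin += chunk) {
        const std::size_t end = std::min(begin + chunk, rows.size());
        pool.emplace_back([&rows, factor, begin, end] {
          for (std::size_t i = begin; i < end; ++i)
            for (double& x : rows[i]) x *= factor;  // disjoint rows, so no locking needed
        });
      }
      for (auto& t : pool) t.join();  // wait for every chunk to finish
    }

The partitioning choice (rows per chunk) is what the paper optimizes; the sketch simply uses one even-sized chunk per hardware thread.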

Integrating task parallelism with actors

ACM SIGPLAN Notices, 2012
This paper introduces a unified concurrent programming model combining the previously developed Actor Model (AM) and the task-parallel Async-Finish Model (AFM). With the advent of multi-core computers, there is a renewed interest in programming models that can support a wide range of parallel programming patterns.
Vivek Sarkar, Shams Imam
openaire   +2 more sources
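
The async-finish construct referenced above originates in languages such as X10 and Habanero Java; as a loose C++ approximation only (not the paper's combined actor model), a "finish" scope can be imitated by collecting futures and joining them before control leaves the scope.

    // Loose C++ approximation of an async-finish block (illustrative, not the paper's model).
    #include <future>
    #include <iostream>
    #include <vector>

    int main() {
      // Rough stand-in for "finish { async S1; async S2; }":
      // spawn tasks, then join them all before leaving the scope.
      {
        std::vector<std::future<void>> spawned;
        spawned.push_back(std::async(std::launch::async, [] { std::cout << "task 1\n"; }));
        spawned.push_back(std::async(std::launch::async, [] { std::cout << "task 2\n"; }));
        for (auto& f : spawned) f.wait();  // the implicit join at the end of "finish"
      }
      std::cout << "all spawned tasks completed\n";
      return 0;
    }
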

The design of a task parallel library

Proceedings of the 24th ACM SIGPLAN conference on Object oriented programming systems languages and applications, 2009
The Task Parallel Library (TPL) is a library for .NET that makes it easy to take advantage of potential parallelism in a program. The library relies heavily on generics and delegate expressions to provide custom control structures expressing structured parallelism such as map-reduce in user programs.
Wolfram Schulte   +2 more
openaire   +2 more sources
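
Since the Task Parallel Library itself is .NET-specific, the following C++17 snippet is offered only as an analogue of the structured map-reduce style the abstract mentions, not as TPL code.

    // C++17 analogue of a structured parallel map-reduce (not the .NET TPL itself).
    // With GCC/libstdc++ the parallel execution policy typically requires linking TBB.
    #include <execution>
    #include <functional>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
      std::vector<double> data(1'000'000, 1.5);

      // "Map" each element to its square, then "reduce" with +, letting the
      // standard library parallelize the combined operation.
      const double sum_of_squares = std::transform_reduce(
          std::execution::par, data.begin(), data.end(),
          0.0, std::plus<>{}, [](double x) { return x * x; });

      std::cout << sum_of_squares << "\n";
      return 0;
    }
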

A programming model for deterministic task parallelism [PDF]

Proceedings of the 2011 ACM SIGPLAN Workshop on Memory Systems Performance and Correctness, 2011 (open access: possible)
The currently dominant programming models to write software for multicore processors use threads that run over shared memory. However, as the core count increases, cache coherency protocols get very complex and ineffective, and maintaining a shared memory abstraction becomes expensive and impractical.
P. Pratikakis   +3 more
openaire   +2 more sources

ABSS: An Adaptive Batch-Stream Scheduling Module for Dynamic Task Parallelism on Chiplet-based Multi-Chip Systems

ACM Transactions on Parallel Computing
Thanks to the recognition and promotion of chiplet-based High-Performance Computing (HPC) system design technology by semiconductor industry/market leaders, chiplet-based multi-chip systems have gradually become the mainstream. Unfortunately, programming ...
Qinyun Cai   +5 more
semanticscholar   +1 more source

A Novel Inference Algorithm for Large Sparse Neural Network using Task Graph Parallelism

IEEE Conference on High Performance Extreme Computing, 2020
The ever-increasing size of modern deep neural network (DNN) architectures has put increasing strain on the hardware needed to implement them. Sparsified DNNs can greatly reduce memory costs and increase throughput over standard DNNs, if the loss of ...
Dian-Lun Lin, Tsung-Wei Huang
semanticscholar   +1 more source
