GPU-based Collaborative Filtering Recommendation System using Task parallelism approach
2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), 2018
Collaborative filtering is one of the most preferred techniques for implementing recommendation systems. In recent times, more interest has turned towards parallel GPU-based implementations of collaborative filtering algorithms.
N. Sivaramakrishnan, V. Subramaniyaswamy
semanticscholar +1 more source
ACM Transactions on Parallel Computing
Thanks to the recognition and promotion of chiplet-based High-Performance Computing (HPC) system design technology by semiconductor industry/market leaders, chiplet-based multi-chip systems have gradually become the mainstream. Unfortunately, programming ...
Qinyun Cai +5 more
semanticscholar +1 more source
A Novel Inference Algorithm for Large Sparse Neural Network using Task Graph Parallelism
IEEE Conference on High Performance Extreme Computing, 2020
The ever-increasing size of modern deep neural network (DNN) architectures has put increasing strain on the hardware needed to implement them. Sparsified DNNs can greatly reduce memory costs and increase throughput over standard DNNs, if the loss of ...
Dian-Lun Lin, Tsung-Wei Huang
semanticscholar +1 more source
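Not the paper's algorithm, but a minimal sketch of the general idea behind the entry above: layer-wise sparse inference decomposes into a task graph in which each input batch is a chain of dependent layer tasks, and independent batches are parallel branches that a thread pool can run concurrently. The layer count, shapes, density, and executor below are illustrative assumptions.

from concurrent.futures import ThreadPoolExecutor
import numpy as np

def make_sparse_layers(n_layers, dim, density=0.05, seed=0):
    # Dense matrices with most entries zeroed out stand in for sparse layers.
    rng = np.random.default_rng(seed)
    return [rng.standard_normal((dim, dim)) * (rng.random((dim, dim)) < density)
            for _ in range(n_layers)]

def infer_one_batch(x, layers):
    # One chain of dependent tasks: layer k consumes layer k-1's output.
    for w in layers:
        x = np.maximum(w @ x, 0.0)  # matmul + ReLU
    return x

if __name__ == "__main__":
    layers = make_sparse_layers(n_layers=4, dim=256)
    batches = [np.random.rand(256) for _ in range(8)]
    # Independent batches are independent branches of the task graph,
    # so they can be dispatched to worker threads concurrently.
    with ThreadPoolExecutor() as pool:
        outputs = list(pool.map(lambda b: infer_one_batch(b, layers), batches))
    print(len(outputs), outputs[0].shape)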
Extracting task-level parallelism
ACM Transactions on Programming Languages and Systems, 1995
Automatic detection of task-level parallelism (also referred to as functional, DAG, unstructured, or thread parallelism) at various levels of program granularity is becoming increasingly important for parallelizing and back-end compilers.
Milind Girkar +1 more
openaire +1 more source
FunctionFlow: coordinating parallel tasks
Frontiers of Computer Science, 2018
With the growing popularity of task-based parallel programming, today's task-parallel programming libraries and languages still offer only limited support for coordinating parallel tasks. This limitation forces programmers to use additional independent components to coordinate the parallel tasks; such components can be third-party libraries or ...
Xuepeng Fan, Xiaofei Liao, Hai Jin
openaire +1 more source
Integrating Asynchronous Task Parallelism and Data-centric Atomicity
Principles and Practice of Programming in Java, 2016
Processor design has turned toward parallelism and heterogeneous cores to achieve performance and energy efficiency. Developers find high-level languages attractive as they use abstraction to offer productivity and portability over these hardware ...
Vivek Kumar +2 more
semanticscholar +1 more source
Mixed Parallel Programming Models Using Parallel Tasks
2010
Parallel programming models using parallel tasks have been shown to be successful for increasing scalability on medium-size homogeneous parallel systems. Several investigations have shown that these programming models can be extended to the hierarchical and heterogeneous systems which will dominate in the future.
Joerg Duemmler +2 more
openaire +1 more source
Complexity of Scheduling Parallel Task Systems
SIAM Journal on Discrete Mathematics, 1989
Summary: One of the assumptions made in classical scheduling theory is that a task is always executed by one processor at a time. With the advances in parallel algorithms, this assumption may not be valid for future task systems. In this paper, a new model of task systems is studied, the so-called Parallel Task System, in which a task can be ...
Du, Jianzhong, Leung, Joseph Y.-T.
openaire +2 more sources
2015 IEEE International Conference on Data Mining, 2015
In this paper, we develop parallel algorithms for a family of regularized multi-task methods which can model task relations under the regularization framework. Since those multi-task methods cannot be parallelized directly, we use the FISTA algorithm, which in each iteration constructs a surrogate function of the original problem by utilizing the ...
openaire +1 more source
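The entry above relies on the standard FISTA scheme; as a minimal sketch of that iteration on a generic smooth-plus-l1 objective (not the paper's multi-task regularizers), where the matrix A, vector b, regularization weight lam, and iteration count are illustrative assumptions:

# Generic FISTA iteration (Beck & Teboulle, 2009) for min_x f(x) + g(x),
# shown with f = 0.5*||Ax - b||^2 and g = lam*||x||_1 as a stand-in objective.
import numpy as np

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                       # gradient of the smooth part at y
        z = y - grad / L                               # gradient step on the surrogate
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox of g (soft-threshold)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
    print(fista(A, b, lam=0.1)[:5])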
Scheduling interval ordered tasks in parallel
Journal of Algorithms, 1993
Summary: We present the first NC algorithm for scheduling \(n\) unit length tasks on \(m\) identical processors for the case where the precedence constraint is an interval order. Our algorithm runs on a priority concurrent-read, concurrent-write parallel random access machine in \(O(\log^2n)\) time with \(O(n^5)\) processors, or in \(O(\log^3n)\) time ...
Sunder, Sivaprakansam, He, Xin
openaire +1 more source

