Results 11 to 20 of about 3,115,284

Task Parallelism-Aware Deep Neural Network Scheduling on Multiple Hybrid Memory Cube-Based Processing-in-Memory

open access: yes (IEEE Access, 2021)
Processing-in-memory (PIM) integrates computational logic into the memory domain. It is among the most promising solutions for alleviating the memory bandwidth bottleneck in deep neural network (DNN) processing.
Young Sik Lee, Tae Hee Han
doaj   +2 more sources

Automatic Inference of Task Parallelism in Task-Graph-Based Actor Models

open access: yes (IEEE Access, 2018)
Automatic inference of task-level parallelism is fundamental for ensuring many kinds of safety and liveness properties of parallel applications. For example, two tasks running in parallel may be involved in data races when they have conflicting memory ...
Abu Naser Masud   +2 more
doaj   +2 more sources
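The conflicting-access scenario mentioned in the snippet above can be sketched in a few lines. This is an illustrative example, not code from the paper: two tasks perform an unsynchronized read-modify-write on shared state (a data race that parallelism analysis would flag), and a lock removes the conflict.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n: int) -> None:
    """Conflicting writes from parallel tasks: a textbook data race."""
    global counter
    for _ in range(n):
        counter += 1  # unsynchronized read-modify-write on shared state

def safe_increment(n: int) -> None:
    """Mutual exclusion serializes the conflicting accesses."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(50_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 100000 with the lock; the unsafe variant may lose updates
```

With `unsafe_increment`, two tasks can interleave between the read and the write of `counter`, so increments are silently lost; the two functions are exactly the kind of conflicting/non-conflicting pair such an inference analysis must distinguish.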

Generalized Task Parallelism [PDF]

open access: yes (ACM Transactions on Architecture and Code Optimization, 2015)
Existing approaches to automatic parallelization produce good results in specific domains. Yet, it is unclear how to integrate their individual strengths to match the demands and opportunities of complex software. This lack of integration has both practical reasons, as integrating those largely differing approaches into one compiler would impose an ...
Sebastian Hack   +4 more
openaire   +3 more sources

The Eureka Programming Model for Speculative Task Parallelism [PDF]

open access: yes (European Conference on Object-Oriented Programming, 2015)
In this paper, we describe the Eureka Programming Model (EuPM) that simplifies the expression of speculative parallel tasks, and is especially well suited for parallel search and optimization applications.
Shams Imam, Vivek Sarkar
core   +3 more sources

Scalable Task Parallelism for NUMA

open access: green (International Conference on Parallel Architectures and Compilation Techniques, 2016)
Andi Drebes   +4 more
openalex   +3 more sources

Secure Deep Neural Network Models Publishing Against Membership Inference Attacks Via Training Task Parallelism

open access: yes (IEEE Transactions on Parallel and Distributed Systems, 2021)
Vast data and computing resources are commonly needed to train deep neural networks, causing an unaffordable price for individual users. Motivated by the increasing demands of deep learning applications, sharing well-trained models becomes popular.
Yunlong Mao   +5 more
semanticscholar   +1 more source

Toward Efficient Similarity Search under Edit Distance on Hybrid Architectures

open access: yes (Information, 2022)
Edit distance is the most widely used method to quantify similarity between two strings. We investigate the problem of similarity search under edit distance.
Madiha Khalid   +2 more
doaj   +1 more source
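Edit distance, as used in the entry above, can be computed with the classic Wagner-Fischer dynamic program. This is a generic illustrative sketch (not the paper's hybrid-architecture implementation), kept to the O(min-row) memory variant:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via Wagner-Fischer DP, O(len(a) * len(b)) time,
    keeping only the previous row of the DP table."""
    prev = list(range(len(b) + 1))  # distance from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute, or match for free
            ))
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```

The inner minimum over delete/insert/substitute is the part that GPU or hybrid implementations typically restructure (e.g. into anti-diagonal wavefronts) to expose parallelism.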

Resolving ambiguous polarity stripping ellipsis structures in Persian

open access: yes (Glossa, 2021)
Previous studies have shown that English speakers use a range of factors including locality, information structure, and semantic parallelism to interpret clausal ellipsis structures. Yet, the relative importance of each factor is currently underexplored.
Jesse Harris, Vahideh Rasekhi
doaj   +2 more sources

Exploiting Task Parallelism with OpenCL: A Case Study

open access: yes (Journal of Signal Processing Systems, 2018)
While the data-parallel aspects of OpenCL have been of primary interest due to the focus on massively data-parallel GPUs, OpenCL also provides powerful capabilities for describing task parallelism.
P. Jääskeläinen   +8 more
semanticscholar   +2 more sources

Optimizing Iterative Data-Flow Scientific Applications Using Directed Cyclic Graphs

open access: yes (IEEE Access, 2023)
Data-flow programming models have become a popular choice for writing parallel applications as an alternative to traditional work-sharing parallelism. They are better suited to writing applications with irregular parallelism that may exhibit load imbalance.
David Alvarez, Vicenc Beltran
doaj   +1 more source
