Results 71 to 80 of about 21,831,698
Data-parallel polygonization [PDF]
Data-parallel algorithms are presented for polygonizing a collection of line segments represented by a data-parallel bucket PMR quadtree, a data-parallel R-tree, and a data-parallel R+-tree. Such an operation is useful in a geographic information system (GIS).
Erik G. Hoel, Hanan Samet
openaire +1 more source
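The stitching step itself is easy to sketch. Below is a minimal sequential version (illustrative only, assuming the segments form closed loops); the endpoint bucketing is the part the paper assigns to the data-parallel spatial indices.

```python
# A minimal sequential sketch of segment polygonization (illustrative only;
# assumes the input segments form closed loops). The endpoint bucketing
# below is the role the paper's data-parallel spatial indices play.
from collections import defaultdict

def polygonize(segments):
    """segments: list of ((x1, y1), (x2, y2)) endpoint pairs."""
    by_endpoint = defaultdict(list)          # endpoint -> segment indices
    for i, (a, b) in enumerate(segments):
        by_endpoint[a].append(i)
        by_endpoint[b].append(i)
    seen, rings = set(), []
    for start in range(len(segments)):
        if start in seen:
            continue
        ring, cur, point = [], start, segments[start][1]
        while cur not in seen:
            seen.add(cur)
            ring.append(point)
            nxt = [j for j in by_endpoint[point] if j not in seen]
            if not nxt:                      # ring closed
                break
            cur = nxt[0]
            a, b = segments[cur]
            point = b if a == point else a   # walk to the far endpoint
        rings.append(ring)
    return rings

print(polygonize([((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 0))]))
```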
Users always care about performance. Although often it’s just a matter of making sure the software is doing only what it should, there are many cases where it is vital to get down to the metal and leverage the fundamental characteristics of the processor.
openaire +2 more sources
Stream parallelism with ordered data constraints on multi-core systems
Keeping input and output results in order is often a challenge in parallel computations over data streams, particularly when stateless task operators are replicated to increase parallelism and task durations are irregular.
Dalvan Griebler +3 more
semanticscholar +1 more source
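To make the ordering problem concrete, here is a minimal sketch (not the paper's runtime): tag each input with a sequence number, let replicated workers finish in any order, and buffer completed results until the next expected number arrives.

```python
# A minimal sketch of ordered stream parallelism (not the paper's runtime):
# irregular tasks finish out of order, so completed results are held in a
# heap until their sequence number is the next one to emit.
import heapq, random, time
from concurrent.futures import ThreadPoolExecutor, as_completed

def task(seq, item):
    time.sleep(random.uniform(0, 0.01))       # irregular service time
    return seq, item * item

def ordered_stream(items, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(task, i, x) for i, x in enumerate(items)]
        buf, expected = [], 0
        for fut in as_completed(futures):     # arrives in completion order
            heapq.heappush(buf, fut.result())
            while buf and buf[0][0] == expected:   # emit in input order
                yield heapq.heappop(buf)[1]
                expected += 1

print(list(ordered_stream(range(8))))         # [0, 1, 4, 9, 16, 25, 36, 49]
```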
Study on Distributed Training Optimization Based on Hybrid Parallel [PDF]
Large-scale neural network training is a hot topic in the field of deep learning, and distributed training stands out as one of the most effective methods for training large neural networks across multiple nodes. Distributed training typically involves ...
XU Jinlong, LI Pengfei, LI Jianan, CHEN Biaoyuan, GAO Wei, HAN Lin
doaj +1 more source
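As a rough illustration of what "hybrid" means here (a toy sketch, not the paper's system): split a two-layer model across two stages (model parallelism) while two replicas of that pipeline each process half the batch and average gradients (data parallelism).

```python
# A toy sketch of hybrid parallelism (illustrative; not the paper's system).
# A two-layer linear model is split across two stages (model parallelism);
# two replicas of the pipeline each see half the batch (data parallelism),
# and gradients are averaged across replicas, stage by stage.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8)) * 0.5      # stage 0 parameters
W2 = rng.normal(size=(8, 1)) * 0.5      # stage 1 parameters

def replica_grads(X, y):
    h = X @ W1                          # forward through stage 0
    err = h @ W2 - y                    # forward through stage 1
    gW2 = h.T @ err / len(X)            # each stage backpropagates its shard
    gW1 = X.T @ (err @ W2.T) / len(X)
    return gW1, gW2

X, y = rng.normal(size=(16, 4)), rng.normal(size=(16, 1))
for _ in range(200):
    grads = [replica_grads(X[:8], y[:8]), replica_grads(X[8:], y[8:])]
    W1 -= 0.01 * (grads[0][0] + grads[1][0]) / 2   # all-reduce, stage 0
    W2 -= 0.01 * (grads[0][1] + grads[1][1]) / 2   # all-reduce, stage 1
print(float(np.mean((X @ W1 @ W2 - y) ** 2)))      # loss decreases
```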
Symmetries in data parallelism [PDF]
A comprehensive formalization of data-parallel (DP) symmetries in an imperative language paradigm without nesting is presented, which includes translational, affine and access symmetries. A subtyping system which takes these symmetries into account is discussed.
openaire +1 more source
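For a concrete reading of the terminology (illustrative code, not from the paper): a data-parallel computation whose accesses are a fixed translation of the index, next to one whose accesses are affine in it.

```python
# Illustrative only (not from the paper): a translational access pattern
# versus an affine one in a data-parallel array computation.
import numpy as np

b = np.arange(10.0)
out = b[:-2] + b[2:]      # out[i] = b[i] + b[i+2]: fixed offsets (0, +2),
                          # the same translation at every index
affine = b[1::2]          # reads index 2*i + 1: affine in the index
print(out, affine)
```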
A Pipeline-Based ODE Solving Framework
The traditional parallel methods for solving ordinary differential equations (ODEs) are mainly classified into task parallelism, data parallelism, and instruction-level parallelism.
Ruixia Cao, Shangjun Hou, Lin Ma
doaj +1 more source
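The data-parallel flavour is the simplest of the three to illustrate (a minimal sketch, not the paper's pipeline framework): one Runge-Kutta step applied to a whole batch of initial conditions at once.

```python
# A minimal sketch of data-parallel ODE solving (not the paper's pipeline
# framework): one vectorized RK4 step advances many initial conditions.
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda t, y: -y                # dy/dt = -y, solution y0 * exp(-t)
y = np.linspace(1.0, 2.0, 1_000)   # 1000 initial conditions in parallel
t, h = 0.0, 0.01
for _ in range(100):               # integrate to t = 1
    y = rk4_step(f, t, y, h)
    t += h
print(y[0], np.exp(-1.0))          # ~0.3679 for y0 = 1
```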
Model Parallelism With Subnetwork Data Parallelism
Pre-training large neural networks at scale imposes heavy memory demands on accelerators and often requires costly communication. We introduce Subnetwork Data Parallelism (SDP), a distributed training framework that partitions a model into structured subnetworks trained across workers without exchanging activations.
Singh, Vaibhav +3 more
openaire +2 more sources
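A toy rendering of the idea in the abstract (the disjoint column-block masks below are an invented stand-in for SDP's structured subnetworks): each worker trains only its masked slice of the weights and ships back a sparse gradient; activations never leave the worker.

```python
# A toy sketch only: the column-block masks are invented for illustration,
# not SDP's actual partitioning. Each worker computes gradients for its own
# subnetwork; merging the sparse gradients needs no activation exchange.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8)) * 0.1
masks = [np.zeros_like(W), np.zeros_like(W)]
masks[0][:, :4] = 1.0                   # worker 0: first four output columns
masks[1][:, 4:] = 1.0                   # worker 1: last four

def worker_grad(mask, X, y):
    err = X @ (W * mask) - y            # forward uses only the subnetwork
    return (X.T @ err / len(X)) * mask  # gradient restricted to the mask

X, y = rng.normal(size=(32, 8)), rng.normal(size=(32, 8))
for _ in range(300):
    W -= 0.05 * sum(worker_grad(m, X, y) for m in masks)
print(float(np.mean((X @ W - y) ** 2)))  # merged model fits both halves
```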
Massive parallelism in the future of science [PDF]
Massive parallelism appears in three domains of action of concern to scientists, where it produces collective action that is not possible from any individual agent's behavior.
Denning, Peter J.
core +1 more source
Process-Oriented Parallel Programming with an Application to Data-Intensive Computing [PDF]
We introduce process-oriented programming as a natural extension of object-oriented programming for parallel computing. It is based on the observation that every class of an object-oriented language can be instantiated as a process, accessible via a ...
Givelberg, Edward
core
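The core observation is easy to demonstrate (a minimal sketch assuming a message-per-method-call protocol; not Givelberg's framework): wrap an ordinary class so each instance runs in its own process and method calls arrive as messages.

```python
# A minimal sketch of the idea (illustrative; not Givelberg's framework):
# an ordinary class is instantiated as a process, and method calls become
# messages over a queue.
from multiprocessing import Process, Queue

class Counter:                          # a plain class...
    def __init__(self):
        self.n = 0
    def add(self, k):
        self.n += k
        return self.n

def serve(cls, inbox, outbox):          # ...run as a process
    obj = cls()
    for method, args in iter(inbox.get, None):   # None is the stop signal
        outbox.put(getattr(obj, method)(*args))

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=serve, args=(Counter, inbox, outbox))
    p.start()
    inbox.put(("add", (5,)))            # a "method call" as a message
    inbox.put(("add", (2,)))
    print(outbox.get(), outbox.get())   # 5 7
    inbox.put(None)
    p.join()
```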
Exploitation of APL data parallelism on a shared-memory MIMD machine [PDF]
Dz-Ching Ju, Wai-Mee Ching
openalex +3 more sources