Results 311 to 320 of about 21,055,470 (353)
Data Parallel Programming [PDF]
SIMD computers operate as data parallel computers: the same instruction is executed by different processing elements, each on different data, all in synchronous fashion. In an SIMD machine the synchronization is built into the hardware, meaning that the processing elements operate in lock-step fashion.
openaire +1 more source
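As the abstract describes, the data-parallel (SIMD) model issues one instruction that every processing element applies to its own datum in lock-step. A minimal sketch of that contrast, using NumPy arrays as a stand-in for the processing elements (the array names and sizes are illustrative, not from the source):

```python
import numpy as np

a = np.arange(8, dtype=np.float32)        # data held by 8 "processing elements"
b = np.full(8, 2.0, dtype=np.float32)

# Serial view: one instruction stream, one datum at a time.
serial = np.empty_like(a)
for i in range(len(a)):
    serial[i] = a[i] * b[i]

# Data-parallel view: the same multiply is issued once and applied
# to all elements together, as an SIMD machine would in hardware.
parallel = a * b

assert np.array_equal(serial, parallel)
```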
Some of the following articles may not be open access.
Data parallel fault simulation
Proceedings of ICCD '95 International Conference on Computer Design. VLSI in Computers and Processors, 1995
Fault simulation is a compute-intensive problem. Data parallel simulation on multiple processors is one method to reduce fault simulation time. We discuss a novel technique to partition the fault set for data parallel fault simulation. When applied statically, the technique can scale well for up to eight processors. The fault set partitioning technique …
M.B. Amin, B. Vinnakota
openaire +3 more sources
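The abstract names static fault-set partitioning as the key idea but does not spell out the heuristic. Below is only a generic, hypothetical sketch of how a fault list might be split among processors (a round-robin split); the paper's actual partitioning technique is not reproduced here, and all names are invented:

```python
def partition_faults(faults, num_processors):
    """Return one fault sublist per processor (round-robin split).

    Each processor would then simulate its share of faults
    independently, in data-parallel fashion.
    """
    return [faults[p::num_processors] for p in range(num_processors)]

faults = [f"fault_{i}" for i in range(20)]   # illustrative fault list
for p, share in enumerate(partition_faults(faults, 4)):
    print(f"processor {p}: {share}")
```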
COMPCON Spring '91 Digest of Papers, 1991
The author examines a method of applying parallelism to data-moving operations to enhance performance so that they may fit into today's maintenance windows. She specifically discusses converting the algorithm used to load an alternate key file (index) from serial to parallel using Tandem's Non-Stop SQL.
openaire +2 more sources
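The entry describes converting a serial alternate-key index load into a parallel one. The sketch below shows the general shape of such a conversion (range-partition rows by key, sort each partition in parallel, concatenate); it is an assumption-laden illustration, not Tandem's Non-Stop SQL algorithm, and every name in it is hypothetical:

```python
from concurrent.futures import ProcessPoolExecutor

def build_partition(rows):
    """Sort one key-range partition of (key, row_id) pairs."""
    return sorted(rows)

def parallel_index_load(rows, num_workers=4):
    """Range-partition by key, sort partitions in parallel, concatenate.

    Because the key ranges are disjoint, concatenating the sorted
    partitions yields a fully sorted alternate-key index.
    """
    lo = min(k for k, _ in rows)
    hi = max(k for k, _ in rows) + 1
    width = (hi - lo + num_workers - 1) // num_workers
    parts = [[] for _ in range(num_workers)]
    for k, rid in rows:
        parts[(k - lo) // width].append((k, rid))
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        sorted_parts = list(pool.map(build_partition, parts))
    return [entry for part in sorted_parts for entry in part]

if __name__ == "__main__":
    rows = [(k * 37 % 100, k) for k in range(100)]  # (alternate key, row id)
    assert parallel_index_load(rows) == sorted(rows)
```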
Optimizing for parallelism and data locality
Proceedings of the 6th international conference on Supercomputing - ICS '92, 1992
Previous research has used program transformation to introduce parallelism and to exploit data locality. Unfortunately, these two objectives have usually been considered independently. This work explores the trade-offs between effectively utilizing parallelism and memory hierarchy on shared-memory multiprocessors.
Ken Kennedy, Kathryn S. McKinley
openaire +2 more sources
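The trade-off this paper explores can be seen in miniature in loop tiling: blocking a loop nest improves cache reuse (locality), while the independent tiles are the natural units of parallel work. A small sketch of the idea; the transpose kernel and tile size are illustrative, not taken from the paper:

```python
import numpy as np

def transpose_tiled(a, tile=64):
    """Tiled matrix transpose: each tile's accesses stay in cache."""
    n = a.shape[0]
    out = np.empty_like(a)
    # The iterations of the tile loops are independent, so they are
    # candidates for parallel execution; within a tile, accesses are
    # dense and cache-friendly, preserving data locality.
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            out[jj:jj + tile, ii:ii + tile] = a[ii:ii + tile, jj:jj + tile].T
    return out

a = np.arange(256 * 256, dtype=np.float64).reshape(256, 256)
assert np.array_equal(transpose_tiled(a), a.T)
```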
1990
Supercomputers with a performance of a trillion floating-point operations per second, or more, can be produced in state-of-the-art MOS technologies. Such computers will have tens of thousands of processors interconnected by a network of bounded degree.
openaire +2 more sources
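A "network of bounded degree" means each processor's neighbor count stays constant as the machine grows, which is what makes tens of thousands of processors wireable. A 3-D torus is one classic bounded-degree interconnect; the choice of torus below is illustrative, not taken from the source:

```python
def torus_neighbors(x, y, z, dim):
    """Return the 6 neighbors of node (x, y, z) in a dim^3 torus."""
    return [
        ((x + dx) % dim, (y + dy) % dim, (z + dz) % dim)
        for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    ]

# 32^3 = 32,768 processors, each with constant degree 6.
print(torus_neighbors(0, 0, 0, 32))
```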
On Data Parallelism of Erasure Coding in Distributed Storage Systems
IEEE International Conference on Distributed Computing Systems, 2017
Jun Li, Baochun Li
semanticscholar +1 more source
Providing high‐level self‐adaptive abstractions for stream parallelism on multicores
Software - Practice and Experience, 2021
Adriano Vogel +2 more
exaly
Intercontinental genomic parallelism in multiple three-spined stickleback adaptive radiations
Nature Ecology and Evolution, 2021
Muayad Mahmud +2 more
exaly
Partitioning functions for stateful data parallelism in stream processing
The VLDB Journal, 2014
B. Gedik
semanticscholar +1 more source
Bitwise data parallelism in regular expression matching
International Conference on Parallel Architectures and Compilation Techniques, 2014
R. Cameron +6 more
semanticscholar +1 more source