Results 181 to 190 of about 1,063
Some of the following articles may not be open access.
Data prefetching by dependence graph precomputation
ACM SIGARCH Computer Architecture News, 2001
Data cache misses reduce the performance of wide-issue processors by stalling the data supply to the processor. Prefetching data by predicting the miss address is one way to tolerate cache miss latencies. But current applications with irregular access patterns make it difficult to accurately predict the address sufficiently early to ...
M. Annavaram, J.M. Patel, E.S. Davidson
A Taxonomy of Data Prefetching Mechanisms
2008 International Symposium on Parallel Architectures, Algorithms, and Networks (i-span 2008), 2008
Data prefetching has been considered an effective way to mask data access latency caused by cache misses and to bridge the performance gap between processor and memory. With hardware and/or software support, data prefetching brings data closer to a processor before it is actually needed.
Surendra Byna, Yong Chen, Xian-He Sun
Maintaining Cache Coherence through Compiler-Directed Data Prefetching
Journal of Parallel and Distributed Computing, 1998
Hock-Beng Lim, Pen-Chung Yew
Data prefetching with co-operative caching
Proceedings. Fifth International Conference on High Performance Computing (Cat. No. 98EX238), 2002
Recent research in data cache prefetching has been found to be selective in nature: it achieves high prediction accuracy over a set of selected references, such as array accesses with constant strides. As a result, for applications where the memory latency is mainly due to data accesses in the set of non-selected references of a program, they lose their ...
Chi-Hung Chi, S.L. Lau
Data Prefetching for Heterogeneous Hadoop Cluster
2019 5th International Conference on Advanced Computing & Communication Systems (ICACCS), 2019
Hadoop is an open-source implementation of MapReduce. Hadoop's performance is affected by the communication overhead of transmitting large datasets to the computing node. In a heterogeneous cluster, if a map task must process data that is not present on the local disk, this data transmission overhead occurs.
D C Vinutha, G T Raju
A compiler-assisted data prefetch controller
Proceedings 1999 IEEE International Conference on Computer Design: VLSI in Computers and Processors (Cat. No.99CB37040), 2003
Data prefetching has been proposed as a means of hiding the memory access latencies of data referencing patterns that defeat caching strategies. Prefetching techniques that either use special cache logic to issue prefetches or that rely on the processor to issue prefetch requests typically involve some compromise between accuracy and instruction ...
S.P. Vander Wiel, D.J. Lilja
Data prefetching for Digital Alpha
1999
Some of the current microprocessors provide a prefetch instruction, but either the instruction is treated as a NOP (e.g. Digital Alpha EV4/5), or only a small number of outstanding prefetches is permitted (e.g. MIPS R10K). This paper discusses the design and implementation of the hardware support required to fully support the prefetch instruction for ...
A Hardware Scheme for Data Prefetching
2000
Prefetching brings data into the cache before the processor expects it, thereby eliminating potential cache misses. There are two major prefetching schemes. In a software scheme, the compiler predicts memory access patterns and places prefetch instructions in the code. In a hardware scheme, hardware predicts memory access patterns at runtime and brings ...
Sathiamoorthy Manoharan, See-Mu Kim
Data Cache Prefetching With Dynamic Adaptation
The Computer Journal, 2010
Modern processors based on VLIW architecture rely heavily on software cache prefetching incorporated by the compiler. For accurate prefetching, factors such as the latencies of loop iterations need to be taken into account, and these cannot be determined at (static) compile time.
AMPP: An Adaptive Multilayer Perceptron Prefetcher for Irregular Data Prefetching
2023 IEEE International Conference on High Performance Computing & Communications, Data Science & Systems, Smart City & Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), 2023
Juan Fang +4 more

