Results 151 to 160 of about 12,875
Some of the following articles may not be open access.
Selective prefetching: prefetching when only required
42nd Midwest Symposium on Circuits and Systems (Cat. No.99CH36356), 2003
Cache memories are commonly used to reduce the number of slower, lower-level memory accesses, thereby improving memory hierarchy performance. However, a high cache miss ratio can severely degrade system performance. It is therefore necessary to anticipate cache misses in order to reduce their frequency.
R. Pendse, H. Katta
openaire +1 more source
To hardware prefetch or not to prefetch?
ACM SIGARCH Computer Architecture News, 2013
Most hardware and software vendors suggest disabling hardware prefetching in virtualized environments. They claim that prefetching is detrimental to application performance due to inaccurate prediction caused by workload diversity and VM interference in the shared cache.
Hui Kang, Jennifer L. Wong
openaire +1 more source
Threaded prefetching: An adaptive instruction prefetch mechanism
Microprocessing and Microprogramming, 1993
We propose and analyze an adaptive instruction prefetch scheme, called threaded prefetching, that uses history information to guide prefetching. The scheme is based on the observation that control-flow paths are likely to repeat themselves.
Seong Baeg Kim +6 more
openaire +1 more source
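The core observation above — that control-flow paths repeat, so the block that followed a given instruction block last time is a good prefetch candidate — can be sketched with a minimal successor table. This single-successor design is an assumption for illustration only; the paper's actual threaded mechanism is more elaborate.

```python
# Hedged sketch of history-guided instruction prefetching: remember which
# instruction-cache block followed each block last time, and on a revisit
# prefetch that remembered successor.
class HistoryPrefetcher:
    def __init__(self):
        self.successor = {}  # block address -> block that followed it last time
        self.prev = None     # most recently fetched block

    def fetch(self, block):
        """Record execution of an instruction block; return a prefetch hint or None."""
        if self.prev is not None:
            self.successor[self.prev] = block  # update history
        self.prev = block
        return self.successor.get(block)       # None until this path repeats
```

On the first pass through a loop no hints are produced; once the path repeats, each fetch yields the block observed to follow it previously.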
Proceedings of the 22nd annual international conference on Supercomputing, 2008
Loads that miss in the L1 or L2 caches and wait for their data at the head of the ROB cause a significant slowdown in the form of commit stalls. We identify that most of these commit stalls are caused by a small set of loads, referred to as LIMCOS (Loads Incurring Majority of COmmit Stalls).
R. Manikantan, R. Govindarajan
openaire +1 more source
ACM SIGARCH Computer Architecture News, 1995
This paper focuses on extending the memory subsystem by integrating a prefetch buffer mechanism. Prefetching allows high-level application knowledge to increase memory performance, which currently constrains the performance of most systems. While prefetching does not reduce the latency of memory accesses, it hides this latency by overlapping memory …
Michael K. Gschwind, Thomas J. Pietsch
openaire +1 more source
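The latency-hiding idea in this entry — issue a prefetch early, let the fill overlap with useful work, and then hit in a small buffer instead of paying full memory latency — can be modeled with a toy FIFO buffer. The capacity, eviction policy, and dictionary-as-memory model are illustrative assumptions, not the paper's design.

```python
# Toy model of a prefetch buffer sitting in front of memory. prefetch()
# stands in for an early, application-directed fill; load() checks the
# buffer first and only falls through to "memory" on a miss.
from collections import OrderedDict

class PrefetchBuffer:
    def __init__(self, capacity=8):
        self.buf = OrderedDict()  # block address -> data, insertion-ordered
        self.capacity = capacity

    def prefetch(self, addr, memory):
        """Fill the buffer early; in hardware this overlaps with other work."""
        self.buf[addr] = memory[addr]
        if len(self.buf) > self.capacity:
            self.buf.popitem(last=False)  # evict the oldest entry (FIFO)

    def load(self, addr, memory):
        """Return data, consuming a buffer entry on a hit."""
        if addr in self.buf:
            return self.buf.pop(addr)  # hit: full memory latency avoided
        return memory[addr]            # miss: ordinary memory access
```

Note that, as the snippet says, the buffer does not make memory faster; it only lets the slow access start earlier.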
Proceedings of the 7th international conference on Supercomputing, 1993
A hardware prefetching mechanism named Speculative Prefetching is proposed. This scheme detects vector accesses issued by a load/store instruction and prefetches the corresponding data. The scheme requires no software add-on, and in some cases it is more powerful than software techniques for identifying regular accesses.
Y. Jégou, O. Temam
openaire +1 more source
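Detecting "vector accesses issued by a load/store instruction" amounts to per-instruction stride detection: track each load PC's last address, and when the same stride appears twice, prefetch the next element. The table layout below is an assumption for illustration, not the paper's actual hardware.

```python
# Minimal sketch of a per-load-instruction stride detector in the spirit
# of the scheme above. A prefetch address is returned only once a stride
# has repeated, i.e. once a regular (vector-like) access is confirmed.
class StridePrefetcher:
    def __init__(self):
        self.table = {}  # load PC -> (last address, last stride)

    def access(self, pc, addr):
        """Record a load; return an address to prefetch, or None."""
        if pc not in self.table:
            self.table[pc] = (addr, 0)
            return None
        last_addr, last_stride = self.table[pc]
        stride = addr - last_addr
        self.table[pc] = (addr, stride)
        if stride == last_stride and stride != 0:
            return addr + stride  # regular access confirmed: prefetch next element
        return None
```

A unit-stride array walk triggers prefetches from the third access onward, with no software annotation, matching the snippet's "no software add-on" claim.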
Mobile Prefetching and Web Prefetching: A Systematic Literature Review
2022
Today, prefetching systems are widely used to decrease the network traffic, data access latency, energy consumption, and computing inefficiency of data-intensive operations. However, prefetching is a concept used in different computing and IT fields, such as microprocessor design, micro-controller design, hard disk design …
Tolga Buyuktanir, Mehmet S. Aktas
openaire +3 more sources
Proceedings of the 21st international conference on Parallel architectures and compilation techniques, 2012
Memory access latency is the primary performance bottleneck in modern computer systems. Prefetching data before it is needed by a processing core allows substantial performance gains by overlapping significant portions of memory latency with useful work.
Anurag Negi +4 more
openaire +1 more source
Proceedings of the Thirty-First Hawaii International Conference on System Sciences, 1998
High latency of memory accesses is critical to the performance of shared-memory multiprocessors, and technology trends indicate that the gap between processor and memory speeds is likely to widen. To cope with the memory latency problem, two software-controlled techniques have been investigated: prefetching and remote write. Prefetching is …
A. Milenkovic, V. Milutinovic
openaire +1 more source
Algorithmica, 2002
zbMATH Open Web Interface contents unavailable due to conflicting licenses.
openaire +1 more source

