Results 141 to 150 of about 20,930
Cache memories are commonly used to reduce the cost of accessing data and instructions in memory. Misses in the cache can severely reduce system performance. It is therefore beneficial to try to anticipate cache misses in an attempt to reduce their frequency.
J.P. Casmira, D.R. Kaeli
Reducing Last Level Cache Pollution in NUMA Multicore Systems for Improving Cache Performance
Non-uniform memory architecture (NUMA) systems have numerous nodes with a shared last-level cache (LLC). The shared LLC brings many benefits in cache utilization. However, the LLC can be seriously polluted by tasks that generate heavy I/O traffic for a long time, since the inclusive cache architecture of the LLC replaces valid cache lines via back-invalidation. Many
Deukhyeon An +3 more
APP: adaptively protective policy against cache thrashing and pollution
Least Recently Used (LRU) is the most commonly used cache replacement policy; however, it suffers from two problems: i) cache thrashing, i.e., repeated references cause continuous page evictions because the working set is larger than the cache, and ii) cache pollution, i.e., high-reuse content gets evicted by items with low or no reuse ...
Saeid Montazeri Shahtouri +1 more
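The two LRU failure modes this abstract names are easy to reproduce. Below is a minimal sketch (not from the paper) showing thrashing: with a cyclic working set one item larger than an LRU cache, every single access misses.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used key when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def access(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)  # evict the LRU entry
            self.store[key] = True

# Working set of 5 pages, cache of 4: each cycle evicts exactly the
# page that will be referenced next, so no access ever hits.
cache = LRUCache(capacity=4)
for _ in range(10):
    for page in range(5):
        cache.access(page)

print(cache.hits, cache.misses)  # 0 hits, 50 misses
```

Pollution, the second problem, is the mirror image: a burst of single-use keys would push the five hot pages out even though the newcomers are never referenced again.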
Cache Pollution Prevention Mechanism Based on Deep Reinforcement Learning in NDN
As a representative architecture of content-centric paradigms for the future Internet, named data networking (NDN) enables consumers to retrieve content duplicates from either the original server or intermediate routers. Each node of NDN is equipped with a cache that buffers but does not validate the data, making it vulnerable to various attacks.
Jie Zhou +3 more
Some of the following articles may not be open access.
Cache Pollution Prevention Mechanism Based on Cache Partition in V-NDN
2020 IEEE/CIC International Conference on Communications in China (ICCC), 2020
Information-centric networking, which aims to meet the demand for distributing large amounts of content on the Internet, has proved to be a promising paradigm for various network solutions, such as the vehicular ad-hoc network (VANET). However, some problems are introduced when named data networking is combined with V-NDN, such as the cache ...
Jie Zhou +3 more
Proceedings of ICCD '95 International Conference on Computer Design. VLSI in Computers and Processors, 2002
The bandwidth mismatch between today's high-speed processors and standard DRAMs is a factor of 10 to 50. From 1995 to the year 2000 this mismatch is expected to grow to three orders of magnitude, necessitating greater emphasis on on-chip caches. Today on-chip caches typically consume from 20% to 50% of the total chip area and their cost is mostly a
S.J. Walsh, J.A. Board
Reducing cache pollution of prefetching in a small data cache
Proceedings 2001 IEEE International Conference on Computer Design: VLSI in Computers and Processors. ICCD 2001, 2002
The need for a low-power, high-performance embedded processor has grown at a very fast pace in recent years. Embedded processors require smaller caches for low-power system-on-a-chip designs. Decreasing cache size reduces power consumption because a smaller cache has less capacitance from the bit-array size as well as smaller drivers
P. Reungsang +4 more
Cache pollution in Web proxy servers
Proceedings International Parallel and Distributed Processing Symposium, 2004
Caching has been used for decades as an effective performance-enhancing technique in computer systems. The Least Recently Used (LRU) cache replacement algorithm is a simple and widely used scheme. Proxy caching is a common approach to reducing network traffic and delay in many World Wide Web (WWW) applications.
R. Ayani +2 more
Unpopular Addresses Should Not Pollute the Cache
2012 13th Symposium on Computer Systems, 2012
The "popularity" of a memory word is the fraction of references to this word over the total number of memory references in the execution of a program. In this paper we formally define the metric "popularity of reference" and explain the reference patterns of the CommBench programs in terms of this metric.
Renato Carmo, Roberto A. Hexsel
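The metric defined in this abstract is straightforward to compute from an address trace. A hypothetical sketch (the trace and addresses are illustrative, not from the paper):

```python
from collections import Counter

def popularity(trace):
    """Map each address to its popularity: the fraction of all
    memory references in the trace that touch that address."""
    counts = Counter(trace)
    total = len(trace)
    return {addr: n / total for addr, n in counts.items()}

# Toy trace of six references to three addresses.
trace = [0x10, 0x20, 0x10, 0x30, 0x10, 0x20]
pops = popularity(trace)
print(pops[0x10])  # 3 of 6 references -> 0.5
```

An address with popularity near zero is exactly the kind of "unpopular address" the title argues should bypass the cache rather than evict popular data.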
A Novel Cache Replacement Scheme against Cache Pollution Attack in Content-Centric Networks
2019 IEEE/CIC International Conference on Communications in China (ICCC), 2019
In-network caching is a key service of content-centric networks. As an instance of content-centric networks, Named Data Networking (NDN) uses the traditional Least Recently Used (LRU) and Least Frequently Used (LFU) cache replacement strategies, which will cause large storage redundancy.
Ying Liu +4 more

