
Frozen Cache: Mitigating Filter Effect and Redundancy for Network of Caches

open access: yes; IEEE Access, 2021
Information-Centric Networking (ICN) architecture leverages the idea of a network of caches to bring content closer to consumers, ultimately reducing the load on content servers and preventing unnecessary packet retransmissions.
Saeid Montazeri Shahtouri   +2 more
doaj   +1 more source

Managing data caches using selective cache line replacement [PDF]

open access: yes; International Journal of Parallel Programming, 1997
As processor performance continues to improve, more demands are being placed on the performance of the memory system. The caches employed in current processor designs are very similar to those described in early cache studies. In this paper, a detailed characterization of data cache behavior for individual load instructions is given.
Gary S. Tyson   +3 more
openaire   +1 more source

Endurance-aware cache line management for non-volatile caches [PDF]

open access: yes; ACM Transactions on Architecture and Code Optimization, 2014
Nonvolatile memories (NVMs) have the potential to replace low-level SRAM or eDRAM on-chip caches because NVMs save standby power and provide large cache capacity. However, limited write endurance is a common problem for NVM technologies, and today's cache management might result in unbalanced cache write traffic, causing heavily written cache blocks to ...
Wang, Jue   +3 more
openaire   +2 more sources

Static locality analysis for cache management [PDF]

open access: yes, 1997
Most memory references in numerical codes correspond to array references whose indices are affine functions of surrounding loop indices. These array references follow a regular predictable memory pattern that can be analysed at compile time.
González Colás, Antonio María   +2 more
core   +1 more source

Cache-Aided Interference Management using Hypercube Combinatorial Cache Designs [PDF]

open access: yes; ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019
We consider a cache-aided interference network which consists of a library of $N$ files, $K_T$ transmitters and $K_R$ receivers (users), each equipped with a local cache of size $M_T$ and $M_R$ files respectively, and connected via a discrete-time additive white Gaussian noise channel. Each receiver requests an arbitrary file from the library.
Xiang Zhang   +2 more
openaire   +2 more sources

Cache Management of Big Data in Equipment Condition Assessment

open access: yes; MATEC Web of Conferences, 2016
A big data platform for equipment condition assessment is built for comprehensive analysis. The platform serves various application demands. According to response time, its applications can be divided into offline, interactive, and real-time types. For real-...
Ma Yan   +4 more
doaj   +1 more source

Cache Partitioning + Loop Tiling: A Methodology for Effective Shared Cache Management [PDF]

open access: yes; 2017 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2017
In this paper, we present a new methodology that provides i) a theoretical analysis of the two most commonly used approaches for effective shared cache management (i.e., cache partitioning and loop tiling) and ii) a unified framework for fine-tuning these two mechanisms in tandem (not separately).
Vasilios I. Kelefouras   +2 more
openaire   +3 more sources

Distributed Join Processing Between Streaming and Stored Big Data Under the Micro-Batch Model

open access: yes; IEEE Access, 2019
In order to interpret, enrich, and analyze streaming data, stream applications often access data stored in an external database. Although there have been many studies on stream processing, little attention has been paid so far to the join ...
Young-Ho Jeon, Ki-Hoon Lee, Ho-Jun Kim
doaj   +1 more source

C-AMTE: A location mechanism for flexible cache management in chip multiprocessors [PDF]

open access: yes, 2011
This paper describes Constrained Associative-Mapping-of-Tracking-Entries (C-AMTE), a scalable mechanism to facilitate flexible and efficient distributed cache management in large-scale chip multiprocessors (CMPs).
Beckmann   +8 more
core   +1 more source

FirepanIF: High Performance Host-Side Flash Cache Warm-Up Method in Cloud Computing

open access: yes; Applied Sciences, 2020
In cloud computing, a shared storage server, which provides a network-attached storage device, is usually used for centralized data management. However, when multiple virtual machines (VMs) concurrently access the storage server through the network, the ...
Hyunchan Park, Munkyu Lee, Cheol-Ho Hong
doaj   +1 more source
