Results 151 to 160 of about 263,258
Some of the following articles may not be open access.
MeshUp: Stateless Cache Side-channel Attack on CPU Mesh
2022 IEEE Symposium on Security and Privacy (SP), 2022
Cache side-channel attacks pose severe security threats in settings where a CPU is shared across users, e.g., in the cloud. The majority of attacks rely on sensing the micro-architectural state changes made by victims, but this assumption can be ...
Zhou Li
Optimizing CPU cache performance for Pregel-like graph computation
2015 31st IEEE International Conference on Data Engineering Workshops, 2015
In-memory graph computation systems have been used to support many important applications, such as PageRank on the web graph and social network analysis. In this paper, we study the CPU cache performance of graph computation. We have implemented a graph computation system, called GraphLite, in C/C++ based on the description of Pregel.
Shimin Chen
Exploiting Persistent CPU Cache for Scalable Persistent Hash Index
2024 IEEE 40th International Conference on Data Engineering (ICDE)
Byte-addressable persistent memory (PM) has been widely studied in the past few years. Recently, the emerging eADR technology further incorporates the CPU cache into the persistence domain.
Bowen Zhang, Shengan Zheng, Liangxu Nie
Accelerating Concurrent Workloads with CPU Cache Partitioning
2018 IEEE 34th International Conference on Data Engineering (ICDE), 2018
Modern microprocessors include a sophisticated hierarchy of caches to hide the latency of memory access and thereby speed up data processing. However, multiple cores within a processor usually share the same last-level cache. This can hurt performance, especially in concurrent workloads whenever a query suffers from cache pollution caused by another ...
Stefan Noll +3 more
On the yield of VLSI processors with on-chip CPU cache
IEEE Transactions on Computers, 1996
Yield enhancement through the acceptance of partially good chips is a well-known technique [1–3]. In this paper we derive a yield model for single-chip VLSI processors with a partially good on-chip cache. We also investigate how the yield enhancement of VLSI processors with an on-chip CPU cache relates to the number of acceptable faulty cache blocks ...
Dimitris Nikolos, Haridimos T. Vergos
Functional implementation techniques for CPU cache memories
IEEE Transactions on Computers, 1999
As the performance gap between processors and main memory continues to widen, increasingly aggressive implementations of cache memories are needed to bridge the gap. In this paper, we consider some of the issues that are involved in the implementation of highly optimized cache memories and survey the techniques that can be used to help achieve the ...
Jih-Kwon Peir +2 more
Accelerate Your Graphic Program with GPU/CPU Cache
2008 International Conference on Cyberworlds, 2008
This paper discusses how to optimize digital graphics programs using the cache systems of GPU/CPU architectures to gain higher FPS. First, we briefly introduce the basic principles of cache systems; second, we discuss the three main cache organization and mapping techniques in detail, and then compare these three cache mapping solutions ...
Likun Zhou, Dingfang Chen
CPU cache prefetching: Timing evaluation of hardware implementations
IEEE Transactions on Computers, 1998
Prefetching into CPU caches has long been known to be effective in reducing the cache miss ratio, but known implementations of prefetching have been unsuccessful in improving CPU performance. The reasons for this are that prefetches interfere with normal cache operations by making cache address and data ports busy, the memory bus busy, the memory banks
John Tse, Alan Jay Smith
A Simple Cache Coherence Scheme for Integrated CPU-GPU Systems
2020 57th ACM/IEEE Design Automation Conference (DAC), 2020
This paper presents a novel approach to accelerate applications running on integrated CPU-GPU systems. Many integrated CPU-GPU systems use cache-coherent shared memory to communicate. For example, after the CPU produces data for the GPU, the GPU may pull the data into its cache when it accesses the data.
Reza Pulungan
Heterogeneous Cache Hierarchy Management for Integrated CPU-GPU Architecture
2019 IEEE High Performance Extreme Computing Conference (HPEC), 2019
Unlike the traditional CPU-GPU heterogeneous architecture, where the CPU and GPU have separate DRAM and memory address spaces, current heterogeneous CPU-GPU architectures integrate the CPU and GPU on the same die and share the same last-level cache (LLC) and memory. For the two-level cache hierarchy in which CPU and GPU have their own private L1 caches but share
Hao Wen, Wei Zhang