Results 1 to 10 of about 62,963
Proposal New Cache Coherence Protocol to Optimize CPU Time through Simulation Caches [PDF]
Cache coherence is a critical issue affecting the performance of multicore processors, as the number of cores on chip multiprocessors increases along with the shared-memory programs that run on these processors ...
Luma Fayeq Jalil +2 more
doaj +4 more sources
Two novel cache management mechanisms on CPU-GPU heterogeneous processors
Heterogeneous multicore processors that combine CPUs and GPUs on the same chip pose an emerging challenge for sharing a range of on-chip resources, particularly Last-Level Cache (LLC) resources.
Huijing Yang, Tingwen Yu
doaj +4 more sources
Reuse Cache for Heterogeneous CPU-GPU Systems [PDF]
It is generally observed that the fraction of live lines in shared last-level caches (SLLC) is very small for chip multiprocessors (CMPs). This can be tackled using promotion-based replacement policies like re-reference interval prediction (RRIP) instead of LRU, dead-block predictors, or reuse-based cache allocation schemes. In GPU systems, similar LLC ...
Shah, Tejas +3 more
openaire +3 more sources
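The RRIP replacement policy named in the snippet above can be sketched briefly. This is a minimal, illustrative SRRIP model of a single cache set (class and parameter names are my own, not from the paper): each line carries a 2-bit re-reference prediction value (RRPV); hits promote a line to "near" re-reference, misses evict a line predicted "distant," and new lines are inserted as "long" so that one-touch blocks age out quickly.

```python
class SRRIPSet:
    """One cache set under Static RRIP (SRRIP) with 2-bit RRPVs.

    Illustrative sketch only; not the paper's implementation.
    """
    MAX_RRPV = 3  # 2-bit counter: 0 = re-reference soon, 3 = distant

    def __init__(self, ways):
        self.lines = [None] * ways          # stored tags
        self.rrpv = [self.MAX_RRPV] * ways  # empty lines look "distant"

    def access(self, tag):
        """Return True on hit; on a miss, evict and fill, return False."""
        if tag in self.lines:
            self.rrpv[self.lines.index(tag)] = 0  # hit promotion
            return True
        # Miss: pick a victim with RRPV == MAX, aging all lines if none.
        while True:
            for i, v in enumerate(self.rrpv):
                if v == self.MAX_RRPV:
                    self.lines[i] = tag
                    self.rrpv[i] = self.MAX_RRPV - 1  # insert as "long"
                    return False
            for i in range(len(self.rrpv)):
                self.rrpv[i] += 1
```

Unlike LRU, a reused line (RRPV 0) survives a later conflict, while a line that was touched once (RRPV 2) reaches "distant" first and is evicted.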
On the Incomparability of Cache Algorithms in Terms of Timing Leakage [PDF]
Modern computer architectures rely on caches to reduce the latency gap between the CPU and main memory. While indispensable for performance, caches pose a serious threat to security because they leak information about memory access patterns of programs ...
Pablo Cañones, Boris Köpf, Jan Reineke
doaj +3 more sources
Finite Automata Implementations Considering CPU Cache
Finite automata are mathematical models of finite state systems. The more general form is the nondeterministic finite automaton (NFA), which cannot be used directly.
J. Holub
doaj +3 more sources
Novel CPU cache architecture based on two-dimensional MTJ device with ferromagnetic Fe3GeTe2 [PDF]
With the development of Artificial Intelligence (AI) in recent years, the fields of computer, biology, medicine, and aerospace have demanded higher requirements for the processing and storage of information.
Shaopu Han, Yanfeng Jiang
doaj +2 more sources
Design and Analysis of On-Chip CPU Pipelined Caches [PDF]
The access time of the first-level on-chip cache usually determines the cycle time of high-performance VLSI processors. The only way to reduce the effect of cache access time on processor cycle time is to use pipelined caches. A timing model for on-chip caches has recently been presented in [1].
C. Ninos, H. T. Vergos, D. Nikolos
openaire +2 more sources
Pipelined CPU-GPU Scheduling for Caches
Heterogeneous microprocessors integrate a CPU and GPU with a shared cache hierarchy on the same chip, affording low-overhead communication between the CPU and GPU's cores. Often times, large array data structures are communicated from the CPU to the GPU and back. While the on-chip cache hierarchy can support such CPU-GPU producer-consumer sharing, this ...
Gerzhoy, Daniel, Yeung, Donald
openaire +2 more sources
Optimizing CPU Cache Utilization in Cloud VMs with Accurate Cache Abstraction [PDF]
This paper shows that cache-based optimizations are often ineffective in cloud virtual machines (VMs) due to limited visibility into and control over provisioned caches. In public clouds, CPU caches can be partitioned or shared among VMs, but a VM is unaware of cache provisioning details.
Tofigh, Mani +5 more
openaire +3 more sources
Cache memory system for high performance CPU with 4GHz [PDF]
In this paper, we propose a high-performance L1 cache structure for a high-clock-rate 4 GHz CPU. The proposed cache memory consists of three parts: a direct-mapped cache to support fast access time, a two-way set-associative buffer to exploit temporal locality, and a buffer-select table.
Bo-Sung Jung, Jung-Hoon Lee
openaire +3 more sources
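The direct-mapped component described in the entry above works by splitting an address into tag, index, and offset fields, so a lookup is a single compare. A minimal sketch, with illustrative field widths (64-byte lines, 256 sets) that are my assumptions rather than the paper's parameters:

```python
# Direct-mapped cache lookup sketch. Geometry is illustrative:
# 64-byte lines (6 offset bits) and 256 sets (8 index bits) -> 16 KiB.
LINE_BITS = 6
INDEX_BITS = 8

class DirectMappedCache:
    def __init__(self):
        self.tags = [None] * (1 << INDEX_BITS)  # one tag per set

    def access(self, addr):
        """Return True on hit; on a miss, fill the line and return False."""
        index = (addr >> LINE_BITS) & ((1 << INDEX_BITS) - 1)
        tag = addr >> (LINE_BITS + INDEX_BITS)
        if self.tags[index] == tag:
            return True
        # Fill on miss. In a design like the paper's, the evicted line
        # could instead be moved into a small set-associative buffer.
        self.tags[index] = tag
        return False
```

Two addresses that share the index but differ in the tag conflict with each other, which is exactly the case a small associative victim buffer is meant to absorb.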

