Results 201 to 210 of about 62,963 (220)
Some of the following articles may not be open access.

Line (Block) Size Choice for CPU Cache Memories

IEEE Transactions on Computers, 1987
The line (block) size of a cache memory is one of the parameters that most strongly affects cache performance. In this paper, we study the factors that relate to the selection of a cache line size. Our primary focus is on the cache miss ratio, but we also consider influences such as logic complexity, address tags, line crossers, I/O overruns, etc.
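The trade-off the abstract describes can be illustrated with a minimal sketch (not the paper's model): a direct-mapped cache simulator that reports the miss ratio for a fixed capacity at two different line sizes. All parameter values here are illustrative assumptions.

```python
# Hedged sketch: direct-mapped cache simulator showing how line (block)
# size affects the miss ratio for a fixed total cache capacity.
def miss_ratio(addresses, cache_bytes=1024, line_bytes=16):
    n_lines = cache_bytes // line_bytes
    tags = [None] * n_lines           # one tag per cache line
    misses = 0
    for a in addresses:
        block = a // line_bytes       # which memory block this byte falls in
        idx = block % n_lines         # direct-mapped index
        if tags[idx] != block:        # tag mismatch -> miss, fill the line
            misses += 1
            tags[idx] = block
        # matching tag -> hit, nothing to do
    return misses / len(addresses)

# A sequential word-by-word scan benefits from larger lines
# (spatial locality): one miss brings in the next several words.
seq = list(range(0, 4096, 4))
print(miss_ratio(seq, line_bytes=16), miss_ratio(seq, line_bytes=64))
# 0.25 0.0625
```

With irregular access patterns the effect reverses, since larger lines waste capacity on unused bytes, which is exactly why line size selection is a balancing act.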

Interrupt Triggered Software Prefetching for Embedded CPU Instruction Cache

12th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS'06), 2006
In embedded systems, handling time-critical real-time tasks is a challenge. The software must not only multi-task to improve response time but also support events and interrupts, forcing the system to balance multiple priorities. Further, pre-emptive task switching hampers efficient interrupt processing, leading to instruction cache misses.
K.W. Batcher, R.A. Walker

High-performance IP routing table lookup using CPU caching

IEEE INFOCOM '99. Conference on Computer Communications. Proceedings. Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies. The Future is Now (Cat. No.99CH36320), 1999
Wire-speed IP (Internet Protocol) routers require very fast routing table lookup for incoming IP packets. The routing table lookup operation is time-consuming because the part of an IP address used in the lookup, i.e., the network address portion, is variable in length.
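Why a variable-length network portion is expensive can be seen in a minimal longest-prefix-match sketch (a naive illustration, not the paper's caching technique; the routing entries are hypothetical): every candidate prefix length must be considered before the best match is known.

```python
# Hedged sketch: naive longest-prefix-match (LPM) routing lookup.
# Because prefixes vary in length, a destination can match several
# entries and the longest matching prefix must win.
import ipaddress

routes = {  # prefix -> next hop (hypothetical table entries)
    "10.0.0.0/8": "A",
    "10.1.0.0/16": "B",
    "10.1.2.0/24": "C",
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    best_len, best_hop = -1, None
    for prefix, hop in routes.items():        # scan every entry
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best_len, best_hop = net.prefixlen, hop
    return best_hop

print(lookup("10.1.2.3"))  # "C": matches /8, /16 and /24; longest wins
```

Real routers replace this linear scan with trie or hash-based structures; the paper's angle is to exploit the CPU cache to speed up exactly this kind of lookup.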
T. Chiueh, P. Pradhan

A CMOS RISC CPU with on-chip parallel cache

Proceedings of IEEE International Solid-State Circuits Conference - ISSCC '94, 2002
This CMOS CPU in a 0.55 µm, 3-metal process integrates over 1.2 M transistors on a single chip. All circuitry on-chip operates at 140 MHz under typical conditions. All off-chip interfaces are cycled at the same frequency (with the exception of the system bus interface, which is cycled at 120 MHz). Chip parameters are given.
E. Rashid   +27 more

Heterogeneous Cache Hierarchy Management for Integrated CPU-GPU Architecture

2019 IEEE High Performance Extreme Computing Conference (HPEC), 2019
Unlike the traditional CPU-GPU heterogeneous architecture where CPU and GPU have separate DRAM and memory address space, current heterogeneous CPU-GPU architectures integrate CPU and GPU in the same die and share the same last level cache (LLC) and memory. For the two-level cache hierarchy in which CPU and GPU have their own private L1 caches but share
Hao Wen, Wei Zhang

A Simple Cache Coherence Scheme for Integrated CPU-GPU Systems

2020 57th ACM/IEEE Design Automation Conference (DAC), 2020
This paper presents a novel approach to accelerate applications running on integrated CPU-GPU systems. Many integrated CPU-GPU systems use cache-coherent shared memory to communicate. For example, after CPU produces data for GPU, the GPU may pull the data into its cache when it accesses the data.
Ardhi Wiratama Baskara Yudha   +3 more

Optimizing CPU cache performance for Pregel-like graph computation

2015 31st IEEE International Conference on Data Engineering Workshops, 2015
In-memory graph computation systems have been used to support many important applications, such as PageRank on the web graph and social network analysis. In this paper, we study the CPU cache performance of graph computation. We have implemented a graph computation system, called GraphLite, in C/C++ based on the description of Pregel.
Songjie Niu, Shimin Chen

Iterative cache simulation of embedded CPUs with trace stripping

Proceedings of the seventh international workshop on Hardware/software codesign - CODES '99, 1999
Trace-driven cache simulation is a time-consuming yet valuable procedure for evaluating the performance of embedded memory systems. In this paper we present a novel technique, called iterative cache simulation, to produce a variety of performance metrics for several different cache configurations. Compared with previous work in this field, our approach
Z. Wu, W. Wolf

Performance Analysis of Cache Memory in CPU

2023
Viraj Mankad   +3 more

MeshUp: Stateless Cache Side-channel Attack on CPU Mesh

2022 IEEE Symposium on Security and Privacy (SP), 2022
Junpeng Wan   +3 more
