Results 161 to 170 of about 263,258
Some of the following articles may not be open access.

Buffer on Last Level Cache for CPU and GPGPU Data Sharing

2014 IEEE Intl Conf on High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014
With the rapid growth in demand for massive data processing and the limits of process development in microprocessors, GPGPUs are gaining more and more attention for their huge data-parallel compute power. A tightly coupled CPU and GPGPU that share the LLC (last-level cache) enable fine-grained workload offloading between the CPU and GPGPU. In this paper, we focus on
Minghui Wu
exaly   +2 more sources

Performance Analysis of Cache Memory in CPU

Communications in Computer and Information Science, 2023
Viraj Mankad, Virag Shah, Sachin Gajjar
exaly   +2 more sources

CPU Cache

Encyclopedia of Database Systems, 2009
openaire   +2 more sources

Memory Coherency Based CPU-Cache-FPGA Acceleration Architecture for Cloud Computing

International Conference on Information Science and Control Engineering, 2015
Hao Yang, Xiaolang Yan
exaly   +2 more sources

3D V-Cache: the Implementation of a Hybrid-Bonded 64MB Stacked Cache for a 7nm x86-64 CPU

IEEE International Solid-State Circuits Conference, 2022
AMD's V-Cache is a 3D-stacked product that attaches additional cache onto a high-performance processor through hybrid bonding, a technology that offers significant bandwidth and power benefits over state-of-the-art uBump-based approaches. V-Cache expands
J. Wuu   +9 more
semanticscholar   +1 more source

B-tree indexes and CPU caches

Proceedings 17th International Conference on Data Engineering, 2002
Since many existing techniques for exploiting CPU caches in the implementation of B-tree indexes have not been discussed in the literature, most of them are surveyed here. Rather than providing a detailed performance evaluation of one or two of them on specific contemporary hardware, the purpose is to survey and to make widely available this ...
Goetz Graefe, Per-Åke Larson
openaire   +1 more source

Pipelined CPU-GPU Scheduling for Caches

2021
Heterogeneous microprocessors integrate a CPU and GPU with a shared cache hierarchy on the same chip, affording low-overhead communication between the CPU and GPU's cores. Oftentimes, large array data structures are communicated from the CPU to the GPU and back. While the on-chip cache hierarchy can support such CPU-GPU producer-consumer sharing, this
Gerzhoy, Daniel, Yeung, Donald
openaire   +1 more source
