Results 211 to 220 of about 132,994 (259)
Some of the following articles may not be open access.

Cache memory for microprocessors

ACM SIGARCH Computer Architecture News, 1981
A growth path for current microprocessors is suggested which includes bus enhancements and cache memories. The implications are examined, and several differences from the mainframe world are pointed out.

Coded Caching for Relay Networks: The Impact of Caching Memories

2020 IEEE Information Theory Workshop (ITW), 2021
Relaying is a traditional key technology for improving communication reliability and extending service coverage. Recently, coded caching schemes, which reduce traffic congestion by coding and placing duplicate data among users, have attracted wide interest.
Shu-Jie Cao   +3 more
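The coded multicast idea this abstract alludes to can be sketched with the classic two-user example (a minimal, hypothetical illustration in the Maddah-Ali/Niesen style, not the relay scheme of the paper; file contents and names are made up):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Server holds two files, each split into two equal parts.
A1, A2 = b"AAAA", b"aaaa"
B1, B2 = b"BBBB", b"bbbb"

# Placement phase: user 1 caches (A1, B1); user 2 caches (A2, B2).
cache1 = {"A1": A1, "B1": B1}
cache2 = {"A2": A2, "B2": B2}

# Delivery phase: user 1 requests file A, user 2 requests file B.
# One coded multicast transmission serves both users at once.
coded = xor_bytes(A2, B1)

# User 1 recovers its missing part A2 using cached B1;
# user 2 recovers its missing part B1 using cached A2.
got_A2 = xor_bytes(coded, cache1["B1"])
got_B1 = xor_bytes(coded, cache2["A2"])
```

A single transmission the size of half a file thus satisfies both requests, which is the traffic reduction coded caching trades cache memory for.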

On-chip cache memory resilience

Proceedings Third IEEE International High-Assurance Systems Engineering Symposium (Cat. No.98EX231), 2002
This paper investigates the system-level impact of soft errors occurring in cache memory and proposes a novel cache-memory design approach for improving the soft-error resilience. Radiation experiments are conducted to quantify the severity of errors attributed to transients occurring in a cache memory subsystem.
Seung H. Hwang, Gwan S. Choi
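The kind of soft-error detection at stake can be illustrated with a per-line parity bit (an illustrative sketch only, not the design proposed in the paper):

```python
def parity(word: int) -> int:
    """Even-parity bit over the set bits of a word."""
    return bin(word).count("1") & 1

# Store a 32-bit word together with its parity bit, as a cache might.
word = 0xDEADBEEF
stored = (word, parity(word))

# A radiation-induced transient flips a single bit of the stored word.
corrupted = word ^ (1 << 7)

# On read-back, the recomputed parity disagrees: the error is detected
# (though a plain parity bit can only detect, not correct, single-bit flips).
detected = parity(corrupted) != stored[1]
```

Detection alone forces a refetch or a fault; correcting in place requires a stronger code such as SEC-DED ECC, at a higher area cost.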

Memory registration caching correctness

CCGrid 2005. IEEE International Symposium on Cluster Computing and the Grid, 2005., 2005
Fast and powerful networks are becoming more popular on clusters to support applications including message passing, file systems, and databases. These networks require special treatment by the operating system to obtain high throughput and low latency. In particular, application memory must be pinned and registered in advance of use.
Pete Wyckoff, Jiesheng Wu
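The registration cache the title refers to can be sketched as a lookup table keyed by the pinned region, so repeated transfers from the same buffer skip the expensive pin-and-register step (a hedged sketch; `fake_register` is a stand-in for a real driver call such as an RDMA verbs registration):

```python
class RegistrationCache:
    def __init__(self, register_fn):
        self._register = register_fn   # expensive pin + register operation
        self._cache = {}               # (addr, length) -> opaque handle
        self.hits = 0

    def lookup(self, addr: int, length: int):
        key = (addr, length)
        if key in self._cache:
            self.hits += 1             # reuse handle: no re-registration
        else:
            self._cache[key] = self._register(addr, length)
        return self._cache[key]

# Fake registration function, for illustration only.
calls = []
def fake_register(addr, length):
    calls.append((addr, length))
    return len(calls)                  # opaque handle

rc = RegistrationCache(fake_register)
h1 = rc.lookup(0x1000, 4096)   # miss: registers the region
h2 = rc.lookup(0x1000, 4096)   # hit: cached handle reused
```

The correctness problem the paper addresses is exactly what this sketch omits: if the application unmaps or remaps the region, the cached handle silently refers to the wrong physical pages unless the cache is invalidated.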

In-Memory Caching Orchestration for Hadoop

2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 2016
In this paper, we investigate techniques to effectively orchestrate HDFS in-memory caching for Hadoop. We first evaluate the degree of benefit that various MapReduce applications can derive from in-memory caching, i.e., their cache affinity. We then propose an adaptive cache-local scheduling algorithm that adaptively adjusts the waiting time of a ...
Jaewon Kwak   +4 more
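The adaptive waiting idea can be sketched as delay scheduling whose wait limit scales with an application's cache affinity (a hypothetical simplification; the scoring function, parameter names, and the `base_wait` constant are assumptions, not the paper's algorithm):

```python
def choose_node(task_cached_nodes, free_node, waited, cache_affinity,
                base_wait=3.0):
    """Decide whether to run a task on the offered node or keep waiting.

    Returns ("run", node) or ("wait", None). An application with a higher
    cache-affinity score is allowed to wait longer for a cache-local slot.
    """
    max_wait = base_wait * cache_affinity
    if free_node in task_cached_nodes:
        return ("run", free_node)      # cache-local slot offered: take it
    if waited < max_wait:
        return ("wait", None)          # keep waiting for cached input
    return ("run", free_node)          # waited long enough: run anywhere
```

Applications that barely benefit from cached input (low affinity) thus fall back to any free slot almost immediately, while cache-friendly applications hold out for locality.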

In-system testing of cache memories

Proceedings of 1995 IEEE International Test Conference (ITC), 2002
Caches embedded in microprocessor systems are implemented with limited observability and controllability, which creates many problems in testing. This paper presents a methodology for developing user test programs for data and instruction caches with various organizations.
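A software test program of the kind described typically applies a march sequence through the cached address range (an illustrative March-style sketch over a modeled memory array; the element order and fault model are simplified and not taken from the paper):

```python
def march_test(mem_read, mem_write, size):
    """Run a simple march sequence; return True if the array passes."""
    for a in range(size):              # ascending: write 0
        mem_write(a, 0)
    for a in range(size):              # ascending: read 0, write 1
        if mem_read(a) != 0:
            return False
        mem_write(a, 1)
    for a in reversed(range(size)):    # descending: read 1, write 0
        if mem_read(a) != 1:
            return False
        mem_write(a, 0)
    return True

# Model of a small data array in which cell 5 is stuck-at-0.
cells = [0] * 16
def rd(a): return 0 if a == 5 else cells[a]
def wr(a, v): cells[a] = v
```

The in-system difficulty the abstract points to is that real caches do not expose `mem_read`/`mem_write` directly; the test program must steer loads and stores so that they actually hit the intended cache lines.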

A One's Complement Cache Memory

1994 International Conference on Parallel Processing-Vol 1 (ICPP'94), 1994
Most of today's microprocessors have an on-chip cache to reduce average memory access latency. These on-chip caches generally have low associativity and small sizes. Cache line conflicts are a major source of cache misses, which critically affect overall system performance.
Yang, Qing, Adina, Sridhar
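The conflict-miss problem motivating the paper is easy to reproduce with a tiny direct-mapped cache model (a generic illustration with arbitrary parameters, showing only the problem, not the paper's one's-complement indexing remedy):

```python
def simulate(trace, num_sets=4, line_bytes=16):
    """Count misses of a direct-mapped cache on a list of byte addresses."""
    tags = [None] * num_sets
    misses = 0
    for addr in trace:
        line = addr // line_bytes          # cache line number
        s, tag = line % num_sets, line // num_sets
        if tags[s] != tag:                 # miss: fill the set
            misses += 1
            tags[s] = tag
    return misses

# 0x000 and 0x040 map to the same set of a 4-set cache, so alternating
# between them evicts on every access even though the cache is nearly empty.
ping_pong = [0x000, 0x040] * 4
```

With eight sets instead of four the two addresses no longer collide and only the two compulsory misses remain, which is why index-mapping schemes that spread conflicting lines can matter as much as raw capacity.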

Logging in persistent memory

Proceedings of the International Symposium on Memory Systems, 2017
Persistent memory is a new class of memory that functions as a hybrid of traditional storage systems and main memory. It combines the benefits of both the data persistence property of storage and the fast load/store interface of memory. In order to maintain data persistence in memory, a widely used mechanism is logging - in addition to updating the ...
Mengjie Li, Matheus Ogleari, Jishen Zhao
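The logging mechanism the abstract describes can be sketched as undo logging: before a value is updated in place, its old value is appended to a persistent log, so an interrupted transaction can be rolled back (a hedged sketch; the in-memory structures stand in for persistent regions, and `append` here abstracts the cache-line write-back and ordering fences real persistent memory requires):

```python
class UndoLog:
    def __init__(self, data):
        self.data = data      # models the persistent data region
        self.log = []         # models the persistent undo-log region

    def update(self, addr, new_val):
        # Log the old value first (and, on real hardware, flush the
        # log entry) before mutating the data in place.
        self.log.append((addr, self.data[addr]))
        self.data[addr] = new_val

    def commit(self):
        self.log.clear()      # updates durable: undo records discarded

    def recover(self):
        # Crash before commit: roll back by replaying the log in reverse.
        for addr, old in reversed(self.log):
            self.data[addr] = old
        self.log.clear()
```

The "in addition to updating" cost the abstract mentions is visible here: every store pays an extra log append plus the ordering needed to make the log durable before the data.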

Cache memories in dataflow architecture

Proceedings. Seventh IEEE Symposium on Parallel and Distributed Processing, 2002
The recent advance in dataflow processing - to combine the dataflow paradigm with the control flow paradigm - has brought out many new challenging issues. This hybrid organization has made it possible to study familiar control flow concepts within the framework of the dataflow architecture.
Krishna M. Kavi, Ali R. Hurson

Benchmarking the cache memory effect

1996
A new performance model of the memory hierarchy is first introduced, describing all possible scenarios of the calculation process, including the important case in which the cache memory is bypassed. A detailed study of each scenario is then given, along with the derivation of the corresponding formulae.
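A model of this kind typically reduces to an effective-access-time formula with a separate bypass case (an illustrative textbook-style formula with made-up latencies, not the model derived in the paper):

```python
def effective_access_time(hit_rate, t_cache, t_mem, bypass=False):
    """Average access time in cycles for a one-level cache model.

    bypass=True models loads that skip the cache entirely and always
    pay the full memory latency.
    """
    if bypass:
        return t_mem
    # Hit: cache latency only. Miss: cache probe plus memory access.
    return hit_rate * t_cache + (1 - hit_rate) * (t_cache + t_mem)
```

For example, with a 1-cycle cache, 100-cycle memory, and a 90% hit rate, the cached path averages 11 cycles versus 100 when bypassed, so whether a benchmark's working set actually hits in cache dominates the measured result.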
