Results 151 to 160 of about 127,524 (200)
Some of the following articles may not be open access.
Caching and Asynchronous Pages
2010
Caching is the technique of storing an in-memory copy of some information that's expensive to create. For example, you could cache the results of a complex query so that subsequent requests don't need to access the database at all. Instead, they can grab the appropriate object directly from server memory—a much faster proposition.
Matthew MacDonald et al.
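The caching idea described above can be sketched in a few lines; `fetch_report` is a hypothetical stand-in for an expensive database query, and the cache keeps its result in process memory so repeated requests skip the database entirely.

```python
import functools

# Cache up to 128 distinct query results in server memory.
@functools.lru_cache(maxsize=128)
def fetch_report(customer_id):
    # Stand-in for a complex, expensive database query.
    return {"customer": customer_id, "total": customer_id * 10}

fetch_report(7)   # first call runs the "query"
fetch_report(7)   # repeat call is served straight from memory
```

In a real web application the same pattern usually runs through the framework's cache layer (with expiration policies) rather than a plain memoization decorator, but the mechanism is identical: key the stored object by the query parameters and return the in-memory copy on a hit.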
A Unified Page Walk Buffer and Page Walk Cache
2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), 2020
GPUs support shared virtual memory (SVM) to spare programmers complex data transfers in GPU programming. However, SVM brings expensive address-translation overhead, because the GPU generates a large number of address-translation requests simultaneously.
Dunbo Zhang et al.
The Effect Of Page Allocation On Caches
[1992] Proceedings of the 25th Annual International Symposium on Microarchitecture (MICRO 25), 1992
Medium-to-large physically indexed, low-associativity caches, in which physical page number bits index the cache, present two problems. First, the cache miss rate varies between runs, because data location in the cache depends on the placement of virtual pages in physical memory.
William L. Lynch et al.
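The problem the abstract describes can be seen with a small worked example. Assuming a hypothetical geometry (a 256 KiB direct-mapped cache with 64-byte lines and 4 KiB pages), the set index reaches above the 12-bit page offset into the physical frame number, so the OS's choice of frames decides whether two pages conflict in the cache:

```python
LINE = 64
SETS = (256 * 1024) // LINE   # 4096 sets -> 18 index+offset bits
PAGE = 4096                   # page offset covers only 12 of those bits

def set_index(paddr):
    # Which cache set a physical address maps to.
    return (paddr // LINE) % SETS

# Frames 0x10 and 0x50 differ only in bits above the index field,
# so pages placed in them land on the same cache sets and conflict.
a = set_index(0x10 * PAGE)
b = set_index(0x50 * PAGE)
# a == b: same set despite different physical frames
```

Because 6 bits of the frame number participate in the index, runs of the same program can see different miss rates depending purely on which physical frames the OS happened to hand out—the run-to-run variability the paper analyzes.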
Cache performance improvement through on-demand, in-cache page clearing
Microprocessors and Microsystems, 1997
Recent advances in VLSI technology have made it possible to use caches large enough to eliminate most of the "conventional" cache misses resulting from limited cache size and/or set associativity. As a result, other sources of "non-conventional" cache misses are becoming increasingly dominant in the makeup of total cache misses.
Taejin Kim et al.
Hash, Don't Cache (the Page Table)
ACM SIGMETRICS Performance Evaluation Review, 2016
Radix page tables as implemented in the x86-64 architecture incur a penalty of four memory references for address translation upon each TLB miss. These 4 references become 24 in virtualized setups, accounting for 5%–90% of the runtime and thus motivating chip vendors to incorporate page walk caches (PWCs).
Idan Yaniv, Dan Tsafrir
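The 4-to-24 jump quoted in the abstract follows from the two-dimensional nested walk: each of the four guest page-table accesses is itself a guest-physical address that needs a full four-step host walk before it can be read, and the final guest-physical data address needs one more host walk. A back-of-the-envelope count:

```python
LEVELS = 4  # x86-64 radix page-table levels

# Native: one memory reference per page-table level.
native = LEVELS

# Virtualized (nested paging): each of the 4 guest page-table accesses
# costs a 4-step host walk plus the access itself (4 * 5 = 20), and the
# final guest-physical address needs 4 more host references.
virtualized = LEVELS * (LEVELS + 1) + LEVELS   # 4*5 + 4 = 24
```

This quadratic-in-depth blowup is exactly why page walk caches (and the hashed page tables the paper advocates) matter most in virtualized settings.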
Page associative caches on Futurebus
Microprocessors and Microsystems, 1988
A cache scheme which uses page associative cache descriptors can offer advantages in terms of its impact on cache coherence in the presence of paged transactions, and on the use of local memory to minimize bus loading. It can also be used to preserve cache coherence when processor accesses are cached (i.e. a logical cache) in the presence of …

