Results 231 to 240 of about 5,122
Some of the following articles may not be open access.
Elastic-Cache: GPU Cache Architecture for Efficient Fine- and Coarse-Grained Cache-Line Management
2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2017
GPUs provide high-bandwidth/low-latency on-chip shared memory and L1 cache to efficiently service a large number of concurrent memory requests (to contiguous memory space). To support warp-wide accesses to L1 cache, GPU L1 cache lines are very wide. However, such L1 cache architecture cannot always be efficiently utilized when applications generate ...
Bingchao Li +3 more
Multiprocessor revolution and cache management
2012 3rd National Conference on Emerging Trends and Applications in Computer Science, 2012
Advancement in semiconductor technology allows more and more transistors to be packed on a single die, increasing the complexity of the systems. Chip density continues to approximately double every two years. However, there is little instruction-level parallelism left to improve the performance of a single processor.
Networking, Cache, and Power Management
2011
So far, we've looked at how to remove memory leaks and make our interfaces scroll and animate without much lag. The application is starting to perform and act like a polished app that is ready for prime time. The next step in our process is networking.
Brandon Alexander +2 more
Integrative oncology: Addressing the global challenges of cancer prevention and treatment
CA: A Cancer Journal for Clinicians, 2022
Jun J. Mao, MSCE +2 more
Partition-Based Cache Replacement to Manage Shared L2 Caches
Chinese Journal of Electronics, 2014
Juan Fang +4 more

