Results 1 to 10 of about 62,688
Finite Automata Implementations Considering CPU Cache
Finite automata are mathematical models of finite state systems. The more general model is the nondeterministic finite automaton (NFA), which cannot be used directly (a minimal simulation sketch follows this entry).
J. Holub
doaj +4 more sources
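As a minimal illustration of the NFA notion mentioned above (not taken from the cited paper; the example automaton and its transitions are assumptions), an NFA can still be simulated directly by tracking the set of active states; in practice, implementation studies like the one above usually determinize the NFA into a DFA first, which is friendlier to the CPU cache.

# Minimal NFA simulation by tracking the set of currently active states.
# The automaton below (accepting strings over {0,1} ending in "01") is an
# illustrative assumption, not an example from the cited paper.

NFA = {
    "start": {"q0"},
    "accept": {"q2"},
    "delta": {                     # (state, symbol) -> set of next states
        ("q0", "0"): {"q0", "q1"},
        ("q0", "1"): {"q0"},
        ("q1", "1"): {"q2"},
    },
}

def nfa_accepts(nfa, word):
    """Simulate the NFA on `word` by keeping the set of reachable states."""
    active = set(nfa["start"])
    for symbol in word:
        active = set().union(*(nfa["delta"].get((s, symbol), set()) for s in active))
        if not active:             # no live states left: reject early
            return False
    return bool(active & nfa["accept"])

if __name__ == "__main__":
    print(nfa_accepts(NFA, "10101"))   # True  (ends in "01")
    print(nfa_accepts(NFA, "0110"))    # False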
Two novel cache management mechanisms on CPU-GPU heterogeneous processors
Heterogeneous multicore processors that take full advantage of CPUs and GPUs within the same chip raise an emerging challenge in sharing on-chip resources, particularly Last-Level Cache (LLC) resources.
Huijing Yang, Tingwen Yu
doaj +4 more sources
Evaluating associativity in CPU caches [PDF]
The authors present new and efficient algorithms for simulating alternative direct-mapped and set-associative caches and use them to quantify the effect of limited associativity on the cache miss ratio. They introduce an algorithm, forest simulation, for simulating alternative direct-mapped caches and generalize one, which they call all-associativity ... (a basic single-configuration simulator is sketched after this entry for contrast)
M.D. Hill, A.J. Smith
openaire +2 more sources
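For contrast with the single-pass, multi-configuration algorithms described above (forest simulation and all-associativity simulation, which this sketch does not implement), a straightforward simulator evaluates one set-associative LRU configuration per pass over the trace and reports its miss ratio. Block size, cache geometry, and the trace below are illustrative assumptions.

from collections import OrderedDict

def miss_ratio(trace, num_sets, ways, block_bytes=64):
    """Simulate one set-associative LRU cache and return its miss ratio.

    `trace` is an iterable of byte addresses. One configuration per call;
    the cited paper's contribution is simulating many configurations in a
    single pass, which this sketch does not attempt.
    """
    sets = [OrderedDict() for _ in range(num_sets)]   # per-set LRU stacks
    misses = accesses = 0
    for addr in trace:
        block = addr // block_bytes
        s = sets[block % num_sets]
        accesses += 1
        if block in s:
            s.move_to_end(block)          # hit: refresh LRU position
        else:
            misses += 1
            if len(s) >= ways:            # evict the least recently used block
                s.popitem(last=False)
            s[block] = True
    return misses / accesses if accesses else 0.0

if __name__ == "__main__":
    trace = [64 * i for i in range(32)] * 4           # repeated sweep over 32 blocks
    for ways in (1, 2, 4):
        print(ways, "ways:", miss_ratio(trace, num_sets=8, ways=ways))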
Novel CPU cache architecture based on two-dimensional MTJ device with ferromagnetic Fe3GeTe2 [PDF]
With the development of Artificial Intelligence (AI) in recent years, the fields of computing, biology, medicine, and aerospace have placed higher demands on the processing and storage of information.
Shaopu Han, Yanfeng Jiang
doaj +2 more sources
On the Incomparability of Cache Algorithms in Terms of Timing Leakage [PDF]
Modern computer architectures rely on caches to reduce the latency gap between the CPU and main memory. While indispensable for performance, caches pose a serious threat to security because they leak information about the memory access patterns of programs ... (a small policy-comparison sketch follows this entry)
Pablo Cañones, Boris Köpf, Jan Reineke
doaj +3 more sources
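As a concrete, hedged illustration of why the replacement policy matters for leakage (this is not the paper's analysis), the same access sequence can yield different hit/miss patterns, and hence different timing observations, under LRU and FIFO replacement in a single fully associative set.

from collections import OrderedDict, deque

def hit_miss_pattern(trace, ways, policy):
    """Return the hit ('H') / miss ('M') string for one fully associative set."""
    pattern = []
    if policy == "LRU":
        cache = OrderedDict()
        for b in trace:
            if b in cache:
                cache.move_to_end(b)       # LRU reorders on a hit
                pattern.append("H")
            else:
                pattern.append("M")
                if len(cache) >= ways:
                    cache.popitem(last=False)
                cache[b] = True
    elif policy == "FIFO":
        cache = deque()
        for b in trace:
            if b in cache:
                pattern.append("H")        # FIFO does not reorder on a hit
            else:
                pattern.append("M")
                if len(cache) >= ways:
                    cache.popleft()
                cache.append(b)
    return "".join(pattern)

if __name__ == "__main__":
    trace = ["a", "b", "c", "a", "d", "a", "b"]    # illustrative access sequence
    print("LRU :", hit_miss_pattern(trace, ways=3, policy="LRU"))    # MMMHMHM
    print("FIFO:", hit_miss_pattern(trace, ways=3, policy="FIFO"))   # MMMHMMM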
Pipelined CPU-GPU Scheduling for Caches
Heterogeneous microprocessors integrate a CPU and GPU with a shared cache hierarchy on the same chip, affording low-overhead communication between the CPU and GPU cores. Oftentimes, large array data structures are communicated from the CPU to the GPU and back. While the on-chip cache hierarchy can support such CPU-GPU producer-consumer sharing, this ... (the pipelining idea is sketched after this entry)
Gerzhoy, Daniel, Yeung, Donald
openaire +2 more sources
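The GPU-side details are hardware-specific, but the pipelining idea above can be sketched in a hedged, CPU-only form: the producer hands data to the consumer in small tiles through a bounded queue, so each tile is consumed while it is still likely to be resident in a shared cache. The tile size and the square/sum "kernels" are illustrative assumptions, not the paper's workloads.

import queue, threading

TILE = 4096          # elements per tile; sized to fit in a shared cache (assumption)
DONE = None          # sentinel marking the end of the stream

def producer(data, q):
    """Produce the array in tiles instead of all at once (stand-in for CPU-side work)."""
    for i in range(0, len(data), TILE):
        q.put([x * x for x in data[i:i + TILE]])
    q.put(DONE)

def consumer(q, out):
    """Consume each tile as soon as it is produced (stand-in for GPU-side work)."""
    total = 0
    while (tile := q.get()) is not DONE:
        total += sum(tile)
    out.append(total)

if __name__ == "__main__":
    data = list(range(100_000))
    q = queue.Queue(maxsize=2)        # small bound keeps tiles "hot" between stages
    out = []
    t1 = threading.Thread(target=producer, args=(data, q))
    t2 = threading.Thread(target=consumer, args=(q, out))
    t1.start(); t2.start(); t1.join(); t2.join()
    print(out[0] == sum(x * x for x in data))      # True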
Proposal New Cache Coherence Protocol to Optimize CPU Time through Simulation Caches [PDF]
Cache coherence is the most important issue affecting the performance of a multicore processor, driven by the increasing number of cores on chip multiprocessors and by the shared-memory programs that run on these processors ... (a MESI-style baseline is sketched after this entry)
Luma Fayeq Jalil +2 more
doaj +3 more sources
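The proposed protocol itself is not reproduced here; as background, the sketch below models a conventional MESI-style invalidation protocol for a single cache line shared by two caches, i.e. the kind of baseline such proposals are compared against.

# Toy model of MESI-style invalidation for ONE cache line shared by two caches.
# A generic illustration of coherence, not the protocol proposed in the paper.

M, E, S, I = "Modified", "Exclusive", "Shared", "Invalid"

class Line:
    def __init__(self):
        self.state = {0: I, 1: I}     # per-cache state of the same line

    def read(self, cpu):
        other = 1 - cpu
        if self.state[cpu] == I:                       # read miss
            if self.state[other] in (M, E):
                self.state[other] = S                  # other cache downgrades
            self.state[cpu] = S if self.state[other] == S else E
        return self.state

    def write(self, cpu):
        other = 1 - cpu
        self.state[other] = I                          # invalidate the other copy
        self.state[cpu] = M
        return self.state

if __name__ == "__main__":
    line = Line()
    print(line.read(0))    # {0: 'Exclusive', 1: 'Invalid'}
    print(line.read(1))    # {0: 'Shared', 1: 'Shared'}
    print(line.write(0))   # {0: 'Modified', 1: 'Invalid'}
    print(line.read(1))    # {0: 'Shared', 1: 'Shared'}  (writer supplies the data)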
Optimizing CPU Cache Utilization in Cloud VMs with Accurate Cache Abstraction [PDF]
This paper shows that cache-based optimizations are often ineffective in cloud virtual machines (VMs) due to limited visibility into and control over provisioned caches. In public clouds, CPU caches can be partitioned or shared among VMs, but a VM is unaware of cache provisioning details (a sketch of reading the exposed topology follows this entry).
Tofigh, Mani +5 more
openaire +3 more sources
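What a VM can actually observe is the cache topology the guest kernel exposes. The sketch below uses generic Linux sysfs reads (not the paper's abstraction) to print that exposed view; inside a cloud VM the reported sizes and sharing lists may not reflect the physically provisioned cache, which is the gap the paper addresses.

import glob, os

def exposed_cache_topology(cpu="cpu0"):
    """Read the cache levels/sizes that the (possibly virtualized) kernel exposes via sysfs."""
    caches = []
    for index in sorted(glob.glob(f"/sys/devices/system/cpu/{cpu}/cache/index*")):
        entry = {}
        for field in ("level", "type", "size", "ways_of_associativity", "shared_cpu_list"):
            path = os.path.join(index, field)
            if os.path.exists(path):
                with open(path) as f:
                    entry[field] = f.read().strip()
        caches.append(entry)
    return caches

if __name__ == "__main__":
    for c in exposed_cache_topology():
        print(c)   # e.g. {'level': '3', 'type': 'Unified', 'size': '32768K', ...}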
CacheOut: Leaking Data on Intel CPUs via Cache Evictions [PDF]
Recent transient-execution attacks, such as RIDL, Fallout, and ZombieLoad, demonstrated that attackers can leak information while it transits through microarchitectural buffers. Named Microarchitectural Data Sampling (MDS) by Intel, these attacks are likened to "drinking from the firehose", as the attacker has little control over what data is observed ...
van Schaik, Stephan +4 more
openaire +4 more sources
Data Rate Estimation for Wireless Core-to-Cache Communication in Multicore CPUs
In this paper, the principal architecture of a general-purpose CPU and its main components are discussed, the evolution of CPUs is considered, and drawbacks that prevent further CPU development are mentioned.
Maria S. Komar +4 more
doaj +3 more sources

