The topic discussed is the intelligent design of network multimedia using BD and virtual AI technology. The authors first give a brief overview of the relevant research background, then comprehensively analyse the strengths and weaknesses of previous scholars' research on network multimedia.
Xin Zhang
wiley +1 more source
Proposal New Cache Coherence Protocol to Optimize CPU Time through Simulation Caches [PDF]
Cache coherence is a critical issue that directly affects the performance of a multicore processor, as the number of cores on chip multiprocessors grows and shared-memory programs run on these processors ...
Luma Fayeq Jalil +2 more
doaj +1 more source
In an asymmetric multi-core architecture, multiple heterogeneous cores share the last-level cache (LLC). Because the heterogeneous cores have different memory access requirements, contention for the LLC is intense.
Juan Fang +4 more
doaj +1 more source
Data Cache-Energy and Throughput Models: Design Exploration for Embedded Processors [PDF]
Most modern 16-bit and 32-bit embedded processors contain cache memories to further increase the instruction throughput of the device. Embedded processors that contain cache memories open an opportunity for the low-power research community to model the ...
McDonald-Maier, Klaus +1 more
core +3 more sources
Wait-Free Shared-Memory Irradiance Caching [PDF]
Parallelizing rendering algorithms to exploit multiprocessor and multicore machines isn't straightforward. Certain methods require frequent synchronization among threads to obtain benefits similar to the sequential algorithm. One such algorithm is the irradiance cache (IC), an acceleration data structure that caches indirect diffuse irradiance values ...
Debattista, Kurt +3 more
openaire +4 more sources
Cache Coherence Protocol Design and Simulation Using IES (Invalid Exclusive read/write Shared) State
To improve the efficiency with which processors in recent multiprocessor systems handle data, cache memories are used for data access instead of main memory, which reduces access latency.
Baghdad Science Journal
doaj +1 more source
Variable-based multi-module data caches for clustered VLIW processors [PDF]
Memory structures consume a significant fraction of the total processor energy. One way to reduce the energy consumed by cache memories is to reduce their supply voltage and/or increase their threshold voltage, at the expense of access time. We
Abella Ferrer, Jaume +4 more
core +1 more source
Locality-Based Cache Management and Warp Scheduling for Reducing Cache Contention in GPU
GPGPUs have gradually become a mainstream acceleration component in high-performance computing. The long latency of memory operations is the bottleneck of GPU performance. In the GPU, threads are grouped into warps for scheduling and execution.
Juan Fang, Zelin Wei, Huijing Yang
doaj +1 more source
Three-dimensional memory vectorization for high bandwidth media memory systems [PDF]
Vector processors have good performance, cost and adaptability when targeting multimedia applications. However, for a significant number of media programs, conventional memory configurations fail to deliver enough memory references per cycle to feed the ...
Corbal San Adrián, Jesús +2 more
core +1 more source
Evaluating a number of cache coherency misses based on a statistical model
False cache sharing happens when different parallel execution threads update variables that reside in the same cache line. In this paper we propose to evaluate the number of cache misses using code instrumentation and post-mortem trace analysis: the ...
Evgeny Velesevich
doaj +1 more source

