Results 41 to 50 of about 62,688

Research on heterogeneous acceleration platform based on FPGA [PDF]

open access: yesITM Web of Conferences, 2022
In the context of today’s artificial intelligence, the volume of data is exploding. Although scaling distributed clusters horizontally to cope with the increasing demand for computing power in massive data processing is feasible, …
Meng Yuan, Yang Jun
doaj   +1 more source

Packet Processing Architecture Using Last-Level-Cache Slices and Interleaved 3D-Stacked DRAM

open access: yesIEEE Access, 2020
Packet processing performance in a Network Function Virtualization (NFV)-aware environment depends on the memory access performance of commercial off-the-shelf (COTS) hardware systems.
Tomohiro Korikawa   +3 more
doaj   +1 more source

Terahertz Band Intra-Chip Communications: Can Wireless Links Scale Modern x86 CPUs?

open access: yesIEEE Access, 2017
Massive multi-core processing has recently attracted significant attention from the research community as one of the feasible solutions to satisfy constantly growing performance demands. However, this evolution path is nowadays hampered by the complexity …
Vitaly Petrov   +6 more
doaj   +1 more source

GPU Implementation of Atomic Fluid MD Simulation

open access: yesTASK Quarterly, 2022
A computer simulation of an atomic fluid on a GPU was implemented using the CUDA architecture. It was shown that the programming model for efficient numerical computing applications was changing with the development of the CUDA architecture. (A sketch of the core pair-force computation follows this entry.)
Aleksander Dawid
doaj   +1 more source
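
As a rough illustration of the workload described above, the C++ sketch below shows the Lennard-Jones pair-force accumulation that an atomic-fluid MD step performs for every particle pair; in a CUDA version, the inner loop for each particle is typically mapped to one GPU thread. The structure and names (`Particle`, `lj_forces`) are assumptions for illustration, not code from the paper.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// One particle of the atomic fluid (position and accumulated force).
struct Particle { double x, y, z, fx, fy, fz; };

// Brute-force O(N^2) Lennard-Jones force accumulation over all pairs.
// sigma and epsilon are the usual LJ parameters.
void lj_forces(std::vector<Particle>& p, double sigma, double epsilon) {
    const double s6 = std::pow(sigma, 6);
    for (auto& q : p) { q.fx = q.fy = q.fz = 0.0; }
    for (std::size_t i = 0; i < p.size(); ++i) {
        for (std::size_t j = i + 1; j < p.size(); ++j) {
            const double dx = p[i].x - p[j].x;
            const double dy = p[i].y - p[j].y;
            const double dz = p[i].z - p[j].z;
            const double inv_r2 = 1.0 / (dx * dx + dy * dy + dz * dz);
            const double s6r6   = s6 * inv_r2 * inv_r2 * inv_r2;  // (sigma/r)^6
            // F = 24*eps/r^2 * [2*(sigma/r)^12 - (sigma/r)^6] * (r_i - r_j)
            const double coef = 24.0 * epsilon * s6r6 * (2.0 * s6r6 - 1.0) * inv_r2;
            p[i].fx += coef * dx; p[i].fy += coef * dy; p[i].fz += coef * dz;
            p[j].fx -= coef * dx; p[j].fy -= coef * dy; p[j].fz -= coef * dz;
        }
    }
}
```

Real MD codes add a cutoff radius and neighbour lists; the GPU speedup comes from running many of these independent pair evaluations concurrently.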

Fast Query Processing by Distributing an Index over CPU Caches

open access: yes, 2005
Data-intensive applications on clusters often require that requests be sent quickly to the node managing the desired data. In many applications, one must search a sorted tree structure to determine which node is responsible for accessing or storing the data. (A sketch of this node-lookup step follows this entry.)
Gene Cooperman, Xiaoqin Ma
core   +1 more source
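
The C++ fragment below is a hedged sketch of that lookup step: each node owns a contiguous key range, and the sorted array of range upper bounds is kept small enough to stay resident in a CPU cache, so resolving the responsible node never touches main memory. A sorted array searched with std::lower_bound stands in here for the paper's sorted tree structure, and the function `responsible_node` is hypothetical.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Find the index of the node whose key range contains `key`.
// upper_bounds holds the sorted upper bound of each node's range.
int responsible_node(const std::vector<std::uint64_t>& upper_bounds, std::uint64_t key) {
    // The owner is the first partition whose upper bound is >= key.
    auto it = std::lower_bound(upper_bounds.begin(), upper_bounds.end(), key);
    if (it == upper_bounds.end()) return -1;              // key beyond the last partition
    return static_cast<int>(it - upper_bounds.begin());   // index of the responsible node
}
```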

SU(2) Lattice Gauge Theory Simulations on Fermi GPUs [PDF]

open access: yes, 2011
In this work we explore the performance of CUDA in quenched lattice SU(2) simulations. CUDA, NVIDIA's Compute Unified Device Architecture, is a hardware and software architecture developed by NVIDIA for computing on the GPU. (A sketch of the compact SU(2) representation such codes exploit follows this entry.)
Bhanot   +14 more
core   +1 more source
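
One reason SU(2) lattice codes map well to GPUs is that an SU(2) matrix is fully determined by two complex numbers, so links can be stored and multiplied compactly, reducing memory traffic. The C++ sketch below shows that representation and the corresponding multiplication; it is generic algebra, not the data layout of this particular code.

```cpp
#include <complex>

// An SU(2) matrix is determined by two complex numbers a and b:
//     [  a         b       ]
//     [ -conj(b)   conj(a) ]      with |a|^2 + |b|^2 = 1,
// so a link variable needs 2 instead of 4 complex entries.
struct SU2 { std::complex<double> a, b; };

// Product of two SU(2) matrices, expressed directly in the (a, b) form.
SU2 mul(const SU2& u, const SU2& v) {
    return { u.a * v.a - u.b * std::conj(v.b),
             u.a * v.b + u.b * std::conj(v.a) };
}
```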

Evaluating kernels on Xeon Phi to accelerate Gysela application [PDF]

open access: yes, 2014
This work describes the challenges presented by porting parts of the Gysela code to the Intel Xeon Phi coprocessor, as well as techniques used for optimization, vectorization and tuning that can be applied to other applications. (A sketch of the kind of loop-level vectorization involved follows this entry.)
Bigot, J.   +5 more
core   +4 more sources
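
As an illustration only, the sketch below shows the kind of loop-level vectorization hint used when tuning a kernel for a wide-SIMD target such as the Xeon Phi. The function `scale_add` and its data layout are hypothetical, not taken from the Gysela code.

```cpp
#include <cstddef>

// A simple fused multiply-add loop; the OpenMP simd pragma asks the compiler
// to vectorize it so the body maps onto wide vector FMA lanes.
void scale_add(const double* x, const double* y, double* out,
               double alpha, std::size_t n) {
#pragma omp simd
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = alpha * x[i] + y[i];
    }
}
```

On such targets, data alignment and unit-stride access usually matter as much as the pragma itself.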

Simulation of direct mapped, k-way and fully associative cache on all pairs shortest paths algorithms

open access: yesСистемный анализ и прикладная информатика, 2019
A cache is an intermediate level between the fast CPU and the slow main memory. It stores copies of frequently used data in order to reduce the access time to the main memory. (A minimal direct-mapped cache model of the kind such a simulation replays follows this entry.)
A. A. Prihozhy
doaj   +1 more source
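
The C++ sketch below is a minimal illustration of what such a simulator does for the direct-mapped case: replay an address trace and count hits and misses. The class name and interface are assumptions, not the paper's simulator; the k-way and fully associative variants would add a per-set replacement policy.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal direct-mapped cache model; lines and line_bytes are assumed to be
// powers of two.
class DirectMappedCache {
public:
    DirectMappedCache(std::size_t lines, std::size_t line_bytes)
        : tags_(lines, 0), valid_(lines, false), lines_(lines), line_bytes_(line_bytes) {}

    // Returns true on a hit; on a miss the referenced line replaces the old one.
    bool access(std::uint64_t address) {
        std::uint64_t block = address / line_bytes_;  // drop the byte offset
        std::size_t   index = block % lines_;         // which cache line to check
        std::uint64_t tag   = block / lines_;         // identifies the memory block
        if (valid_[index] && tags_[index] == tag) { ++hits_; return true; }
        tags_[index] = tag;
        valid_[index] = true;
        ++misses_;
        return false;
    }

    std::uint64_t hits() const { return hits_; }
    std::uint64_t misses() const { return misses_; }

private:
    std::vector<std::uint64_t> tags_;
    std::vector<bool> valid_;
    std::size_t lines_, line_bytes_;
    std::uint64_t hits_ = 0, misses_ = 0;
};
```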

Cache memory system for high performance CPU with 4GHz [PDF]

open access: yesJournal of the Korea Society of Computer and Information, 2013
In this paper, we propose a high-performance L1 cache structure for a high-clock-rate 4 GHz CPU. The proposed cache memory consists of three parts: a direct-mapped cache to support fast access time, a two-way set-associative buffer to exploit temporal locality, and a buffer-select table. (A toy model of this three-part lookup path follows this entry.)
Bo-Sung Jung, Jung-Hoon Lee
openaire   +1 more source
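
One possible reading of the lookup path described in this abstract is sketched below in C++: probe the direct-mapped cache first, fall back to the small two-way buffer, and on a miss let a select table steer where the new block is placed. The steering policy, sizes, and names are assumptions for illustration only; the paper's actual buffer-select mechanism may work differently.

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Toy model of the three-part L1: direct-mapped array, two-way buffer,
// and a per-index select flag that alternates the fill target.
class ThreePartL1 {
public:
    ThreePartL1(std::size_t dm_lines, std::size_t buf_sets)
        : dm_(dm_lines, ~0ull),            // ~0ull marks an empty line (assumed unused)
          buf_(buf_sets), select_(dm_lines, false),
          dm_lines_(dm_lines), buf_sets_(buf_sets) {}

    // block is the memory block number (address divided by the line size);
    // for simplicity the model stores whole block numbers instead of tags.
    bool access(std::uint64_t block) {
        const std::size_t dm_idx = block % dm_lines_;
        if (dm_[dm_idx] == block) return true;              // direct-mapped hit
        auto& set = buf_[block % buf_sets_];
        for (std::size_t w = 0; w < set.size(); ++w) {
            if (set[w] == block) {                          // two-way buffer hit
                std::swap(set[0], set[w]);                  // keep way 0 most recent
                return true;
            }
        }
        // Miss: alternate the fill target per index so repeated conflicts do
        // not keep evicting each other from the direct-mapped array.
        if (select_[dm_idx]) {
            set.insert(set.begin(), block);                 // most-recent way first
            if (set.size() > 2) set.pop_back();             // evict the LRU way
        } else {
            dm_[dm_idx] = block;
        }
        select_[dm_idx] = !select_[dm_idx];
        return false;
    }

private:
    std::vector<std::uint64_t> dm_;
    std::vector<std::vector<std::uint64_t>> buf_;
    std::vector<bool> select_;
    std::size_t dm_lines_, buf_sets_;
};
```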
