Results 251 to 260 of about 209,152 (323)
Some of the following articles may not be open access.

NVIDIA A100 Tensor Core GPU: Performance and Innovation

IEEE Micro, 2021
The NVIDIA A100 Tensor Core GPU is NVIDIA's latest flagship GPU. It has been designed with many new innovative features to provide performance and capabilities for HPC, AI, and data analytics workloads.
Jack Choquette

NVIDIA Hopper H100 GPU: Scaling Performance

IEEE Micro, 2023
The H100 Tensor Core GPU is NVIDIA's latest flagship GPU. It has been designed to provide industry-leading performance for high-performance computing, artificial intelligence, and data analytics datacenter workloads. Notable new features include a fourth- ...
Jack Choquette

A Survey on optimized implementation of deep learning models on the NVIDIA Jetson platform

Journal of Systems Architecture, 2019
Design of hardware accelerators for neural network (NN) applications involves walking a tight rope amidst the constraints of low-power, high accuracy and throughput.
Sparsh Mittal

Benchmarking Deep Learning Models on NVIDIA Jetson Nano for Real-Time Systems: An Empirical Investigation

Procedia Computer Science
The proliferation of complex deep learning (DL) models has revolutionized various applications, including computer vision-based solutions, prompting their integration into real-time systems.
Tushar Prasanna Swaminathan   +2 more

Hardware Compute Partitioning on NVIDIA GPUs

IEEE Real Time Technology and Applications Symposium, 2023
Embedded and autonomous systems are increasingly integrating AI/ML features, often enabled by a hardware accelerator such as a GPU. As these workloads become increasingly demanding, but size, weight, power, and cost constraints remain unyielding, ways to ...
Joshua Bakita, James H. Anderson

Quantum Mechanics/Molecular Mechanics Simulations on NVIDIA and AMD Graphics Processing Units

Journal of Chemical Information and Modeling, 2023
We have ported and optimized the graphics processing unit (GPU)-accelerated QUICK and AMBER-based ab initio quantum mechanics/molecular mechanics (QM/MM) implementation on AMD GPUs.
M. Manathunga   +3 more

An Open, Programmable, Multi-Vendor 5G O-RAN Testbed with NVIDIA ARC and OpenAirInterface

Conference on Computer Communications Workshops, 2023
The transition of fifth generation (5G) cellular systems to softwarized, programmable, and intelligent networks depends on successfully enabling public and private 5G deployments that are (i) fully software-driven and (ii) with a performance at par with ...
Davide Villa   +11 more

Quantum Computer Simulations at Warp Speed: Assessing the Impact of GPU Acceleration: A Case Study with IBM Qiskit Aer, Nvidia Thrust & cuQuantum

IEEE International Conference on e-Science, 2023
Quantum computer simulators are crucial for the development of quantum computing. This work investigates GPU and multi-GPU systems' suitability and performance impact on a widely used simulation tool – the state vector simulator Qiskit Aer. In particular, ...
Jennifer Faj   +3 more

NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model

arXiv.org
We introduce Nemotron-Nano-9B-v2, a hybrid Mamba-Transformer language model designed to increase throughput for reasoning workloads while achieving state-of-the-art accuracy compared to similarly-sized models. Nemotron-Nano-9B-v2 builds on the Nemotron-H ...
Aarti Basant   +209 more

Nvidia’s Arm Deal Investigated

New Electronics, 2021
Is the UK probe just another example of how governments are looking to tighten their control of semiconductor technology?
