Results 61 to 70 of about 77,762 (283)
Though custom deep learning (DL) hardware accelerators are attractive for inference in edge computing devices, their design and implementation remain a challenge. Open-source frameworks exist for exploring DL hardware accelerators.
Dennis Agyemanh Nana Gookyi +4 more
doaj +1 more source
The CBE Hardware Accelerator for Numerical Relativity: A Simple Approach
Hardware accelerators (such as the Cell Broadband Engine) have recently received a significant amount of attention from the computational science community because they can provide significant gains in the overall performance of many numerical ...
Khanna, Gaurav
core +1 more source
Hardware Accelerated DNA Sequencing [PDF]
Basecalling is a core function in DNA sequencing. It is responsible for the conversion of measured data to a text representation of the DNA's molecular make-up. Recent advances in sequencing machinery have greatly accelerated the rate at which DNA data can be gathered using miniaturized platforms.
ZhongPan Wu +4 more
openaire +1 more source
Surface Tension Measurement of Ti‐6Al‐4V by Falling Droplet Method in Oxygen‐Free Atmosphere
In this article, the temperature‐dependent surface tension of free falling, oscillating Ti‐6Al‐4V droplets is investigated in both argon and monosilane doped, oxygen‐free atmosphere. Droplet temperature and oscillation are captured with one single high‐speed camera, and the surface tension is calculated with Rayleigh's formula.
Johannes May +9 more
wiley +1 more source
Possibilities of using of hardware accelerators for intrusion detection and prevention systems
The subject of this study is the capabilities of FPGA technology for cybersecurity solutions with the network interface accelerators of SmartNIC, as well as the technologies for building, deploying, supporting, and accelerating intrusion detection ...
Artem Tetskyi, Artem Perepelitsyn
doaj +1 more source
A compression strategy to accelerate LSTM meta-learning on FPGA
Driven by edge computing, how to efficiently deploy the meta-learner LSTM in resource-constrained FPGA terminal equipment has become a significant challenge.
NianYi Wang +4 more
doaj +1 more source
An all‐in‐one analog AI accelerator is presented, enabling on‐chip training, weight retention, and long‐term inference acceleration. It leverages a BEOL‐integrated CMO/HfOx ReRAM array with low‐voltage operation (<1.5 V), multi‐bit capability over 32 states, low programming noise (10 nS), and near‐ideal weight transfer.
Donato Francesco Falcone +11 more
wiley +1 more source
Many applications of Self-Organizing Feature Maps (SOMs) need a high-performance hardware system in order to be efficient. Because of the regular and modular structure of SOMs, a hardware realization is a natural fit. Based on the idea of a massively parallel system, several chips have been designed, manufactured, and tested by the authors.
Rüping, Stefan +2 more
openaire +2 more sources
Unleashing the Power of Machine Learning in Nanomedicine Formulation Development
A random forest machine learning model is able to make predictions on nanoparticle attributes of different nanomedicines (i.e. lipid nanoparticles, liposomes, or PLGA nanoparticles) based on microfluidic formulation parameters. Machine learning models are based on a database of nanoparticle formulations, and models are able to generate unique solutions
Thomas L. Moore +7 more
wiley +1 more source
Instruction-Level Abstraction (ILA): A Uniform Specification for System-on-Chip (SoC) Verification
Modern Systems-on-Chip (SoC) designs are increasingly heterogeneous and contain specialized semi-programmable accelerators in addition to programmable processors.
Gupta, Aarti +5 more
core +1 more source