Results 91 to 100 of about 186,522

Halogen‐Bond Coupled Halogenated‐π‐Conjugation Enables Giant Birefringence in Hydrogen‐Bonded Organic Frameworks

open access: yes, Advanced Science, EarlyView.
The perfect crystal packing achieved via the halogen‐bond (XB) coupled halogenated‐π‐conjugation strategy effectively induces an ultrahigh birefringence (Δn = 0.97), showing promise for polarization control and phase modulation.
Miao‐Bin Xu   +6 more
wiley   +1 more source
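
For scale (a back-of-the-envelope worked example; the 532 nm wavelength is an assumed illustrative value, not from the abstract): the retardance of a birefringent plate of thickness d grows with Δn, so a giant Δn permits extremely thin phase optics.

```latex
\[
\Gamma = \frac{2\pi}{\lambda}\,\Delta n\, d,
\qquad
d_{\lambda/4} = \frac{\lambda}{4\,\Delta n}
             = \frac{532\ \mathrm{nm}}{4 \times 0.97}
             \approx 137\ \mathrm{nm}.
\]
```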

RRAM Variability Harvesting for CIM‐Integrated TRNG

open access: yes, Advanced Electronic Materials, EarlyView.
This work demonstrates a compute‐in‐memory‐compatible true random number generator that harvests intrinsic cycle‐to‐cycle variability from a 1T1R RRAM array. Parallel entropy extraction enables high‐throughput bit generation without dedicated circuits. This approach achieves NIST‐compliant randomness and low per‐bit energy, offering a scalable hardware ...
Ankit Bende   +4 more
wiley   +1 more source
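
A minimal sketch of the harvesting idea, assuming hypothetical switching-time reads with Gaussian noise as a stand-in for real device variability (the compare-consecutive-cycles scheme and Von Neumann debiasing here are generic TRNG building blocks, not necessarily the paper's circuit):

```python
import numpy as np

rng = np.random.default_rng(0)  # stands in for physical RRAM noise

def read_switching_times(n_cells, n_cycles):
    """Hypothetical stand-in for measuring cycle-to-cycle SET times
    across a 1T1R array; real entropy would come from the devices."""
    return rng.normal(loc=50e-9, scale=5e-9, size=(n_cycles, n_cells))

def raw_bits(samples):
    # Compare consecutive cycles per cell: 1 if this cycle switched
    # slower than the previous one, else 0. All cells are processed
    # at once, mirroring the parallel entropy extraction described.
    return (samples[1:] > samples[:-1]).astype(np.uint8)

def von_neumann(bits):
    # Debias: map 01 -> 0, 10 -> 1, discard 00 and 11.
    pairs = bits.reshape(-1)[: 2 * (bits.size // 2)].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]

samples = read_switching_times(n_cells=64, n_cycles=1000)
bits = von_neumann(raw_bits(samples))
print(bits[:32], f"~{bits.mean():.3f} ones ratio")
```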

Quantum data parallelism in quantum neural networks

open access: yes, Physical Review Research
Quantum neural networks hold promise for achieving lower generalization error bounds and enhanced computational efficiency in processing certain datasets.
Sixuan Wu, Yue Zhang, Jian Li
doaj   +1 more source
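
One textbook reading of quantum data parallelism, sketched with a plain statevector: a batch loaded against an index register lets a single unitary act on every sample at once (a generic construction, not necessarily the architecture studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.normal(size=(4, 2))                 # 4 samples, 1 qubit each
batch /= np.linalg.norm(batch, axis=1, keepdims=True)

# |psi> = (1/2) sum_i |i> (x) |x_i>  -- 2-qubit index register
psi = (batch / np.sqrt(len(batch))).reshape(-1)

theta = 0.3                                     # one "layer": an RY rotation
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
out = (np.kron(np.eye(4), U) @ psi).reshape(4, 2)

# The same rotation was applied to all 4 samples in one shot:
assert np.allclose(out, (batch @ U.T) / 2)
```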

Electrode‐Engineered Dual‐Mode Multifunctional Lead‐Free Perovskite Optoelectronic Memristors for Neuromorphic Computing

open access: yes, Advanced Electronic Materials, EarlyView.
A lead‐free perovskite memristive solar cell structure that can emulate both synaptic and neuronal functions controlled by light and electric fields, depending on the top electrode type. ABSTRACT Memristive devices based on halide perovskites hold strong promise to provide energy‐efficient systems for the Internet of Things (IoT); however, lead (Pb ...
Michalis Loizos   +4 more
wiley   +1 more source

Study on Distributed Training Optimization Based on Hybrid Parallel [PDF]

open access: yes, Jisuanji kexue (Computer Science)
Large-scale neural network training is a hot topic in deep learning, and distributed training stands out as one of the most effective methods for training large neural networks across multiple nodes. Distributed training typically involves ...
XU Jinlong, LI Pengfei, LI Jianan, CHEN Biaoyuan, GAO Wei, HAN Lin
doaj   +1 more source
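
A toy of what "hybrid parallel" composes, assuming the common data-parallel-plus-model-parallel split (all names and shapes are illustrative; a real system would use a framework's collectives and devices rather than NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 1))

def forward_backward(x, y, W1, W2):
    # Model parallelism: imagine "device 0" holds W1 and "device 1"
    # holds W2; only the activations h would cross the boundary.
    h = np.tanh(x @ W1)
    err = h @ W2 - y
    gW2 = h.T @ err / len(x)
    gh = (err @ W2.T) * (1 - h**2)
    gW1 = x.T @ gh / len(x)
    return gW1, gW2

# Data parallelism: shard the global batch across 4 workers, then
# average the gradients, as an all-reduce would.
x, y = rng.normal(size=(32, 8)), rng.normal(size=(32, 1))
shards = zip(np.array_split(x, 4), np.array_split(y, 4))
grads = [forward_backward(xs, ys, W1, W2) for xs, ys in shards]
W1 -= 0.01 * np.mean([g[0] for g in grads], axis=0)
W2 -= 0.01 * np.mean([g[1] for g in grads], axis=0)
```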

Process-Oriented Parallel Programming with an Application to Data-Intensive Computing [PDF]

open access: yes, 2014
We introduce process-oriented programming as a natural extension of object-oriented programming for parallel computing. It is based on the observation that every class of an object-oriented language can be instantiated as a process, accessible via a ...
Givelberg, Edward
core  
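
A minimal sketch of the stated idea, instantiating an ordinary class as a process reachable by messages (Python multiprocessing as a stand-in; all names are illustrative):

```python
from multiprocessing import Process, Queue

class Counter:
    """An ordinary class whose instances can also run as processes,
    receiving method calls as messages."""
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n
    def get(self):
        return self.value

def serve(cls, inbox, outbox):
    # Message loop: each message names a method and its arguments.
    obj = cls()
    for name, args in iter(inbox.get, None):   # None = shutdown
        outbox.put(getattr(obj, name)(*args))

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=serve, args=(Counter, inbox, outbox))
    p.start()
    inbox.put(("add", (5,))); outbox.get()
    inbox.put(("get", ()));   print(outbox.get())  # -> 5
    inbox.put(None); p.join()
```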

goSLP: Globally Optimized Superword Level Parallelism Framework

open access: yes, 2018
Modern microprocessors are equipped with single instruction multiple data (SIMD) or vector instruction sets which allow compilers to exploit superword level parallelism (SLP), a type of fine-grained parallelism.
Amarasinghe, Saman, Mendis, Charith
core   +1 more source
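
The packing idea behind SLP in miniature (a NumPy analogy, not the LLVM-IR transformation goSLP actually performs; goSLP's contribution, as we understand the paper, is choosing which statements to pack globally via integer linear programming rather than greedily):

```python
import numpy as np

# Scalar form: four isomorphic statements, each the same op chain.
def scalar(a, b):
    c0 = a[0] * b[0] + 1.0
    c1 = a[1] * b[1] + 1.0
    c2 = a[2] * b[2] + 1.0
    c3 = a[3] * b[3] + 1.0
    return [c0, c1, c2, c3]

# "SLP-vectorized" form: the four statements packed into one 4-wide
# vector op, the shape an SLP pass lowers to SIMD instructions.
def packed(a, b):
    return np.asarray(a[:4]) * np.asarray(b[:4]) + 1.0

a, b = [1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]
assert np.allclose(scalar(a, b), packed(a, b))
```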

Exploring Quantum Support Vector Regression for Predicting Hydrogen Storage Capacity of Nanoporous Materials

open access: yes, Advanced Intelligent Discovery, EarlyView.
In this study, we employed a support vector regressor and a quantum support vector regressor to predict the hydrogen storage capacity of metal–organic frameworks using structural and physicochemical descriptors. The study presents a comparative analysis of classical support vector regression (SVR) and quantum support vector regression (QSVR) in predicting ...
Chandra Chowdhury
wiley   +1 more source
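
A minimal sketch of the kernel swap that separates SVR from QSVR: with scikit-learn's precomputed-kernel interface, a quantum fidelity kernel estimated on a QPU would simply replace the classical Gram matrix below. The descriptors and targets are random stand-ins, not real MOF data.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(40, 6)), rng.normal(size=(10, 6))
y_train = rng.normal(size=40)  # stand-in for H2 uptake values

def rbf_gram(A, B, gamma=0.5):
    # Classical RBF Gram matrix; a QSVR would supply quantum
    # state-fidelity overlaps here instead.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

model = SVR(kernel="precomputed", C=10.0)
model.fit(rbf_gram(X_train, X_train), y_train)
pred = model.predict(rbf_gram(X_test, X_train))
print(pred.shape)  # (10,)
```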

A Pipeline-Based ODE Solving Framework

open access: yes, IEEE Access
Traditional parallel methods for solving ordinary differential equations (ODEs) are mainly classified into task parallelism, data parallelism, and instruction-level parallelism.
Ruixia Cao, Shangjun Hou, Lin Ma
doaj   +1 more source
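
Of the three categories the abstract names, data parallelism is the easiest to show compactly: vectorizing the state integrates many independent ODE instances per step (a baseline sketch with illustrative values; the paper's pipelined framework would instead overlap the stages of successive steps):

```python
import numpy as np

def rk4_step(f, y, t, h):
    # Classic RK4; with y a vector, each lane is an independent ODE.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

k = np.linspace(0.5, 2.0, 1000)          # 1000 instances of y' = -k*y
y = np.ones_like(k)
t, h = 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -k * y, y, t, h)
    t += h
print(np.allclose(y, np.exp(-k * t), atol=1e-6))  # True
```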

Designing Memristive Materials for Artificial Dynamic Intelligence

open access: yes, Advanced Intelligent Discovery, EarlyView.
Key characteristics required of memristors for realizing next‐generation computing, along with modeling approaches employed to analyze their underlying mechanisms. These modeling techniques span from the atomic scale to the array scale and cover temporal scales ranging from picoseconds to microseconds. Hardware architectures inspired by neural networks ...
Youngmin Kim, Ho Won Jang
wiley   +1 more source
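
At the device end of the modeling hierarchy the review covers, even a minimal model captures the state-dependent resistance that memristors contribute; a sketch of an HP-style linear ion drift model with illustrative parameters (not taken from the paper):

```python
import numpy as np

# HP-style linear ion drift memristor under a sinusoidal drive.
Ron, Roff, D, mu = 100.0, 16e3, 10e-9, 1e-14  # ohm, ohm, m, m^2/(V*s)
w = 0.1 * D            # doped-region width: the internal state
dt = 1e-6
for step in range(20000):                    # one 50 Hz cycle
    t = step * dt
    v = np.sin(2 * np.pi * 50 * t)
    R = Ron * (w / D) + Roff * (1 - w / D)   # state-dependent resistance
    i = v / R
    w += mu * Ron / D * i * dt               # linear ion drift
    w = min(max(w, 0.0), D)                  # hard state bounds
print(f"final state w/D = {w / D:.3f}")
```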
