Results 61 to 70 of about 3,147,688
How Many Cooks Spoil the Soup? [PDF]
In this work, we study the following basic question: "How much parallelism does a distributed task permit?" Our definition of parallelism (or symmetry) here is not in terms of speed, but in terms of identical roles that processes have at the same time in ...
D Alistarh +19 more
core +2 more sources
Computational Simulations of Metal–Organic Frameworks to Enhance Adsorption Applications
This review highlights the significance of molecular simulations in expanding the understanding of metal–organic frameworks (MOFs) and improving their gas adsorption applications. The historical development and implementation of molecular simulations in the MOF field are given, high‐throughput computational screening studies used to unlock the ...
Hilal Daglar +3 more
wiley +1 more source
Benefits of Open Quantum Systems for Quantum Machine Learning
Quantum machine learning (QML), poised to transform data processing, faces challenges from environmental noise and dissipation. While traditional efforts seek to combat these hindrances, this perspective proposes harnessing them for potential advantages. Surprisingly, under certain conditions, noise and dissipation can benefit QML.
María Laura Olivera‐Atencio +2 more
wiley +1 more source
Parallelism Strategies for Big Data Delayed Transfer Entropy Evaluation
The volume of generated and collected data has been rising with the popularization of technologies such as the Internet of Things, social media, and smartphones, which led to the coining of the term "big data". Causality is one class of information hidden in big data.
Jonas R. Dourado +2 more
doaj +1 more source
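The entry above only names delayed transfer entropy, the quantity whose evaluation the paper parallelizes. As a rough illustration of what is being computed, here is a minimal, hypothetical plug-in estimator in Python (histogram discretization; the function name and toy data are ours, not the paper's, and the paper's big-data implementation is far more elaborate):

```python
import numpy as np
from collections import Counter

def delayed_transfer_entropy(x, y, delay=1, bins=8):
    """Plug-in estimate (in bits) of delayed transfer entropy TE_{X->Y}:
    how much knowing x[t-delay] reduces uncertainty about y[t+1] beyond
    what y[t] already tells us. Simple histogram estimator, for illustration only."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])  # symbols 0..bins-1
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])

    # Align the triples (y_{t+1}, y_t, x_{t-delay}) for t = delay .. n-2.
    y_next = yd[delay + 1:]
    y_prev = yd[delay:-1]
    x_past = xd[:len(xd) - 1 - delay]

    n = len(y_next)
    c_xyz = Counter(zip(y_next, y_prev, x_past))
    c_yz = Counter(zip(y_prev, x_past))
    c_yy = Counter(zip(y_next, y_prev))
    c_y = Counter(y_prev)

    te = 0.0
    for (yn, yp, xp), c in c_xyz.items():
        p_joint = c / n                          # p(y_{t+1}, y_t, x_{t-d})
        p_cond_full = c / c_yz[(yp, xp)]         # p(y_{t+1} | y_t, x_{t-d})
        p_cond_self = c_yy[(yn, yp)] / c_y[yp]   # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Toy check: y is driven by x with a lag of 3 samples, plus noise,
# so TE is large for delay=2 (predicting y[t+1] from x[t-2]) and near zero reversed.
rng = np.random.default_rng(0)
x = rng.standard_normal(20000)
y = 0.8 * np.roll(x, 3) + 0.2 * rng.standard_normal(20000)
print(delayed_transfer_entropy(x, y, delay=2))
print(delayed_transfer_entropy(y, x, delay=2))
```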
Implementation and Evaluation of Dynamic Task Allocation for Human–Robot Collaboration in Assembly
Human–robot collaboration is becoming increasingly important in industrial assembly. In view of high cost pressure, resulting productivity requirements, and the trend towards human-centered automation in the context of Industry 5.0, a reasonable ...
Christoph Petzoldt +5 more
doaj +1 more source
Hierarchical incremental class learning with reduced pattern training [PDF]
Hierarchical Incremental Class Learning (HICL) is a new task decomposition method that addresses the pattern classification problem. HICL is proven to be a good classifier but closer examination reveals areas for potential improvement.
C. Bao, S. U. Guan, R. T. Sun
core +1 more source
Brain‐Inspired In‐Memory Data Pruning and Computing with TaOx Mem‐Selectors
In article number 2502168, Zhongrui Wang, Xiaoxin Xu, Dashan Shang, and co‐workers present the first nanoscale Mem‐Selector device that integrates both nonvolatile resistive memory and volatile threshold switching functionalities for visual data pruning.
Yi Li +15 more
wiley +1 more source
Executing linear algebra kernels in heterogeneous distributed infrastructures with PyCOMPSs
Python is a popular programming language owing to the simplicity of its syntax, while still achieving good performance even though it is an interpreted language. Its adoption by multiple scientific communities has led to the emergence of a large number of ...
Amela Ramon +4 more
doaj +1 more source
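To give a flavour of the task-based model the PyCOMPSs entry above refers to, the sketch below expresses a blocked matrix multiplication using PyCOMPSs' @task decorator and compss_wait_on synchronization. It is a minimal illustration assuming the standard PyCOMPSs Python API; the actual kernels, block sizes, and scheduling studied in the paper are not reproduced here.

```python
import numpy as np
from pycompss.api.task import task          # assumes a standard PyCOMPSs installation
from pycompss.api.api import compss_wait_on

@task(returns=np.ndarray)
def multiply_block(a, b):
    # One block product; the runtime schedules it on any available worker.
    return a @ b

@task(returns=np.ndarray)
def add_block(c, partial):
    # Accumulation is also a task, so results stay as futures until synchronized.
    return c + partial

def blocked_matmul(A, B):
    """Multiply matrices stored as 2-D lists of NumPy blocks."""
    n, k, m = len(A), len(B), len(B[0])
    C = [[np.zeros((A[i][0].shape[0], B[0][j].shape[1])) for j in range(m)]
         for i in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                C[i][j] = add_block(C[i][j], multiply_block(A[i][p], B[p][j]))
    # Blocks are futures until explicitly brought back to the master process.
    return [[compss_wait_on(C[i][j]) for j in range(m)] for i in range(n)]
```

Such a script is typically launched with runcompss; the same code can usually run sequentially if the decorators are removed, which is part of the appeal of the model.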
Memory and Parallelism Analysis Using a Platform-Independent Approach
Emerging computing architectures such as near-memory computing (NMC) promise improved performance for applications by reducing the data movement between CPU and memory. However, detecting such applications is not a trivial task.
Ahsan Javed Awan +5 more
core +1 more source
A Comparison of Big Data Frameworks on a Layered Dataflow Model [PDF]
In the world of Big Data analytics, there is a series of tools aiming at simplifying programming applications to be executed on clusters. Although each tool claims to provide better programming, data and execution models, for which only informal (and ...
Marco Aldinucci +3 more
core +2 more sources

