Results 281 to 290 of about 278,558 (314)
Some of the following articles may not be open access.
Proceedings of the 28th ACM international conference on Supercomputing, 2014
Many-core architectures, such as the Intel Xeon Phi, provide dozens of cores and hundreds of hardware threads. To utilize such architectures, application programmers are increasingly looking at hybrid programming models, where multiple threads interact with the MPI library (frequently called "MPI+X" models).
Min Si +4 more
openaire +1 more source
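For orientation, the "MPI+X" pattern this abstract refers to usually means several threads per process calling into MPI concurrently. The sketch below, assuming OpenMP as the threading layer, only shows how an application requests full thread support from the library; it is a generic illustration, not the technique of the cited paper.

```c
/* Minimal MPI+OpenMP ("MPI+X") sketch: several threads per process
 * may call into the MPI library, so full thread support is requested. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI library does not support MPI_THREAD_MULTIPLE\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        /* Each thread performs its own MPI communication; here just a
         * thread-tagged exchange with itself as a placeholder. */
        int tid = omp_get_thread_num();
        int buf = tid, out;
        MPI_Sendrecv(&buf, 1, MPI_INT, rank, tid,
                     &out, 1, MPI_INT, rank, tid,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```

Built with something like `mpicc -fopenmp`, this runs on any MPI library that grants MPI_THREAD_MULTIPLE; implementations that only provide a lower thread level abort at startup.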
The Network Agnostic MPI – Scali MPI Connect
2003
This paper presents features and performance of Scali MPI Connect (SMC). Key features in SMC are presented, such as dynamic selection of interconnect at runtime, automatic fail-over and runtime selection of optimized collective operations. Performance is measured both for basic communication functions such as bandwidth and latency and for real ...
Lars Paul Huse, Ole W. Saastad
openaire +1 more source
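Point-to-point bandwidth and latency of the kind measured here are commonly obtained with a ping-pong loop such as the generic sketch below; it is not Scali's own benchmark suite, and the message size and iteration count are arbitrary choices.

```c
/* Generic ping-pong microbenchmark between ranks 0 and 1:
 * reports round-trip latency and sustained bandwidth for one message size. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    const int iters = 1000, bytes = 1 << 20;   /* 1 MiB messages */
    int rank;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    buf = malloc(bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double dt = MPI_Wtime() - t0;

    if (rank == 0)
        printf("round trip: %.2f us, bandwidth: %.1f MB/s\n",
               1e6 * dt / iters, 2.0 * iters * bytes / dt / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}
```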
Proceedings of the 25th European MPI Users' Group Meeting, 2018
When an MPI program experiences a failure, the most common recovery approach is to restart all processes from a previous checkpoint and to re-queue the entire job. A disadvantage of this method is that, although the failure occurred within the main application loop, live processes must start again from the beginning of the program, along with new ...
Nawrin Sultana +5 more
openaire +1 more source
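For context, the conventional recovery scheme the abstract contrasts against looks roughly like the application-level checkpoint/restart loop below: every rank periodically dumps its state and, after a full job restart, reloads it and skips completed iterations. This is a generic sketch, not the recovery technique proposed in the paper; the file naming and the `compute_step` routine are placeholders.

```c
/* Conventional checkpoint/restart: every rank writes its state every
 * CKPT_INTERVAL iterations; after a restart, all ranks resume from the
 * last checkpoint instead of iteration 0.  compute_step() is a placeholder. */
#include <mpi.h>
#include <stdio.h>

#define N_ITERS       1000
#define CKPT_INTERVAL 100

static void compute_step(double *state) { *state += 1.0; }  /* placeholder */

int main(int argc, char **argv) {
    int rank, start = 0;
    double state = 0.0;
    char fname[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    snprintf(fname, sizeof fname, "ckpt.%d", rank);

    /* On restart, reload the most recent per-rank checkpoint if present. */
    FILE *f = fopen(fname, "rb");
    if (f) {
        fread(&start, sizeof start, 1, f);
        fread(&state, sizeof state, 1, f);
        fclose(f);
    }

    for (int it = start; it < N_ITERS; it++) {
        compute_step(&state);
        if ((it + 1) % CKPT_INTERVAL == 0) {
            int next = it + 1;
            f = fopen(fname, "wb");
            fwrite(&next,  sizeof next,  1, f);
            fwrite(&state, sizeof state, 1, f);
            fclose(f);
            MPI_Barrier(MPI_COMM_WORLD);   /* keep checkpoints consistent */
        }
    }

    MPI_Finalize();
    return 0;
}
```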
Proceedings of the 22nd European MPI Users' Group Meeting, 2015
A majority of parallel applications executed on HPC clusters use MPI for communication between processes. Most users treat MPI as a black box, executing their programs using the cluster's default settings. While the default settings perform adequately for many cases, it is well known that optimizing the MPI environment can significantly improve ...
Esthela Gallardo +4 more
openaire +1 more source
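The settings such tuning works with are exposed, since MPI 3.0, through the tools information interface (MPI_T). The sketch below simply enumerates the control variables an implementation makes available, which is one way to see what can be adjusted beyond the defaults; it is a generic illustration, not the tool described in the paper.

```c
/* List the control (tuning) variables an MPI implementation exposes
 * through the MPI 3.0 tools information interface (MPI_T). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, num_cvars;

    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&num_cvars);
    printf("%d control variables exposed by this MPI library\n", num_cvars);

    for (int i = 0; i < num_cvars; i++) {
        char name[256], desc[256];
        int name_len = sizeof name, desc_len = sizeof desc;
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        printf("  %s\n", name);
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}
```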
Journal of Peptide Science, 2017
Cationic antimicrobial peptides have attracted increasing attention as a novel class of antibiotics to treat infectious diseases caused by pathogenic bacteria. However, susceptibility to protease is a shortcoming in their development.
Beijun Liu +10 more
semanticscholar +1 more source
MPI-GLUE: Interoperable high-performance MPI combining different vendor’s MPI worlds
1998
Several metacomputing projects try to implement MPI for homogeneous and heterogeneous clusters of parallel systems. MPI-GLUE is the first approach which exports nearly full MPI 1.1 to the user's application without losing the efficiency of the vendors' MPI implementations.
openaire +1 more source
Toward Heterogeneous MPI+MPI Programming: Comparison of OpenMP and MPI Shared Memory Models
2020
This paper introduces our research on investigating the possibility of using heterogeneous all-MPI programming for the efficient parallelization of real-world scientific applications on clusters of multicore SMP/ccNUMA nodes. The investigation is based on verifying the efficiency of parallelizing a CFD application known as MPDATA, which contains a set ...
Lukasz Szustak +3 more
openaire +1 more source
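The MPI+MPI shared memory model compared above is built on MPI-3 shared-memory windows. A minimal sketch of how ranks on one node allocate and read a shared segment follows; it illustrates only the window mechanics, not the MPDATA parallelization studied in the paper.

```c
/* MPI+MPI shared memory sketch: ranks on the same node share one window. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Comm shmcomm;
    MPI_Win win;
    int node_rank, node_size;
    double *base;

    MPI_Init(&argc, &argv);

    /* Split COMM_WORLD into per-node (shared-memory) communicators. */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &shmcomm);
    MPI_Comm_rank(shmcomm, &node_rank);
    MPI_Comm_size(shmcomm, &node_size);

    /* Each rank contributes one double to a contiguous shared segment. */
    MPI_Win_allocate_shared(sizeof(double), sizeof(double), MPI_INFO_NULL,
                            shmcomm, &base, &win);
    base[0] = (double)node_rank;

    MPI_Win_fence(0, win);               /* make local stores visible */

    if (node_rank == 0) {
        /* Rank 0 addresses its neighbours' data directly through the window. */
        MPI_Aint size; int disp; double *first;
        MPI_Win_shared_query(win, 0, &size, &disp, &first);
        double sum = 0.0;
        for (int r = 0; r < node_size; r++) sum += first[r];
        printf("node-local sum = %g\n", sum);
    }

    MPI_Win_fence(0, win);
    MPI_Win_free(&win);
    MPI_Comm_free(&shmcomm);
    MPI_Finalize();
    return 0;
}
```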
A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard
Parallel Computing, 1996
W. Gropp +3 more
semanticscholar +1 more source
Using MPI: Portable Parallel Programming with the Message Passing Interface
2016
C. Freytag
semanticscholar +1 more source

