
EuroMPI 2016 - 23rd European MPI Users' Group Meeting

Date: 2016-09-25 - 2016-09-28

Deadline: 2016-03-14

Venue: Kyoto, Japan

Website: https://www.eurompi2016.ed.ac.uk

Topics/Call for Papers

EuroMPI is the pre-eminent meeting for users, developers and researchers to interact and discuss new developments and applications of message passing parallel computing, in particular using and related to the Message Passing Interface (MPI). The annual meeting has a long, rich tradition, and has been held in many European countries.
EuroMPI 2016 will continue to focus on benchmarks, tools, parallel I/O, fault tolerance, and parallel applications using MPI, enhancements and extensions to MPI, and alternative interfaces for high-performance homogeneous/heterogeneous/hybrid systems. Through contributed papers, poster presentations and invited talks, attendees will have the opportunity to share ideas and experiences and to contribute to the improvement and furthering of message-passing and related parallel programming paradigms. In addition to the main conference's technical program, one-day or half-day workshops will be held. The Call for Workshops and Call for Papers will be announced separately and will also be posted on the conference web page.
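For readers less familiar with the interface at the centre of the meeting, the following is a minimal point-to-point sketch in C of the message-passing style MPI provides. It is illustrative only and not part of the call; the message value and tag are arbitrary.

    /* Minimal illustrative MPI point-to-point example. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size >= 2) {
            int value = 42;                       /* arbitrary payload */
            if (rank == 0) {
                /* Rank 0 sends one integer to rank 1 with tag 0. */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                /* Rank 1 receives the integer from rank 0. */
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("rank 1 received %d from rank 0\n", value);
            }
        }

        MPI_Finalize();
        return 0;
    }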
Topics of interest for the meeting include, but are not limited to:
Shortcomings of MPI, alternatives to MPI and reasons for choosing not to use MPI for high-performance computing.
MPI implementation issues and improvements, including extensions to MPI towards exascale computing on many-core, GPGPU, and heterogeneous architectures.
Hybrid and heterogeneous programming using MPI and interoperability with other interfaces (see the sketch after this list).
Interaction between message-passing software and hardware, in particular new high performance architectures.
MPI support for data-intensive parallel applications.
New MPI-IO mechanisms and I/O stack optimizations.
Fault tolerance and error handling in message-passing implementations and systems.
Performance evaluation for MPI and MPI-based applications.
Automatic performance tuning of MPI applications and implementations.
Verification of message passing applications and protocols.
Applications using message-passing, in particular in Computational Science and Scientific Computing.
Parallel algorithms and scalable communication patterns in the message-passing paradigm.
New programming paradigms implemented over MPI, such as hierarchical programming and global address spaces.
MPI parallel programming and application performance for cloud computing.
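As one concrete illustration of the hybrid programming topic above, here is a minimal sketch combining MPI with OpenMP threads. It is illustrative only: the loop bound and workload are placeholders, and an OpenMP-capable compiler is assumed.

    /* Hybrid MPI + OpenMP sketch: threads share node-local work,
     * MPI combines the per-rank results. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int provided;
        /* Request MPI_THREAD_FUNNELED: only the main thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        long local_sum = 0;
        /* Each rank takes a strided slice of the index range;
         * OpenMP threads split that slice further. */
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = rank; i < 1000; i += size)
            local_sum += i;

        long global_sum = 0;
        /* MPI calls stay on the main thread, consistent with FUNNELED. */
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("global sum = %ld\n", global_sum);

        MPI_Finalize();
        return 0;
    }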

Last modified: 2016-01-14 23:25:56