Conference Program (Tentative; Subject to Change)

Monday, September 25, 2017

Full Day

25 Years of MPI Symposium


Skills to Thrive: Second EuroMPI Careers in HPC Workshop

Tuesday, September 26, 2017


Opening Remarks (Pavan Balaji, Bill Gropp, Rajeev Thakur, Kathryn Mohror)


Keynote Session

Session Chair: Bill Gropp

Rick Stevens
Argonne National Laboratory

Deep Learning Software Stacks and Requirements for MPI


Morning Break


Paper Session 1: Collectives

Session Chair: Rajeev Thakur

Practical, Linear-time, Fully Distributed Algorithms for Irregular Gather and Scatter
Jesper Larsson Träff

Enabling Hierarchy-aware MPI Collectives in Dynamically Changing Topologies
Simon Pickartz, Carsten Clauss, Stefan Lankes, and Antonello Monti

Transforming Blocking MPI Collectives to Non-blocking and Persistent Operations
Hadia Ahmed, Anthony Skjellum, Purushotham V. Bangalore, and Peter Pirkelbauer


Lunch Break


Paper Session 2: Persistent Collectives

Session Chair: Jesper Larsson Träff

Planning for Performance: Persistent Collective Operations for MPI
Bradley Morgan, Daniel J. Holmes, Anthony Skjellum, Purushotham Bangalore, and Srinivas Sridharan

Offloaded MPI Persistent Collectives using Persistent Generalized Request Interface
Masayuki Hatanaka, Masamichi Takagi, Atsushi Hori, and Yutaka Ishikawa


Industry Session

Maria Garzaran
Intel Corporation

How Intel is Enabling MPI for Current and Future Architectures


Afternoon Break


Poster Presentations


Poster Session and Reception

Wednesday, September 27, 2017


Keynote Session

Session Chair: Kathryn Mohror

Michela Taufer
University of Delaware

Building the Next Generation of MapReduce Programming Models over MPI to Fill the Gaps between Data Analytics and Supercomputers


Morning Break


Paper Session 3: Tools and Simulation

Session Chair: Martin Schulz

Verification of MPI Programs using CIVL
Ziqing Luo, Manchun Zheng, and Stephen F. Siegel

Using Software-Based Performance Counters to Expose Low-Level Open MPI Performance Information
David Eberius, George Bosilca, and Thananon Patinyasakdikul

Characterizing MPI Matching via Trace-based Simulation
Kurt Ferreira, Scott Levy, Kevin Pedretti, and Ryan E. Grant


Lunch Break


Paper Session 4: Memory and Topology

Session Chair: Antonio Peña

A Hierarchical Model to Manage Hardware Topology in MPI Applications
Emmanuel Jeannot, Farouk Mansouri, and Guillaume Mercier

Enhanced Memory Management for Scalable MPI Intra-node Communication on Many-core Processor
Joong-Yeon Cho, Hyun-Wook Jin, and Dukyun Nam

Improving the memory access locality of hybrid MPI applications
Matthias Diener, Sam White, Laxmikant Kale, Michael Campbell, Daniel Bodony, and Jonathan Freund


Afternoon Break


Panel Session: MPI on Post-Exascale Systems

Panelists: Bill Gropp (Moderator), Atsushi Hori, Martin Schulz, and Michela Taufer



Thursday, September 28, 2017


Keynote Session

Session Chair: Pavan Balaji

Ron Brightwell
Sandia National Laboratories

What Will Impact the Future Success of MPI?


Morning Break


Paper Session 5: MPI Usage and Infrastructure

Session Chair: Rolf Rabenseifner

Notified Access in Coarray Fortran
Alessandro Fanfarillo and Davide Del Vento

What does fault tolerant Deep Learning need from MPI?
Vinay Amatya, Abhinav Vishnu, Jeff Daily, and Charles Siegel

PMIx: Process Management for Exascale Environments
Ralph Castain, David Solt, Joshua Hursey, and Aurelien Bouteiller


Lunch Break


Paper Session 6: Best Papers

Session Chair: Pavan Balaji

MPI Windows on Storage for HPC Applications
Sergio Rivas-Gomez, Stefano Markidis, Erwin Laure, Ivy Bo Peng, Gokcen Kestor, Roberto Gioiosa, and Sai Narasimhamurthy

MPI Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH and TAU
Srinivasan Ramesh, Aurele Maheo, Sameer Shende, Allen Malony, Hari Subramoni, and Dhabaleswar Panda



Panel Session: MPI on Post-Exascale Systems

Despite repeated claims that MPI would never scale, first to petascale and now to exascale, MPI is being used effectively on today's petascale systems and remains the internode programming model for the upcoming pre-exascale systems. It is also likely to be a key part of the programming environment for exascale systems. This panel will address whether MPI can continue to serve as the programming model for extreme-scale systems, or whether post-exascale systems will require either significant additions to MPI or entirely new programming systems to address both the capabilities of post-exascale machines and the changing needs of applications. What should change in MPI? What changes could broaden its applicability? And how would those changes affect the use of MPI on average-sized parallel computers?

Accepted Posters

  • Nathan T. Weeks, Meiyue Shao, Brandon Cook, Marcus Wagner, Glenn R. Luecke, Pieter Maris, and James P. Vary. Accelerating MPI Reductions on Intel Xeon Phi
  • Takahiro Kawashima, Masaaki Fushimi, Takafumi Nose, Shinji Sumimoto, and Naoyuki Shida. A Memory Saving Communication Method Using Remote Atomic Operations
  • Martin Ruefenacht, Mark Bull, and Stephen Booth. Recursive Multiplying: More Flexible Than Expected
  • Marco Bungart and Claudia Fohry. Extending the MPI Backend of X10 by Elasticity
  • Sudheer Chunduri, Paul Coffman, Scott Parker, and Kalyan Kumaran. Performance Analysis of MPI on Cray XC40 Xeon Phi System
  • Shintaro Iwasaki, Abdelhalim Amer, Kenjiro Taura, and Pavan Balaji. Optimistic Threading Techniques for MPI+ULT
  • Sam White and Laxmikant V. Kale. Adaptive MPI: Dynamic Runtime Support for MPI Applications
  • Brandon L. Morris and Anthony Skjellum. MPIgnite: An MPI-Like Language for Apache Spark
  • Nawrin Sultana, Shane Farmer, Anthony Skjellum, Ignacio Laguna, Kathryn Mohror, and Murali Emani. Designing a Reinitializable and Fault Tolerant MPI Library


Important Dates

  • Full paper submission due: May 29, 2017 (extended from May 1, 2017)
  • Author notification: July 13, 2017 (extended from June 26, 2017)
  • Camera-ready papers due: August 4, 2017 (extended from July 20, 2017)
  • Conference dates: September 25-28, 2017
  • Late-breaking poster submissions due: September 6, 2017
  • Late-breaking poster notification: September 10, 2017


The EuroMPI/USA 2017 proceedings can be obtained from the ACM Digital Library.


  • Argonne National Laboratory
