Workshop Room

Straub Hall, Room 156

Workshop Program

[9:00am - 9:15am] Opening Remarks


[9:15am - 10:30am] Session 1: Keynote Talk

All programming models are wrong but some are useful: Identifying productive abstractions for exascale simulation

Dr. Jeff Hammond, Intel

[slides]
Abstract:

Implementing correct and efficient simulation software has never been easy, and the massive parallelism and diverse hardware platforms associated with exascale computing aren't going to make it any easier. I will describe our experiences with evolving NWChem to use modern OpenMP parallelism using traditional, task-based, and offload-oriented (target) motifs, as well as a complementary study based on the Parallel Research Kernels (PRKs). The PRKs are a set of application skeletons that allow for rapid implementations in a wide range of programming models. I'll report on our most recent study, which compared a range of C++ parallel models (RAJA, Kokkos, C++17 parallel STL, Threading Building Blocks, SYCL) to OpenMP 4.5 and OpenCL. I'll conclude by discussing the tension between physical simulation workloads and data analytics workloads and how it affects system architecture.

Bio:

Jeff Hammond is a Senior System Architect at Intel. His research interests include computational chemistry, numerical linear algebra, parallel programming models, and high-performance computing system architecture. He contributes to the development of open standards for parallel computing, especially MPI and OpenMP. Prior to joining Intel, he worked at the Argonne Leadership Computing Facility as a computational scientist. He received his PhD in chemistry from the University of Chicago, where he was a Department of Energy Computational Science Graduate Fellow. For more information, please see https://github.com/jeffhammond.


[10:30am - 11:00am] Tea/Coffee Break


[11:00am - 12:30pm] Session 2: Application Study

Session Chair: Hajime Fujita, Intel

  • "Experiences Using CPUs and GPUs for Cooperative Computation in a Multi-Physics Simulation", Olga Pearce.
  • "Semantics-Aware Prediction for Analytic Queries in MapReduce Environment", Weikuan Yu, Zhuo Liu, and Xiaoning Ding. [slides]
  • "Fast, General Parallel Computation for Machine Learning", Robin Elizabeth Yancey and Norman Matloff. [slides]

[12:30pm - 1:30pm] Lunch Break


[1:30pm - 2:00pm] Session 3: Invited Paper 1

  • "A Simple yet Effective Graph Partition Model for GPU Computing", Eddy Z. Zhang.

[2:00pm - 3:30pm] Session 4: Performance Modeling and Analysis

Session Chair: Eddy Z. Zhang, Rutgers University

  • "The Energy Efficiency of Modern Multicore Systems", Dumitrel Loghin and Yong Meng Teo. [slides]
  • "Evaluating Support for OpenMP Offload Features", Jose Monsalve Diaz, Swaroop Pophale, Kyle Friedline, Oscar Hernandez, David E. Bernholdt, and Sunita Chandrasekaran. [slides]
  • "High-Performance Sparse Matrix-Matrix Products on Intel KNL and Multicore Architectures", Yusuke Nagasaka, Satoshi Matsuoka, Ariful Azad, and Aydın Buluç. [slides]

[3:30pm - 4:00pm] Tea/Coffee Break


[4:00pm - 4:30pm] Session 5: Invited Paper 2

  • "Efficient Implementation of MPI-3 RMA over OpenFabrics Interface", Hajime Fujita, Chongxiao Cao, Sayantan Sur, Charles Archer, and Maria Garzaran. [paper]

[4:30pm - 6:30pm] Session 6: Programming Model and Runtime Systems

Session Chair: Weikuan Yu, Florida State University

  • "A Dedicated Message Matching Mechanism for Collective Communications", S. Mahdieh Ghazimirsaeed, Ryan E. Grant, and Ahmad Afsahi.
  • "Run-Length Base-Delta Encoding for High-Speed Compression", Taylor Lloyd, Kit Barton, Ettore Tiotto, and José Nelson Amaral. [slides]
  • "Contention-Aware Resource Scheduling for Burst Buffer Systems", Weihao Liang, Yong Chen, Jialin Liu, and Hong An. [slides]
  • "iPregel: A Combiner-Based In-Memory Shared-Memory Vertex-Centric Framework", Ludovic A. R. Capelli, Zhenjiang Hu, and Timothy A. K. Zakian.

[6:30pm - 6:40pm] Closing Remarks