The Eighth International Workshop on
Accelerators and Hybrid Exascale Systems
(AsHES)
Join us on May 21st, 2018
Vancouver, British Columbia, Canada
To be held in conjunction with the
32nd IEEE International Parallel and Distributed Processing Symposium (IPDPS 2018)

Opening Remarks

8:45 - 9:00 am

Keynote by Dr. Michael Wolfe

9:00 - 10:00 am

Will Accelerators and Hybrid Systems Succeed This Time Around?

Michael Wolfe, NVIDIA, USA

Abstract: Accelerated systems have been around in HPC for 50 years or more. IBM produced the 2938 Array Processor for mainframes in 1969, and Floating Point Systems developed the AP-120B array processor for minicomputers in 1976. Ten years ago, the IBM PowerXCell coprocessor was used in Roadrunner, the fastest machine in the TOP500 list. Yet, repeatedly, the improved performance from each accelerator was eventually obviated by faster and more capable CPUs.
Now we see GPUs becoming common as compute accelerators, FPGAs achieving some success, and other accelerator ASICs being designed for specific applications. How do today's accelerators differ from the previous generations? What will it take to make the current move to accelerators successful in the long term? Is there something unique about the jump from petascale to exascale computing that begs for accelerators? How will compilers and languages have to evolve for these systems? We explore these questions and more.

Bio: Michael Wolfe has worked on languages and compilers for parallel computing since his days in graduate school at the University of Illinois in 1976. Along the way, he cofounded Kuck and Associates, Inc. (since acquired by Intel), tried his hand in academia at the Oregon Graduate Institute (since merged with Oregon Health & Science University), and worked on High Performance Fortran at PGI (since acquired by STMicroelectronics, and more recently by NVIDIA). He now spends most of his time as the technical lead for a team that works to improve the PGI compilers for highly parallel computing, and in particular for GPU accelerators.

Break 10:00 - 10:30 am

Session 1: Runtime Scheduling and Performance Analytics

10:30 am - 12:00 pm
Session Chair: Aparna Chandramowlishwaran, University of California, Irvine, USA

  • NVIDIA Tensor Core Programmability, Performance & Precision
    Stefano Markidis, Steven Wei Der Chien, Erwin Laure, Ivy Bo Peng and Jeffrey S. Vetter
  • Optimizing an Atomics-based Reduction Kernel on OpenCL FPGA Platform
    Zheming Jin and Hal Finkel
  • Leveraging Data-Flow Task Parallelism for Locality-Aware Dynamic Scheduling on Heterogeneous Platforms
    Osman Seckin Simsek, Andi Drebes, Mikel Lujan and Antoniu Pop

Lunch 12:00 - 1:30 pm

Session 2: Algorithms and Applications

1:30 - 3:00 pm
Session Chair: Stefano Markidis, KTH Royal Institute of Technology, Sweden

  • Tacho: Memory-Scalable Task Parallel Sparse Cholesky Factorization
    Kyungjoo Kim, H. Carter Edwards and Sivasankaran Rajamanickam
  • Sorting Large Datasets with Heterogeneous CPU/GPU Architectures
    Michael Gowanlock and Ben Karsin
  • Improving Performance of Genomic Aligners on Intel Xeon Phi-based Architectures
    Shaolong Chen and Miquel Senar

Break 3:00 - 3:30 pm

Session 3: Emerging Accelerator Architectures

3:30 - 4:30 pm
Session Chair: Jeffrey Young, Georgia Institute of Technology, USA

  • An Initial Characterization of the Emu Chick
    Eric Hein, Tom Conte, Jeffrey Young, Srinivas Eswar, Jiajia Li, Patrick Lavin, Richard Vuduc and Jason Riedy
  • Exploring the Vision Processing Unit as Co-processor for Inference
    Sergio Rivas-Gomez, Stefano Markidis, Antonio J. Peña, David Moloney and Erwin Laure

Closing Remarks

4:30 pm