Analysis of Topology-Dependent MPI Performance on Gemini Networks
|Title||Analysis of Topology-Dependent MPI Performance on Gemini Networks|
|Publication Type||Conference Paper|
|Year of Publication||2013|
|Authors||Pena, AJ, Correa Carvalho, RG, Dinan, J, Balaji, P, Thakur, R, Gropp, WD|
|Conference Name||EuroMPI 2013|
|Conference Location||Madrid, Spain|
Current HPC systems employ a variety of interconnection networks with differing features and communication characteristics. MPI abstracts these interconnects behind a common interface used by most HPC applications. However, network properties can have a significant impact on application performance. We explore the impact of the interconnect on application performance on the Blue Waters supercomputer. Blue Waters uses a three-dimensional Cray Gemini torus network, which provides twice as much bandwidth in the X and Z dimensions as in the Y dimension. Through several benchmarks, including a halo-exchange example, we demonstrate that application-level mapping to the network topology yields significant performance improvements.
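The intuition behind topology-aware mapping for a halo exchange can be sketched without MPI: if the X and Z torus links carry twice the bandwidth of Y, the application should place its largest halo faces on the fast dimensions. The sketch below is illustrative only and is not the paper's benchmark code; the 2:1:2 bandwidth ratio mirrors the Gemini description above, but the function names, cost model (bottleneck face volume divided by link bandwidth), and extents are hypothetical.

```python
from itertools import permutations

# Relative per-dimension link bandwidths of a torus where X and Z have
# twice the bandwidth of Y (as on Gemini). Absolute values are
# illustrative; only the 2:1:2 ratio matters for the mapping choice.
LINK_BW = (2.0, 1.0, 2.0)  # (X, Y, Z)

def exchange_time(local_extents, perm):
    """Estimate halo-exchange time when logical dimension i of the
    application is assigned to torus dimension perm[i]. The face
    exchanged across logical dimension i has an area equal to the
    product of the other two local extents; the estimate is the
    bottleneck: max over dimensions of face area / link bandwidth."""
    faces = []
    for i in range(3):
        area = 1
        for j in range(3):
            if j != i:
                area *= local_extents[j]
        faces.append(area)
    return max(faces[i] / LINK_BW[perm[i]] for i in range(3))

def best_mapping(local_extents):
    """Pick the logical-to-torus dimension permutation that minimizes
    the bottleneck exchange time."""
    return min(permutations(range(3)),
               key=lambda p: exchange_time(local_extents, p))

# A subdomain elongated in one direction: its two large faces should be
# exchanged over the high-bandwidth X/Z links, and the small face over Y.
print(best_mapping((8, 8, 32)))  # the small-face dimension lands on Y (index 1)
```

In a real MPI application the chosen permutation would feed into how ranks are laid out on the machine, for example via the `dims` argument order passed to `MPI_Cart_create`; the paper's point is that this placement decision, invisible through MPI's uniform interface, measurably affects performance.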