Compiler Optimization for Data-Driven Task Parallelism on Distributed Memory Systems
|Title|Compiler Optimization for Data-Driven Task Parallelism on Distributed Memory Systems|
|---|---|
|Year of Publication|2014|
|Authors|Armstrong, TG, Wozniak, JM, Wilde, M, Foster, IT|
The data-driven task parallelism execution model can support parallel programming models that are well suited for large-scale distributed-memory parallel computing, for example, simulations and analysis pipelines running on clusters and clouds. We describe a novel compiler intermediate representation and optimizations for this execution model, including adaptations of standard techniques alongside novel ones. These techniques are applied to Swift/T, a high-level declarative language for flexible dataflow composition of functions, which may be serial or may use lower-level parallel programming models such as MPI and OpenMP. This paper presents preliminary results indicating that our compiler optimizations reduce communication overhead by 70 to 93% on distributed memory systems.
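To give a rough sense of the execution model described above (this is an illustrative analogy, not the paper's system or Swift/T code): in data-driven task parallelism, a task becomes eligible to run as soon as the data it depends on are available. A minimal sketch in Python using futures, with hypothetical functions `f` and `g`:

```python
from concurrent.futures import ThreadPoolExecutor

def f(i):
    # Hypothetical leaf task: independent of other tasks,
    # so multiple f() calls may execute concurrently.
    return i * i

def g(x, y):
    # Hypothetical downstream task: consumes the outputs of f.
    return x + y

with ThreadPoolExecutor(max_workers=4) as pool:
    a = pool.submit(f, 3)   # both f tasks are data-ready immediately
    b = pool.submit(f, 4)   # and can run in parallel
    # g is data-driven: it runs only once its inputs a and b resolve
    result = g(a.result(), b.result())
    print(result)  # -> 25
```

In Swift/T this dependency structure is expressed declaratively through dataflow on variables rather than explicit futures, and the runtime distributes the ready tasks across a distributed-memory machine.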