Influence of InfiniBand FDR on the performance of remote GPU virtualization.
|Title|Influence of InfiniBand FDR on the performance of remote GPU virtualization.|
|Publication Type|Conference Proceedings|
|Year of Publication|2013|
|Authors|Reano, C, Mayo, R, Quintana-Orti, ES, Silla, F, Duato, J, Pena, AJ|
|Conference Name|IEEE Cluster 2013|
|Conference Location|Indianapolis, IN|
The use of GPUs to accelerate general-purpose scientific and engineering applications is mainstream today, but their adoption in current high-performance computing clusters is impaired primarily by acquisition costs and power consumption. Therefore, the benefits of sharing a reduced number of GPUs among all the nodes of a cluster can be remarkable for many applications. This approach, usually referred to as remote GPU virtualization, aims at reducing the number of GPUs present in a cluster, while increasing their utilization rate.
The performance of the interconnection network is key to achieving reasonable performance results by means of remote GPU virtualization. To this end, several networking technologies with throughput comparable to that of PCI Express have appeared recently. In this paper we analyze the influence of InfiniBand FDR on the performance of remote GPU virtualization, comparing its impact on a variety of GPU-accelerated applications with that of other networking technologies, such as InfiniBand QDR and Gigabit Ethernet. Given the severe limitations of freely available remote GPU virtualization solutions, the rCUDA framework is used as the case study for this analysis. Results show that the new FDR interconnect, featuring higher bandwidth than its predecessors, reduces the overhead of using GPUs remotely, thus making this approach even more appealing.