Improving the Performance of the POSIX I/O Interface to PVFS
Title: Improving the Performance of the POSIX I/O Interface to PVFS
Year of Publication: 2002
Authors: M. Vilayannur, R. B. Ross, P. H. Carns, R. Thakur, A. Sivasubramaniam, M. Kandemir
The ever-increasing gap in performance between CPU/memory technologies and the I/O subsystem (disks, I/O buses) in modern workstations has exacerbated the I/O bottlenecks inherent in applications that access large disk-resident data sets. At the same time, Linux-based off-the-shelf clusters of PCs have matured into low-cost, high-performance computing solutions. A common technique for alleviating I/O bottlenecks on such platforms is the use of parallel file systems. One such parallel file system is the Parallel Virtual File System (PVFS), a freely available tool for achieving high-performance I/O on Linux-based clusters.
In this paper, we describe some of the key performance and scalability improvements that we have implemented for the UNIX I/O interface to PVFS. To illustrate the performance gains, we present experimental results using Bonnie++, a commonly used benchmark for testing file system throughput; a synthetic parallel I/O application that calculates aggregate read and write bandwidths; and a synthetic benchmark that measures the time taken to untar the Linux kernel source tree, to gauge performance on large numbers of small-file operations. We also compare the I/O performance of these techniques when using a Myrinet-based network and when using a fast Ethernet-based network for I/O-related communications. With these techniques we achieve aggregate read and write bandwidths as high as 550 MB/s with Myrinet and 160 MB/s with fast Ethernet.