Initially, we tested the performance of the network using the system default TCP parameters. We connected Tuva, Chitambo, Flying, and Spot with 155 Mbps ATM on a LANE network, and Pitcairn and Lemon on a 100 Mbps switched Fast Ethernet segment. The LANE cloud operated with an MTU of 1500 bytes, as did the Ethernet segment. Pitcairn, Tuva, Chitambo, and Lemon were connected directly to the same Cisco switch, tamlin. The experiments conducted were as follows:
For all tests, the default NFS protocol version was used for each architecture (NFS v3 for Solaris and Irix). NFS tests were not done with the AIX machine (spot) because it was not serving any filesystems. The network adapter type for each host is as described in the setup above.
| Server / Client | RTCP | NFSWR | NFSSR | NFSNR |
| --- | --- | --- | --- | --- |
| chitambo / tuva | 64.74 Mbps | 22.82 Mbps | 39.84 Mbps | 32.16 Mbps |
| tuva / chitambo | 63.27 Mbps | 22.95 Mbps | 36.56 Mbps | 28.75 Mbps |
| chitambo / pitcairn | 67.53 Mbps | 39.98 Mbps | 39.24 Mbps | 39.05 Mbps |
| pitcairn / chitambo | 61.32 Mbps | 36.65 Mbps | 608.2 Mbps | 741.9 Mbps |
| tuva / pitcairn | 68.44 Mbps | 37.84 Mbps | 43.58 Mbps | 50.44 Mbps |
| pitcairn / tuva | 64.52 Mbps | 22.67 Mbps | 47.52 Mbps | 191 Mbps |
| tuva / flying | 10.21 Mbps | 4.85 Mbps | 28.12 Mbps | 20.34 Mbps |
| flying / tuva | 35.22 Mbps | 1.73 Mbps | 20.86 Mbps | 11.49 Mbps |
| chitambo / flying | 9.00 Mbps | 6.95 Mbps | 27.57 Mbps | 20.16 Mbps |
| flying / chitambo | 29.68 Mbps | 1.65 Mbps | 17.8 Mbps | 12.39 Mbps |
| chitambo / spot | 47.99 Mbps | | | |
| spot / chitambo | 41.42 Mbps | | | |
| tuva / spot | 48.19 Mbps | | | |
| spot / tuva | 48.86 Mbps | | | |
| flying / spot | 27.48 Mbps | | | |
| spot / flying | 37.96 Mbps | | | |
The machines varied in the amount of main memory, as noted on the equipment page. As a result, some of the NFS read tests returned faster-than-network-bandwidth figures due to NFS cache performance. In general, these tests show little, if any, benefit of ATM over Fast Ethernet. They also showed that we were achieving much less than the available network bandwidth on these machines, even across a single local switch. To address these problems, we then moved on to investigating TCP tuning parameters for Solaris. We also upgraded the ATM driver software on the Solaris machines and the ATM LANE software on the Cisco switch.
Solaris 2.5.1 defaults to a maximum TCP window size of 8K. The next set of tests was run over a range of TCP window sizes and packet sizes to understand the effect of the TCP window size on TCP performance over ATM and Fast Ethernet on a LAN.
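The benchmark tool used for these scans is not named here; the sketch below is only an illustration, assuming a simple ttcp-style sender written in C, in which the socket buffer size requested with SO_SNDBUF (and SO_RCVBUF on the receiving side) bounds the usable TCP window. The destination address, port, and transfer size are placeholders, not values from these tests.

```c
/*
 * Illustrative ttcp-style TCP sender (a sketch, not the tool used in
 * these tests).  Sweeping bufsize and pktsize over a range reproduces
 * the kind of window-size / packet-size scan described above.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char **argv)
{
    /* Placeholder defaults: destination, window (buffer) size, write size. */
    const char *host = (argc > 1) ? argv[1] : "192.0.2.1";
    int  bufsize = (argc > 2) ? atoi(argv[2]) : 64 * 1024;  /* bytes */
    int  pktsize = (argc > 3) ? atoi(argv[3]) : 8192;       /* bytes per write() */
    long total   = 64L * 1024 * 1024;                       /* send 64 MB */

    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    /* Request the socket buffer before connect() so it is in place when
     * the connection is established.  The sender's SO_SNDBUF and the
     * receiver's SO_RCVBUF together bound the effective TCP window, so a
     * matching receiver should request the same size. */
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &bufsize, sizeof(bufsize)) < 0)
        perror("setsockopt(SO_SNDBUF)");

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family      = AF_INET;
    sin.sin_port        = htons(5001);        /* placeholder port */
    sin.sin_addr.s_addr = inet_addr(host);

    if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
        perror("connect");
        return 1;
    }

    char *buf = malloc(pktsize);
    memset(buf, 0xA5, pktsize);
    for (long sent = 0; sent < total; ) {
        ssize_t n = write(s, buf, pktsize);   /* one buffer handed to TCP per write() */
        if (n <= 0) { perror("write"); break; }
        sent += n;
    }

    free(buf);
    close(s);
    return 0;
}
```

Timing the transfer at either end and dividing by the number of bytes moved gives throughput figures of the kind reported above.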
The first of these tests was run between hosts chitambo.mcs.anl.gov and tuva.mcs.anl.gov. Both hosts had FORE SBA-200E ATM adapters connected to a Cisco Catalyst 5500 switch and were configured on the 1500-byte-MTU LANE network (mcs-9-net). For small packet sizes, the TCP window size made the most difference, boosting TCP performance from ~65 Mbps to ~105 Mbps. For larger packet sizes, the performance gain was less substantial. Independent of the TCP window size, larger packet sizes generally gave better performance.
An identical test was run between chitambo.mcs.anl.gov and pitcairn.mcs.anl.gov. The performance showed the same general characteristics, with higher throughput at the smaller TCP buffer sizes and a larger spread in performance.
The average performance over all packet sizes increased from ~89 Mbps to ~97 Mbps over ATM as the TCP window size was increased, levelling off at a window size of around 40000 bytes. Over Fast Ethernet, performance increased from ~77 Mbps to ~80 Mbps.
Based on the insights from this parameter testing, we configured the Solaris ATM hosts to use a TCP window size of 64 Kbytes and the Fast Ethernet hosts to use a TCP window size of 18 Kbytes.
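On Solaris 2.5.1 these system-wide defaults are normally adjusted with ndd on /dev/tcp. The small C program below is only an illustrative check, not part of the test setup: it reads back the send and receive buffer sizes given to a freshly created TCP socket, which should reflect whether the 64 Kbyte or 18 Kbyte setting is in effect on a host.

```c
/*
 * Illustrative check of the default TCP socket buffer (window) sizes a
 * host hands out.  If the tuning described above is in place, an ATM
 * host should report roughly 64 KB and a Fast Ethernet host roughly 18 KB.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    int sndbuf = 0, rcvbuf = 0;
    socklen_t len = sizeof(sndbuf);

    if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) < 0)
        perror("getsockopt(SO_SNDBUF)");
    len = sizeof(rcvbuf);
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) < 0)
        perror("getsockopt(SO_RCVBUF)");

    printf("default SO_SNDBUF = %d bytes, SO_RCVBUF = %d bytes\n",
           sndbuf, rcvbuf);

    close(s);
    return 0;
}
```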
The ATM interfaces were then added to a Classical IP network, and performance was measured again. The Classical IP network has an MTU of 9180 bytes instead of the 1500 bytes of the LANE network, and Classical IP carries IP directly over ATM rather than the Ethernet-over-ATM encapsulation used by LANE.
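When moving interfaces between the LANE and Classical IP networks, a quick way to confirm which MTU the IP layer actually sees is an SIOCGIFMTU ioctl. The generic Unix sketch below takes the interface name on the command line; the interface names depend on the ATM driver and are not taken from this page, and on Solaris the SIOCGIFMTU constant comes from <sys/sockio.h>.

```c
/*
 * Illustrative MTU check: prints the MTU of the named interface.
 * Expected values here: 1500 for the LANE network, 9180 for Classical IP.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <interface>\n", argv[0]);
        return 1;
    }

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, argv[1], IFNAMSIZ - 1);

    if (ioctl(s, SIOCGIFMTU, &ifr) < 0) {
        perror("ioctl(SIOCGIFMTU)");
        close(s);
        return 1;
    }

    printf("%s MTU = %d bytes\n", argv[1], ifr.ifr_mtu);

    close(s);
    return 0;
}
```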