I wanted to know exactly how much throughput I get from my new switch (HP ProCurve 1400-24G), so I looked for a network performance test tool that is available on many platforms and found iPerf. I installed it on my PC (Asus P5NSLI motherboard with a dual-core Intel Pentium 4 at 3.4 GHz, a RAID 0 disk configuration, and a Marvell on-board Ethernet controller), on a Synology RackStation RS-407 NAS (ARM CPU running Linux, RAID 5), and on a Sun Fire V20z rack server with two 2.4 GHz Athlon processors running Windows Server 2003 x64 (two Broadcom NetXtreme Gigabit Ethernet controllers, single SCSI disk).
I purchased the HP switch because it supports Jumbo frames. I tested the transfer rate between all three stations in both directions with three different MTU settings. All NICs support Jumbo frames and the value can be configured in the NIC driver settings (Windows) or in the Synology administration UI.
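For reference, on a Linux box the MTU can also be changed from a shell; the interface names below are assumptions, and on Windows the same setting usually lives in the NIC driver's advanced properties:

```shell
# Linux: set the interface MTU (interface name eth0 is an assumption;
# requires root):
ip link set dev eth0 mtu 9000

# Windows alternative to the driver dialog (elevated prompt; the
# interface name "Local Area Connection" is an assumption):
# netsh interface ipv4 set subinterface "Local Area Connection" mtu=9000 store=persistent
```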
1500 bytes is the default (non-jumbo) MTU. I limited the MTU for all three network cards to 1500 and ran iPerf in all six combinations (three station pairs, each direction).
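The tests themselves are easy to reproduce. A sketch of the iPerf invocations I mean, with a placeholder hostname (this is the classic iPerf 2 syntax):

```shell
# On the receiving station, start iPerf in server mode:
iperf -s

# On the sending station, run a TCP test against it
# (hostname is a placeholder); -t 30 measures for 30 seconds
# and -r repeats the test in the reverse direction, so one
# run covers both ways of a station pair:
iperf -c nas.example.local -t 30 -r
```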
The results did not exactly blow me away. I have a mix of Cat-5 and Cat-5e cabling in the house and was concerned that I would experience bandwidth limitations. Since the servers in the test were directly connected to the switch with newly purchased Cat-6 patch cables, I expected a significantly higher transfer rate between the servers than between the workstation and either of the servers. The workstation was connected via 15 feet of Cat-5 cable, a Cat-5 wall jack, and a Cat-6 patch cable.
iPerf reported transfer rates between 150 and 250 Mbps, not exactly Gigabit-like. So I turned up the MTU and hoped for the best.
The next MTU size supported by all three network card drivers is 4000 bytes. I expected transfer rates to go up consistently.
iPerf now reported between 100 and 400 Mbps, except between the workstation and the Sun server, where performance decreased to a fraction of what it should be.
9000 bytes is the largest MTU all three drivers support. I set the MTU to 9000 on all sides and ran the same iPerf tests again.
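One way to check whether jumbo frames actually survive the whole path is a ping with the don't-fragment bit set; if the switch or any hop silently drops large frames, these pings fail while normal ones succeed. The payload size accounts for 28 bytes of IP and ICMP headers, and the hostname is a placeholder:

```shell
# Windows: -f sets don't-fragment, -l sets the ICMP payload size.
# 8972 payload + 28 header bytes = a 9000-byte IP packet:
ping -f -l 8972 nas.example.local

# Linux equivalent:
# ping -M do -s 8972 nas.example.local
```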
Now iPerf reports over 500 Mbps download speed from the Synology NAS, which would be nice if traffic between the Sun server and the workstation weren't so unrealistically slow. In addition, there are now oddities between the Sun and the Synology as well. I was able to reproduce these results consistently.
I cannot yet say why I am observing this. I opened a support case with HP Networking and am still waiting for an answer. I also ran some basic file transfer tests by simply copying a large 1 GB video file between the machines; while I did not measure rates as low as iPerf reported, the upload speed definitely appears to drop consistently when Jumbo frames are enabled.
250 Mbps (bits) is a little over 30 MBps (bytes). Copying a 1 GB file across the network takes about 30 seconds with MTU=1500. It does not get any faster than that, but downloads become substantially slower, e.g. more than 2 minutes per GB, when I set MTU=9000. The SMC Gigabit switch that I replaced, which did not support Jumbo frames, showed exactly the same performance. So maybe the HP switch does not really support Jumbo frames? I am still waiting for a definitive answer from support, but if you have a Gigabit switch that performs better with Jumbo frames, I'd like to hear about it.
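The back-of-the-envelope conversion is easy to check; the 250 Mbps figure is the one iPerf measured, and the math uses decimal units and ignores protocol overhead:

```shell
# Convert 250 Mbit/s to MB/s and estimate seconds per 1 GB:
awk 'BEGIN {
    mbps = 250
    mbytes_per_sec = mbps / 8            # 31.25 MB/s
    secs_per_gb = 1000 / mbytes_per_sec  # 32 seconds per GB
    printf "%.2f MB/s, %.0f s per GB\n", mbytes_per_sec, secs_per_gb
}'
# prints: 31.25 MB/s, 32 s per GB
```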
Needless to say, I turned Jumbo frames off again and am running with the default MTU of 1500. While the iPerf numbers are not impressive, the subjective performance gain from upgrading to a Gigabit switch was much more dramatic than the numbers suggest. It also almost seems that iPerf does not function correctly with larger MTU sizes.