I am having problems achieving 10GbE speeds and I would like some advice to help me figure out what is going on with my systems. The hardware I am using is as follows:
Switch: HP ProCurve 5406zl (J8699A) w/ K.15.04.0007 firmware
Module: 10GBase-T 8-port module (J9546A)
NIC: Intel X520-T2 10GbE (E10G42BT)
Server: Dell R610 w/ 2x 7.2K SATA HDDs mirrored, Windows Server 2008 R2 SP1
Storage: Aberdeen AberNAS 365X8 w/ 15x 7.2K SATA HDDs in RAID 5, Windows Storage Server 2008
For both the server and the storage I have confirmed that the NIC is installed in an x8 PCIe 2.0 slot. On the switch I made sure jumbo frames are enabled.
I then configure the NIC as follows:
Jumbo Packet = 9014
Large Send Offload (IPv4) = Enabled
RSS = Enabled
RSS Queues = 4
TCP/IP Offload = Enabled
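A quick way to confirm jumbo frames work end to end (not just on each device individually) is to ping with the don't-fragment flag and a payload just under the MTU. The sketch below only builds the Windows ping command line; the hostname is a placeholder, and `-f` / `-l` are the standard Windows ping flags for don't-fragment and payload size.

```python
# Build a Windows ping command that tests jumbo-frame support end to end.
# A 9000-byte MTU leaves 8972 bytes of ICMP payload after subtracting the
# 20-byte IP header and 8-byte ICMP header.
def jumbo_ping_cmd(host, mtu=9000):
    payload = mtu - 28  # IP header (20) + ICMP header (8)
    # -f: set don't-fragment; -l: payload size; -n: ping count (Windows flags)
    return ["ping", "-f", "-l", str(payload), "-n", "2", host]

# "storage-host" is a hypothetical hostname; substitute your storage box
print(" ".join(jumbo_ping_cmd("storage-host")))
```

If any hop replies "Packet needs to be fragmented but DF set", some link in the path is not passing jumbo frames even though both endpoints are configured for them.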
When I copy a file on the storage system from itself to itself, I average about 500 MB/s. This leads me to believe that, if all goes well, the most I can achieve when working with this storage is 500 MB/s.
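For what it's worth, the copy can be timed the same way on both boxes so the baselines are comparable; this is just a sketch using Python's standard library, and OS write caching can inflate the number for small files.

```python
import os
import shutil
import time

def copy_throughput_mb_s(src, dst):
    """Copy src to dst and return the effective throughput in MB/s.

    Note: the OS page cache can inflate this figure for files smaller than
    RAM; use a file several times larger than memory for an honest number.
    """
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return size / (time.perf_counter() - start) / 1e6
```

One thing worth keeping in mind: a same-array copy reads and writes the array at the same time, so 500 MB/s of copy implies roughly 1 GB/s of combined disk I/O, and a one-way network write to that array could, in principle, exceed 500 MB/s.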
I then copy a file from the server to the storage and average about 45-50 MB/s. This is the part I do not understand: every link between the devices is 10GbE and I have enabled all of the "tweaks" to maximize 10GbE throughput, yet I get nowhere near the performance I was hoping for. I also checked Resource Monitor and can confirm that no processor was at 100%, RAM usage was nowhere near 100%, and NIC utilization hovered around 3-4% on both systems.
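Incidentally, 45 MB/s is about 3.6% of the ~1,250 MB/s line rate of 10GbE, which matches the 3-4% NIC utilization reading. A useful next step is a memory-to-memory throughput test that takes the disks out of the picture entirely; tools like iperf or Microsoft's NTttcp do this, and the sketch below is a minimal Python stand-in, collapsed to loopback for illustration (in practice you would run the receive side on one box and point the sender at its address).

```python
# Minimal memory-to-memory TCP throughput test (a tiny iperf-style check)
# to isolate the network path from disk I/O. Loopback only, for illustration.
import socket
import threading
import time

def run_test(total_bytes=256 * (1 << 20)):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        while conn.recv(1 << 16):  # drain until the sender closes
            pass
        conn.close()

    t = threading.Thread(target=serve)
    t.start()

    cli = socket.socket()
    cli.connect(("127.0.0.1", port))
    payload = b"\x00" * (1 << 20)  # 1 MiB buffer, sent repeatedly from RAM
    sent = 0
    start = time.perf_counter()
    while sent < total_bytes:
        cli.sendall(payload)
        sent += len(payload)
    cli.close()
    t.join()
    elapsed = time.perf_counter() - start
    srv.close()
    return sent / elapsed / (1 << 20)  # MiB/s

if __name__ == "__main__":
    print(f"throughput: {run_test():.0f} MiB/s")
```

If a memory-to-memory test between the two machines also tops out around 45-50 MB/s, the bottleneck is in the network stack (driver, offload, or switch settings); if it runs much faster, look at the disk or SMB layer instead.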
One thing I would like to confirm is that other people have actually achieved transfer rates at least close to 10GbE. If you have reached near-10GbE speeds (or at least as fast as your disk I/O allows), I would love to know what hardware you are using and what you had to do to "tweak" your network settings to get there.