System Engineer asked:

Problems achieving 10GbE Performance

I am having problems achieving 10GbE speeds and I would like some advice to help me figure out what is going on with my systems.  The hardware I am using is as follows:

Switch:   HP ProCurve 5406zl (J8699A) w/ K.15.04.0007 firmware, 10GBase-T 8-port module (J9546A)
NIC:      Intel X520-T2 10GbE (E10G42BT)
Server:   Dell R610 w/ 2x 7.2K SATA HDDs (mirrored), Windows Server 2008 R2 w/ SP1
Storage:  Aberdeen AberNAS 365X8 w/ 15x 7.2K SATA HDDs (RAID 5), Windows Storage Server 2008

For both the server and the storage I have confirmed that the NIC is installed in an 8x PCIe 2.0 slot. On the switch I made sure that jumbo frames are enabled.

I then configure the NIC as follows:
Jumbo Packet = 9014
Large Send Offload (IPv4) = Enabled
RSS = Enabled
RSS Queues = 4
TCP/IP Offload = Enabled
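
One check worth doing is confirming that jumbo frames actually survive the whole path (local NIC, switch port, and far-end NIC all agreeing on MTU). Below is a minimal sketch of that check using the Windows ping Don't Fragment flag; the hostname is a placeholder, not one of my actual systems.

```python
# Sketch: verify that jumbo frames survive the whole path (NIC -> switch -> NIC).
# A 9000-byte MTU leaves 8972 bytes of ICMP payload (9000 - 20 IP - 8 ICMP headers).
# "storage-host" is a placeholder; replace it with the far end's name or IP.
import subprocess

def check_jumbo(host: str, payload: int = 8972) -> bool:
    """Ping with Don't Fragment set; success means jumbo frames pass end to end."""
    result = subprocess.run(
        ["ping", "-f", "-l", str(payload), "-n", "2", host],  # Windows ping flags
        capture_output=True, text=True
    )
    return result.returncode == 0 and "needs to be fragmented" not in result.stdout

if __name__ == "__main__":
    if check_jumbo("storage-host"):
        print("Jumbo frames pass end to end")
    else:
        print("Jumbo frames are NOT passing - recheck switch and NIC MTU settings")
```

If this fails while normal pings succeed, at least one hop is still running a 1500-byte MTU.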

When I copy a file on the storage system from itself to itself I average about 500MB/s. This leads me to believe that if all goes well the most I would be able to achieve when working with my storage is 500MB/s.

I then copy a file from the server to the storage and the average is about 45-50MB/s. This is the part that I do not understand. Every link between the devices is 10GbE and I have enabled all of the “tweaks” to maximize the usage of 10GbE but I still don’t get anywhere near the performance I was hoping for. I also looked at Resource Monitor and can confirm that none of the processors was at 100%, I was nowhere near 100% RAM usage, and the NIC utilization hovered around 3-4% for both systems.
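
For reference, 45-50 MB/s works out to only about 3-4% of a 10 Gb/s link, which matches the NIC utilization figure above. A rough sketch of a timed copy that reports throughput directly, taking the Explorer copy dialog out of the picture (the paths are placeholders), would look like this:

```python
# Sketch: time a large file copy and report throughput in MB/s and as a
# percentage of 10GbE line rate. Source and destination paths are placeholders.
import os
import shutil
import time

def timed_copy(src: str, dst: str) -> None:
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    mb_per_s = size / elapsed / 1_000_000
    pct_of_10gbe = mb_per_s * 8 / 10_000 * 100      # 10GbE = 10,000 Mbit/s
    print(f"{mb_per_s:.0f} MB/s ({pct_of_10gbe:.1f}% of 10GbE line rate)")

if __name__ == "__main__":
    timed_copy(r"D:\testfile.bin", r"\\storage-host\share\testfile.bin")
```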

One thing that I would like to confirm is that other people have been able to achieve transfer rates at least close to 10GbE. If you have achieved near-10GbE speeds (or at least as fast as your disk I/O) I would love to know what you are using and what you had to do to "tweak" your network settings in order to get there.
Duncan Meyers:

The most likely reason that the copy to the storage is markedly slower is that you're hitting the peak write performance of the storage. Remember that it has to calculate parity blocks, and that involves reading existing data (this depends on the array: quality arrays do what's called a Modified RAID 3 Write, where the parity block is calculated and the data is written out in one operation, which avoids the RAID write penalty).
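
To put rough numbers on that: the classic RAID 5 penalty is four back-end I/Os for every small random write. A back-of-the-envelope sketch (the per-disk IOPS figure is an assumed typical value for 7.2K SATA, not something measured on your box):

```python
# Back-of-the-envelope RAID 5 write estimate. The classic penalty is 4 back-end
# I/Os per small random write (read data, read parity, write data, write parity).
# The per-disk IOPS figure is an assumed typical value for 7.2K SATA, not measured.
def raid5_write_iops(disks: int, iops_per_disk: int = 75, penalty: int = 4) -> float:
    return disks * iops_per_disk / penalty

if __name__ == "__main__":
    print(f"15 x 7.2K SATA in RAID 5: ~{raid5_write_iops(15):.0f} random write IOPS")
    # Large sequential writes can be done as full-stripe writes, which avoid most
    # of the penalty - one reason the local 500 MB/s copy looks so much better.
```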

Regarding network performance, there was a similar question a couple of months back. IIRC, the fix was to use a different PCIe slot with fewer lanes.
ASKER CERTIFIED SOLUTION
jrhelgeson (solution text available to Experts Exchange members only)

Member_2_231077:

2 * 7.2K disks, that made me laugh. Try copying from the server to itself rather than from the storage to itself.
System Engineer (ASKER):

So the consensus is that I am focusing on the networking when I should be looking at my storage.

In that case let me ask a follow-up question: what are the average throughput rates achievable with higher-end storage systems?

The storage system I am currently using was chosen because of its comparatively small price tag. You get a lot of storage capacity for less cost when choosing SATA drives. I would like a rough estimate of how much throughput higher-end solutions such as SAS and SSD drives might provide.

Is it all about the type of HDD I choose? What about the controller card? For instance, this system uses a SAS/SATA controller card. What about Fibre Channel or some other controller choice?
There are obviously three parts to the equation!

1) Where you are copying from.
2) The network path (NIC to switch to NIC).
3) Where you are copying to.

As 1 Gbit/sec works out to roughly 100 MBytes/sec in practice, with a 10GbE link you are looking at 10 * 100 MB/sec, or about 1 GB/sec, and as the others have said you are really going to struggle to get the utilisation into double figures from one system.

What you really need to do is test the network on its own: the equivalent of copying /dev/zero to /dev/null, but across the network.
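
Something like the rough Python sketch below would do it: a receiver that discards everything and a sender that streams zero-filled buffers for a few seconds and reports Gbit/s. The port number and host argument are placeholders, and iperf remains the more polished tool for this.

```python
# Minimal "/dev/zero across the network" sketch: the receiver discards whatever
# arrives; the sender streams zero-filled buffers for a few seconds and reports
# the achieved rate in Gbit/s. Port number and host argument are placeholders.
import socket
import sys
import time

PORT = 5201            # arbitrary; pick any free port
CHUNK = 1 << 20        # 1 MiB buffer per send
DURATION = 10          # seconds to transmit

def receiver() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass                      # discard everything, like /dev/null

def sender(host: str) -> None:
    buf = bytes(CHUNK)                    # zero-filled payload, like /dev/zero
    sent = 0
    with socket.create_connection((host, PORT)) as s:
        start = time.perf_counter()
        while time.perf_counter() - start < DURATION:
            s.sendall(buf)
            sent += CHUNK
        elapsed = time.perf_counter() - start
    print(f"{sent * 8 / elapsed / 1e9:.2f} Gbit/s over {elapsed:.1f} s")

if __name__ == "__main__":
    # Run "python nettest.py server" on one box, "python nettest.py <host>" on the other.
    receiver() if sys.argv[1] == "server" else sender(sys.argv[1])
```
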
I have used iperf to test the network and have been able to see around 8Gb/s speeds. All I had to test with was Windows 7 and Server 2008 R2. I'm guessing that if I used a Linux variant I could squeeze a little more speed out of the network tests.

I'm not really worried about the network segment. It looks like I have to improve my disk I/O before I have to worry about maxing out the network.

Unfortunately my storage system is so slow that I can't even prove 1GbE speeds. I don't really want to go through the expense of getting my hands on higher end storage only to find out that I'm still getting 50MB/s file transfers. That's why I would like to know what other people have seen using other storage solutions.
SOLUTION (text available to Experts Exchange members only)

I've seen appropriately configured arrays hit 800MB/sec - but that was a box we set up specifically to go like stink. Real world performance numbers are more like 30 - 50 MB/sec and 2500 - 4000 IOPS for a 200 seat organization. Backup bandwidths are much, much higher.
Don't worry about offending me. I think the whole purpose of this website is to throw your problems up in the air and see who takes a shot at them. Every now and then someone is going to throw the shotgun instead of the clay pigeon.

I am using the systems mentioned in my original question as part of a Microsoft Hyper-V virtualization farm (Server 2008 R2 failover cluster with CSV, hosting about 20 VMs). This setup works fine for virtualizing servers even though the VMs are a little slow. I am trying to figure out what needs to be done to get higher performance for virtualization.

Does anyone know of good resource websites dealing with storage performance? I think I'm beyond CNET's reviews here.
If you are trying to connect to a budget storage device using iSCSI, you can expect your write speeds to be ~25 MegaBytes/second, sustained. Read speeds will be a bit higher. If you are getting faster than that, then it is all bonus from there on an iSCSI array with SATA disks.

I base my benchmarks on writing to a single node in an iSCSI array cluster that uses SATA disks.
SOLUTION (text available to Experts Exchange members only)

Have you given up on this question, or is it resolved?
@LesterClayton, it's actually "Fibre Channel", although I am guessing you are in the Americas where that spelling is a bit alien!
I have only limited hardware so I have been unable to see what I consider 10GbE speeds yet.

I would like to know what speeds other people have seen from within virtualized systems. I understand that there will be a performance hit with virtualization but I would like to know how much.

In particular I am interested in a file transfer outside of virtualization to determine a "best case" performance level. Then I would like to see a VM-to-storage and/or VM-to-VM file transfer.

Ignoring virtualization for a while I am interested in the performance differences between different storage systems. SATA vs SAS vs SSD for disks. iSCSI vs FCoE vs HBAs for connectivity.
I think you might have to ask another question on that.
SATA vs SAS vs SSD

SATA - Low cost, high capacity, low range rpm, MB/sec limited by rpm, IOPS typically below 100/sec, duty cycle <80%

SAS - Medium cost, typically lower capacities, high range rpm, MB/sec limited by rpm, IOPS up to 150/sec, duty cycle 100%

SSD - Very high cost, typically low capacities, high MB/sec, IOPS in the tens of thousands, duty cycle 100%


So use SSDs for the really volatile stuff and SAS or SATA for the rest.
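
To make those differences concrete, here is a rough scaling sketch across a 15-disk array. The per-disk numbers are ballpark assumptions in line with the figures above, not vendor specs, and RAID penalties and controller limits are ignored.

```python
# Rough scaling sketch: multiply assumed per-disk figures by the disk count to
# see why the drive type dominates array performance. Per-disk values are
# ballpark assumptions, not vendor specs; RAID penalty and controller limits
# are ignored here.
DRIVES = {
    # type         (IOPS per disk, sequential MB/s per disk)
    "SATA 7.2K": (80, 110),
    "SAS 15K": (150, 170),
    "SSD": (20_000, 400),
}

def array_estimate(disks: int = 15) -> None:
    for kind, (iops, mbps) in DRIVES.items():
        print(f"{kind:10s}: ~{disks * iops:>8,} IOPS, ~{disks * mbps:,} MB/s raw sequential")

if __name__ == "__main__":
    array_estimate()
```
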
Not really true: there are 100% duty cycle SATA disks and there are high-capacity, low-performance SAS disks. That's why I said they needed to ask a new question.
Andy, re new question:
Yes, you are right. I did consider it, but your reply wasn't there when I started composing mine.

re 100% SATA disks and high cap SAS - Yup, the distinctions between the technologies are starting to get very blurred. I should have used "typically" much more often.
I had originally thought I was dealing with a networking bottleneck, but now I think it has more to do with my I/O systems. I think I will post a new question dealing with the performance of I/O systems as it relates to virtualization.

One thing that I have never been able to figure out on this site is how to "close" a question. This has happened to me a few times where I don't want to delete the question but feel that it has reached the end of its usefulness.
You can close the question by accepting a single answer, multiple answers, or your own answer.

You may get some objections if you request to close with your own answer. The best thing would be to accept multiple answers, especially from people who came to the same conclusion about the disk subsystem being the issue :)