Solved

Problems achieving 10GbE Performance

Posted on 2011-09-30
20
4,850 Views
Last Modified: 2012-05-12
I am having problems achieving 10GbE speeds and I would like some advice to help me figure out what is going on with my systems.  The hardware I am using is as follows:

Switch          HP ProCurve 5406zl (J8699A) w/K.15.04.0007 firmware
                    10GBase-T 8-Port Module (J9546A)

NIC              Intel X520-T2 10GbE (E10G42BT)

Server         Dell R610 w/2 7.2K SATA HDD Mirrored, Windows Server 2008 R2 w/SP1

Storage       Aberdeen AberNAS 365X8 w/15 7.2K SATA HDD RAID 5, Windows
                    Storage Server 2008

For both the server and the storage I have confirmed that the NIC is installed in an 8x PCIe 2.0 slot. On the switch I have made sure that jumbo frames are enabled.

I then configure the NIC as follows:
Jumbo Packet = 9014
Large Send Offload (IPv4) = Enabled
RSS = Enabled
RSS Queues = 4
TCP/IP Offload = Enabled

When I copy a file on the storage system from itself to itself I average about 500 MB/s. This leads me to believe that, even if everything else goes well, the most I can expect when working with this storage is about 500 MB/s.

I then copy a file from the server to the storage and the average is about 45-50 MB/s. This is the part that I do not understand. Every link between the devices is 10GbE and I have enabled all of the "tweaks" to maximize the use of 10GbE, but I still don't get anywhere near the performance I was hoping for. I also looked at Resource Monitor and can confirm that none of the processors was at 100%, I was nowhere near 100% RAM usage, and the NIC utilization hovered around 3-4% on both systems.
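
For what it's worth, here is the back-of-the-envelope arithmetic I am working from (a rough Python sketch; the numbers are just the figures I observed above):

# How much of the 10GbE link the observed copies actually use.
link_gbps = 10                       # nominal link speed
link_MBps = link_gbps * 1000 / 8     # ~1250 MB/s of raw bandwidth
observed_MBps = 50                   # server -> storage file copy
local_copy_MBps = 500                # storage copying to itself

print(f"Link capacity : {link_MBps:.0f} MB/s")
print(f"Network copy  : {observed_MBps / link_MBps:.1%} of the link")    # ~4%
print(f"Local copy    : {local_copy_MBps / link_MBps:.1%} of the link")  # ~40%

The ~4% figure lines up with the 3-4% NIC utilization that Resource Monitor shows.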

One thing that I would like to confirm is that other people have been able to achieve transfer rates at least close to 10GbE. If you have achieved near-10GbE speeds (or at least as fast as your disk I/O), I would love to know what you are using and what you had to do to "tweak" your network settings in order to get there.
0
Comment
Question by:System Engineer
20 Comments
 
LVL 30

Expert Comment

by:Duncan Meyers
ID: 36895154
The most likely reason the copy to the storage is markedly slower is that you're hitting the peak write performance of the storage. Remember that it has to calculate parity blocks, and that involves reading existing data (this depends on the array: quality arrays do what's called a Modified RAID 3 write, where the parity block is calculated and the data is written out in one operation, which avoids the RAID write penalty).
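
To give a rough feel for the write penalty, here's an illustrative Python sketch - the per-drive figure is an assumption, not a measurement of your Aberdeen box:

# Rough RAID 5 write-penalty estimate (illustrative numbers only).
drives = 15                  # 7.2K SATA spindles in the RAID 5 set
iops_per_drive = 80          # assumed random IOPS for a 7.2K SATA disk
raid5_write_penalty = 4      # a small write = 2 reads + 2 writes on the back end

read_iops = drives * iops_per_drive
write_iops = drives * iops_per_drive / raid5_write_penalty
print(f"Aggregate random read IOPS : ~{read_iops:.0f}")    # ~1200
print(f"Aggregate random write IOPS: ~{write_iops:.0f}")   # ~300

Large sequential writes fare better because the controller can do full-stripe writes and skip the read-modify-write cycle, which is essentially what the Modified RAID 3 write achieves.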

Regarding network performance, there was a similar question a couple of months back. IIRC, the fix was to use a different PCIe slot with fewer lanes.
0
 
LVL 15

Accepted Solution

by:
jrhelgeson earned 167 total points
ID: 36895537
Yes, Ethernet throughput far exceeds the throughput of your disks. 50 MB/s sustained to your disks is actually really good. With RAID 0 and 15K drives you could get more, but even so, most gigabit connections never come close to maxing out.

In my years as a Cisco engineer, the only links I've seen reach that kind of saturation are the ones aggregating traffic, such as trunk ports. With server virtualization, where you can have 50 virtual servers running on a single box, all connecting out to an iSCSI storage array that is itself clustered, I have seen speeds approach 5 Gb/s sustained. (That was to a SCALE SAN cluster with ~15 nodes.)

Aside from that, there is just no way you will get that kind of transfer speed out of a single server.
0
 
LVL 55

Expert Comment

by:andyalder
ID: 36896255
2 * 7.2K disks, that made me laugh. Try copying from the server to itself rather than from the storage to itself.
0
 

Author Comment

by:System Engineer
ID: 36903854
So the consensus is that I am focusing on the networking when I should be looking at my storage.

In that case, let me ask a follow-up question: what average throughput rates are achievable with higher-end storage systems?

The storage system I am currently using was chosen because of its comparatively small price tag. You get a lot of storage capacity for less cost when choosing SATA drives. I would like a rough estimate of how much throughput higher-end options such as SAS and SSD drives might provide.

Is it all about the type of HDD I choose? What about the controller card? For instance, this system uses a SAS/SATA controller card. What about Fibre Channel or some other controller choice?
0
 
LVL 16

Expert Comment

by:Gerald Connolly
ID: 36905811
There are obviously three parts to the equation!

1) Where you are copying from.
2) The network (NIC to switch to NIC).
3) Where you are copying to.

As 1 Gbit/s is roughly 100 MB/s, a 10GbE link gives you about 10 x 100 MB/s, or 1 GB/s, and as the others have said, you are really going to struggle to get the utilisation into double figures from one system.

What you really need to do is test the network on its own - you know, copy /dev/zero to /dev/null, but across the network.
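
Something along these lines would do it (a rough, untested Python 3 sketch - the address and port are just placeholders):

# receiver.py - run on the destination box; discards the data, like /dev/null
import socket, time

srv = socket.socket()
srv.bind(("0.0.0.0", 5001))          # hypothetical test port
srv.listen(1)
conn, _ = srv.accept()
total, start = 0, time.time()
while True:
    chunk = conn.recv(1 << 20)
    if not chunk:
        break
    total += len(chunk)
elapsed = time.time() - start
print(f"Received {total / elapsed / 1e6:.0f} MB/s")

# sender.py - run on the source box; streams zeros, like /dev/zero
import socket

payload = b"\x00" * (1 << 20)        # 1 MiB of zeros
s = socket.socket()
s.connect(("192.168.1.10", 5001))    # hypothetical receiver address
for _ in range(10000):               # ~10 GB in total
    s.sendall(payload)
s.close()

That takes the disks out of the picture entirely, so whatever number you get is the network (plus TCP stack) on its own.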
0
 

Author Comment

by:System Engineer
ID: 36905934
I have used iperf to test the network and have been able to see around 8 Gb/s. All I had to test with was Windows 7 and Server 2008 R2; I'm guessing that if I used a Linux variant I could squeeze a little more speed out of the network tests.

I'm not really worried about the network segment. It looks like I have to improve my disk I/O before I have to worry about maxing out the network.

Unfortunately my storage system is so slow that I can't even demonstrate 1GbE-level speeds. I don't really want to go to the expense of getting my hands on higher-end storage only to find out that I'm still getting 50 MB/s file transfers. That's why I would like to know what other people have seen with other storage solutions.
0
 
LVL 30

Assisted Solution

by:Duncan Meyers
Duncan Meyers earned 167 total points
ID: 36906047
I'll be blunt. You are attacking this from the wrong direction. Rather than working out what performance your kit can provide (which you've done rather well), you now need to work out what performance your applications actually require. If you are a 10-user accountancy practice, then an EMC VMAX array (which can handle gigabytes per second of throughput) is obviously going to be massive overkill. If, on the other hand, you're an animation studio with 10 people, you'll need a lot more horsepower in the storage array.

But to answer your question: SSDs are fastest, followed by 15K rpm SAS/SCSI/FC drives, then 10K SAS/SCSI/FC, then SATA. That is also the order of cost. Enterprise storage arrays typically do not use SATA drives for primary storage; instead, they're used for secondary or archive storage.

The metric you need to look at is IOPS. In Windows Perfmon, use the Physical Disk writes per second and reads per second counters.
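
If it helps, here is a rough way to turn those Perfmon counters into a spindle estimate (a Python sketch; the counter values and per-drive IOPS figure are assumptions for illustration):

# Rough back-end sizing from observed Perfmon counters (illustrative numbers).
reads_per_sec = 300          # PhysicalDisk -> Disk Reads/sec (assumed workload)
writes_per_sec = 150         # PhysicalDisk -> Disk Writes/sec (assumed workload)
raid5_write_penalty = 4      # each front-end write costs ~4 back-end I/Os
iops_per_sata_drive = 80     # assumed figure for a 7.2K SATA spindle

backend_iops = reads_per_sec + writes_per_sec * raid5_write_penalty
drives_needed = backend_iops / iops_per_sata_drive
print(f"Back-end IOPS required   : ~{backend_iops:.0f}")        # ~900
print(f"7.2K SATA spindles needed: ~{drives_needed:.0f}")        # ~11

That is how you work from what the applications need back to the hardware, rather than the other way around.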
0
 
LVL 30

Expert Comment

by:Duncan Meyers
ID: 36906109
I've seen appropriately configured arrays hit 800 MB/s - but that was a box we set up specifically to go like stink. Real-world performance numbers are more like 30-50 MB/s and 2,500-4,000 IOPS for a 200-seat organization. Backup bandwidths are much, much higher.
0
 

Author Comment

by:System Engineer
ID: 36906272
Don't worry about offending me. I think the whole purpose of this website is to throw your problems up in the air and see who takes a shot at them. Every now and then someone is going to throw the shotgun instead of the clay pigeon.

I am using the systems mentioned in my original question as part of a Microsoft Hyper-V virtualization farm (a Server 2008 R2 failover cluster with CSV hosting about 20 VMs). This setup works fine for virtualizing servers, even though the VMs are a little slow. I am trying to figure out what needs to be done to get higher performance for virtualization.

Does anyone know of good resource websites dealing with storage performance? I think I'm beyond CNET's reviews here.
0
 
LVL 15

Expert Comment

by:jrhelgeson
ID: 36906362
If you are connecting to a budget storage device using iSCSI, you can expect your write speeds to be around 25 MB/s sustained; read speeds will be a bit higher. If you are getting faster than that from an iSCSI array with SATA disks, it's all bonus from there.

I base my benchmarks on writing to a single node in an iSCSI array cluster that uses SATA disks.
0
 
LVL 17

Assisted Solution

by:LesterClayton
LesterClayton earned 166 total points
ID: 36971298
I've written an article which can help you benchmark network throughput - but you will need two servers to test.

http://www.experts-exchange.com/A_8010.html

As far as disk throughput is concerned, there are many good comments here already and I don't need to reiterate them. Simply put, your biggest obstacle is the fact that you're using iSCSI. Fiber Channel is far faster for disk I/O; I am able to peak at around 800 MB/s on my 8 IBM Storwize v7000. Fiber Channel can also move up to 128 MB in a single sequence, whereas a TCP/IP packet tops out at 64 KB (and even that has to be fragmented to get it through).
0
 
LVL 15

Expert Comment

by:jrhelgeson
ID: 37047495
Have you given up on this question, or is it resolved?
0
 
LVL 16

Expert Comment

by:Gerald Connolly
ID: 37050843
@LesterClayton, it's actually "Fibre Channel", although I am guessing you are in the Americas, where that spelling is a bit alien!
0
 

Author Comment

by:System Engineer
ID: 37058743
I have only limited hardware so I have been unable to see what I consider 10GbE speeds yet.

I would like to know what speeds other people have seen from within virtualized systems. I understand that there will be a performance hit with virtualization but I would like to know how much.

In particular I am interested in a file transfer outside of virtualization to establish a "best case" performance level. Then I would like to see a VM-to-storage and/or VM-to-VM file transfer.

Ignoring virtualization for a while, I am interested in the performance differences between storage systems: SATA vs. SAS vs. SSD for disks, and iSCSI vs. FCoE vs. Fibre Channel HBAs for connectivity.
0
 
LVL 55

Expert Comment

by:andyalder
ID: 37059291
I think you might have to ask another question on that.
0
 
LVL 16

Expert Comment

by:Gerald Connolly
ID: 37059428
SATA vs SAS vs SSD:

SATA - low cost, high capacity, low-range rpm, MB/s limited by rpm, IOPS typically below 100/sec, duty cycle <80%

SAS - medium cost, typically lower capacities, high-range rpm, MB/s limited by rpm, IOPS up to around 150/sec, duty cycle 100%

SSD - very high cost, typically low capacities, high MB/s, IOPS in the tens of thousands, duty cycle 100%

So use SSDs for the really volatile stuff and SAS or SATA for the rest - the rough numbers below give a sense of scale.
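
To put very rough numbers on those per-drive figures (a Python sketch; the IOPS values are ballpark assumptions in line with the list above, and the 4,000 IOPS target is the upper end of the 200-seat figure quoted earlier):

# How many drives of each type to hit a random-IOPS target (ballpark figures).
target_iops = 4000
per_drive_iops = {"7.2K SATA": 80, "15K SAS": 150, "SSD": 20000}

for drive, iops in per_drive_iops.items():
    count = -(-target_iops // iops)   # ceiling division
    print(f"{drive:>9}: ~{count} drive(s) for {target_iops} random IOPS")

That ignores RAID write penalties and capacity requirements, but it shows why a handful of SSDs can stand in for a shelf full of spinning disks when IOPS is the constraint.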
0
 
LVL 55

Expert Comment

by:andyalder
ID: 37059457
Not really true - there are 100% duty cycle SATA disks, and there are high-capacity, low-performance SAS disks. That's why I said they need to ask a new question.
0
 
LVL 16

Expert Comment

by:Gerald Connolly
ID: 37059563
Andy, re the new question:
Yes, you are right. I did consider it, but your reply wasn't there when I started composing mine.

Re 100% duty cycle SATA disks and high-capacity SAS - yup, the distinctions between the technologies are starting to get very blurred. I should have used "typically" much more often.
0
 

Author Comment

by:System Engineer
ID: 37059620
I had originally thought I was dealing with a networking bottleneck, but now I think it has more to do with my I/O systems. I think I will post a new question dealing with the performance of I/O systems as it relates to virtualization.

One thing that I have never been able to figure out on this site is how to "close" a question. This has happened to me a few times: I don't want to delete the question, but I feel it has reached the end of its usefulness.
0
 
LVL 17

Expert Comment

by:LesterClayton
ID: 37059659
You can close the question by accepting a single answer, multiple answers, or your own answer.

You may get some objections if you close it with your own answer. The best thing would be to accept multiple answers, especially from the people who came to the same conclusion about the disk subsystem being the issue :)
0
