Openfiler 2.99 / VMware ESXi 5.5.0 - Slow iSCSI throughput over Ethernet, fine over Fibre Channel

Pardon the brain dump; there's a lot of information to this problem and I'm just trying to get it all out first:

I've built a SAN using Openfiler 2.99.2, updated it, and configured both Fibre Channel and iSCSI to connect to a volume. When I run backups over iSCSI via Ethernet, I get horrendous throughput (20 MB/s from one host, 5-10 MB/s from another). I also noticed that even though I set a 9000 MTU on the NIC, it's actually set at 7000 and refuses any change. I'm assuming the NIC (some cheap Realtek junker) doesn't support an MTU over 7000.

I can copy at full speed over Fibre Channel to the same volume (~2.5-3.0 Gbps).
I can copy at full speed over Ethernet to a different iSCSI target (StarWind on my workstation): 95 MB/s at gigabit, 9000 MTU.
When I copy over iSCSI via Ethernet to the same volume on the Openfiler host, instead of max speed it crawls at 5-20 MB/s.
The volume is set up to use file I/O.
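One way to confirm the MTU a path actually honours (rather than what the NIC claims) is a don't-fragment ping sized to the jumbo payload: the ICMP payload is the MTU minus the 20-byte IP header and 8-byte ICMP header. A sketch of the arithmetic, with a hypothetical SAN address:

```shell
# Payload for a don't-fragment ping = MTU - 20 (IP hdr) - 8 (ICMP hdr).
mtu=9000
payload=$((mtu - 20 - 8))
echo "ping -M do -s ${payload} 192.168.1.50   # hypothetical SAN address"
# A NIC stuck at an MTU of 7000 will only pass payloads up to:
echo $((7000 - 20 - 8))
```

If the 8972-byte ping fails but a 6972-byte one succeeds, the 7000-byte cap is being enforced somewhere on the path.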

I can mount one of the VMDKs the backup script just wrote to the same 'slow' volume via iSCSI (while it's backing up the rest of the VMs at the slow speed), run bench32 on it, and still achieve 40-50 MB/s. Perplexing.

It has to be the way iSCSI is configured on the Openfiler box. Has to be... Is there anything I'm just not thinking of?

Chris H (Infrastructure Manager) asked:



Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
The issue is that Openfiler, as a product, was a successful SAN when it was created, but it was last updated in 2011, so it's now over four years old, and there have been many, many better solutions since.

e.g. FreeNAS (although slow); you would be better off with a ZFS-based implementation such as Nexenta Community Edition.

Personally, whether this is a lab or production, I would seek other SAN/NAS devices which are more up to date, rather than this old piece of junk!

And the biggest issue with it is the bottleneck in the disk I/O subsystem; you would be better off using CentOS with CFS (similar to ZFS).
Chris H (Infrastructure Manager), Author, commented:
I've requested that this question be deleted for the following reason:

Hancocka's response is ludicrous and doesn't help me. While I respect his knowledge of VMware, his response discourages any other experts who might have chimed in. Nothing he replied with (all of which I'm already well aware of) has anything to do with the question I asked. Please just delete my question.

Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Leave the question open for other expert views and opinions, then!

I can work with you through this issue, but the bottleneck on top performance is the disk subsystem with Openfiler.

But let's have a look at your networking stack: NFS, iSCSI best practices, VMkernel port groups...

Have you tried NFS with Openfiler?

Chris H (Infrastructure Manager), Author, commented:
@Mr. Wolfe I tried deleting the question; hancocka objected to it. Problem resolved by replacing a piece of hardware.
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
What hardware did you replace?
Chris H (Infrastructure Manager), Author, commented:
I first tried the onboard NIC at 7000 MTU, then replaced it with a PCI (133 MHz) NIC at 1500 MTU, and then with a PCIe NIC at both 1500 and 9000 MTU. That improved throughput to 65 MB/s at both 1500 and 9000 (no change between MTUs, but the bus architecture change was an obvious boost).
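Assuming the old card sat on the classic shared PCI bus (32-bit at 33 MHz), the arithmetic shows why moving to PCIe alone was a boost:

```shell
# Classic PCI: 32-bit bus at 33 MHz, shared by every device on it.
# 32 bits * 33 MHz / 8 bits-per-byte = ~132 MB/s total (133 MB/s
# with the exact 33.33 MHz clock) -- barely more than one gigabit
# NIC (~125 MB/s), and shared, not per-device.
pci_mb=$((32 * 33 / 8))
echo "PCI shared bandwidth: ${pci_mb} MB/s"
# PCIe 1.0 gives a dedicated 250 MB/s per lane, per direction,
# so even a single-lane PCIe NIC stops being the bottleneck.
```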

I then unmapped and remapped the volume using fileio instead of blockio. I knew better than to set blockio, but I must have rushed through and set it when I initially built the box a few years back.
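For anyone hitting the same thing: Openfiler's iSCSI target is IET, and the fileio/blockio choice is a per-LUN `Type` in /etc/ietd.conf. A sketch (the IQN and volume path below are hypothetical):

```
Target iqn.2006-01.com.openfiler:tsn.example
        # Type=fileio routes I/O through the host page cache;
        # Type=blockio does direct block access with no caching.
        Lun 0 Path=/dev/vg0/backupvol,Type=fileio
```

With blockio every request hits the disks directly, which is often why small, random backup I/O crawls while a cached fileio LUN flies.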

Nonetheless, the box now transfers at near-theoretical gigabit speeds (~800 Mbps) over iSCSI via 1 Gb Ethernet.
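For reference, the unit conversion behind "near theoretical": gigabit Ethernet is 1000 Mbit/s, i.e. 125 MB/s of raw line rate, so ~800 Mbit/s of iSCSI payload works out to roughly 100 MB/s:

```shell
# Divide megabits per second by 8 to get megabytes per second.
line_rate=$((1000 / 8))   # raw gigabit line rate in MB/s
payload=$((800 / 8))      # observed iSCSI throughput in MB/s
echo "line rate: ${line_rate} MB/s, observed: ${payload} MB/s"
```

That puts the fixed box in the same ballpark as the 95 MB/s the StarWind target achieved earlier in the thread.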

Chris H (Infrastructure Manager), Author, commented:
The NIC had always just been used for the management console, so this wasn't an issue until I wanted to round-robin the volume to an ESXi host.