Chris H
asked on
OPENFILER 2.99 VMWARE ESXi 5.5.0 - Slow ISCSI throughput over ethernet, fine over Fibre Channel
Pardon the brain dump. The problem involves a lot of information and I'm just trying to get it all out there first:
I've built a SAN using OF 2.99.2. I've updated it and configured FC and iSCSI to connect to a volume. When I run backups over iSCSI via Ethernet, I get horrendous throughput (20 MB/s from one host, 5-10 MB/s from another). I noticed that even though I set a 9000 MTU on the NIC, it's actually set at 7000 and denies any change. I'm assuming the NIC (some booty Realtek junker) doesn't support over 7000 MTU.
I can copy at full speed over Fibre Channel to the same volume (~2.5-3.0 Gbps).
I can copy at full speed to a different iSCSI target (StarWind on my workstation) over Ethernet (95 MB/s - Gigabit, 9000 MTU).
When I copy over iSCSI via Ethernet to the same volume on the OF host, instead of max speed it crawls at 5-20 MB/s.
The volume is set up to use file I/O.
I can mount one of the VMDKs the backup script just wrote to the same 'slow' volume via iSCSI (while it's still backing up the rest of the VMs at the slow speed), run bench32 on it, and still achieve 40-50 MB/s. Perplexing.
It has to be the way iSCSI is configured on the OF box. Has to be... Is there anything I'm just not thinking of?
Thanks!
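The jumbo-frame symptom above can be checked from the Openfiler shell. A minimal sketch, assuming the storage NIC is `eth1` and the iSCSI portal is `192.168.1.10` (both are placeholders; substitute your own interface and target):

```shell
#!/bin/sh
# Placeholders -- substitute your storage NIC and iSCSI portal IP.
IFACE=${IFACE:-eth1}
TARGET=${TARGET:-192.168.1.10}

# 1. What MTU did the driver actually accept?
ip link show "$IFACE" 2>/dev/null | grep -o 'mtu [0-9]*' || echo "interface $IFACE not found"

# 2. Try to set jumbo frames; cheap Realtek chips often cap below 9000 and reject this:
ip link set dev "$IFACE" mtu 9000 2>/dev/null || echo "driver rejected MTU 9000 on $IFACE"

# 3. Verify the jumbo path end-to-end with fragmentation forbidden:
#    ICMP payload = MTU 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes
PAYLOAD=$((9000 - 28))
echo "jumbo test payload: $PAYLOAD bytes"
# ping -M do -s "$PAYLOAD" -c 3 "$TARGET"   # run this against the iSCSI portal
```

If step 3's ping fails while a normal ping works, some hop in the storage path (NIC, switch port, or vSwitch) is not passing jumbo frames, and an MTU mismatch like that can produce exactly this kind of throughput collapse.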
ASKER
I've requested that this question be deleted for the following reason:
Hancocka's response is ludicrous and doesn't help me. While I respect his knowledge of VMware, his response thwarts any other experts who might have chimed in. Nothing he replied with (all of which I'm well beyond aware of) has anything to do with the question I asked. Please just delete my question.
Thanks,
Chris
Leave the question open for other experts' views and opinions, then!
I can work with you through this issue, but the bottleneck on top performance is in the disk subsystem with Openfiler.
But let's have a look at your networking stack: NFS, iSCSI best practices, VMkernel port groups...
Have you tried NFS with Openfiler?
ASKER
@Mr. Wolfe I tried deleting the question... Hancocka objected to it. The problem was resolved by replacing a piece of hardware.
What hardware did you replace?
ASKER CERTIFIED SOLUTION
ASKER
The NIC had always just been used as a management console, so this wasn't an issue until I wanted to round-robin the volume to an ESXi host.
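On the ESXi side, round-robin multipathing is enabled per device with `esxcli`. A sketch with a hypothetical device ID (the real `esxcli` call only works on an ESXi host, so it is shown in comments and stubbed with an echo here):

```shell
# List devices and their current path selection policy on a real ESXi host:
#   esxcli storage nmp device list
set_rr() {
  # On a real ESXi 5.x host this would be:
  #   esxcli storage nmp device set -d "$1" -P VMW_PSP_RR
  # "$1" is the device ID, e.g. naa.600... (placeholder)
  echo "would set VMW_PSP_RR on $1"
}
set_rr naa.placeholder
```

Note that round robin only helps if every path, including the formerly management-only NIC, supports the same MTU; otherwise the slow path drags down the aggregate.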
e.g. FreeNAS (although slow); you would be better off with a ZFS-based implementation such as Nexenta Community Edition:
http://www.nexenta.com/products/downloads/download-community-edition
Personally, whether this is a lab or production, I would seek other SAN/NAS devices which are more up to date than this old piece of junk!
And the biggest issue with it is the bottleneck in the disk I/O system; you would be better off using CentOS with CFS (similar to ZFS).