
Solved

NFS vs. iSCSI throughput?

Posted on 2016-08-19
Medium Priority
215 Views
Last Modified: 2016-08-25
Hi All,

I'd like to know which storage network technology performs better when combining 2x Cat6 Ethernet cables between the ESXi server and the NAS:

NFS or iSCSI?

By "combining" I mean configuring NIC teaming (LACP / 802.3ad) the way I do on my physical servers, where I combine 4x Cat6 cables into a Cisco 3850 switch and then connect to another server with the same 4x Cat6 cables; Windows shows the NIC team connection as 4 Gbps :-)

So which one is better to combine in VMware?
0
20 Comments
 
LVL 123

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE^2)
Andrew Hancock (VMware vExpert / EE MVE^2) earned 1000 total points
ID: 41763267
It's like comparing apples and bananas!

So let's start with: what shared storage do you have, and what does the vendor recommend?

It's best to check for yourself which performs best in YOUR environment. I can tell you what performs best in ours!

And that's NFS with our NetApp filers, because there is less overhead than iSCSI and NetApp filers were designed around NFS; and iSCSI performs best on our EqualLogic SANs, because they only support iSCSI!

Do you see!

And often the bottleneck is the networking, and/or the disk I/O in the chassis, or the storage processors!

It depends on the workload and the design of the SAN or NAS.

What I can tell you is that LACP is NOT supported for VMware ESXi servers when using Standard Switches, and it is usually not the supported or preferred method for iSCSI, where you would use port binding and multipathing instead, but this does depend on WHAT IS SUPPORTED on your SAN (iSCSI).

NAS - NFS
SAN - iSCSI or Fibre Channel.

I'm sure you've seen these articles before....

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client


Note: LACP is only supported in vSphere 5.1, 5.5 and 6.0 with vSphere Distributed Switches and on the Cisco Nexus 1000V.

Note: The LACP support in vSphere Distributed Switch 5.1 supports only IP hash load balancing. In vSphere Distributed Switch 5.5 and later, all the load balancing algorithms of LACP are supported:



Source
Sample configuration of EtherChannel / Link Aggregation Control Protocol (LACP) with ESXi/ESX and Cisco/HP switches (1004048)
2
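For anyone following along, here is a minimal command-line sketch of the software iSCSI adapter and port-binding setup that Andrew's article walks through. The adapter name (vmhba33), the vmkernel interfaces (vmk1/vmk2) and the NAS address are placeholders; check your own host with "esxcli iscsi adapter list" and adjust.

# Enable the software iSCSI adapter and confirm its name
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list

# Bind the two dedicated iSCSI vmkernel ports to the software adapter
# (each portgroup should have a single active uplink, per the multipath article)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Add the NAS as a dynamic discovery target and rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.1.10:3260
esxcli storage core adapter rescan --adapter=vmhba33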
 
LVL 8

Author Comment

by:Senior IT System Engineer
ID: 41763268
Andrew, the shared storage that I have is a QNAP TVS-471 NAS:
https://www.qnap.com/en-us/product/model.php?II=158

I do not have a switch in between the ESXi server and the NAS, so it will be a direct connection.
0
 
LVL 123

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE^2)
Andrew Hancock (VMware vExpert / EE MVE^2) earned 1000 total points
ID: 41763275
The QNAP is not exactly a fast-performing unit, entry level at best. Only four disks, so not many IOPS, even with RAID 10.

Again, LACP is not supported on ESXi standard switches. As it's a NAS/SAN hybrid, we would test what performs best.

Use multipathing (MPIO) with round-robin path selection, and Jumbo Frames for both iSCSI and NFS...

You could build a static team (non-LACP) for NFS....

iSCSI is more complicated to set up, because you will need to build LUNs to present, etc.

I've posted your testing tools... so you've got some homework to complete.
1
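To illustrate the round-robin path selection mentioned above, a short sketch of setting the policy from the ESXi command line (the naa. device identifier is a placeholder; a third-party PSP such as PernixData's would replace this step):

# List devices and their current path selection policy
esxcli storage nmp device list

# Set Round Robin on a specific iSCSI LUN (device ID is a placeholder)
esxcli storage nmp device set --device=naa.60014055example00000000000000000 --psp=VMW_PSP_RR

# Or make Round Robin the default for the generic active/active SATP
esxcli storage nmp satp set --satp=VMW_SATP_DEFAULT_AA --default-psp=VMW_PSP_RR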
 
LVL 8

Author Comment

by:Senior IT System Engineer
ID: 41763286
Yes, that's the thing, Andrew; I'll be testing it using the tools that you've suggested.

I also have done the following:

1. Created the LUN for iSCSI and the shared folder for NFS.
2. Configured Jumbo Frames on both the standard vSwitch and each NIC interface on the NAS (MTU 9000)

For the PSP, I have also just installed PernixData FVP Freedom, which has its own PSP:
http://pernixdata.com/free-software
0
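On the jumbo-frame step above, a quick sketch for setting and verifying MTU 9000 end to end (the vSwitch and vmkernel names and the NAS IP are illustrative):

# MTU 9000 on the standard vSwitch and the storage vmkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify with a non-fragmenting 8972-byte ping (9000 minus IP/ICMP headers)
vmkping -d -s 8972 10.10.1.10

If the vmkping fails while a normal ping works, something in the path is still at MTU 1500.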
 
LVL 8

Author Comment

by:Senior IT System Engineer
ID: 41763288
Andrew,

When you said:

"Again, LACP is not supported on ESXi standard switches."

do I have to create a vDS to enable NFS or iSCSI LACP/802.3ad teaming to the NAS?

Since this is a one-ESXi-host-to-one-TVS-471-NAS connection, do I have to put a Cisco switch, or anything else that supports LACP, in between?
0
 
LVL 123

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE^2)
Andrew Hancock (VMware vExpert / EE MVE^2) earned 1000 total points
ID: 41763296
Again, iSCSI MPIO is not supported over LACP, and LACP is not supported on a Standard Switch, only static trunks.

There is no real need for LACP; it's only a protocol for trunking.

You don't use trunks for iSCSI...

Try it by all means, whatever works for you!

iSCSI - multipath as per my doc.

NFS - you can team/trunk if you like.

Connecting ESXi directly to a NAS seems a bit weird, but people do it to avoid the cost of a switch.

But a 4-disk NAS from QNAP?

It's not going to fly.... performance-wise!

Where do you think your bottleneck will be?
1
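As a rough back-of-envelope (assuming the TVS-471 is populated with 7,200 RPM SATA drives, which is an assumption about this particular build): four such disks deliver on the order of 75-100 random IOPS each, so roughly 300-400 IOPS raw, and RAID 10 halves the effective random-write IOPS to roughly 150-200. Sequential throughput from four spindles, on the other hand, can exceed the ~110 MB/s usable on a single 1 GbE link, so random workloads will tend to bottleneck on the disks and large sequential transfers on the network.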
 
LVL 8

Author Comment

by:Senior IT System Engineer
ID: 41763310
Yes, now I fully understand, Andrew.

"NFS - you can team/trunk if you like."

So if that's the case, are there any steps that I need to follow to configure that?
0
 
LVL 123

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE^2)
Andrew Hancock (VMware vExpert / EE MVE^2) earned 1000 total points
ID: 41763312
Remember that the teaming policy on the ESXi server ONLY affects the outbound traffic from ESXi to the physical switch, which you don't have!

Inbound traffic to the ESXi server is affected by the physical configuration on the switch! (which you also don't have!)

Now, as you have NO PHYSICAL SWITCH!

you cannot do that!

So basically: get a switch, or direct attach and do MPIO, or NFS with different IP addresses!

Again, direct-connecting a NAS is a bit home-brew!!!!

We don't even do that sort of bodging in a VMware non-production lab!
1
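A sketch of what the direct-attach layout could look like in practice, with each cable on its own subnet (the vSwitch, portgroup and IP values are illustrative; the NAS ports would take the matching .10 addresses):

# Cable 1: vmnic1 <-> NAS port 1, subnet 10.10.1.0/24
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=Storage-1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage-1 --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.1.1 --netmask=255.255.255.0 --type=static

# Cable 2: vmnic2 <-> NAS port 2, subnet 10.10.2.0/24
# (repeat the above with vSwitch2, Storage-2, vmk2 and 10.10.2.1)

The same two vmkernel ports can then be used either for iSCSI port binding (MPIO) or for NFS mounts against the two NAS addresses.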
 
LVL 4

Assisted Solution

by:david_tocker
david_tocker earned 500 total points
ID: 41763548
Neither iSCSI nor NFS will take advantage of bonded NICs on a single LUN/stream. You will gain a small amount of resiliency, but not performance.
The only way to get past the 1 Gb bottleneck is to configure MPIO.

Here is QNAP's documentation on setting it up:

http://files.qnap.com/news/pressresource/product/How_to_connect_to_your_QNAP_Turbo_NAS_from_Windows_Server_2012_using_MPIO.pdf
1
 
LVL 47

Assisted Solution

by:Craig Beck
Craig Beck earned 500 total points
ID: 41769618
I'd use NFS if you aren't sure what's best or don't have experience with iSCSI.

As Andrew says, LACP is only a trunking protocol; it doesn't have anything to do with how packets are shared between links.  You could use any link-aggregation method and still see similar throughput.
1
 
LVL 8

Author Comment

by:Senior IT System Engineer
ID: 41769633
Yes, that does make sense.

I've just realised that a switch is needed in the middle to combine the traffic with LACP :-)

But since I do not plan to buy a switch here, I guess I cannot use LACP or link aggregation technology for either iSCSI or NFS.
0
 
LVL 123

Expert Comment

by:Andrew Hancock (VMware vExpert / EE MVE^2)
ID: 41769815
That is correct.

MPIO and Multipath for iSCSI will work though, and this is recommended!
1
 
LVL 8

Author Comment

by:Senior IT System Engineer
ID: 41769820
Andrew,

Thanks for the further clarification. I have implemented the setup below in my ESXi host:

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client


Is that what you mean by:

"MPIO and Multipath for iSCSI will work though, and this is recommended!"

...and the only option that I can implement, since there is no switch in between?
0
 
LVL 123

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE^2)
Andrew Hancock (VMware vExpert / EE MVE^2) earned 1000 total points
ID: 41769828
Correct, BUT that is RECOMMENDED and BEST PRACTICE.

LACP is NOT supported on Standard Switches, and MPIO is the RECOMMENDED and BEST PRACTICE for iSCSI, not TEAMING or TRUNKS.
0
 
LVL 8

Author Comment

by:Senior IT System Engineer
ID: 41769835
What about MPIO for NFS connectivity?
Would that work?
0
 
LVL 123

Accepted Solution

by:Andrew Hancock (VMware vExpert / EE MVE^2)
Andrew Hancock (VMware vExpert / EE MVE^2) earned 1000 total points
ID: 41769846
No, you cannot bind ports for NFS.

But you can set up the vSwitch the same way and use a different IP address subnet for each vmnic/portgroup, which works, or TEAM (but you cannot team, because you have no switch).
0
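For the NFS approach described here, a sketch of mounting two exports over the two direct links so that each datastore talks to a different NAS IP/subnet (addresses and share paths are illustrative):

# Datastore 1 over the first link (NAS IP 10.10.1.10)
esxcli storage nfs add --host=10.10.1.10 --share=/share/NFS-DS1 --volume-name=NFS-DS1

# Datastore 2 over the second link (NAS IP 10.10.2.10)
esxcli storage nfs add --host=10.10.2.10 --share=/share/NFS-DS2 --volume-name=NFS-DS2

This does not aggregate bandwidth for a single datastore, but it does let each 1 GbE link carry its own NFS traffic.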
 
LVL 8

Author Closing Comment

by:Senior IT System Engineer
ID: 41769855
Thanks all for the clarification!
0
 
LVL 47

Expert Comment

by:Craig Beck
ID: 41769989
"I've just realised that a switch is needed in the middle to combine the traffic with LACP :-)"

Actually, that's not entirely correct.  You don't need a switch to use LACP.  You can use LACP directly between two hosts that support it.
0
 
LVL 123

Expert Comment

by:Andrew Hancock (VMware vExpert / EE MVE^2)
ID: 41769996
LACP is not supported on standard switches in ESXi.
1
 
LVL 47

Expert Comment

by:Craig Beck
ID: 41770033
I agree, Andrew.  I was merely stating that you don't NEED a switch to use LACP if devices support it.
You can use LACP directly between two hosts that support it.
That implies that I wasn't talking about ESXi's standard switches, as they don't support it. :-)
1
