MPIO vs. Link Aggregation (LACP) for an iSCSI network?

Hi all,

Does anyone know the difference between MPIO and LACP for storage networking over Ethernet?

I'm trying to determine the best deployment and storage network design between my ESXi host, HP ProCurve switch, and QNAP NAS.

The NAS storage is a RAID-1 array of 7,200 rpm SATA II hard drives.
Senior IT System Engineer (IT Professional) asked:
 
Vladislav Karaiev, Solutions Architect, commented:
@Senior IT System Engineer, it depends on your requirements.

From what I understand, you are describing an "iSCSI SAN (QNAP) -> NAS VM -> NFS Datastore -> Client VM" architecture. I don't think this is the most efficient approach, since layering NFS/SMB over iSCSI introduces overhead (it can be insignificant, though, so you should test it before implementation).

Instead of NFS over iSCSI, I would recommend choosing between the "QNAP NFS -> NFS Datastore -> Client VM" and "QNAP iSCSI -> VMFS Datastore -> Client VM" approaches. Unlike NFS, VMFS is a block-level file system, and it combines nicely with iSCSI storage.

Choose the protocol depending on your workload. I prefer to configure iSCSI as the underlying storage protocol.
If I want to create an SMB file share, I usually spawn a Windows Server VM on top of a VMFS datastore and configure the File Server role inside it. In my opinion, no one does SMB better than the Windows-native implementation.
 
Taras Shved, Storage Engineer, commented:
"Both LACP and MPIO provide the promised redundancy, offering failover without user’s involvement. It is a good thing, but when it comes to performance, it is clear that MPIO wins the competition. The more data paths it uses, the better the throughput will be."

https://www.starwindsoftware.com/blog/lacp-vs-mpio-on-windows-platform-which-one-is-better-in-terms-of-redundancy-and-speed-in-this-case-2
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
LACP is NOT SUPPORTED by ESXi when using Standard Switches, ONLY Distributed Switches.

Also, the recommended best-practice method for iSCSI is multipathing (MPIO).

See my EE articles:

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

Same procedure for all versions of ESXi including 6.0.
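For reference, the multipath setup described in those articles boils down to a handful of esxcli commands. This is only a sketch: the adapter name (vmhba33), VMkernel ports (vmk1/vmk2), vSwitch name, and portal IP below are placeholders to replace with your own host's and QNAP's values.

```shell
# Enable the software iSCSI adapter (shows up as e.g. vmhba33 -- name varies per host)
esxcli iscsi software set --enabled=true

# Bind one VMkernel port per physical uplink to the iSCSI adapter (port binding)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Point the adapter at the QNAP's iSCSI portal (example IP) and rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.50:3260
esxcli storage core adapter rescan --adapter=vmhba33

# Optional: jumbo frames -- must be set end to end (vSwitch, VMkernel ports, physical switch, NAS)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
```

Each bound VMkernel port must sit on a port group with exactly one active uplink; that is what gives the iSCSI initiator independent paths to multipath across.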
 
Senior IT System Engineer (Author) commented:
Hi Taras,

The link from StarWind gives "Error establishing a database connection"?
 
Taras Shved, Storage Engineer, commented:
Hi,

That was probably some DB maintenance. Anyway, I've just checked and it's up and running now.
 
Senior IT System Engineer (Author) commented:
Thanks Andrew,

So for the iSCSI data network, the best practice is MPIO on a standard vSwitch.

What about the data network for production VMs?

Can I use LACP for higher throughput while still maintaining redundant paths in case one cable breaks?
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
"So for the iSCSI data network, the best practice is MPIO on a standard vSwitch."

Multipath is recommended for all iSCSI, unless your SAN vendor tells you otherwise.

"What about the data network for production VMs?"

"Can I use LACP for higher throughput while still maintaining redundant paths in case one cable breaks?"

Again, LACP is NOT SUPPORTED on Standard Switches.

And we are now asking questions which are out of scope and drifting away from the original question asked, which was about iSCSI.
 
Vladislav Karaiev, Solutions Architect, commented:
Never use LACP or any other kind of link aggregation for iSCSI networks unless it is required by your SAN vendor. Use MPIO (multipathing) instead.

Generally speaking, teaming creates overhead because the teaming driver must process and distribute every Ethernet frame.
 
Usually, nothing bad happens during low workloads, or when teaming is used with NAS protocols (NFS/SMB), since the number of Ethernet frames per second is not very high.
 
In the case of iSCSI traffic, which is essentially block-level access, the number of frames per second can be very high, especially with smaller 4k/8k access patterns. When iSCSI networks are teamed, the LACP driver processes each frame, which leads to extra CPU load and increased latency.
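To confirm that multipathing, rather than teaming, is actually carrying the load on an ESXi host, you can inspect the paths per device and switch the path selection policy to round-robin so I/O is spread across every path. A sketch with esxcli; the naa device ID below is a placeholder for your QNAP LUN:

```shell
# Show every path to the LUN -- with working port binding you should see
# one path per bound VMkernel port (device ID is a placeholder)
esxcli storage core path list --device=naa.XXXXXXXXXXXXXXXX

# The default policy (Fixed/MRU) sends all I/O down a single path; round-robin
# rotates I/Os across all active paths for higher aggregate throughput
esxcli storage nmp device set --device=naa.XXXXXXXXXXXXXXXX --psp=VMW_PSP_RR
```

This is the MPIO counterpart of what LACP cannot do for a single iSCSI session: each I/O can take a different physical path, so both uplinks contribute bandwidth.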
 
Senior IT System Engineer (Author) commented:
Vladislav,

So in this case, for general-purpose VMs like file server VMs on VMware, should I use MPIO with 2x uplinks and then host the datastore on NFS rather than a VMFS LUN on iSCSI?
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Test which works best for you, NFS or iSCSI.

Multipath as per my EE article for iSCSI.

For NFS you can use teaming or a multipath arrangement.

BUT it does depend on what your SAN vendor recommends.

So what does QNAP recommend? A QNAP NAS is NAS-based, i.e. NFS; my bet is that iSCSI is another added layer on the QNAP, so it may be slower.

But the only way to know is to test both iSCSI and NFS.

And if you only want to create a VM for a file server, why not use native NAS features such as Windows/CIFS shares?
 
Senior IT System Engineer (Author) commented:
Hi Andrew,

Thanks for the clarification and suggestion.

As for native Windows CIFS, it requires a Windows Server VM, right?
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
"As for native Windows CIFS, it requires a Windows Server VM, right?"

No, your QNAP NAS can also serve Windows CIFS shares.