

VMware ESXi network trunking best practice scenario for iSCSI & NAS?

People,

I'm trying to build out my existing branch office infrastructure with the limited hardware available due to budget restrictions, so I'd like to know the best way to approach this limitation for my scenario:

Goal:
I need to deploy several VMs on a single ESXi host, an HP DL360 rack server, using an iSCSI network and a QNAP TVS-471 NAS device.
VMs to be deployed:
Active Directory (DNS and DHCP)
File Server
SQL Server 2014 Standard
vCenter Server Appliance
SharePoint Server 2013

What I'd like to know is which network I should configure for trunking, given that the HP DL360 G7 has 4 NICs and the QNAP TVS-471 also has 4 NIC ports.

My initial setup would be:

HP DL360 Server --> HP ProCurve Switch 1810-J9660A --> QNAP TVS-471

pNIC1 - Production data network & vMotion network
pNIC2 - iSCSI network 2
pNIC3 - iSCSI network 3
pNIC4 - NFS network (For File server VM - VMFS data store)

The LUNs presented to the ESXi host as VMFS datastores all use RAID 1 on SATA2 7200 rpm HDDs.
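In ESXi terms, I was planning to lay this out roughly as follows (the vmnic numbering, port group names and vSwitch names below are only placeholders for illustration, not my actual values):

    # vSwitch0 (vmnic0 = pNIC1): production data + vMotion port groups
    esxcli network vswitch standard portgroup add --portgroup-name=Production --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0

    # vSwitch1 (vmnic1 + vmnic2 = pNIC2/pNIC3): iSCSI
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-1 --vswitch-name=vSwitch1
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-2 --vswitch-name=vSwitch1

    # vSwitch2 (vmnic3 = pNIC4): NFS
    esxcli network vswitch standard add --vswitch-name=vSwitch2
    esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch2
    esxcli network vswitch standard portgroup add --portgroup-name=NFS --vswitch-name=vSwitch2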

Assumption:
iSCSI is dedicated to the rest of the VMs and to resource-intensive processes, so that it does not impact file server access for the office users.

Question:
Which networks should I trunk or combine, or is the above deployment already correct?

Any help would be greatly appreciated.

Thanks,
Senior IT System Engineer
3 Solutions
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
What you need to know is that trunk teaming may not be correct for your QNAP iSCSI SAN.

Most support multipathing (MPIO), which is not a trunk team, but two separate uplinks configured as two individual VMkernel network interfaces on a vSwitch.

See my EE article:

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0

Also enable and test Jumbo Frames to see whether they give you more performance:

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

(Also applicable to ALL versions of ESXi: 5.0, 5.1, 5.5, 6.0 and 6.5.)
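As a very rough sketch of what the article walks through (the vmk/vmnic/vmhba names below are only examples, check your own host), the multipath setup is one VMkernel port per uplink, each pinned to a single active NIC, then bound to the software iSCSI adaptor:

    # One VMkernel port per iSCSI port group (names are examples)
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-2

    # Pin each port group to a single active uplink (required for port binding)
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic1
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic2

    # Enable the software iSCSI adaptor and bind both VMkernel ports to it
    esxcli iscsi software set --enabled=true
    esxcli iscsi adapter list                    # note the vmhba name, e.g. vmhba33
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2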
 
Senior IT System Engineer, IT Professional (Author), commented:
Andrew,

Thanks for the article and explanation.

What about my assumption about separating the iSCSI & NFS traffic?

Is that correct, or shall I just use iSCSI instead of NFS for better performance?
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
We always separate traffic for iSCSI and NFS by using different storage networks, or VLANs.

As for using iSCSI or NFS, you will need to test and see which performs better; most small, low-end SANs (e.g. QNAP) are in fact NASes with iSCSI!

So adding iSCSI is an additional layer on top of the NAS, causing additional overhead.
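If you want to compare them like for like, one simple approach (the NAS IP, share path and datastore name below are only examples) is to export a share from the QNAP over NFS and mount it as a second datastore next to the iSCSI one, then run the same test VM from each:

    # Mount an NFS export from the QNAP as a datastore (values are examples)
    esxcli storage nfs add --host=192.168.20.10 --share=/VM-NFS --volume-name=QNAP-NFS
    esxcli storage nfs list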

 
Senior IT System Engineer, IT Professional (Author), commented:
Andrew, thanks for the clarification and suggested article.

I have enabled Jumbo Frames on the switch, the ESXi server and the QNAP NAS.

I'll see whether the datastore performs better on NFS compared with iSCSI.
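To confirm the 9000 MTU actually works end to end, I'll test it with vmkping from the host (the vmk interface and NAS IP below are placeholders for my real values):

    # 8972 = 9000 bytes minus IP/ICMP headers; -d disables fragmentation
    vmkping -d -s 8972 -I vmk1 192.168.20.10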
 
Senior IT System Engineer, IT Professional (Author), commented:
Is this considered best practice for deploying an iSCSI network?

Network diagram
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Yes, that looks okay, providing the iSCSI and NFS networks are separated.
 
Vladislav Karaiev, Solutions Architect, commented:
There is already some good advice posted here. I would just like to summarize it a bit and put in my 2 cents:

1. Never use LACP or any other kind of network aggregation for iSCSI networks unless it is required by your SAN vendor. Use MPIO (multipathing) instead. Generally speaking, teaming creates network overhead because every Ethernet frame has to pass through the teaming layer.

Usually, nothing bad happens under low workload, or when teaming is used with NAS protocols (NFS/SMB), since the number of Ethernet frames per second is not that high.

In the case of iSCSI traffic, which is essentially block-level access, the number of frames per second can be very high, especially when smaller 4k/8k access patterns are used. When iSCSI networks are teamed, the LACP driver processes each frame, which leads to extra CPU load and increased latency.
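For example, once both VMkernel ports are bound to the software iSCSI adapter, you can spread I/O across the paths by switching the path selection policy to Round Robin (the device identifier below is just a placeholder for your QNAP LUN):

    # Check the current path selection policy on your iSCSI devices
    esxcli storage nmp device list
    # Set Round Robin on the LUN (replace naa.xxxx... with the real identifier)
    esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR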

2. Segregate iSCSI networks from any other types of traffic.

Production workload is not a stable value. IOPS may fluctuate for multiple reasons: scheduled backup jobs, simultaneous startup of VMs, your office "prime time" and many others. Both the storage server and the client server should make maximum use of your NICs' capabilities in order to provide stable and fast storage access for your clients. If you want stable storage performance, never mix your storage networks with anything else. In your case, you need to segregate the NFS, backup and vMotion networks from the iSCSI network.

As an option, you can put both the vMotion and backup networks on the same NIC, since the workload of these services is usually scheduled and temporary.
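For example, if several of these networks share the same physical switch, you can keep them apart with per-port-group VLAN tags (the port group names and VLAN IDs below are only illustrative):

    # Tag each port group with its own VLAN (IDs are examples)
    esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-1 --vlan-id=20
    esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-2 --vlan-id=21
    esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=30
    esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=40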

3. Enable Jumbo Frames, i.e. 9000 MTU.

This is a general recommendation for iSCSI networks. A larger frame size helps to "pack" more data into a single frame and decreases the processing time of each request.

And don't forget to enable Jumbo Frames on all "sides" of your physical network: on the hypervisor hosts, the physical switches and your SAN.
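On the ESXi side this means setting the MTU on both the vSwitch and its VMkernel ports (the names below are examples); the physical switch ports and the QNAP interfaces have their own MTU settings:

    # Raise the MTU on the iSCSI vSwitch and its VMkernel ports
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000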

P.S.: your diagram looks good to me.
 
Senior IT System Engineer, IT Professional (Author), commented:
Thanks all!
