
Solved

VMware ESXi network trunking best practice scenario for iSCSI & NAS?

Posted on 2016-11-12
Medium Priority
547 Views
Last Modified: 2016-11-17
People,

I'm trying to build out my existing branch office infrastructure with as little hardware as possible due to budget restrictions, so I'd like to know the best way to approach this limitation for my scenario:

Goal:
I need to deploy several VMs on one ESXi host, an HP DL360 rack server, with an iSCSI network and a QNAP TVS-471 NAS device.
VMs to be deployed:
Active Directory (DNS and DHCP)
File Server
SQL Server 2014 Standard
vCenter Server Appliance
SharePoint Server 2013

What I'd like to know is which networks I should configure for trunking, given that the HP DL360 G7 has 4 NICs and the QNAP TVS-471 has 4 NIC ports as well.

My initial setup would be:

HP DL360 Server ---> HP Procurve Switch 1810-J9660A --> QNAP TVS-471

pNIC1 - Production data network & vMotion network
pNIC2 - iSCSI network 2
pNIC3 - iSCSI network 3
pNIC4 - NFS network (For File server VM - VMFS data store)

The LUNs presented to ESXi as VMFS datastores are all on RAID1 SATA2 7200 rpm HDDs.

Assumption:
iSCSI is dedicated to the rest of the VMs and to the resource-intensive processes, so that they do not impact file server access for the office users.

Question:
Which ones should I trunk or combine onto the same network, or is the above deployment correct?

Any help would be greatly appreciated.

Thanks,
8 Comments
 
LVL 123

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE^2)
Andrew Hancock (VMware vExpert / EE MVE^2) earned 1000 total points
ID: 41884689
What you need to know is that a trunk/team may not be correct for your QNAP iSCSI SAN.

Most support multipathing, which is not a trunk/team but two separate uplinks configured as two individual VMkernel network interfaces on a vSwitch.

see my EE Article

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0
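
Roughly, from the ESXi shell the multipath setup looks like this (a sketch only; vSwitch1, vmk1/vmk2, vmnic2/vmnic3 and the vmhba number are placeholder names, substitute your own, and IP addressing on vmk1/vmk2 is omitted):

# enable the software iSCSI adapter if it is not already enabled
esxcli iscsi software set --enabled=true

# one port group and VMkernel port per iSCSI uplink, each pinned to a single active NIC
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-1 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-2 --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic3

# bind both VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2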

Also enable and test Jumbo Frames, to see if they give you more performance:

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

(also applicable for ALL versions of ESXi, 5.0, 5.1, 5.5, 6.0 and 6.5.)
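
At the command line, enabling it is roughly the following (assuming vSwitch1 carries the iSCSI VMkernel ports vmk1/vmk2 from above; the switch and the QNAP must be set to 9000 as well):

# set MTU 9000 on the vSwitch and on each iSCSI VMkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000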
 
LVL 8

Author Comment

by:Senior IT System Engineer
ID: 41885017
Andrew,

Thanks for the article and explanation.

What about my assumption of separating the iSCSI & NFS traffic?

Is that correct, or should I just use iSCSI instead of NFS for better performance?
 
LVL 123

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE^2)
Andrew Hancock (VMware vExpert / EE MVE^2) earned 1000 total points
ID: 41885142
We always separate iSCSI and NFS traffic by using different storage networks, or VLANs.

As for using iSCSI or NFS, you will need to test and see which performs better. Most small, low-end SANs, e.g. QNAP, are in fact NASes with iSCSI!

So iSCSI adds an additional layer on top of NFS/NAS, causing additional overhead.
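
If you separate by VLAN rather than by physical network, the port group tagging is as simple as the following (VLAN IDs 20 and 30 are only examples; the switch ports and the QNAP interfaces must be tagged to match):

# tag the iSCSI and NFS port groups with different VLAN IDs
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-1 --vlan-id=20
esxcli network vswitch standard portgroup set --portgroup-name=iSCSI-2 --vlan-id=20
esxcli network vswitch standard portgroup set --portgroup-name=NFS --vlan-id=30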
 
LVL 8

Author Comment

by:Senior IT System Engineer
ID: 41885240
Andrew, thanks for the clarification and suggested article.

I have enabled Jumbo Frames on the switch, the ESXi server and the QNAP NAS.

I'll see if the datastore performs better on NFS compared with iSCSI.
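
Before testing I'll verify jumbo frames end-to-end from the host with something like this (the IP below is just a placeholder for my storage network address):

# -d = don't fragment, -s 8972 = 9000 bytes minus IP/ICMP headers
vmkping -d -s 8972 192.168.20.10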
 
LVL 8

Author Comment

by:Senior IT System Engineer
ID: 41886196
Is this considered best practice for deploying an iSCSI network?

Network diagram
 
LVL 123
ID: 41886219
Yes, it looks okay, providing the iSCSI and NFS are separated.
 
LVL 1

Accepted Solution

by:Vladislav Karaiev
Vladislav Karaiev earned 1000 total points
ID: 41887740
There is already some good advice posted here. I would just like to summarize it a bit and add my 2 cents:

1. Never use LACP or any other kind of network aggregation for iSCSI networks unless it is required by your SAN vendor. Use MPIO (multipathing) instead. Generally speaking, teaming creates network overhead because the teaming driver has to process every Ethernet frame.
 
Usually, nothing bad happens under low workload, or when teaming is used with NAS protocols (NFS/SMB), since the number of Ethernet frames per second is not that high.
 
With iSCSI traffic, which is essentially block-level access, the number of frames per second can be very high, especially when smaller 4k/8k access patterns are in use. When iSCSI networks are teamed, the LACP driver processes each frame, which leads to extra CPU load and increased latency.
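
With the VMkernel port binding in place, the ESXi side of MPIO mostly comes down to the path selection policy; round robin spreads I/O across both uplinks. A sketch (the naa. device identifier is a placeholder for your QNAP LUN):

# list the devices/paths, then set round-robin path selection on the QNAP LUN
esxcli storage nmp device list
esxcli storage nmp device set --device=naa.6001405xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR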

2. Segregate iSCSI networks from any other types of traffic.

Production workload is not a stable value. IOPS may fluctuate for multiple reasons: scheduled backup jobs, simultaneous startup of VMs, your office "prime time" and many others. Both the storage server and the client server should make maximum use of your NICs' capabilities in order to provide stable, fast storage access for your clients. If you want stable storage performance, never mix your storage networks with anything else. In your case, you need to segregate the NFS, backup and vMotion networks from the iSCSI network.

As an option, you can put both the vMotion and backup networks on the same NIC, since the workload of these services is usually scheduled and temporary.
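
On a standard vSwitch you can express that with a per-port-group uplink override, for example (port group and vmnic names are placeholders):

# pin the vMotion and Backup port groups to the same physical uplink
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion --active-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name=Backup --active-uplinks=vmnic1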

3. Enable Jumbo Frames aka 9000 MTU.

This is a general recommendation for iSCSI networks. A larger frame size helps to "stack" more data into a single frame and decreases the processing time of each request.

And don't forget to enable Jumbo Frames on all "sides" of your physical network: on hypervisor hosts, physical switches and on your SAN.

P.S.: your diagram looks good to me.
 
LVL 8

Author Closing Comment

by:Senior IT System Engineer
ID: 41892228
Thanks all !
