

How many NICs for a vSphere cluster with SAN?

Posted on 2013-12-14
Medium Priority
Last Modified: 2013-12-21
I'm currently putting together a new environment that will be all 10GbE SFP+. I'm working on the paperwork, trying to figure out how many 10GbE NICs I need for this to be a truly supported cluster.

Right now there are four physical hosts, each with 6 x 10GbE SFP+ ports.

There will be a SAN; the central switches are both Nexus 7Ks.

I know about iSCSI-A and iSCSI-B, but there are also about 200 VLANs that I'll want to be able to add VMs to.

Any pointers would be appreciated
Question by:QuestionsGuy
LVL 123
ID: 39718948
It's recommended to have at least:

2 x NICs - Management Network
2 x NICs - vMotion Network
2 x NICs - Virtual Machine Network
2 x NICs - iSCSI/NFS Storage Network

The above is based on standard resilience, i.e. two NICs per service. You may increase this, but with 10GbE I think two will suffice.

The above could be split into VLANs.

So you could run all 6 x 10GbE as a trunk, with VLANs split for the above networks, or:

4 x 10GbE (two pairs) for Management/vMotion/Virtual Machine networks

2 x 10GbE (one pair) for iSCSI
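As an illustrative sketch of the trunked-VLAN option on a standard vSwitch (the vSwitch, port group, uplink names and VLAN IDs here are placeholders, not from this thread), using esxcli on an ESXi 5.x host:

```shell
# Assumed example: vSwitch0 carries the 10GbE uplinks as a trunk.
# VLAN IDs 10/20/30 are placeholders - substitute your own.
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1

# One port group per service, tagged with its VLAN (Virtual Switch Tagging)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Management
esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=10

esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20

esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=VM-Network
esxcli network vswitch standard portgroup set --portgroup-name=VM-Network --vlan-id=30
```

The switch-side Nexus ports would carry the matching VLANs as an 802.1Q trunk.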

See my EE articles - step-by-step tutorial instructions with screenshots:

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0
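In brief, the steps those articles walk through look roughly like this on the ESXi 5.x command line (the vSwitch and vmkernel names are assumptions; the articles cover the full procedure):

```shell
# Jumbo frames: raise the MTU on the storage vSwitch and the iSCSI vmkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Enable the software iSCSI adapter, then list adapters to find its vmhba name
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list
```

Remember jumbo frames must also be enabled end-to-end on the physical switches and the SAN.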

Author Comment

ID: 39718951
vMotion can exist on the same network as the Management Network as well. Considering it's all SAN-based, I don't think I would need to dedicate two 10GbE ports to it, at least I wouldn't think so.

What about the vSwitches - do you recommend VMware Distributed Switches or the Cisco Nexus 1000V? Would that make the above task much more sensible?
LVL 123
ID: 39719009
You would normally try to isolate vMotion onto a different network to prevent traffic issues during vMotions (this is Best Practice and recommended).

Two NICs are there for resilience: if one should fail, no vMotion.

It depends on how large a cluster you are building, your licences, your skills, network management, etc.

VMware Distributed Switches and/or the Cisco Nexus 1000V can be daunting configurations for the beginner.

LVL 42

Expert Comment

ID: 39719545
With 10GbE interfaces I would recommend cutting down on the vSwitches and using VLANs. With 1GbE NICs it's easy to use 6 or 8 NICs per host, but filling a Nexus switch with 6 ports per host is not a good ROI. I would put the management vSwitch on 1GbE. The rest I would split into two 10GbE vSwitches: vMotion and iSCSI on one, and everything else on the other. This way you only use four 10GbE ports per host and save some room for expansion; the Nexus 7K is not cheap per port.
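That two-vSwitch split could be sketched like this with esxcli (vSwitch and vmnic names are assumptions for illustration):

```shell
# vSwitch1: vMotion + iSCSI, on two 10GbE uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# vSwitch2: everything else (VM networks), on the other two 10GbE uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic4
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic5
```

This leaves two 10GbE ports per host (and the Nexus ports they would consume) free for later growth.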
LVL 10

Accepted Solution

Mohammed Rahman earned 2000 total points
ID: 39719833
The design should take many parameters into account.

What workloads will you be running on those 4 hosts? (DB, AD, Exchange, SQL...)

How much I/O do you expect between the SAN and the hosts?

Will vMotion/DRS be set to "Conservative" or "Aggressive"?

hanccocka's suggestion of 2 NICs per service seems good (except you can use a 1GbE card for management, if available). Of the 6 SFP+ ports you can allocate as below:

2 - vMotion
2 - Storage
2 - VM Network

If you think you will not be expanding your environment for some time, why not use ALL the available ports rather than keeping them vacant in the hope that you MIGHT use them in the near future? Even if you happen to upgrade your equipment, you can always re-design your network to use fewer ports.

My suggestion is to make use of the maximum resources available. Again, if you think the I/O is not too heavy, that 2 x 10GbE would be overkill, and you have plans to expand in the near future, you can end up utilizing 4 ports as suggested above.

** This is just my opinion and nothing against anyone's recommendation. :-)

Author Comment

ID: 39720173
The four physical hosts have 768GB RAM each, dual E5-2697s, etc. The SANs will be a pair of EqualLogics that will also use 10GbE SFP+ ports, with the central switch being a Nexus 7K with 64 x 10GbE SFP+ ports.

Exchange, SQL, AD - this cluster will handle all of it; there's plenty of RAM, horsepower, etc. (even K20 GPU accelerators).

My preference will be to go with distributed switching. I have plenty of Cisco and VMware experience; it's just that in the past I usually had a lot of 1GbE ports and assigned two per network type. This time around I only have the six 10GbE ports to properly split between everything, and I want to do it once and do it right.

I've run vMotion over the management network previously without any issues. The reality is that with shared storage it's only moving the VM's running state to another host, which previously on dual 1GbE took literally seconds.
LVL 123
ID: 39720193
Go with Distributed Switches, trunks and VLANs.

Are these Dell R720s?

If so, purchase the Dual SD Card module and install ESXi to the mirrored SD cards!

Author Comment

ID: 39720255
R620s, and yes, that's exactly what I'm buying. I can use the two onboard NICs for the management network, two for vMotion, two for VM networks and two for storage.

How do iSCSI-A and iSCSI-B get divided here, though?
LVL 123
ID: 39720366
Use vSphere Distributed Switches with multiple uplink ports for port binding:

Create a distributed port group for each physical NIC.

Set the teaming policy so that each active distributed port group has only one active uplink port.
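Sketched with esxcli against a standard vSwitch for brevity (on a distributed switch the teaming policy is set per port group in the vSphere Client instead; the vmnic/vmk/vmhba names below are assumptions):

```shell
# Each iSCSI port group gets exactly one active uplink and no standbys
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-A --active-uplinks=vmnic4
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-B --active-uplinks=vmnic5

# Bind each vmkernel port to the software iSCSI adapter (port binding)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```

With this, each iSCSI path has a one-to-one mapping from vmkernel port to physical NIC, which is what multipathing requires.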

Author Comment

ID: 39721269
So iSCSI-A and iSCSI-B can exist on the same pair of 10GbE SFP+ ports?
LVL 123
ID: 39721277
Yes, just apply port bindings for iSCSI-A and iSCSI-B to the active uplink ports.

