How many NICs for a vSphere cluster with a SAN?

I'm currently putting together a new environment that will use 10GbE SFP+ throughout. I'm working on the paperwork, trying to figure out how many 10Gbit NICs I need to have a properly supported cluster.

Right now there are four physical hosts, each with 6x 10Gbit SFP+ ports.

There will be a SAN, and the central switches are both Nexus 7Ks.

I know about iSCSI-A and iSCSI-B, but there are also about 200 VLANs that I'll want to be able to add VMs to.

Any pointers would be appreciated
Asked by: QuestionsGuy
 
Mohammed Rahman commented:
Design should take many parameters into account.

What will you be running on those 4 hosts (DB, AD, Exchange, SQL, ...)?

How much I/O do you expect between the SAN and the hosts?

Will DRS (which drives automatic vMotion) be set to "Conservative" or "Aggressive"?

hanccocka's suggestion of using 2 NICs per service seems good (except that you can use a 1Gb card for management, if available). Of the 6 SFP+ ports, you could allocate them as below (a minimal sketch of this layout follows at the end of this comment):

2 - vMotion
2 - Storage
2 - VM Network

If you think you will not be expanding your environment for some time, why not use ALL available ports rather than keeping them vacant in the hope that you MIGHT use them in the near future? Even if you happen to upgrade your equipment, you can always redesign your network to use fewer ports.

My suggestion is to make use of the maximum resources available. Again, if you think the I/O will not be too heavy, 2x 10Gb per role would be overkill, and you have plans to expand in the near future, you could end up utilizing 4 ports as suggested above.

** This is just my opinion and nothing against anyone's recommendation. :-)
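If it helps to picture that 2/2/2 split, below is a minimal pyVmomi (Python) sketch that creates three standard vSwitches, each backed by a pair of 10Gb uplinks. The vCenter address, credentials, vSwitch names and vmnic numbering are all placeholders assumed for illustration, not values from this thread.

```python
# Minimal sketch (pyVmomi) of the 2/2/2 layout above: three standard vSwitches,
# each with a pair of 10Gb uplinks. Host selection, vmnic numbering and
# credentials are assumed placeholders -- adjust for your environment.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                  # lab only; use valid certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view
host = hosts[0]                                         # pick your host by name in practice
netsys = host.configManager.networkSystem

# vSwitch name -> pair of physical 10Gb NICs (assumed vmnic numbering)
layout = {
    "vSwitch-vMotion": ["vmnic2", "vmnic3"],
    "vSwitch-Storage": ["vmnic4", "vmnic5"],
    "vSwitch-VM":      ["vmnic6", "vmnic7"],
}

for name, nics in layout.items():
    spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        mtu=9000,                                       # jumbo frames end to end
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=nics))
    netsys.AddVirtualSwitch(vswitchName=name, spec=spec)
```

The same layout can of course be built as distributed port groups instead; see the later comments in this thread.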
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
It's recommended to have at least:

2 x NICs Management Network
2 x NICs vMotion Network
2 x NICs Virtual Machine Network
2 x NICs Storage Network (iSCSI or NFS)

The above is based on standard resilience, i.e. two NICs per service; you may increase this, but with 10GbE I think two will suffice.

The above could be split into VLANs.

So you could run 6 x 10Gbit as a trunk, with VLANs split for the above networks (see the sketch after this list), or:

4 ports (two pairs) for Management/vMotion/Virtual Machine networks

2 ports (one pair) for iSCSI
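For the trunk-plus-VLANs option, here is a rough pyVmomi sketch that carves VLAN-tagged VM port groups out of a single trunked distributed switch. The switch name "dvSwitch-Prod", the VLAN range 100-299 and the port counts are assumptions for illustration only.

```python
# Rough sketch (pyVmomi): VLAN-tagged VM port groups carved out of one trunked
# distributed switch. Switch name, VLAN range and port counts are assumptions.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
dvs_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in dvs_view.view if d.name == "dvSwitch-Prod")

pg_specs = []
for vlan_id in range(100, 300):                         # ~200 VM VLANs, adjust to yours
    pg_specs.append(vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name="VM-VLAN-%d" % vlan_id,
        type="earlyBinding",
        numPorts=32,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
                vlanId=vlan_id, inherited=False))))

dvs.AddDVPortgroup_Task(spec=pg_specs)                  # one task creates the whole batch
```

Because the uplinks carry a trunk, each port group only needs its VLAN ID; the physical trunk configuration on the Nexus side still has to allow those VLANs.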

See my EE articles, step-by-step tutorial instructions with screenshots:

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0
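As a rough Python/pyVmomi counterpart to the jumbo-frames article above, the sketch below raises the MTU to 9000 on an existing standard vSwitch and on an iSCSI VMkernel adapter. The vSwitch name "vSwitch-Storage" and the vmk device are assumed placeholders; the physical switch ports need jumbo frames enabled end to end as well.

```python
# Rough sketch (pyVmomi): set MTU 9000 on an existing standard vSwitch and on an
# iSCSI VMkernel adapter. "vSwitch-Storage" and "vmk1" are assumed placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
netsys = host.configManager.networkSystem

# 1. Jumbo frames on the vSwitch (reuse its current spec so uplinks are kept)
vss = next(s for s in netsys.networkInfo.vswitch if s.name == "vSwitch-Storage")
vss_spec = vss.spec
vss_spec.mtu = 9000
netsys.UpdateVirtualSwitch(vswitchName=vss.name, spec=vss_spec)

# 2. Jumbo frames on the VMkernel adapter that carries iSCSI traffic
vnic = next(n for n in netsys.networkInfo.vnic if n.device == "vmk1")
vnic_spec = vnic.spec
vnic_spec.mtu = 9000
netsys.UpdateVirtualNic(device=vnic.device, nic=vnic_spec)
```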
 
QuestionsGuy (Author) commented:
vMotion can exist on the same network as the Management Network as well; considering it's all SAN-based, I don't think I would need to dedicate two 10Gbit ports to it, or at least I wouldn't think so.

What about the vSwitches: do you recommend the VMware distributed switch or the Cisco Nexus 1000V? Would this make the above task much more sensible?

 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
You would normally try to isolate vMotion onto a different network, to prevent traffic issues during vMotions (this is best practice and recommended).

Two NICs are there for resilience; if one should fail, you would otherwise have no vMotion.

It depends on how large a cluster you are building, your licensing, your skills, network management, etc.

VMware Distributed Switches and/or the Cisco Nexus 1000V can be daunting configurations for the beginner.
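To make the "isolate vMotion" point concrete, here is a minimal sketch (pyVmomi, standard vSwitch variant) that puts vMotion on its own VLAN-tagged port group and VMkernel adapter. The VLAN ID, IP addressing, vSwitch name and the choice of a standard rather than distributed switch are all assumptions for illustration.

```python
# Minimal sketch (pyVmomi, standard vSwitch variant): a dedicated, VLAN-tagged
# vMotion port group and VMkernel adapter. VLAN, IP and names are assumptions.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
netsys = host.configManager.networkSystem

# Dedicated port group on the vMotion vSwitch, tagged with its own VLAN
netsys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="vMotion", vlanId=20, vswitchName="vSwitch-vMotion",
    policy=vim.host.NetworkPolicy()))

# VMkernel adapter on that port group, then tag it for vMotion traffic
vmk = netsys.AddVirtualNic(
    portgroup="vMotion",
    nic=vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress="192.168.20.11",
                             subnetMask="255.255.255.0"),
        mtu=9000))
host.configManager.virtualNicManager.SelectVnicForNicType(
    nicType="vmotion", device=vmk)
```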
 
Paul Solovyovsky, Senior IT Advisor, commented:
With 10Gb interfaces I would recommend cutting down on the number of vSwitches and using VLANs. With 1Gb NICs it's easy to use 6 or 8 NICs per host, but filling a Nexus switch with 6 ports per host is not a good ROI. I would put the management vSwitch on 1Gb. The rest I would split into two 10Gb vSwitches: vMotion and iSCSI on one, and everything else on the other. This way you only use 4 x 10Gb ports per host and save some room for expansion; the Nexus 7K is not cheap per port.
 
QuestionsGuy (Author) commented:
The four physical hosts have 768GB of RAM each, dual E5-2697s, etc. The SANs will be a pair of EqualLogics that use 10Gbit SFP+ ports as well, with the central switch being a Nexus 7K with 64 x 10Gbit SFP+ ports.

Exchange, SQL, AD: this cluster will run all of them. There's plenty of RAM, horsepower, etc. (even K20 GPU accelerators).

My preference is to go with distributed switching; I have plenty of Cisco and VMware experience. In the past I usually had a lot of 1Gbit ports and assigned two per network type; this time around I only have the six 10Gbit ports that I need to split properly between everything, and I want to do it once and do it right.

I've run vMotion over the management network previously without any issues; the reality is that it's all shared storage, so it's only moving the VMs to another host, which previously on dual 1Gbit took literally seconds.
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Go with Distributed Switches, Trunks and VLANs.

Are these Dell R720s?

If so, purchase the Dual SD Card module and install ESXi to the mirrored SD card!
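As a starting point for the "distributed switches, trunks and VLANs" approach, here is a rough pyVmomi sketch that creates a vSphere Distributed Switch with six named uplinks, one per SFP+ port. The datacenter and switch names are placeholders; adding the hosts and mapping their vmnics to the uplinks is left out for brevity.

```python
# Rough sketch (pyVmomi): create a vSphere Distributed Switch with six named
# uplinks, one per SFP+ port. Datacenter/switch names are assumed placeholders;
# adding hosts and mapping their vmnics to the uplinks is left out for brevity.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
dc = next(e for e in content.rootFolder.childEntity
          if isinstance(e, vim.Datacenter) and e.name == "Datacenter01")

uplinks = ["Uplink-%d" % i for i in range(1, 7)]        # 6 x 10Gb SFP+ per host
dvs_cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    name="dvSwitch-Prod",
    uplinkPortPolicy=vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=uplinks))
dc.networkFolder.CreateDVS_Task(
    spec=vim.DistributedVirtualSwitch.CreateSpec(configSpec=dvs_cfg))
```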
 
QuestionsGuy (Author) commented:
R620s, and yes, that's exactly what I'm buying. I can use the two onboard NICs for the management network, two SFP+ ports for vMotion, two for VM networks, and two for storage.

How do iSCSI-A and iSCSI-B get divided here, though?
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Use vSphere distributed switches with multiple uplink ports for port binding.

Create a distributed port group per physical NIC.

Set the teaming policy so that each iSCSI distributed port group has only one active uplink port (the other uplinks set to unused).
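A pyVmomi sketch of that teaming layout follows: two distributed port groups, each pinned to a single active uplink (uplinks not listed as active or standby are treated as unused), then each host's iSCSI VMkernel adapters bound to the software iSCSI adapter. The switch name, uplink names, vmk devices and vmhba number are all assumptions for illustration, and the VMkernel adapters are assumed to have been created on the two port groups (not shown) before the binding step runs.

```python
# Sketch (pyVmomi): one distributed port group per storage uplink, each with a
# single active uplink, then bind the iSCSI VMkernel adapters to the software
# iSCSI adapter on each host. All names/numbers below are assumptions.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                  # lab only
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
dvs = next(d for d in content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True).view
           if d.name == "dvSwitch-Prod")

def iscsi_pg(name, active_uplink):
    """Port group pinned to exactly one active uplink (required for port binding)."""
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
        inherited=False,
        uplinkPortOrder=vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
            inherited=False,
            activeUplinkPort=[active_uplink],
            standbyUplinkPort=[]))          # anything not listed here is unused
    return vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, type="earlyBinding", numPorts=8,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            uplinkTeamingPolicy=teaming))

dvs.AddDVPortgroup_Task(spec=[iscsi_pg("iSCSI-A", "Uplink-5"),
                              iscsi_pg("iSCSI-B", "Uplink-6")])

# (Create VMkernel adapters vmk2/vmk3 on these port groups first -- not shown.)
# Bind the two iSCSI VMkernel adapters to the software iSCSI adapter per host.
for host in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view:
    iscsi_mgr = host.configManager.iscsiManager
    iscsi_mgr.BindVnic(iScsiHbaName="vmhba64", vnicDevice="vmk2")   # iSCSI-A
    iscsi_mgr.BindVnic(iScsiHbaName="vmhba64", vnicDevice="vmk3")   # iSCSI-B
```

In practice, confirm the software iSCSI adapter's vmhba name on each host, and keep the path selection policy in line with your array vendor's multipathing recommendations.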
 
QuestionsGuy (Author) commented:
So iSCSI-A and iSCSI-B can exist on the same pair of 10Gbit SFP+ ports?
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Yes, just apply port binding for iSCSI-A and iSCSI-B, each bound to its own active uplink port.