QuestionsGuy

asked:

How many NICs for a vSphere cluster with SAN?

I'm currently putting together a new environment that will be all 10GbE SFP+. I'm working on the paperwork, trying to figure out how many 10Gbit NICs I need for a properly supported cluster.

Right now it's four physical hosts, each with 6x 10Gbit SFP+.

There will be a SAN; the central switches are both Nexus 7Ks.

I know about iSCSI-A and iSCSI-B, but there are about 200 VLANs that I'll want to be able to add VMs to as well.

Any pointers would be appreciated
Andrew Hancock (VMware vExpert PRO / EE Fellow / British Beekeeper)

It's recommended to have at least:-

2 x NICs Management Network
2 x NICs vMotion Network
2 x NICs Virtual Machine Network
2 x NICs iSCSI/NFS Storage Network

The above is based on standard resilience, e.g. two NICs per service. You may increase this, but with 10GbE I think two will suffice.

The above could be split into VLANs.

So you could run 6 x 10Gbit as a trunk, with VLANs split for the above networks (a configuration sketch follows below), or:

4 NICs (two pairs) for Management/vMotion/Virtual Machine Network

2 NICs (one pair) for iSCSI
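
To illustrate the trunk-plus-VLANs option, here is a minimal pyVmomi sketch that creates one VLAN-tagged port group per service on a single trunked standard vSwitch. The vCenter/host names, credentials, vSwitch name and VLAN IDs are all placeholder assumptions, not values from this thread.

```python
# Minimal sketch: one VLAN-tagged port group per service on a trunked vSwitch.
# All names, credentials and VLAN IDs below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(dnsName="esxi01.example.local", vmSearch=False)
netsys = host.configManager.networkSystem

# Service networks carried as VLANs over the same trunked 10GbE uplinks
services = [("Management", 10), ("vMotion", 20), ("iSCSI-A", 30), ("iSCSI-B", 31)]
for name, vlan in services:
    spec = vim.host.PortGroup.Specification(
        name=name,
        vlanId=vlan,
        vswitchName="vSwitch1",           # the trunked vSwitch (placeholder name)
        policy=vim.host.NetworkPolicy())  # inherit teaming/security from the vSwitch
    netsys.AddPortGroup(portgrp=spec)

Disconnect(si)
```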

See my EE Articles, step-by-step tutorial instructions with screenshots (a rough API-level sketch of the jumbo-frames step follows the two titles below):

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0
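
For reference, this is roughly what the jumbo-frames step from the first article above looks like at the API level rather than through the vSphere Client; a minimal pyVmomi sketch, where the vCenter/host names, credentials, vSwitch and port group names are placeholders rather than values from this thread.

```python
# Sketch: raise MTU to 9000 on a vSwitch and its iSCSI vmkernel adapters.
# All names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(dnsName="esxi01.example.local", vmSearch=False)
netsys = host.configManager.networkSystem

# Reuse the existing vSwitch spec so uplinks and policies are preserved
vswitch = next(s for s in netsys.networkInfo.vswitch if s.name == "vSwitch1")
vswitch.spec.mtu = 9000
netsys.UpdateVirtualSwitch(vswitchName="vSwitch1", spec=vswitch.spec)

# Match the MTU on the vmkernel adapters used for iSCSI
for vnic in netsys.networkInfo.vnic:
    if vnic.portgroup in ("iSCSI-A", "iSCSI-B"):
        vnic.spec.mtu = 9000
        netsys.UpdateVirtualNic(device=vnic.device, nic=vnic.spec)

Disconnect(si)
```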
QuestionsGuy

ASKER

vMotion can exist on the same network as the Management Network as well. Considering it's all SAN based, I don't think I would need to dedicate two 10Gbit ports to it, or at least I wouldn't think so.

What about the vSwitches: do you recommend the VMware distributed switch or the Cisco Nexus 1000V? Would that make the above task much more sensible?
You would normally try to isolate vMotion onto a different network to prevent traffic issues during vMotions (this is Best Practice and Recommended).

Two NICs are there for resilience: if one should fail, no vMotion.

It depends on how large a cluster you are building, your licences, your skills, network management, etc.

VMware Distributed Switches and/or the Cisco Nexus 1000V can be daunting configurations for the beginner.
With 10Gb interfaces I would recommend cutting down on the number of vSwitches and using VLANs. With 1Gb NICs it's easy to use 6 or 8 NICs per host, but filling a Nexus switch with 6 ports per host is not a good ROI. I would put the management vSwitch on 1Gb. The rest I would split into two 10Gb vSwitches: vMotion and iSCSI on one, and everything else on the other (a sketch of this split follows below). This way you only use four 10Gb ports per host and save some room for expansion; the Nexus 7K is not cheap per port.
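
A minimal pyVmomi sketch of that two-vSwitch split is below; the vmnic assignments, vSwitch names, MTU and port counts are assumptions for illustration only, not taken from this environment.

```python
# Sketch: two 10Gb vSwitches, each bonded to a dedicated pair of uplinks.
# vmnic and vSwitch names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(dnsName="esxi01.example.local", vmSearch=False)
netsys = host.configManager.networkSystem

layout = [
    ("vSwitch1", ["vmnic2", "vmnic3"]),  # vMotion + iSCSI
    ("vSwitch2", ["vmnic4", "vmnic5"]),  # everything else (VM networks)
]
for name, uplinks in layout:
    spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        mtu=9000,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=uplinks))
    netsys.AddVirtualSwitch(vswitchName=name, spec=spec)

Disconnect(si)
```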
ASKER CERTIFIED SOLUTION
Mohammed Rahman

The four physical hosts have 768GB RAM each, dual E5-2697s, etc. The SANs will be a pair of EqualLogics that will use 10Gbit SFP+ ports as well, with the central switch being a Nexus 7K with 64 10Gbit SFP+ ports.

Exchange, SQL, AD - this cluster will fulfil them all; there's plenty of RAM, horsepower, etc. (even K20 GPU accelerators).

My preference would be to go with distributed switching. I have plenty of Cisco and VMware experience; it's just that in the past I usually had a lot of 1Gbit ports and assigned two per network type. This time around I only have the six 10Gbit ports that I need to split properly between everything, and I want to do it once and do it right.

I've run vMotion over the management network previously without any issues. The reality is shared storage, so it's only moving the VMs to another host, which previously on dual 1Gbit took literally seconds.
Go with Distributed Switches, Trunks and VLANs.

Are these Dell R720s?

If so, purchase the Dual SD Card module and install ESXi to the mirrored SD card!
R620s, and yes, that's exactly what I'm buying. I can use the two onboard NICs for the management network, two for vMotion, two for VM networks and two for storage.

How do iSCSI-A and iSCSI-B get divided here, though?
Use vSphere distributed switches with multiple uplink ports for port binding.

Create one distributed port group for each physical NIC.

Set the teaming policy so that each iSCSI distributed port group has only one active uplink port (a configuration sketch follows below).
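
As a rough illustration of those steps (shown here against standard vSwitch port groups; on a distributed switch the same one-active-uplink rule goes in the distributed port group's teaming policy), here is a hedged pyVmomi sketch: pin each iSCSI port group to a single active uplink, then bind its vmkernel adapter to the software iSCSI adapter. The port group, vmnic, vmk and vmhba names are placeholders.

```python
# Sketch: iSCSI port binding with one active uplink per iSCSI port group.
# All names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
host = si.content.searchIndex.FindByDnsName(dnsName="esxi01.example.local", vmSearch=False)
netsys = host.configManager.networkSystem
iscsi = host.configManager.iscsiManager

# Pin each iSCSI port group to exactly one active uplink (no standby, other uplinks unused)
for pg_name, uplink in [("iSCSI-A", "vmnic2"), ("iSCSI-B", "vmnic3")]:
    pg = next(p for p in netsys.networkInfo.portgroup if p.spec.name == pg_name)
    pg.spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=[uplink], standbyNic=[]))
    netsys.UpdatePortGroup(pgName=pg_name, portgrp=pg.spec)

# Bind the corresponding vmkernel adapters to the software iSCSI HBA
for vmk in ("vmk1", "vmk2"):
    iscsi.BindVnic(iScsiHbaName="vmhba33", vnicDevice=vmk)

Disconnect(si)
```

With both vmkernel adapters bound, the software iSCSI adapter logs in to the array over both paths and the path selection policy handles the multipathing from there.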
So iSCSI-A and iSCSI-B can exist on the same pair of 10Gbit SFP+ ports?
Yes, just apply the port bindings for iSCSI-A and iSCSI-B to their active uplink ports.