Solved

How many NICs for a vSphere cluster with a SAN?

Posted on 2013-12-14
440 Views
Last Modified: 2013-12-21
I'm currently putting together a new environment that will be all 10GbE SFP+. I'm working on the paperwork, trying to figure out how many 10Gbit NICs I need to have a properly supported cluster.

Right now four physical hosts, each with 6 x 10Gbit SFP+.

There will be a SAN; the central switches are both Nexus 7Ks.

I know I need iSCSIA and iSCSIB, but there are also about 200 VLANs that I'll want to be able to attach VMs to.

Any pointers would be appreciated
Question by:QuestionsGuy
11 Comments
 
LVL 117
ID: 39718948
It's recommended to have at least:

2 x NICs Management Network
2 x NICs vMotion Network
2 x NICs Virtual Machine Network
2 x NICs iSCSI/NFS Storage Network

The above is based on standard resilience, i.e. two NICs per service. You may increase that, but with 10GbE I think two will suffice.

The above could be split into VLANs.

So you could run 6 x 10Gbit as a trunk, with VLANs split across the above networks, or:

4 x 10Gbit for Management/vMotion/Virtual Machine networks

2 x 10Gbit for iSCSI
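As a rough sketch, the trunk-plus-VLANs layout could be built on a standard vSwitch from the ESXi shell with `esxcli`. The switch name, `vmnic` numbers and VLAN IDs below are placeholders for your environment, not values from this thread:

```shell
# Create a vSwitch and attach two 10Gbit uplinks (names are examples)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1

# One port group per service, each tagged with its own (placeholder) VLAN ID
esxcli network vswitch standard portgroup add --portgroup-name=Management --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=10
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20
```

For this to work, the matching Nexus ports must be configured as 802.1Q trunks carrying those VLANs.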

See my EE articles, step-by-step tutorial instructions with screenshots:

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0
 

Author Comment

by:QuestionsGuy
ID: 39718951
vMotion can exist on the same network as the Management Network as well. Considering it's all SAN-based, I don't think I would need to dedicate two 10Gbit ports to it, at least I wouldn't think so.

What about the vSwitches: do you recommend the VMware distributed switch or the Cisco Nexus 1000V? Would this make the above task much more sensible?
 
LVL 117
ID: 39719009
You would normally try to isolate vMotion onto a different network to prevent traffic issues during vMotions (this is Best Practice and Recommended).

Two NICs are there for resilience: if one should fail and there's no second, there's no vMotion.

It depends on how large a cluster you are building, your licenses, your skills, network management, etc.

VMware Distributed Switches and/or the Cisco Nexus 1000V can be daunting configurations for the beginner.
 
LVL 42

Expert Comment

by:paulsolov
ID: 39719545
With 10Gbit interfaces I would recommend cutting down on the number of vSwitches and using VLANs. With 1Gbit NICs it's easy to use 6 or 8 NICs per host, but filling a Nexus switch with 6 ports per host is not a good ROI. I would put the management vSwitch on 1Gbit. The rest I would split into two 10Gbit vSwitches: vMotion and iSCSI on one, and everything else on the other. This way you only use 4 x 10Gbit ports per host and save some room for expansion; the Nexus 7K is not cheap per port.
 
LVL 10

Accepted Solution

by:
Mohammed Rahman earned 500 total points
ID: 39719833
The design should take many parameters into account:

What will you be running on those 4 hosts? (DB, AD, Exchange, SQL...)

The I/O you expect between the SAN and the hosts.

Will vMotion be set to "Conservative" or "Aggressive"?

hanccocka's suggestion of using 2 NICs per service seems good (except you can use a 1Gb card for management, if available). Of the 6 SFP+ ports, you can use them as below:

2 - vMotion
2 - Storage
2 - VM Network
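As a sketch only, the storage leg of that 2-2-2 split could look like the following from the ESXi shell; the vSwitch name, `vmnic`/`vmk` numbers and IP address are placeholders, and jumbo frames (MTU 9000) are an assumption taken from the linked EE article, not from this answer:

```shell
# Storage vSwitch: two dedicated 10Gbit uplinks, jumbo frames enabled
esxcli network vswitch standard add --vswitch-name=vSwitch-Storage
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch-Storage
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch-Storage
esxcli network vswitch standard set --vswitch-name=vSwitch-Storage --mtu=9000

# VMkernel interface for iSCSI, also at MTU 9000 (IP is an example)
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-A --vswitch-name=vSwitch-Storage
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.10.11 --netmask=255.255.255.0 --type=static
```

The vMotion and VM Network pairs would follow the same pattern on their own vSwitches.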

If you think you will not be expanding your environment for some time, why not use ALL the available ports rather than keep them vacant in the HOPE that we MIGHT use them in the near future? Even if you happen to upgrade your equipment, you can always re-design your network to use fewer ports.

My suggestion is to make use of the maximum resources available. Again, if you think the I/O is not too heavy, 2 x 10Gb would be overkill, and you have plans to expand in the near future, you can end up utilizing 4 ports as suggested above.

** This is just my opinion and nothing against anyone's recommendation. :-)

 

Author Comment

by:QuestionsGuy
ID: 39720173
The four physical hosts have 768GB RAM each, dual E5-2697s, etc. The SANs will be a pair of EqualLogics that use 10Gbit SFP+ ports as well, with the central switch being a Nexus 7K that will have 64 10Gbit SFP+ ports.

Exchange, SQL, AD: this cluster will fulfill all of it. There's plenty of RAM, horsepower, etc. (even K20 GPU accelerators).

My preference will be to go with distributed switching; I have plenty of Cisco and VMware experience. In past setups I usually had a lot of 1Gbit ports and assigned two per network type. This time around I only have the 6 x 10Gbit ports to split properly between everything, and I want to do it once and do it right.

I've run vMotion over the management network previously without any issues. The reality is it's shared storage, so vMotion is only moving the VM's running state to another host, which previously on dual 1Gbit took literally seconds.
 
LVL 117
ID: 39720193
Go with Distributed Switches, Trunks and VLANs.

Are these Dell R720s?

If so, purchase the Dual SD Card module and install ESXi to the mirrored SD cards!
 

Author Comment

by:QuestionsGuy
ID: 39720255
R620s, and yes that's exactly what I'm buying. I can use the two onboard NICs for the management network, two for vMotion, two for VM networks and two for storage.

How do iSCSIA and iSCSIB get divided here, though?
 
LVL 117
ID: 39720366
Use vSphere distributed switches with multiple uplink ports for port binding.

Create one distributed port group per physical NIC.

Set the teaming policy so that each distributed port group has only one active uplink port.
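Once each iSCSI VMkernel port sits on a port group with exactly one active uplink, the binding step itself can be sketched from the ESXi shell as below. The adapter name (`vmhba33`) and `vmk` numbers are placeholders; they vary per host:

```shell
# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true

# Bind one VMkernel port per iSCSI path (iSCSIA/iSCSIB) to the software adapter;
# each vmk must be on a port group with exactly one active uplink
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Verify the bindings
esxcli iscsi networkportal list
```

With both portals bound, the two paths are handled by vSphere multipathing rather than NIC teaming.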
 

Author Comment

by:QuestionsGuy
ID: 39721269
So iSCSIA and iSCSIB can exist on the same pair of 10Gbit SFP+ ports?
 
LVL 117
ID: 39721277
Yes, just apply the port bindings for iSCSIA and iSCSIB, each to its own active uplink port.
