Solved

How many NICs for a vSphere cluster with SAN

Posted on 2013-12-14
11
450 Views
Last Modified: 2013-12-21
I'm currently putting together a new environment that will be all 10GbE SFP+. As part of the planning paperwork, I'm trying to figure out how many 10Gbit NICs I need for a properly supported cluster.

Right now: four physical hosts, each with 6 x 10Gbit SFP+ ports.

There will be a SAN; the central switches are both Nexus 7Ks.

I know about iSCSI-A and iSCSI-B, but there are also about 200 VLANs that I'll want to be able to attach VMs to.

Any pointers would be appreciated
0
Comment
Question by:QuestionsGuy
11 Comments
 
LVL 121
ID: 39718948
It's recommended to have at least:

2 x NICs Management Network
2 x NICs vMotion Network
2 x NICs Virtual Machine Network
2 x NICs Storage Network (iSCSI/NFS)

The above is based on standard resilience, i.e. two NICs per service. You may increase that, but with 10GbE I think two will suffice.

The above could be split into VLANs.

So you could run 6 x 10Gbit as a trunk, with VLANs split for the above networks, or:

4 x 10Gbit for the Management/vMotion/Virtual Machine networks

2 x 10Gbit for iSCSI

See my EE articles, with step-by-step tutorial instructions and screenshots:

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0
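
For the trunk option, here is a rough PowerCLI sketch of the idea: every service becomes a VLAN-tagged port group on the same set of uplinks. Treat it as an illustration only; the vCenter/host names and VLAN IDs are placeholders, not values from your environment.

# Hedged sketch: carve the recommended networks out of a single trunk
# using VLAN-tagged port groups. All names and VLAN IDs are placeholders.
Connect-VIServer -Server vcenter.example.local

$vmhost  = Get-VMHost -Name esx01.example.local
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name vSwitch0

# One port group per service, separated by VLAN tag on the same uplinks
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "Management" -VLanId 10
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "vMotion"    -VLanId 20
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "VM-Network" -VLanId 30
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "iSCSI-A"    -VLanId 40
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "iSCSI-B"    -VLanId 41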
0
 

Author Comment

by:QuestionsGuy
ID: 39718951
vMotion can exist on the same network as Management as well; considering it's all SAN-based, I don't think I'd need to dedicate two 10Gbit ports to it, at least I wouldn't think.

What about the vSwitches? Do you recommend the VMware distributed switch or the Cisco Nexus 1000V? Would that make the above task more sensible?
0
 
LVL 121
ID: 39719009
You would normally try to isolate vMotion onto a different network to prevent traffic issues during vMotions (this is best practice and recommended).

Two NICs are there for resilience; with only one, if it should fail, no vMotion.

It depends on how large a cluster you are building, your licences, your skills, network management, etc.

VMware Distributed Switches and/or the Cisco Nexus 1000V can be daunting configurations for the beginner.
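
If you do isolate vMotion, a minimal PowerCLI sketch for a dedicated vMotion VMkernel port is below. It assumes a port group named "vMotion" already exists; the host name and IP addressing are placeholders.

# Hedged sketch: a dedicated vMotion VMkernel port on its own port group,
# so vMotion traffic never contends with management. IPs are placeholders.
$vmhost = Get-VMHost -Name esx01.example.local
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch vSwitch0 `
    -PortGroup "vMotion" -IP 10.0.20.11 -SubnetMask 255.255.255.0 `
    -VMotionEnabled $true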
0
 
LVL 42

Expert Comment

by:paulsolov
ID: 39719545
With 10Gbit interfaces I would recommend cutting down on the number of vSwitches and using VLANs. With 1Gbit NICs it's easy to use 6 or 8 NICs per host, but filling a Nexus switch with 6 ports per host is not a good ROI. I would put the management vSwitch on 1Gbit. The rest I would split into two 10Gbit vSwitches: vMotion and iSCSI on one, and everything else on the other. This way you only use four 10Gbit ports per host and save some room for expansion; the Nexus 7K is not cheap per port.
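
A rough PowerCLI sketch of that split, assuming vmnic2-vmnic5 are four of the 10Gbit ports (the vmnic numbering is hypothetical):

# Hedged sketch of the two-vSwitch split; management stays on the
# onboard 1Gbit NICs (e.g. the existing vSwitch0).
$vmhost = Get-VMHost -Name esx01.example.local

# vSwitch1: vMotion + iSCSI on two 10Gbit uplinks, jumbo frames enabled
New-VirtualSwitch -VMHost $vmhost -Name vSwitch1 -Nic vmnic2,vmnic3 -Mtu 9000

# vSwitch2: everything else (VM traffic) on the other two 10Gbit uplinks
New-VirtualSwitch -VMHost $vmhost -Name vSwitch2 -Nic vmnic4,vmnic5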
0
 
LVL 10

Accepted Solution

by:
Mohammed Rahman earned 500 total points
ID: 39719833
The design should take many parameters into account:

What will you be running on those 4 hosts (DB, AD, Exchange, SQL, ...)?

The I/O you expect between the SAN and the hosts.

Whether vMotion will be set to "Conservative" or "Aggressive".

hanccocka's suggestion of 2 NICs per service seems good (except that you can use a 1Gb card for management, if available). The 6 SFP+ ports could be used as below:

2 - vMotion
2 - Storage
2 - VM Network

If you think you will not be expanding your environment for some time, why not use ALL the available ports rather than keeping them vacant in the hope that you might need them in the near future? Even if you upgrade your equipment later, you can always redesign the network around fewer ports.

My suggestion is to make use of the maximum resources available. Then again, if the I/O is not too heavy, 2 x 10Gb per service would be overkill, and you have plans to expand in the near future, you could end up using just 4 ports as suggested above.

** This is just my opinion, nothing against anyone's recommendation. :-)
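
As a starting point for that sizing exercise, here is a quick PowerCLI sketch to inventory the physical NICs on every host, so whichever split you pick maps onto real vmnic numbers (standard cmdlets; verify the property names on your PowerCLI build):

# Hedged sketch: list each host's physical NICs and link speeds
# before committing to a layout.
foreach ($h in Get-VMHost) {
    Get-VMHostNetworkAdapter -VMHost $h -Physical |
        Select-Object Name, Mac, BitRatePerSec
}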
0
 

Author Comment

by:QuestionsGuy
ID: 39720173
The four physical hosts have 768GB RAM each, dual E5-2697s, etc. The SANs will be a pair of EqualLogics that use 10Gbit SFP+ ports as well, with the central switch being a Nexus 7K with 64 10Gbit SFP+ ports.

Exchange, SQL, AD: this cluster will host them all. There's plenty of RAM and horsepower (even K20 GPU accelerators).

My preference is to go with distributed switching. I have plenty of Cisco and VMware experience; it's just that in the past I usually had a lot of 1Gbit ports and assigned two per network type. This time around I only have the six 10Gbit ports to split properly between everything, and I want to do it once and do it right.

I've run vMotion over the management network previously without any issues. The reality is it's all shared storage, so vMotion is only moving the VM's running state to another host, which previously took literally seconds even on dual 1Gbit.
0
 
LVL 121
ID: 39720193
Go with Distributed Switches, Trunks and VLANs.

Are these Dell R720s?

If so, purchase the dual SD card module and install ESXi to the mirrored SD cards!
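
A minimal PowerCLI sketch of that design is below: one distributed switch carrying all six 10Gbit uplinks, with a VLAN-tagged distributed port group per service. The datacenter name, vmnic numbering, VLAN IDs and port group names are all placeholders.

# Hedged sketch: single VDS, six uplinks, VLAN-per-service port groups.
$dc  = Get-Datacenter -Name "DC01"
$vds = New-VDSwitch -Name "vds01" -Location $dc -NumUplinkPorts 6 -Mtu 9000

foreach ($h in Get-VMHost) {
    Add-VDSwitchVMHost -VDSwitch $vds -VMHost $h
    # Attach this host's six 10Gbit ports as uplinks (numbering hypothetical)
    $nics = Get-VMHostNetworkAdapter -VMHost $h -Physical `
        -Name vmnic2,vmnic3,vmnic4,vmnic5,vmnic6,vmnic7
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds `
        -VMHostPhysicalNic $nics -Confirm:$false
}

New-VDPortgroup -VDSwitch $vds -Name "Management" -VlanId 10
New-VDPortgroup -VDSwitch $vds -Name "vMotion"    -VlanId 20
New-VDPortgroup -VDSwitch $vds -Name "VM-Network" -VlanId 30
New-VDPortgroup -VDSwitch $vds -Name "iSCSI-A"    -VlanId 40
New-VDPortgroup -VDSwitch $vds -Name "iSCSI-B"    -VlanId 41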
0
 

Author Comment

by:QuestionsGuy
ID: 39720255
R620s, and yes, that's exactly what I'm buying. I can use the two onboard NICs for the management network, two 10Gbit for vMotion, two for VM networks and two for storage.

How do iSCSI-A and iSCSI-B get divided here, though?
0
 
LVL 121
ID: 39720366
Use a vSphere distributed switch with multiple uplink ports for port binding.

Create one distributed port group per physical NIC.

Set the teaming policy so that each iSCSI distributed port group has only one active uplink port.
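
In PowerCLI the teaming part looks roughly like this, assuming the six-uplink VDS sketched earlier with the default dvUplinkN names; for port binding every uplink other than the active one must be Unused:

# Hedged sketch: pin each iSCSI port group to one active uplink and mark
# all other uplinks Unused, as iSCSI port binding requires.
Get-VDPortgroup -Name "iSCSI-A" | Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort "dvUplink5" `
        -UnusedUplinkPort "dvUplink1","dvUplink2","dvUplink3","dvUplink4","dvUplink6"

Get-VDPortgroup -Name "iSCSI-B" | Get-VDUplinkTeamingPolicy |
    Set-VDUplinkTeamingPolicy -ActiveUplinkPort "dvUplink6" `
        -UnusedUplinkPort "dvUplink1","dvUplink2","dvUplink3","dvUplink4","dvUplink5"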
0
 

Author Comment

by:QuestionsGuy
ID: 39721269
So iSCSI-A and iSCSI-B can exist on the same pair of 10Gbit SFP+ ports?
0
 
LVL 121
ID: 39721277
Yes, just apply the port bindings for iSCSI-A and iSCSI-B to their respective active uplink ports.
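
For the final binding step, a hedged PowerCLI sketch via the esxcli bridge is below. The vmhba and vmk names are hypothetical, so check yours first (the software iSCSI adapter is typically vmhba3x, and the vmk numbers come from the VMkernel ports you created on iSCSI-A/iSCSI-B).

# Hedged sketch: bind both iSCSI VMkernel ports to the software iSCSI
# adapter on every host. Adapter/vmk names are hypothetical.
foreach ($h in Get-VMHost) {
    # Make sure the software iSCSI adapter is enabled first
    Get-VMHostStorage -VMHost $h | Set-VMHostStorage -SoftwareIScsiEnabled $true

    $esxcli = Get-EsxCli -VMHost $h
    $esxcli.iscsi.networkportal.add("vmhba33", $false, "vmk2")  # iSCSI-A
    $esxcli.iscsi.networkportal.add("vmhba33", $false, "vmk3")  # iSCSI-B
}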
0
