Blades and Convergent Network Switches

I'm looking to deploy a blade solution in our network.  It will, however, be my first time with blades, and I'd like to verify my understanding of what I've read.  We're a VMware shop, so I'll be referencing specifically how it would be used with VMware.

Since the Dell website makes it easiest to spec out a full enclosure, that's what I'm going to use as a demo unit for this question.

Dell M1000E Enclosure:
	8 x M520 Blade Servers:
		1 x Broadcom 5720 Quad NIC in slot A
		2 x Brocade 1741M-K Dual Port 10GbE CNA in slots B and C
	1 pair of M6348 Ethernet switches in slot A
	2 pairs of M8428-K Converged Network Switches in slots B and C


I have an existing fibre channel storage array which plugs into a pair of SAN switches, which presently feeds a number of rack mounted servers (to be replaced by the blades), as well as a pair of GigE switches operating as distribution layer switches.

Assuming blade CNAs were numbered and converged switches were lettered, would it make sense that I could:
Connect CNAs to the storage array, presenting FC adapters to the VMware vSphere hosts (blades):
	Switch B-1 to external storage array Controller 1, Port 1 via fibre
	Switch B-2 to external storage array Controller 2, Port 1 via fibre
	Switch C-1 to external storage array Controller 1, Port 2 via fibre
	Switch C-2 to external storage array Controller 2, Port 2 via fibre
Connect CNAs to the Ethernet distribution switches, offering 10GbE network access (negotiating down to 1GbE if the external switches don't support 10GbE):
	Switch B-1 to external GbE switch 1
	Switch B-2 to external GbE switch 2
	Switch C-1 to external GbE switch 1
	Switch C-2 to external GbE switch 2
Connect the 1GbE switches to the Ethernet distribution switches:
	Switch A-1 to external GbE switch 1
	Switch A-1 to external GbE switch 2
	Switch A-2 to external GbE switch 1
	Switch A-2 to external GbE switch 2


And that the connections as described above would offer:
4 paths to the FC storage
4 10GbE paths to the network
4 1GbE paths to the network
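As a sanity check, the cabling plan can be written down as data and the per-type path counts tallied. A minimal Python sketch; the switch, port, and endpoint labels are illustrative, not a vendor configuration format:

```python
# Each proposed link as (chassis switch, link type, external endpoint).
from collections import Counter

links = [
    # Fabric B/C converged switches -> FC storage array controllers
    ("B-1", "fc",    "SA-C1-P1"),
    ("B-2", "fc",    "SA-C2-P1"),
    ("C-1", "fc",    "SA-C1-P2"),
    ("C-2", "fc",    "SA-C2-P2"),
    # Fabric B/C converged switches -> Ethernet distribution (10GbE)
    ("B-1", "10gbe", "GbE-SW-1"),
    ("B-2", "10gbe", "GbE-SW-2"),
    ("C-1", "10gbe", "GbE-SW-1"),
    ("C-2", "10gbe", "GbE-SW-2"),
    # Fabric A 1GbE switches -> Ethernet distribution, two uplinks each
    ("A-1", "1gbe",  "GbE-SW-1"),
    ("A-1", "1gbe",  "GbE-SW-2"),
    ("A-2", "1gbe",  "GbE-SW-1"),
    ("A-2", "1gbe",  "GbE-SW-2"),
]

# Tally paths per link type.
paths = Counter(kind for _, kind, _ in links)
print(dict(paths))  # {'fc': 4, '10gbe': 4, '1gbe': 4}
```

The tally matches the three claims above: four paths of each type.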

Would I also be able to connect the converged switches to each other for increased availability?  For example, if they were not connected and both converged switch B-1 and storage array Controller 2 were to fail, I would ultimately have only 1 path to the storage array (C-1 -> SA C1 P2), whereas if they were interconnected, I would also be able to utilize C-2 -> SA C1 P2, leaving room for one of the blades' slot C mezzanine cards to fail as well.
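That failure scenario can be checked by enumerating it directly. A minimal Python sketch of the FC side of the fabric, assuming an inter-switch link (ISL) simply lets a surviving switch reach array ports cabled to its surviving peers; all labels are illustrative:

```python
# Chassis switch -> the array (controller, port) it is cabled to.
fc_links = {
    "B-1": ("C1", "P1"),
    "B-2": ("C2", "P1"),
    "C-1": ("C1", "P2"),
    "C-2": ("C2", "P2"),
}

def surviving_paths(failed_switches, failed_controllers, isl=False):
    """Paths still usable after the given switch/controller failures."""
    up = {sw: tgt for sw, tgt in fc_links.items() if sw not in failed_switches}
    paths = set()
    for sw, (ctrl, port) in up.items():
        if ctrl not in failed_controllers:
            paths.add((sw, ctrl, port))
        if isl:
            # With the switches interconnected, a surviving switch can also
            # reach array ports cabled to its surviving peers.
            for sw2, (c2, p2) in up.items():
                if sw2 != sw and c2 not in failed_controllers:
                    paths.add((sw, c2, p2))
    return paths

# B-1 and Controller 2 both down: one path without an ISL...
print(len(surviving_paths({"B-1"}, {"C2"})))            # 1  (C-1 -> C1 P2)
# ...but with the switches interconnected, B-2 and C-2 can also reach C1 P2:
print(len(surviving_paths({"B-1"}, {"C2"}, isl=True)))  # 3
```

Under these assumptions the enumeration agrees with the reasoning above: the ISL turns a single surviving path into three, which is what leaves headroom for a further mezzanine-card failure.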

Thomas Rush commented:
You look to be on the right track.
Would you like me to recommend an HP reseller who can help you put together a system that meets your needs using the HP blade system?  (or even perhaps a reseller for another vendor, but I'm partial to the HP systems)

lunanat (Author) commented:
Thanks SelfGovern, I have a sizable list of resellers from previous business dealings (covering all the major manufacturers), as well as a published request for proposal for when the time comes.  At this point I still need to see if my budget request will be approved (fingers crossed - we're pretty much at full capacity now, and we still have 3 months until I'd be able to make the purchase!)
andyalder commented:
If you go with HP, I'd skip the converged networking part to avoid the very expensive switches that can handle FCoE-to-FC conversion; just put FC mezzanines in the blades and Brocade FC switches in the I/O slots of the C7000, and connect those FC switches to your current ones.  Converged networking is still in its infancy and therefore isn't cheap.

lunanat (Author) commented:
Hi Andyalder,

The idea is to get ahead of the curve; historically I have been either at the curve or behind it, and we've suffered for it a few years after purchase when we end up at capacity.  Time to learn from past mistakes.  From a VMware perspective, I would arrange my networking as follows:
A: 2 active / 2 standby 1GbE on a vSwitch carrying vMotion and VMkernel traffic
B/C: 2 active / 2 standby 10GbE on a vSwitch with assorted VLANs for VM traffic
B/C: 4 active paths to the storage array
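For all four storage paths to be active simultaneously (rather than one active and three standby), the hosts would need a round-robin path selection policy - on ESXi that is typically the VMW_PSP_RR policy, set per device. A minimal sketch of what round-robin selection over the four planned paths looks like, with illustrative path labels:

```python
# Round-robin path selection: successive IOs rotate across all four
# storage paths, so every path stays active.
from itertools import cycle

paths = ["B-1 > SA-C1-P1", "B-2 > SA-C2-P1", "C-1 > SA-C1-P2", "C-2 > SA-C2-P2"]
pick = cycle(paths)

# Eight consecutive IO dispatches cover each path exactly twice.
sequence = [next(pick) for _ in range(8)]
print(sequence[:4] == sequence[4:])  # True: the rotation repeats evenly
```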

I am also considering making the 10GbE vSwitch 4 active as well, but I'd need to look at the load balancing to ensure I didn't have any MAC flapping or similar issues going on.  For the time being that would be a "to consider" solution.  Will be fun to test :)
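The 2 active / 2 standby arrangement can be modelled as plain data to reason about failover behaviour. A minimal Python sketch with illustrative vmnic names; on an actual ESXi host the policy would be set per vSwitch (e.g. with `esxcli network vswitch standard policy failover set`), and the default "route based on originating virtual port" load balancing pins each VM to one uplink, which sidesteps the MAC-flapping concern:

```python
# Active/standby uplink failover: standby uplinks are promoted in order
# to keep the active set at its configured size. Names are illustrative.
def effective_uplinks(active, standby, failed):
    """Return the uplinks carrying traffic after the given failures."""
    live = [u for u in active if u not in failed]
    spares = [u for u in standby if u not in failed]
    # Promote standbys until the active count is restored (or spares run out).
    while len(live) < len(active) and spares:
        live.append(spares.pop(0))
    return live

# Fabric A example: 4 x 1GbE, 2 active / 2 standby; vmnic0 fails and
# the first standby takes its place.
print(effective_uplinks(["vmnic0", "vmnic1"],
                        ["vmnic2", "vmnic3"],
                        failed={"vmnic0"}))  # ['vmnic1', 'vmnic2']
```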

Ultimately, I end up with a dedicated NIC for system usage (VMkernel, for example) which, although a SPOF, isn't a big one - the VMs will continue running even if the cluster loses contact with a host, which allows me to gracefully resolve the problem after hours.

I also end up with 2 converged adapters doing the work of 4 purpose-built ones... I have 4 active paths to my storage, and either 2/2 or 4/0 for VM traffic at aggregated 10GbE speeds.  My network would certainly bottleneck the service out to PCs, but cross-host VM-to-VM communication would be blisteringly fast by our present standards.

I certainly understand what you're saying, but that's my logic :)
andyalder commented:
What do you have to convert the FCoE from the CNAs into FC to connect to your SAN?
lunanat (Author) commented:
It is my expectation that the converged switches will be able to accept FC connections, likely via an adapter such as an SFP transceiver.
andyalder commented:
They are; I was just asking which switches you were buying.
lunanat (Author) commented:
Ah, right.  My plan is to plug the storage array controllers directly into the blade fibre switches on a 1:1 basis (4 controller FC ports, 4 switches).  Nothing but my hosts plugs into the storage fabric yet, so there's no reason to have additional FC SAN switches.

If I end up connecting all my FC together as I mentioned in my OP, I would use our existing FC SAN switching for that... HP-branded Brocade switches.
andyalder commented:
So you are getting converged network switches with FC support for your blade enclosure?  Hmmm - the Dell M8428-K is priced at $13K each from one supplier, but another is claiming 1/10th of that price - obviously a typo.  It'll certainly work, but you're looking at €40K+ for the 4 switches.
lunanat (Author) commented:
Yup, super expensive.  But this time I can get ahead of the curve, get some wicked fast access to my storage.  I'm only going to put 8 of the possible 16 blades in it, which means I've got room for a very cost-effective 100% increase in capacity (the blades themselves are pretty inexpensive, and the switching infrastructure will be in place already).

It'll stay in production for between 5 and 10 years (unless something insane happens, but then I get more new toys so no complaints), then it'll go to my colo for another 5 to 10 years, and it will then get relegated down to non-virtualized tasks.  The cost over the lifetime is quite low on a year by year basis - I just need to front the capital.

When it's time to replace my storage array, I will be in a position to leverage the latest and greatest in throughputs, while actually being able to handle that kind of IO on the fabric.
andyalder commented:
Nice to know it'll keep up with the 16Gb ports on your future storage array.

I fail to understand how it will be faster than having a normal LAN plus additional FC infrastructure in your enclosure.  I do understand the bit about new toys, though.  If you can afford $1,000 per port now, why wait for the price to drop?
lunanat (Author) commented:
Okay, so maybe not "the" latest and greatest... but it's a big step from our present 4Gb ports ;)

The speed increase will be predominantly noticed in cross-VM communication... for example, an application server talking to a backend server.  We have a few line-of-business apps which are pretty heavily scaled out, and giving them a faster means of communication should improve their performance now and allow it to continue to grow in the future.  It also means that I can leverage our existing disaster recovery solution faster... muuuch faster :)

If the budget doesn't get approved in full, I'll scale down to dedicated FC and Ethernet slots.