How to configure Dell EMC with Ruckus ICX7250?

I have a Ruckus ICX 7250 24-port switch with 8 licensed 10 Gb SFP+ ports, and I will be getting a Dell EMC ME4024 storage array with a dual controller and 8 iSCSI SFP+ ports. I would like to know how to configure it on the switch and how to best use it with VMware vSphere 6.5.

Any help would be greatly appreciated as I've never had to configure one from scratch.

Thank you
Al Berto asked:

Ben Personick (Previously QCubed), Lead SaaS Infrastructure Engineer, commented:
That's a campus switch from the description, so it's not really designed for low-latency, high-bandwidth applications.

iSCSI should ideally run with as little latency as possible and use an iSCSI hardware offload (HBA) on your servers instead of the normal NIC port if the server is production-class.

If you are really going to be pushing your SAN, the switch will probably be your bottleneck before the servers.

That said, off the cuff, I'm guessing this is a lab environment and you want a basic, non-redundant setup for playing around with rather than for real data load, since nothing you've mentioned suggests any redundancy in your setup.


Assuming so, you simply set up a port channel on the switch for 4+ ports and connect your SAN to it. VLAN your iSCSI network separately from your management network just to reduce broadcast traffic, and either have a separate 1G switch for management or add the management VLAN as well.

Trunk the SAN ports with all of your iSCSI VLANs (or just the one for now if you only have one), plus the management VLAN if it is not out of band.

You'll want to enable jumbo frames on the VLAN, and set up an SVI on the switch on your management and iSCSI networks so you can ping to confirm connectivity and manage your switch.
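To make that concrete, here is a minimal sketch in Ruckus FastIron syntax, assuming the ICX is running a routing image (for the SVI) and that the SFP+ ports are 1/2/1 through 1/2/8; the VLAN ID and addressing are placeholder examples only:

    ! Example only: VLAN 301 and 10.30.1.0/24 are placeholder values.
    ! Tag the four SAN-facing 10G ports into the iSCSI VLAN:
    vlan 301 name iSCSI_301 by port
     tagged ethernet 1/2/1 to 1/2/4
     router-interface ve 301
    !
    ! SVI so you can ping and confirm connectivity (needs a routing image):
    interface ve 301
     ip address 10.30.1.1 255.255.255.0
    !
    ! Jumbo frames are global on FastIron and only take effect after a reload:
    jumbo
    write memory

Note that the jumbo setting takes effect only after a reload, and the SAN ports and vmkernel ports must all be set to MTU 9000 as well for jumbo frames to work end to end.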

Assuming you may be adding another switch later, and making the WILD assumption that this switch supports port aggregation to a peer switch or stacking the switches somehow, you'll leave the other ports unconnected for now and add them to the new switch later. Alternatively, you can leave them in the port channel if you have good experience with these switches (I am not familiar with the brand) and know they handle port loss well (as they should, but they don't always); then you can always pull the ports later, set up the port-channel aggregation across the two switches, and plug them in at that time.

Set up VLANs on the switch with SVIs for all of the other networks you'll want to connect to, and verify connectivity. Connect your ESXi server to the switch on 2-4 ports; this will also be a port channel, and it should also span a second switch in the future if you get one. That should be a trunk with allowed VLANs for your management network, your iSCSI network, and the internal networks your VMs will attach to.
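A sketch of the host-facing port channel, again in FastIron syntax with placeholder port numbers and LAG name; note that a dynamic (LACP) LAG to an ESXi host requires a vSphere Distributed Switch, so with a standard vSwitch you would skip the LAG and rely on plain NIC teaming instead:

    ! Example only: a dynamic (LACP) LAG toward one ESXi host on ports 1/2/5-1/2/6.
    lag ESXI1 dynamic id 2
     ports ethernet 1/2/5 ethernet 1/2/6
     primary-port 1/2/5
     deploy
    !
    ! VLAN membership is applied via the LAG's primary port:
    vlan 100 name MGMT by port
     tagged ethernet 1/2/5
    vlan 301 name iSCSI_301 by port
     tagged ethernet 1/2/5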

You'll set up a virtual switch with the VLANs tagged for each of these.
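On the ESXi 6.5 side, a hedged sketch using standard esxcli commands; the vSwitch name, uplink names, and VLAN IDs are assumptions chosen to match the examples above:

    # Create the vSwitch, attach the uplinks, and enable jumbo frames
    esxcli network vswitch standard add -v vSwitch1
    esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic4
    esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic5
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    # One port group per tagged VLAN
    esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI_301
    esxcli network vswitch standard portgroup set -p iSCSI_301 --vlan-id 301
    esxcli network vswitch standard portgroup add -v vSwitch1 -p MGMT
    esxcli network vswitch standard portgroup set -p MGMT --vlan-id 100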

You can separate vMotion from the management VLAN, but there really isn't much need; you can just keep them together.

Ben Personick (Previously QCubed), Lead SaaS Infrastructure Engineer, commented:
According to their promotional material, despite being campus switches they do implement some higher-end features, such as stacking:

"up to 8×10 GbE ports for uplinks or stacking and market-leading stacking density with up to 12"

So, definitely do some testing of how it handles losing and re-adding port-agg connections once you have the setup done; it should be seamless. If that is the case, feel free to make your port channels the full 8 ports to the Dell, and you'll just unplug cables and remove port configs later when you get a second switch and stack it; then you'll add 4 of the new switch's ports into the port channel and plug the cables into that switch.
Ben Personick (Previously QCubed), Lead SaaS Infrastructure Engineer, commented:
Also, since you only licensed eight 10G ports, you'll probably want to save some of those for your ESXi servers.

Al Berto (Author) commented:
Thank you for the quick response. I do have a second Ruckus configured as a stack (sorry for not mentioning this), so actually I'll be connecting 4 iSCSI ports per switch, not 8. But you gave me very helpful information. Thank you.
andyalder commented:
4 iSCSI ports per controller? If you have 4 hosts or fewer I would investigate direct attach rather than switched.
Ben Personick (Previously QCubed), Lead SaaS Infrastructure Engineer, commented:
Al Berto, glad to help!

Here are some additional details on how to set this up in the two-switch scenario:

As you have 4 iSCSI ports per controller, you will want 2 ports from each controller going to each switch to keep running without interruption on switch loss.

e.g.:

ME4xxx Controller A ports 1 and 3 to switch 1 ports 1 and 3; ME4xxx Controller A ports 2 and 4 to switch 2 ports 1 and 3.

ME4xxx Controller B ports 1 and 3 to switch 1 ports 2 and 4; ME4xxx Controller B ports 2 and 4 to switch 2 ports 2 and 4.

(The example keeps all odd-numbered device ports on the odd/primary switch and all even-numbered ports on the even/secondary switch.)
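Laid out as a wiring table (switch port numbers are illustrative; on an ICX 7250 the SFP+ ports would be 1/2/1 through 1/2/8):

    ME4 Controller A port 1 -> Switch 1 port 1
    ME4 Controller A port 3 -> Switch 1 port 3
    ME4 Controller A port 2 -> Switch 2 port 1
    ME4 Controller A port 4 -> Switch 2 port 3
    ME4 Controller B port 1 -> Switch 1 port 2
    ME4 Controller B port 3 -> Switch 1 port 4
    ME4 Controller B port 2 -> Switch 2 port 2
    ME4 Controller B port 4 -> Switch 2 port 4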

For the PO(s):

If the EMC ME4xxx supports both controllers' iSCSI ports going to a single PO:

then a single PO on the stack, with 4 ports per switch, is preferable.

i.e., one PO (Po1) will have all switch ports to the controllers (switch 1 ports 1 to 4, and switch 2 ports 1 to 4) in Po1.

If the ME4xxx does not support the above, then:


you will do two POs: Po1 will be switch 1 ports 1 and 3 plus switch 2 ports 1 and 3, and Po2 will be switch 1 ports 2 and 4 plus switch 2 ports 2 and 4.

i.e., all ports to controller A (odd ports 1 and 3 on both switches) are in Po1, and all ports to controller B (even ports 2 and 4 on both switches) are in Po2.
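If the array did support LACP (a later comment establishes the ME4 does not, so treat this purely as the LACP-capable case), the two POs might look like this in FastIron syntax on a two-unit stack, with placeholder LAG names and the SFP+ module assumed to be slot 2 on each unit:

    ! Po1: all links to controller A (ports 1 and 3 on each switch)
    lag SAN-A dynamic id 3
     ports ethernet 1/2/1 ethernet 1/2/3 ethernet 2/2/1 ethernet 2/2/3
     primary-port 1/2/1
     deploy
    ! Po2: all links to controller B (ports 2 and 4 on each switch)
    lag SAN-B dynamic id 4
     ports ethernet 1/2/2 ethernet 1/2/4 ethernet 2/2/2 ethernet 2/2/4
     primary-port 1/2/2
     deploy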

Staggering the ports allows path redundancy on device loss without loss of connectivity.

Not interrupting the operation of your servers/SANs on device loss or reboot (switch/controller) is the preferable configuration for uptime and scalability, and it means you may be able to do zero-downtime device maintenance during maintenance windows.
andyalder commented:
Dell don't use port trunking; they just assign two IP addresses. This is fairly standard in storage: multipath is used instead of LACP to direct half the traffic down each port. There is a video at https://www.youtube.com/watch?v=03mosqjdeHQ&t=374s (although why he is using switches rather than directly attaching makes no sense, as he only has two hosts).
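For the multipath approach described here, a hedged esxcli sketch for vSphere 6.5 software iSCSI with port binding; the vmhba/vmk names, target IP, and device ID are placeholders you would replace with values from the list commands:

    esxcli iscsi software set --enabled=true
    esxcli iscsi adapter list                 # note the software adapter, e.g. vmhba64
    # Bind one vmkernel port per physical path
    esxcli iscsi networkportal add -A vmhba64 -n vmk1
    esxcli iscsi networkportal add -A vmhba64 -n vmk2
    # Point discovery at one of the ME4 controller IPs
    esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 10.30.1.50:3260
    esxcli storage core adapter rescan -A vmhba64
    # Round Robin path policy so traffic is spread across both controller IPs
    esxcli storage nmp device set -d naa.xxxxxxxx --psp VMW_PSP_RR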
Ben Personick (Previously QCubed), Lead SaaS Infrastructure Engineer, commented:
@Andy, yeah, MPIO is standard for Fibre Channel and iSCSI, but LACP port aggregation and multipathing are not mutually exclusive concepts.

However, I didn't check on the device; I read EMC and stopped there. When I looked into it, it isn't "really" an EMC product; it's a Dell PowerVault "disk server" / DAS that has been re-branded with the EMC name, so my bad for not noticing that.

* Since the "SAN" is really a PowerVault disk server / DAS, you're going to want to keep the connections to the controllers as described, but you won't set up a port channel on them, as the ME doesn't support LACP.

Essentially, you'll set iSCSI_[VLAN#] as the native VLAN on all of the ports the ME connects to, similar to the "single PO" description: switch 1 ports 1-4 and switch 2 ports 1-4 will have just "iSCSI_301" (let's call the VLAN that for reference) as the native VLAN.
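A minimal FastIron sketch of that, assuming a two-unit stack with the ME on SFP+ ports 1-4 of each unit:

    ! iSCSI_301 untagged (native) on every ME-facing port; no LAG
    vlan 301 name iSCSI_301 by port
     untagged ethernet 1/2/1 to 1/2/4
     untagged ethernet 2/2/1 to 2/2/4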

Having separate VLANs for iSCSI traffic does not provide an actual benefit, and in fact has some drawbacks, but it is sometimes necessary (not in this case) for creating virtual interfaces on the VLAN to assign IPs to.

Instead, you can use a flat network for your configuration, with the same performance benefits and a little less trouble to manage.

That said, you can still create separate VLANs if you really want to, in which case you will absolutely want to make sure you only have two VLANs (let's call them iSCSI_301 and iSCSI_302), set up similarly to the 2-PO layout: "iSCSI_301" will be native on ports 1 and 3 of switch 1 and switch 2, and "iSCSI_302" will be the native VLAN on ports 2 and 4 of switch 1 and switch 2. Just replace "PO" in that description with "iSCSI VLAN".
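That two-VLAN variant would look roughly like this (same placeholder stack port numbering as above):

    vlan 301 name iSCSI_301 by port
     untagged ethernet 1/2/1 ethernet 1/2/3 ethernet 2/2/1 ethernet 2/2/3
    vlan 302 name iSCSI_302 by port
     untagged ethernet 1/2/2 ethernet 1/2/4 ethernet 2/2/2 ethernet 2/2/4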

As Andy points out you could set this up as a DAS.

However, I am assuming you want the ability to add further ESXi hosts in the future as you need capacity and plan to license more switch ports as you go along, and that you may also be envisioning a day when you add an additional SAN for more datastores on these ESXi servers, and/or migrate them off the existing ME4xxx to a new SAN without having to experience downtime.

All of these are good reasons to set up the switched infrastructure now, at the beginning, instead of setting this up as a DAS.

~Q
andyalder commented:
Sorry, bit confused as to how you would wire it up, Ben. Just two of the four ports connected on each SAN controller? A total of 4 wires to the SAN and one to each host?
Ben Personick (Previously QCubed), Lead SaaS Infrastructure Engineer, commented:
Hey Andy,

No, he has more ports available, so he would still benefit from LACPing to the hosts for their networks. Does that help? Not sure where the confusion is.

Ben
andyalder commented:
>No, he has more ports available...

Could you explain how he has more ports available than the eight he said he had in the initial question?

Really need you to post a network diagram of this. The confusion may be that 8+8=16 in maths class, but two eight-port switches meshed together only give twelve usable ports.
Al Berto (Author) commented:
Each switch has 8 ports. Right now I have 2 ports on switch 1 connected to 2 ports on switch 2 for the stack, which leaves me 6 ports on each switch. That should allow me to use all 8 ports on the Dell EMC I am getting.
andyalder commented:
Do you have other switches for your LAN? What are your 10Gb server iSCSI cards connected to?

Having 80Gb between storage and network is not much use if your hosts are on 1Gb connections.
Al Berto (Author) commented:
Yes, the Ruckus switches will be used between the storage and the VMware server.
Al Berto (Author) commented:
And sorry, the switches are 24-port with 8 SFP+ ports.
andyalder commented:
So you have enough 10Gb ports left over to connect your hosts to the storage network? Or do you have 80Gbps of iSCSI SAN connected to servers with 1Gb NICs?
Al Berto (Author) commented:
The servers all have 1Gb NICs; no iSCSI adapters on the servers.
Ben Personick (Previously QCubed), Lead SaaS Infrastructure Engineer, commented:
Hey Andy, I'm on mobile and just saw the email message.

What I read in the messaging so far is that he has the 24-port 1G switches with 8x 10G SFP+ ports licensed, and some Dell servers with multi-port NICs in them to put ESXi on, so he's really only going to have 4 to 8 Gb of bandwidth to each of those servers.

He'll end up with a couple of spare 10G links he can use later to expand and add more switches.

My take is he's buying cheap 10G campus switches to build out the best bandwidth for the SAN he can, and to begin an expansion of the network into 10G, as if these were the eventual new converged core switches for the setup.

The switches he chose support up to 12 stack members, so this is actually a fairly pragmatic approach in a small, budget-conscious environment.

I think attaching this as a DAS to servers with 1G ports would be a waste and would offer no expandability to other hosts in the future.

Al Berto,

  let me know if that is a fair assessment or if I am way off base here.
andyalder commented:
If the servers are only 1Gb then one 10Gb cable from each controller to each switch is sufficient. I was assuming the servers had 10Gb NICs.