Looking for assistance configuring a Cisco 2960-CX for an iSCSI network


We are beginning a project to move away from a Hyper-V environment with locally attached (internal) storage on a single host server to a small VMware implementation: two Dell R730 servers attached to a Nimble Storage CS215 SAN. The hosts will connect to the SAN via iSCSI over a dedicated network of two Cisco 2960-CX switches.

My strengths definitely do NOT run towards Cisco configuration, although I can typically piece something together. So far, research and experience tell me I need to do the following on the 2960-CXs:

1.  Set the jumbo-frame MTU to 9000.
2.  Set spanning-tree mode to rapid-pvst.
3.  Set spanning-tree portfast on all SAN-facing switch ports.
4.  Set flow control to bidirectional.
5.  Disable storm control on all interfaces.
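Taken together, those five items might look something like the following on the switch side. This is only a sketch: the interface range and VLAN number are assumptions, the exact jumbo-MTU syntax varies by platform and IOS release, many Catalyst switches only support receive-side flow control (so "bidirectional" may not be achievable), and storm control is already off unless it was explicitly configured. Verify each command against the 2960-CX documentation.

```
! Global settings -- a jumbo MTU change typically requires a reload
system mtu jumbo 9000
spanning-tree mode rapid-pvst
!
! SAN-facing access ports (interface range and VLAN 100 are assumptions)
interface range GigabitEthernet0/1 - 8
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast
 flowcontrol receive desired
 no storm-control broadcast level
 no storm-control multicast level
```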

Does this sound reasonable?  Am I missing anything?

I've researched the Cisco site, and while there are many articles on Cisco and iSCSI, they seem tailored to much larger implementations than mine, so I get lost trying to pare them down to something that better reflects my environment.

Any thoughts are appreciated!

Scott Milner, Application Administrator, asked:

Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
All looks good, but you've not mentioned how you have designed your multipath iSCSI environment in VMware ESXi?

See my EE articles below: step-by-step tutorials with screenshots, also valid for 5.x and 6.x.

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client
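Condensed, the ESXi side of those tutorials comes down to a handful of esxcli commands. This is a sketch only: vSwitch1, vmk1/vmk2, and vmhba33 are placeholder names to substitute with your own, and for iSCSI port binding each vmkernel port should be tied to exactly one active uplink.

```
# Raise the MTU on the iSCSI vSwitch and its vmkernel ports
# (vSwitch1, vmk1 and vmk2 are placeholder names)
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
esxcli network ip interface set -i vmk2 -m 9000

# Bind both vmkernel ports to the software iSCSI adapter for multipathing
# (vmhba33 is a placeholder; check yours with "esxcli iscsi adapter list")
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```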

Scott Milner, Application Administrator (Author), commented:
Thanks Andrew.  I haven't begun configuring VMware yet, but your tutorials look incredibly helpful!

I'm installing version 6.0... do you think your tutorials will apply?  I'll be researching differences between 5.0 and 6.0 to be certain.

Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Yes, iSCSI configuration is the same in 5.x and 6.x (I did state that!).
BenHanson commented:
I would add VLAN separation to the list, and would note that you probably won't be too happy with your iSCSI performance if you aren't bonding 2-4 of those gigabit interfaces, so you would also need to set up port channels. What is your plan for the number of ports for native networking, iSCSI networking and vMotion networking?
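If you do go the port-channel route for the non-iSCSI traffic, the switch side might look roughly like the sketch below (port range, VLAN and channel-group number are assumptions). Note that an ESXi standard vSwitch only supports static EtherChannel with IP-hash teaming, not LACP, and that iSCSI multipathing is normally done with per-vmkernel port binding rather than a port channel.

```
! Static EtherChannel for VM-network uplinks (numbers are assumptions).
! ESXi standard vSwitches require "mode on" (static) with IP-hash teaming.
interface range GigabitEthernet0/1 - 2
 switchport mode access
 switchport access vlan 200
 channel-group 1 mode on
```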
Scott Milner, Application Administrator (Author), commented:
Thanks, BenHanson.

The Dell hosts will have 6 Gb NICs each (2 onboard and 4 on a Broadcom quad-port card whose model number I forget). There are 4 Gb ports per controller on the SAN. My initial thought was to use the onboard ports for networking and vMotion, and the 4 ports on the card for data. (This is similar to how my Hyper-V environment is set up.)
Scott Milner, Application Administrator (Author), commented:
Thanks, Andrew Hancock. Yes, you did state that they would work for 6.0. Sorry about that!
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Always make sure you double up your service ports (Management Network, iSCSI and vMotion).

As for your SAN, you will probably want to run two pairs to both switches.

It's best practice to have separate vMotion and iSCSI networks; 6 x 1 GbE NICs does not give you many options.

So you may have to look at VLANs for iSCSI, vMotion, Management Network.
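On the switch side, that VLAN separation could be sketched like this (the VLAN IDs and the trunk port number are arbitrary examples):

```
! Define one VLAN per traffic type (IDs are assumptions)
vlan 100
 name iSCSI
vlan 200
 name vMotion
vlan 300
 name Management
!
! Trunk any port that must carry more than one of them
interface GigabitEthernet0/9
 switchport mode trunk
 switchport trunk allowed vlan 100,200,300
```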

If this is a Dell R730, they usually have four onboard NICs and four on a quad-port card.

So you could have 8 NICs.
BenHanson commented:
I'm a bit confused on the port count:

2 x 2960-CX, which as far as I can see would be 8 ports + 2 uplinks each, so 20 ports total

SAN = 4 x 2 or 8 ports
2 x R730 @ 6 ports each for 12 ports

I don't see where you have room for uplinks.

Personally, if there is any way you can afford it, I would do some sort of stackable pair of switches and get your servers to 8 ports total. That would allow you to do link aggregation across chassis (though that's not totally necessary), and you would have full redundancy in your ESXi port layout.

You definitely want separation between vMotion and iSCSI. I wouldn't do iSCSI with fewer than 2 ports, and personally, unless your VMs push substantial network data, I would do 4 iSCSI, 1 vMotion and 1 native per host. You are already in a compromised position with 1 Gb iSCSI. The compromise with fewer than 4 iSCSI ports is day-in, day-out performance loss. The compromise with 1 vMotion port is slower migrations, which you probably don't do that often, and you aren't necessarily in a hurry when you do. The compromise with only 1 native port for VMs is just fault tolerance (again, assuming you're not building out mapping, video editing or image storage servers), which just means you need a plan for dealing with a single port failure.

So, are your switches 8 + 2?
Scott Milner, Application Administrator (Author), commented:
Sorry to you both for confusing things here.  I probably posted a bit ahead of myself, as I'm still researching/learning about the VMware implementation.

Our current Hyper-V host server is a Dell R720, which has 6 NICs (2 onboard, plus 4 on an Intel I350). Andrew, you are correct that the second host (which we are adding to the environment; it isn't a replacement for the first host) is an R730, which has 4 onboard NICs plus the 4-port add-on daughter board, for a total of 8.

Ben, the Cisco 2960-CX switches have 8 GbE ports plus 2 copper or 2 fiber uplink ports (I'm assuming that I can only have one or the other set of uplink ports enabled... more research is necessary).

My intention was to have the iSCSI connections dedicated to the 2960 switches, and to keep that traffic off my production network altogether. I've attached a Visio diagram of what I thought the physical network would look like, but as I read through your responses (thank you both for the detail!), I'm thinking that I'm off here. In any event, I know that I need to do more research.
Scott Milner, Application Administrator (Author), commented:
Thanks to you both for your responses! I used information from both, and we completed the initial phase of the VMware install/conversion from Hyper-V last evening.

FYI, we now have 8 gigabit ports available on each host, and are currently utilizing 5: 2 for the LAN connections, 1 for vMotion/management, and 2 for iSCSI to the SAN.

I'm going to split the management off from vMotion, and add two additional connections to my production network with the remaining ports in the coming days.