Kurt4949 asked:

Setting up an iSCSI network with two servers, two switches, and an MD3200i for Hyper-V

I've never set up an iSCSI network before, so I need some help.

My first question is: what's the best way to connect everything? Please see the attached PDF file where Dell shows three scenarios.

1. Direct-attached: This seems like the simplest option, as it eliminates the two switches. Do we lose any features like live migration?

2. I'm really not too sure what the main difference is between this and the last drawing. They only show one server in this drawing, but you can have more, correct?

3. This shows multiple servers, which is good because we have two. However, it looks like they use fewer NICs per server than the second drawing, which means it's less redundant?

If we do set it up like option 2 or 3, what kind of configuration would we need to do on the Dell 5424 switches that we have?


Thanks
Pages-from-Getting-Started-Guide.pdf
SOLUTION from PerisherIT

(This solution is only available to Experts Exchange members.)
Kurt4949 (Asker):

Hello, thanks for the reply.

I've seen those documents but I'm still a bit confused.

1. None of the documents talks about a configuration like the first diagram in the MD3200i getting started guide that I attached to the question. Is that a fully redundant/recommended way to configure iSCSI? We will only have two servers. It doesn't even call that iSCSI; it just says "direct-attached hosts", but that's not to be confused with the MD3200, which has SAS ports.

This seems like a good solution for us, as we only have two servers, but I want to make sure we aren't losing anything and that live migration will still work. As I mentioned, all the other documents show switches in between, so I can't find any info on this setup.

2. The second drawing only shows one server. Why? Can't you have more than one if they're connected like that? They also use four NICs on the server. Only two NICs are required to make it redundant, correct? Are the four NICs only for increased performance, and if so, is it necessary to have four or will two be sufficient?

I don't understand this statement in the hyper-vplanning.pdf you attached: "For the iSCSI-based Dell PowerVault MD3200i storage array, you must have two NICs for I/O to the storage array for each server." Why MUST you have two? I thought two was for redundancy, so one NIC should work?

3. Drawing 3 shows "up to 32 hosts", so how many hosts can drawing 2 have, since they only show one and don't say how many?
PerisherIT:

I am in a similar situation: we have two Hyper-V hosts connecting back to an MD3000i. Currently our config utilises one PowerConnect 5424. All the iSCSI interfaces from the SAN are connected to the switch, and I then connect the Hyper-V hosts' interfaces to the switch too.

However, before purchasing the switch my config was exactly like diagram one, where I was direct-attaching to the iSCSI interfaces. As long as your MPIO is configured correctly, this will work with no issues.

In relation to your second point, they are probably quoting best practice: if you want fail-over you should have two NICs connected and multi-pathing configured. Two NICs would be sufficient, I would imagine; you could configure them for Active/Passive fail-over or Active/Active for I/O performance.

32 hosts is the connection limitation of the MD3200i SAN.
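
A quick way to sanity-check that multi-pathing really has two working paths is to confirm every iSCSI portal on the array answers from the host. Below is a minimal sketch using made-up portal addresses; substitute the IPs actually configured on your MD3200i's iSCSI host ports.

    import platform
    import subprocess

    # Hypothetical example portal IPs -- replace with the addresses configured
    # on your array's iSCSI host ports (two ports per controller shown here).
    PORTALS = {
        "controller0-port0": "192.168.130.101",
        "controller1-port0": "192.168.130.102",
        "controller0-port1": "192.168.131.101",
        "controller1-port1": "192.168.131.102",
    }

    # Windows ping uses -n for the count, Linux uses -c.
    count_flag = "-n" if platform.system() == "Windows" else "-c"

    for name, ip in PORTALS.items():
        # One echo request per portal; a non-zero return code means no reply.
        result = subprocess.run(["ping", count_flag, "1", ip],
                                stdout=subprocess.DEVNULL)
        status = "reachable" if result.returncode == 0 else "NOT reachable"
        print(f"{name} ({ip}): {status}")

Running that before and after pulling one cable (or disabling one NIC) shows whether the surviving path still reaches both controllers, which is what fail-over actually depends on.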
Why did you decide to add the 5424 if it was working fine without any switch?  I'm just trying to figure out which configuration to go with.

On the 5424, is there any special configuration that needs to be done besides what's in this document? http://www.delltechcenter.com/page/Configuring+a+PowerConnect+5424+or+5448+Switch+for+use+with+an+iSCSI+storage+system
We purchased the 5424 because we expanded our iSCSI network. The MD3000i only has two iSCSI interfaces on each controller. We now have six servers connecting to the SAN.

My config for the switch is fairly similar to the one in that document. You need to enable jumbo frames on your MD3200i and on your hosts as well to see the maximum benefit.
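
One way to confirm jumbo frames are actually working end to end (host NIC, switch port, and array port all at MTU 9000) is a do-not-fragment ping with a large payload. A rough sketch, using a made-up portal address:

    import platform
    import subprocess

    # Hypothetical target: one iSCSI portal on the array -- substitute your own.
    TARGET = "192.168.130.101"

    # 9000-byte MTU minus 20-byte IP header and 8-byte ICMP header = 8972 bytes.
    PAYLOAD = "8972"

    if platform.system() == "Windows":
        # -f = don't fragment, -l = payload size, -n = count
        cmd = ["ping", TARGET, "-f", "-l", PAYLOAD, "-n", "1"]
    else:
        # -M do = don't fragment, -s = payload size, -c = count
        cmd = ["ping", TARGET, "-M", "do", "-s", PAYLOAD, "-c", "1"]

    ok = subprocess.run(cmd).returncode == 0
    print("Jumbo frames OK end to end" if ok else
          "Packet needed fragmenting -- something in the path is still at MTU 1500")

If the big ping fails but a normal ping works, one of the hops has not had jumbo frames enabled yet.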

Are you implementing one or two switches?
We have two 5424's, but after looking at the getting started guide it seems we could connect up to four servers to the MD3200i without any switches, so I'm wondering if we even need them now.
ASKER CERTIFIED SOLUTION

(This solution is only available to Experts Exchange members.)
I know this has been closed for a few months now, but we're in the same situation. We're looking to purchase an MD3200i, and Dell is recommending (2) 6224's @ $1,200 each...along with (2) R610's with (6) NICs, all with TOE, (2) with iSCSI offload.

We were thinking of just doing direct connect, but I've been reading that you can get high disk latency that way, since you lose the prioritization the switches would otherwise be doing. Also, if going with a switch, use an L2 if possible, as it doesn't have the overhead that L3 has, unless you need the management or VLANs.
I believe much cheaper switches would work just fine if you're using them exclusively for iSCSI traffic. There's really no configuration necessary. Since we used the 5424's, we had to disable all the features that come enabled by default, like iSCSI and VoIP prioritization, as described in the link I posted above. The iSCSI optimization feature is really only necessary if you're using the switches for other, non-iSCSI traffic. Our system is up and running with the 5424's and working just fine. In the future I'd probably stay away from Dell switches. Dell just sticks their label on them; they're made/designed by some other company in Taiwan or something. Various models have different CLI commands, making it difficult to learn them all. The GUI is junk too. That's just my opinion; other people seem to like them.
By the way, we did not create any such trunk between the switches as described in the accepted answer, and everything is running well. We also went with ESXi, not Hyper-V.
In addition to the added cost, I like not using switches, as it's one less failure point, but I'd consider them if they're going to increase performance. And yes, these would be dedicated, as I don't really see the cost benefit of using an L3 switch to put the iSCSI on its own VLAN when I would think a cheaper dedicated switch (or switches) would be the better-performing option. I read a post on the SpiceWorks forums last night that mentioned one of the guys who runs SpiceWorks testing a bunch of switches for iSCSI, and apparently the unmanaged Netgears far outperformed any of the higher-end L2/L3 iSCSI switches, as they have no overhead.

My question, though, is: can I team (2) NICs to (2) separate unmanaged switches (1 NIC to each switch) and still get 2 Gb of load-balanced bandwidth? Or am I just going to get 1 Gb of redundancy?
I'm not sure what the answer is. It seems everyone has a different way to do iSCSI, which makes it confusing to configure. We had a consultant come in and design the iSCSI network. We have two switches with two iSCSI NICs per server: the 1st NIC goes to switch 1, and the 2nd NIC goes to switch 2. I actually had to reboot both switches; I rebooted them one at a time and nothing went down. I'm very happy with the setup, and ESXi vMotion is amazing.

I think I'm going to set up a new system in a different office and this time try NFS rather than iSCSI. It's supposed to be easier to set up.
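
For anyone copying that two-switch layout, here is a minimal sketch of one common way to address it: one iSCSI subnet per switch, with each host NIC and one port from each controller on each subnet. All names and IPs below are made-up examples, not values taken from the setup described above.

    # Hypothetical addressing plan: one iSCSI subnet per physical switch.
    LAYOUT = {
        "switch1 (192.168.130.0/24)": {
            "host1-nic1": "192.168.130.11",
            "host2-nic1": "192.168.130.12",
            "ctrl0-port0": "192.168.130.101",
            "ctrl1-port0": "192.168.130.102",
        },
        "switch2 (192.168.131.0/24)": {
            "host1-nic2": "192.168.131.11",
            "host2-nic2": "192.168.131.12",
            "ctrl0-port1": "192.168.131.101",
            "ctrl1-port1": "192.168.131.102",
        },
    }

    # Each host gets multiple independent paths to the array (its NIC on each
    # switch reaches a port on each controller), so losing a switch, a NIC,
    # or a controller still leaves a working path for MPIO to fail over to.
    for switch, members in LAYOUT.items():
        print(switch)
        for name, ip in sorted(members.items()):
            print(f"  {name:12s} {ip}")

Because the two subnets stay separate, no trunk is needed between the two switches, which is also why rebooting them one at a time leaves the storage paths up.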
Hi everyone,
I know this post is closed, but I have a similar scenario and I wanted to get input from the experts and from the person who asked the question. Here is the link to my question; thank you in advance:

https://www.experts-exchange.com/questions/27618152/Virtual-environment-with-or-without-switches.html?anchorAnswerId=37693875#a37693875