MPIO or Network Teaming

hongedit

Asked:
In this scenario:

2 or 3 ESXi Servers
Single Managed Switch
Single SAN

Imagine we have 3 NICs for the DATA network and 3 NICs for the SAN network on each host.

What would the considerations be for Teaming vs. MPIO?

The reasoning for MPIO that was presented to me was this:

Teaming can send on all ports but only receive on one; MPIO can theoretically send and receive on each NIC at full duplex, as each has its own IP address.
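To make that concrete, here is a toy Python sketch of the difference (my own illustration, not ESXi or vendor code; the NIC names and IPs are made up): a team hashes each flow to exactly one uplink, so a single iSCSI session (one IP pair) is pinned to one NIC, while MPIO gives each NIC its own IP and session and rotates I/O across all of them.

```python
# Toy model of teaming vs. MPIO path selection (illustration only).
import hashlib

TEAM_NICS = ["vmnic0", "vmnic1", "vmnic2"]

def team_pick_nic(src_ip, dst_ip):
    """A team hashes the flow (here: src/dst IP) to one uplink, so a
    single iSCSI session (one IP pair) always rides the same NIC."""
    h = int(hashlib.md5(f"{src_ip}->{dst_ip}".encode()).hexdigest(), 16)
    return TEAM_NICS[h % len(TEAM_NICS)]

# One host IP to one SAN IP is one flow, so it's the same NIC every time:
for _ in range(3):
    print("team:", team_pick_nic("10.0.0.11", "10.0.0.100"))

# MPIO: each NIC has its own IP and its own iSCSI session; round robin
# spreads consecutive I/Os across all of them.
MPIO_PATHS = [("10.0.0.11", "vmnic0"), ("10.0.0.12", "vmnic1"),
              ("10.0.0.13", "vmnic2")]

def mpio_pick_path(io_number):
    return MPIO_PATHS[io_number % len(MPIO_PATHS)]

for io in range(6):
    print("mpio:", io, mpio_pick_path(io))
```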

Now I'm not convinced that is all there is to it...

What are the Pros and Cons to each, and how are they applied?

Thoughts?

Commented:
Can you add hardware to your SAN? If so, go 10GbE copper, as that's going to be a major player in the next few years, especially for virtualization. You need a spare slot in your server, but that will make your SAN the best it can be. 10GbE HBAs are about £400 at the moment, and it's usually a £300 option on new servers, so it's starting to become a viable choice, especially for storage.

Commented:
See http://www.ms4u.info/2010/12/should-i-do-teaming-or-mpio-on-iscsi.html for why you cannot use MPIO or teaming for storage. That's why I am saying go 10GbE.

Author

Commented:
Let's pretend 10GbE is out of the question.

Author

Commented:
Sorry, your link seems to be specific to Hyper-V. With VMware, I know teaming does work, as I have it set up like that right now.

A colleague is setting up another similar install and we were just debating MPIO vs Teaming.

kevinhsieh, Network Engineer

Commented:
MPIO is only available with VMware Enterprise or Enterprise Plus licenses, as I understand it, so that might eliminate it from consideration right there.

The reality is that my SAN has 6 network connections, 6 Hyper-V hosts, and maybe 60+ VMs, and I rarely see any of the gigabit ports above 10% utilization, so this is probably more of a theoretical than a practical discussion.
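Back-of-envelope, using the rough figures above (illustrative numbers, not a measurement):

```python
# Rough utilization math for the setup described above (assumed figures).
ports = 6            # gigabit ports on the SAN
link_gbps = 1.0      # per-port line rate
utilization = 0.10   # "rarely above 10%"

aggregate = ports * link_gbps      # 6.0 Gb/s of raw capacity
actual = aggregate * utilization   # ~0.6 Gb/s actually moving

print(f"capacity {aggregate:.1f} Gb/s, observed ~{actual:.1f} Gb/s")
# ~0.6 Gb/s would nearly fit on a single gig link, so at this load the
# teaming-vs-MPIO choice changes very little in practice.
```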

Commented:
But won't the SAN need MPIO as well?

Commented:
As it's a moot point if it doesn't.
kevinhsieh, Network Engineer

Commented:
You have to check with your SAN vendor to see if it supports MPIO. I know EqualLogic does out of the box.

Author

Commented:
OK, so let's say, theoretically, that everything supports MPIO (licenses, hardware) and that traffic is substantial enough to see a difference (if there is one).

Commented:
That's why I think 10GbE is going to be massive with VMware, as it allows a fast SAN with no teaming if you don't want it. The VMware world is taking stock of what can be done with 10GbE, as that's the future: all current servers have an option for a 10GbE NIC, and the price has dropped so much in the last few years that even the SAN world will be starting to use them as well.

Commented:
Why doesn't EE have an edit option?

Author

Commented:
I know how awesome 10GbE is... but can we please forget about that bit?

It's not part of the equation right now!

Commented:
Not a problem, OK.
It will be major in, say, 2 years, I expect.
kevinhsieh, Network Engineer
Commented:
To be really sure, you would need to test with your own rig. MPIO is probably theoretically better, but it may not be in practice, because it depends on whether you can even fill a single 1 Gb pipe, and on whether the penalty when MPIO switches from one interface to another is significant. It's probably also dependent on the mix of random vs. sequential I/O, and how many LUNs are involved. YMMV.
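If you want to reason about that switch penalty before building a test rig, here is a naive Python model of it (all numbers are assumptions, not measurements):

```python
# Naive model: each I/O takes 1/base_iops seconds of service time on a path,
# and moving to a different path adds a fixed penalty. Assumed numbers only.

def effective_iops(base_iops, paths, switch_penalty_us, ios_per_switch=1):
    service_s = 1.0 / base_iops
    switch_s = (switch_penalty_us / 1e6) / ios_per_switch
    per_path = 1.0 / (service_s + switch_s)
    return per_path * paths  # paths are assumed to work in parallel

print(f"1 path, no switching:    {effective_iops(8000, 1, 0):7.0f} IOPS")
print(f"3 paths, 5 us penalty:   {effective_iops(8000, 3, 5):7.0f} IOPS")
print(f"3 paths, 200 us penalty: {effective_iops(8000, 3, 200):7.0f} IOPS")
# With a cheap switch, 3 paths nearly triple throughput; with a 200 us
# penalty they only modestly beat a single path. Hence: test your own rig.
```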
