TEAMED NIC and Microsoft NLB resulting in server reboots


I have two Exchange 2013 servers with the Client Access role installed on Windows 2012.  Each server has two physical network connections which I've set up in a TEAM to spread the load.

In order to load-balance them, I installed Microsoft Network Load Balancing on both servers and created a virtual cluster node in multicast mode using the TEAM virtual network connection.
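For context, in multicast mode Windows NLB replaces the adapter's MAC with a cluster multicast MAC derived from the cluster IP: 03-BF followed by the four IP octets in hex. A quick sketch of the derivation (10.0.0.100 is just an example address, not our actual cluster IP):

```python
def nlb_multicast_mac(cluster_ip: str) -> str:
    """Build the multicast-mode NLB cluster MAC: 03-BF followed by the
    cluster IP's four octets rendered as hex bytes."""
    octets = [int(part) for part in cluster_ip.split(".")]
    return "-".join(f"{b:02X}" for b in [0x03, 0xBF] + octets)

# Hypothetical cluster IP used for illustration only
print(nlb_multicast_mac("10.0.0.100"))  # 03-BF-0A-00-00-64
```

So both the team and NLB are rewriting MAC-level behaviour on the same adapters, which is relevant to the problem below.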

The problem I'm seeing is that every 2-4 hours each server reboots, apparently with a blue screen.  Well, I assume it's a BSOD, as the server just stops responding, and by the time I get to the server room it's booting up again.

I removed the TEAM and set up the CAS servers to use a single network connection, and after that the servers didn't reboot once.

So I'm not sure where to go.  The minidump says "Probably caused by : NETIO.SYS ( NETIO+1d532 )".

I have updated the NIC's driver and firmware, but that hasn't resolved the issue.  I presumed it was okay to use a TEAMED NIC in an NLB cluster.

Guidance on the cause and resolution would be great.

Thank you
Craig Beck Commented:
Well, there are different approaches to achieving what you want now.  NLB is redundancy in its own right, so it's not a requirement to have multiple NICs per host.

If you look around, you won't find any docs stating that NLB and NIC teaming together are supported by Microsoft (at least I can't find any).  You will find plenty, though, that mention disabling teaming if you experience problems.

If I wanted to use both teamed NICs (due to low bandwidth) and NLB, I'd look at virtualising the server.  From past experience that has the best chance of stability, as the team and NLB then don't use the same drivers.

Things like this make me worry...
Craig Beck Commented:
So it's a driver issue.  Are you using the latest?  If so, are there any release notes which mention previous issues or recent fixes?

If it is the most recent driver, try going back a version or two if Server 2012 supports it.
benowens (Author) Commented:
Do you think it's definitely a driver issue?  I've updated the server to use the latest drivers from the HP website and upgraded the NIC firmware, then rebooted.

I'm just getting very odd behaviour at the moment.  After I break up the TEAM, I can't set a default gateway on the NIC.  I have to reboot, and then the default gateway appears.

I'm not sure whether to look at downgrading the servers to Windows 2008 R2, or even at moving to Windows 2012 R2.  What do you think?

I believe the NIC is a Broadcom but you download the driver through the HP website.  I'll have a look at the Broadcom website tomorrow.

Craig Beck Commented:
I would put money on it.  Trying 2008 R2 might be a good idea.
benowens (Author) Commented:
Cool.  Okay, but is using a teamed card with NLB generally considered okay?
Craig Beck Commented:
I personally wouldn't do NLB with teamed NICs (for fear of this very reason more than anything due to past experiences).
benowens (Author) Commented:
Oh really?  But on that basis, anyone using NLB on Windows 2012 can only utilise one physical 1Gbps connection, which isn't exactly making the most of the throughput available.

I tried using NLB and adding the network cards as separate entities, but as soon as you go to add a network card from a host that is already part of the cluster, it says something along the lines of 'host is already part of the cluster'.

Surely not everyone who uses NLB is running on a single NIC connection?
Simon Butler (Sembee), Consultant, Commented:
The Exchange product team doesn't recommend the use of Windows NLB, and you will find very few others who do. If you want load balancing, use a load balancer outside of the servers. Much more reliable than the Windows NLB junk.

benowens (Author) Commented:
Yes, I've read a lot about session affinity no longer being required, which makes Windows NLB a real option now.  However, I've obviously made an incorrect assumption in thinking that teaming and NLB would work together on the same OS.

Agreed, I can't find anything giving the thumbs up to Windows NLB with teamed NICs.

Essentially, as you've said, it's the two levels of MAC virtualisation, at TEAM level and at NLB level, both run by the OS.

Moving forward with two virtualised CAS servers isn't a possibility, as the hardware has already been purchased, so we really need to push forward with that.  However, I can see the sense in a virtualised NIC, as it would effectively give better throughput while appearing as a single MAC/NIC at OS level, which should avoid the conflict with NLB.

So for now I have pushed forward with the two CAS servers, each using one 1Gbps connection, load-balanced for receiving email and client connections going forward.

They are moving from a single Exchange 2010 server which has two NICs in a TEAM, and we had no complaints on throughput there, so moving to two CAS servers with a single NIC each should suffice.

It's annoying that I can't make the most of the throughput of those other three NICs on each CAS server, though... any thoughts on that?

The option we mused over here is installing a 10Gbps card in each CAS server to use for the NLB; again, your thoughts would be appreciated.
benowens (Author) Commented:
There seems to be no rock-solid documentation for or against this, but the fact is that without teamed NICs under Windows NLB, the servers don't crash.  So we have progressed with single-NIC, 50/50 load-balanced, multicast Windows NLB.