IIS 7 - Server 2008 Network Load Balancing vs Hardware Solution

We currently have a single IIS front-end server (our application server, which runs IIS and connects to the back-end SQL Server), plus a duplicate configuration on a second server kept as a backup.

We are looking to add high availability (HA) in several areas, and I have researched a couple of attractive options. The plan is to add another physical server and load balance so that if one server fails, the other keeps IIS functional until we can bring the failed server back up.

Here are the two options I am looking at:

1) Windows Network Load Balancing (NLB) built into Server 2008 to balance the two servers.
2) A hardware load-balancing appliance to perform the load balancing.
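For reference on option 1: if the servers end up on Server 2008 R2, the NLB cluster can be scripted with the NetworkLoadBalancingClusters PowerShell module instead of clicking through NLB Manager. The sketch below is only illustrative; the interface name, host name, cluster IP, and operation mode are placeholders, and exact parameters vary by OS version (on plain Server 2008 you would use NLB Manager or the nlb.exe command line instead):

```powershell
# Sketch only: all names and addresses are example placeholders.
Import-Module NetworkLoadBalancingClusters

# Create the cluster on the first IIS host, bound to its LAN adapter.
New-NlbCluster -InterfaceName "LAN" -ClusterName "iis-cluster" `
    -ClusterPrimaryIP 192.168.1.50 -OperationMode Multicast

# Join the second IIS host to the cluster.
Add-NlbClusterNode -InterfaceName "LAN" -NewNodeName "IIS02" -NewNodeInterface "LAN"

# Balance only web traffic, with single-client affinity for sticky sessions.
Get-NlbClusterPortRule | Remove-NlbClusterPortRule -Force
Add-NlbClusterPortRule -StartPort 80 -EndPort 443 -Protocol TCP -Affinity Single
```

Either way, it is worth rehearsing a failover by draining one node (e.g. Stop-NlbClusterNode with -Drain) and confirming the cluster IP still answers.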

My question is: is one option better than the other? I know hardware load balancing is more expensive, but it may be worth it to avoid maintaining the NLB configuration on the two servers. I'm just looking for expert advice here. Also, if a hardware load balancer is the better way to go, could someone recommend which devices perform best with the least chance of failure?

Paul Wolff asked:
It's not a matter of "better" - each option has strengths and weaknesses.

In your case, I would say maintaining Server 2008 NLB would require significantly less administrative effort than adding hardware load balancers and keeping the web server configuration in sync.

It would also keep the level of complexity down - adding hardware load balancers adds a lot of extra network complexity as well.  If you don't have a network engineer who really understands layer 2 through 7 packet flows, you will struggle to make things work effectively.

If you want to learn more about hardware load balancers, F5 (http://www.f5.com) is the "gold standard". I like A10 (http://www.a10networks.com) a lot and have had good success with their products; they compete directly with the big players like F5, but they're a fairly new company, so their technology is probably not quite as mature. Kemp (http://www.kemptechnologies.com/) is another fairly new company and probably the best from a price perspective. I've had good luck with their support when I needed it.

I've used all three companies' products and would be willing to use any of them again. Get them competing with each other and the pricing will be much better.
I used to manage a platform consisting of multiple servers behind a hardware load balancer. We also had a redundant platform offsite, including an extra hardware LB. One thing is for sure: it all created a great deal of complexity. You need to make sure you have enough knowledgeable staff to manage all the equipment, networking, programming, etc. If you don't have that staff, be sure to include it as part of the cost of the project.

One thing to keep in mind is that when you introduce a load balancer of whatever sort, if you have just one, you still have a single point of failure. So you will want to factor in the cost of accounting for that as well.
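To make that point concrete: whatever sits in front of the servers (a hardware appliance, NLB, or a monitoring script) ultimately relies on a health probe along the lines of the sketch below. This is a generic illustration, not part of the setup discussed above; it stands up a throwaway local HTTP server to play the role of one IIS backend.

```python
import http.server
import threading
import urllib.request
import urllib.error

def is_healthy(url, timeout=2.0):
    """Return True if the backend answers the probe with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Stand-in for one IIS backend: a throwaway local HTTP server on an
# ephemeral port, serving in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

up = is_healthy(f"http://127.0.0.1:{port}/")      # backend running
server.shutdown()
server.server_close()
down = is_healthy(f"http://127.0.0.1:{port}/")    # backend "failed"
print(up, down)  # True False
```

The catch, as noted above, is that if the single box running the probe dies, detection dies with it, which is why hardware balancers are normally deployed in pairs.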
ee_reach makes a good point...  

I would only deploy hardware load balancers in a redundant configuration, which also implies redundant switching, firewalls, routing, and multiple ISPs. If you don't have that level of redundancy in the network, the case for the Server NLB solution is even stronger.
Paul Wolff (author) commented:
We currently are in a data center with redundant circuits (HSRP), power, etc. So the infrastructure is there. Mostly looking to avoid a single server failure.
Does that change anything with regard to rscottvan's advice on the level of networking skill required? Our staff can handle various networking topologies, though none of us are CCNAs or anything. We run various configs on Cisco gear and work with config files in various capacities, but anything deeper than running packet monitors may deter us. Good to know that Server 2008 NLB may be easier to manage. We have run Server 2008 NLB on some of our SharePoint installs, but I was not sure about managing a hardware load balancer.

Let me know if you have any additional thoughts on that. I am going to split points, leaning more toward rscottvan, but you both bring up good points, so I will reward you both.

rscottvan will probably be able to speak to the hardware LB details more thoroughly. I came to own our platform as an architect, server programmer, and project manager, among other things, but never had to work hands-on with the hardware LB itself.

However, I do recall my network guys having to make modifications to the hardware LB, and whether they had to write actual programs or just scripts, those changes also impacted the work of the programmers on the server team.

Also, regarding complexity: every new item added to the chain of hardware means additional testing complexity.

For example, my end-to-end test plans had to test failure and failover at every step of the way. We were a Fortune 500 company and always had about 25 people involved in the actual testing, a week-long event that we ran twice a year. In addition to the 25 people involved in the testing itself, we had about 50 people involved in reviewing the test plans.

This is aside from any testing involving power, ISP redundancy, etc.  

I don't mean to discourage you from adding the redundancy you are considering; I just wanted to mention the extra work required for the test plan so you can budget for that as well.

Hope this helps