George Kesler
Best high availability solution for windows services - Web Server
I have the opportunity to re-architect many of the services on our network while virtualizing the servers (ESX 4). My goal is to provide 100% redundant servers for each service. I'll ask a separate question for each service.
WWW server requirements:
- scale performance among 2 or more nodes
- have the ability to take any single node off line for maintenance without affecting users
- simple to implement and manage
We have a SonicWall NSA 3500; if it has the ability to load balance two web servers, I would most likely take that as the simplest solution.
Currently using IIS 7 with shared configuration on a DFS share. Not exactly simple...
ASKER
NLB is what I have now. It starts simple but gets complicated. With IIS 7 you can use a network share for the sites' content and configuration (shared by all nodes). Now you need to make that share redundant as well; DFS is the way to go. Oh, now you also need a domain controller for DFS to work. Wait a minute, the domain controller is a single point of failure, so make that two.
So instead of two web servers you end up with six servers in a domain configuration, all needing updates, maintenance, etc.
You are right...
Maybe just simple DNS round robin is enough in your case?
Round robin is even worse than NLB: if a node is down, traffic is still sent to it, and it doesn't support affinity, so you can run into session-management problems.
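The difference is easy to sketch. Here is a minimal, hypothetical Python illustration (the node names are made up, and the health probe is reduced to a plain callback) of why a balancer that probes node health behaves better than naive DNS round robin:

```python
import itertools

class RoundRobinPool:
    """Plain DNS-style round robin: hands out every node, dead or alive."""
    def __init__(self, nodes):
        self.nodes = nodes
        self._cycle = itertools.cycle(nodes)

    def next_node(self):
        return next(self._cycle)

class HealthCheckedPool(RoundRobinPool):
    """Balancer-style pool: skips any node whose health probe fails."""
    def __init__(self, nodes, is_healthy):
        super().__init__(nodes)
        self._is_healthy = is_healthy

    def next_node(self):
        for _ in range(len(self.nodes)):
            node = super().next_node()
            if self._is_healthy(node):
                return node
        raise RuntimeError("no healthy nodes")

# web2 is down: plain round robin keeps sending traffic to it,
# while the health-checked pool never does.
down = {"web2"}
plain = RoundRobinPool(["web1", "web2"])
checked = HealthCheckedPool(["web1", "web2"], lambda n: n not in down)
print([plain.next_node() for _ in range(4)])    # ['web1', 'web2', 'web1', 'web2']
print([checked.next_node() for _ in range(4)])  # ['web1', 'web1', 'web1', 'web1']
```

DNS round robin behaves like the first pool: the resolver has no idea a node is dead, so half the clients still land on it until the record is pulled and TTLs expire.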
You can go overboard with redundancy. If you dig deep enough it would take a fortune to eliminate all single points of failure. You have to balance the risk.
In some cases design decisions create single points of failure. For example, I do NOT use a Windows share for web site content and configuration across multiple servers because it creates a single point of failure! By making it easy to deploy a new web site you are adding risk.
In fact, if I have two web servers that are using NLB and I am upgrading the web application I take one node offline in NLB. I then upgrade the web application on that node. I then test the web application to confirm the configuration and deployment were successful by bypassing the NLB and going directly to the server.
When it passes I then switch the NLB to the new web application and upgrade the second node.
Even with lots of web servers you can automate deployment and avoid a single point of failure.
So when considering your architecture, you might have to make configuration-management decisions that don't create single points of failure.
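The "automate deployment instead of sharing storage" idea can be sketched in a few lines of Python (the paths and file names here are hypothetical): a release is copied into each node's own local docroot, so serving traffic never depends on a shared file server.

```python
import pathlib
import shutil
import tempfile

def deploy(release_dir, node_docroots):
    """Copy one release into every node's local docroot, so no node
    depends on shared storage at request time."""
    release_dir = pathlib.Path(release_dir)
    for docroot in node_docroots:
        target = pathlib.Path(docroot) / release_dir.name
        if target.exists():
            shutil.rmtree(target)          # replace any previous copy
        shutil.copytree(release_dir, target)

# Hypothetical layout: one release pushed to two node docroots.
tmp = pathlib.Path(tempfile.mkdtemp())
release = tmp / "release-1.0"
release.mkdir()
(release / "default.htm").write_text("<html>hello</html>")
docroots = [tmp / "web1", tmp / "web2"]
for d in docroots:
    d.mkdir()
deploy(release, docroots)
# each node now serves from its own independent copy
```

Losing the deployment script is an inconvenience; losing a shared content store takes every node down at once.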
It all depends on priorities...
I suggested NLB in my first answer, but it looks too expensive for this scenario, so round robin may be useful here. Of course it has serious disadvantages, but if you know about them, you can live with it.
ASKER
anyone tried the sonicwall solution?
http://www.sonicwall.com/us/support/230_7504.html
I did not try it, but from your document it looks pretty nice.
You have some kind of configurable affinity and active probing, and it should be 100% invisible to users.
So I guess it will work... ;)
If you want high availability, don't use a software load balancer like NLB; use a hardware load-balancing solution.
With a hardware solution you can easily take a server out of the stream in the web farm.
Look for a layer 7 capable load balancer.
TedBilly you said,
" For example, I do NOT use a Windows share for web site content and configuration across multiple servers because it creates a single point of failure! "
Generally true, but very large setups keep the content/shared config on shared storage (SAN/NAS), so you still get redundancy. Keeping hundreds of servers in one farm in sync is tricky no matter how you do it; mass-deployment tools and shared storage are the two solutions.
ASKER
I'll try the SonicWall as a poor man's hardware load balancer. My setup is not complicated, and NLB seems to be more of a problem than a solution (we had multicast issues with the firewall; if a node is acting up, some users are not able to get on the website while others have no issues).
Yeah, it should cope better than NLB as a poor man's hardware load balancer. :)
However, from the sound of the scale of your infrastructure, I think it would be wise to invest in a cheap hardware load balancer.
ASKER
for example?
Something like this from F5:
http://www.f5.com/products/big-ip/product-modules/local-traffic-manager.html
ASKER
Hard to find prices, but somehow I think "inexpensive" for this kind of stuff means ~$20K.
I am not a networking guy, but there are alternatives:
http://www.kemptechnologies.com/uk/server-load-balancing-appliances/product-matrix.html?lang=uk&jkId=8a8ae4cc21fadc22012237d639331eb3&jt=1&jadid=3886129452&js=1&jk=f5%20big%20ip&jsid=15702&jmt=1&&gclid=CMCm8IuepKICFVWY2AodA2pOwg
http://www.barracudanetworks.com/ns/products/balancer_overview.php
The cheap entry-level units, which are all you need unless you get a really large amount of traffic (at which point a software LB will not function either), are about $3K USD or less.
Big players like F5 and Cisco are likely to be more; getting online prices out of these companies is like getting blood out of a stone at times, but I would expect maybe double or so in cost.
ASKER
Over the next week or so I will test how it works on my backup Sonicwall before introducing yet another device.
So far the consensus for small web sites seems to be: keep it simple, stay away from NLB. Shared storage depends on what is available; I may be able to place it on a SAN volume.
Any new ideas?
Actually I'll ask a new question about using the ESX features.
Hi guys
Actually, I think keeping it simple means using Windows NLB. It's very easy to configure, and because it's OS-aware it will respond better to server OS issues than a hardware solution.
At my company, where money isn't an issue, I always choose Windows NLB over hardware for our intranet. I only choose hardware for very large web farms (5+ web servers).
L7Tech - If you are still looking for a load balancing solution and the F5 Big-IP are out of your price range take a look at the Coyote Point E250. These go for under $2K.
http://www.coyotepoint.com/products/e250.php
Ted -
"all my internal farms using Windows NLB. ... I've never had a problem due to Windows NLB. "
I don't know much about your setup, but you should play the lottery, because you are very lucky.
"The hardware cannot detect if the web application is working"
Not so; any hardware load balancer worth an ounce has the ability not only to monitor the health of the application but also the performance of the server, and to take both into consideration.
The fact that you are having to use WMI and PowerShell to manage and script taking nodes out of service shows that you are already going outside of simple NLB and making things more complex.
@Cloz: I mentioned PowerShell and WMI as possible choices to help manage NLB. I actually don't use them for that and haven't had to yet. I'm not lucky at all, and am actually risk averse. Considering that NLB is free with my Windows servers, I'm 100% satisfied with the solution.
You're lucky, because NLB doesn't come close to providing an adequate industry-standard SLA. If it did, the majority of the mission-critical shops out there would be deploying it as a cost-effective load-balancing and high-availability solution. If you were risk averse, you would prefer a low-risk solution, not just a cheap one.
Whether or not you use Powershell and WMI, the fact is you are acknowledging they can be used to manage NLB, which is the point. NLB doesn’t scale very well and becomes complex the larger an enterprise’s deployment gets.
We're not even discussing performance yet: unlike hardware load balancers, NLB's performance begins to degrade as the number of nodes in the farm goes up. That's not even mentioning the amount of network chatter created when using NLB, and how it grows as the number of servers in the farm increases. NLB requires that you configure your switch to act as a hub; you're not a network guy, but this is a bad thing in networking, as it increases the broadcast domain.
But L7Tech has said their implementation has been using NLB and it's become unviable, so it's moot.
ASKER
I haven't learned much, except that there isn't an easy solution...
Look at http://technet.microsoft.com/en-us/library/cc754833(WS.10).aspx for some detailed explanations.