MS SQL 2005 - HA with Load balancing

Posted on 2009-12-25
Last Modified: 2012-05-08

I have a question about running a cluster of MS SQL 2005 servers in both HA and load-balancing mode using Microsoft's native technologies. I had understood that this could be achieved on physical servers using Microsoft Cluster Services (MSCS) together with NLB.

I currently have a cluster using MSCS on Windows 2003 Enterprise servers, configured to provide an Active/Passive SQL deployment. I need to convert this into an N+1 failover/load-balancing arrangement where both of my current nodes can run in Active mode, and when budget permits I will bring a third server into the cluster to act as a standby.

How do I achieve this without losing the data that currently resides on my A/P SQL cluster?
Question by:fahim
    LVL 13

    Expert Comment

    I am not sure I understand the question. Are you asking if you can use SQL Server in an "Active-Active" cluster scenario where both nodes are serving the same need and thereby providing load balancing?

    The simple answer is, this is not really possible as requested.

    The longer answer is:

    Yes, you can have what Microsoft calls multi-instance clusters, where you have more than one active node in a cluster at a time and more than one instance of SQL Server running at the same time in that cluster.

    This is not load balancing, however. You have two separate instances running on different nodes. For instance, you could have your financials back end live on one node and your reporting on the other. In the event of a node failure, the instances either both live on the surviving node or fail over to a common third, passive node.
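As a rough sketch of that arrangement (node and instance names here are invented for illustration), the multi-instance cluster and its failure mode can be modeled like this:

```python
# Hypothetical model of a two-node multi-instance SQL Server cluster.
# Each instance normally runs on its own node; on node failure, its
# instances move to a surviving node, which then carries both workloads.

cluster = {
    "NODE1": ["SQLFIN"],   # financials instance
    "NODE2": ["SQLRPT"],   # reporting instance
}

def fail_over(cluster, failed_node, target_node):
    """Move every instance off the failed node onto the target node."""
    moved = cluster.pop(failed_node, [])
    cluster.setdefault(target_node, []).extend(moved)
    return cluster

fail_over(cluster, "NODE1", "NODE2")
print(cluster)  # {'NODE2': ['SQLRPT', 'SQLFIN']} -- one node now runs both
```

This is why the SLA note mentioned later matters: after a failover, the surviving node is doing twice the work, which is availability, not load balancing.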

    If you wanted to do some sort of load balancing you would really have to look into complex architectural or design questions. Perhaps one function lives on one instance and another function on the other instance and the application middle-tier pieces know this, etc.
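To make that concrete, here is a hypothetical middle-tier routing table (server and instance names are made up): the application, not SQL Server, decides which cluster instance serves each functional area.

```python
# Hypothetical application-side partitioning: each functional area is
# pinned to a named instance on a different active cluster node.

ROUTES = {
    "finance":   "SQLCLUS01\\FIN",  # named instance on active node 1
    "reporting": "SQLCLUS01\\RPT",  # named instance on active node 2
}

def connection_string(function):
    """Return the connection target for a functional area (illustrative only)."""
    server = ROUTES[function]
    return f"Server={server};Database={function};Integrated Security=SSPI"

print(connection_string("reporting"))
# -> Server=SQLCLUS01\RPT;Database=reporting;Integrated Security=SSPI
```

The "load balancing" here is static and manual: it only helps if the two functions generate comparable load, which is exactly the architectural question the middle tier has to answer.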

    The quick answer is that SQL clusters are for availability, not for load balancing or performance needs. If I misunderstood the question, please let me know and I am sure I or someone else can give a better answer.

    Author Comment

    No, Mike, you made your point quite correctly. The concern arose over scalability and the need to compare Microsoft SQL Server's offering with Oracle's RAC.

    I currently have multiple SQL databases spread across low-performance servers, and I wanted to consolidate them all onto one cluster while taking future scalability needs into consideration. If I deploy a SQL cluster on high-performance, server-grade machines today, then tomorrow, as the business grows, I might need to keep adding more and more clusters. Someone told me (and that is why I posted this question) that there is a way to have a 7+1 or N+1 node cluster in SQL Server where, if and when we want to scale performance, we just keep adding nodes and that is all it takes.

    I guess it was a case of misplaced and incorrect information then.

    But the question still remains: with more and more applications coming into my datacenter that involve SQL Server services, what is the best way to architect the design?
    LVL 13

    Accepted Solution

    fahim - What you describe is something we will likely see on the horizon. With SQL Azure we already see a model of horizontal scale, and someday, through "private clouds," that technology will likely be available to us in our own enterprises. Yes, RAC allows you to add nodes if you are hitting memory or CPU limits, but that is not typically where the bottleneck is in a database application: you still have I/O access issues and performance bottlenecks down at the disks. Not to mention the cost difference between SQL Server and RAC installations. I am not saying RAC is bad, but it is not always all it is cracked up to be. There is a document here from the MS perspective, with marketing and sales info thrown in, that goes into some of the technical differences:

    The question remaining on the best way to architect the design is one with several answers, all right in different situations.

    For me, I try to balance performance, physical server sprawl, instance sprawl and the cost of rack space when deciding where a new database need goes.

    A rough idea of the plan I use is something like this:

    1.) If it is a huge application with performance implications (the data warehouse's main production instance, SAP, etc.), it typically gets its own physical server. Sometimes it gets its own cluster, or it is a node in an existing cluster (with either a third-party failover or an assumption and SLA note documenting that, in the event of a failure, node n is now doing twice as much work and performance will be degraded until the original node of application x is restored).

    2.) If it is something that plays well with other databases, and the vendor doesn't have huge security holes and missteps requiring me to grant higher privileges than I want, I have a couple of shared clusters for miscellaneous apps: monitoring tools with low DB impact, smaller finance apps, departmental apps, etc.

    3.) If it is a one-off that for some reason must live on its own, I run more than one instance on the active node of one of those clusters. There I'll put something that plays alright from a performance standpoint but that, for some reason, I want to keep in its own SQL instance.

    4.) For one-offs that can't share an instance, need to go up quickly, or have unknown performance characteristics, I'll spin up a virtual machine. With VMotion or Hyper-V's live migration I can move it to a different host if its performance profile grows, and I can always migrate it to one of the physical servers if need be, without too much headache.
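The four rules above can be sketched as a simple decision function (the inputs and the placement labels are invented for illustration, not part of any Microsoft tooling):

```python
def place_database(huge_perf_app, plays_well_shared, needs_own_instance):
    """Rough placement logic mirroring rules 1-4 above (illustrative only)."""
    if huge_perf_app:
        # Rule 1: dedicated hardware, possibly its own cluster.
        return "dedicated physical server or cluster node"
    if plays_well_shared:
        # Rule 2: shared cluster, shared instance with other well-behaved apps.
        return "shared cluster, shared instance"
    if needs_own_instance:
        # Rule 3: shared cluster hardware, but a separate SQL instance.
        return "shared cluster, separate instance on active node"
    # Rule 4: unknown or one-off workloads start virtual and migrate later.
    return "virtual machine (migrate later if load grows)"

print(place_database(False, False, True))
# -> shared cluster, separate instance on active node
```

In practice the inputs are judgment calls (vendor security posture, expected I/O load), but writing the rules down this way makes the precedence explicit: performance needs trump consolidation, and virtualization is the default for unknowns.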

    For most of my heavy-hitting performance environments, I've been able to handle the load with the server configurations we originally planned, perhaps increasing RAM or spindle count behind the scenes. Processor-bound workloads do happen but, in my experience, not as frequently as disk-bound or memory-bound ones.
