Considerations for converting a shared quorum Windows cluster for DR/HA

I was supporting a handful of two-node Windows Server 2008 (non-R2) clusters with shared quorum disks. Some ran SQL Server 2008 and some ran just a vendor application we supported. For the purposes of this article it doesn't really matter which, so we'll assume we're talking about SQL Server 2008.

The existing configuration was a two-node Active/Passive SQL Server 2008 cluster on Windows Server 2008 using shared EMC storage, with a quorum disk (Q:\) to hold the tie-breaking vote. The nodes also had a private NIC (hard-wired crossover cable) and a public NIC on the 192.168.100.0/24 subnet. This is a high-availability (HA) environment.

The company purchased a new datacenter, and for disaster recovery (DR) purposes we wanted to extend the cluster down to the new site. This would give us a cluster with both HA and DR (i.e., able to fail over almost immediately within the primary site, and also able to come up at the new site if the primary datacenter disappeared).

There are several decision points when it comes to how you would extend your cluster to the new site:
1. Will you need to "stretch" your public VLAN down to the new site (i.e., have the same VLAN on both sides of the WAN), or will you be able to put the new cluster node on a new subnet?
Windows Server 2008 supports cluster nodes on different subnets; Windows Server 2003 doesn't. That's your first answer. The second answer is that some applications (including SQL Server 2008) do NOT support cluster nodes on different subnets, so for them the subnet must be stretched.
The next question is whether your network person is willing/able to stretch the subnet. Everywhere I've worked the first answer from the network team is a resounding NO, but eventually you can wear them down!
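To make the subnet question concrete, here's a minimal Python sketch. The node addresses are hypothetical examples, not from our environment; it just shows the difference between a DR node that lands on the stretched public subnet and one that lands on its own subnet:

    from ipaddress import ip_address, ip_network

    public_vlan = ip_network("192.168.100.0/24")  # existing public subnet

    node1 = ip_address("192.168.100.11")             # node at the primary site
    node3_stretched = ip_address("192.168.100.13")   # DR node on a stretched VLAN
    node3_new_subnet = ip_address("10.20.30.13")     # DR node on its own subnet

    for node in (node1, node3_stretched, node3_new_subnet):
        if node in public_vlan:
            print(f"{node}: same subnet -> a stretched VLAN keeps this address local")
        else:
            print(f"{node}: different subnet -> needs multi-subnet cluster support")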
2. How will you replicate your data to the new site? Microsoft does not replicate the data for you; the cluster just expects it to be there.
There are several solutions for this, but in my case we were an EMC shop, so we ended up using EMC RecoverPoint, which does block-level copies on the SAN over the WAN. Note that whatever you use has to be something that can copy the data either asynchronously or synchronously; the choice comes down to how much data loss you can tolerate and how quickly you want your cluster up.
Also note that your cluster nodes at Site 1 (nodes 1 and 2) can STILL share their storage between them. The cluster nodes at the other sites will have their own copy of the storage (and can even share it between multiple nodes there). That's where your storage software (PowerPath, etc.) comes in handy.
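Here's a toy Python model of the sync-versus-async trade-off. The class and its behavior are invented for illustration (this is NOT RecoverPoint's actual interface), but it shows why the choice determines how many writes you can lose if the primary site dies mid-stream:

    import time

    class ReplicatedLun:
        """Toy model of a replicated LUN; purely conceptual."""
        def __init__(self, synchronous, wan_latency_s=0.05):
            self.synchronous = synchronous
            self.wan_latency_s = wan_latency_s
            self.primary = []   # blocks committed at the primary site
            self.replica = []   # blocks already shipped to the DR site

        def write_block(self, block):
            self.primary.append(block)
            if self.synchronous:
                # Synchronous: the write isn't acknowledged until the DR copy
                # has it, so nothing is ever lost, but every write pays the
                # WAN round trip.
                time.sleep(self.wan_latency_s)
                self.replica.append(block)
            # Asynchronous: blocks ship later in the background, so a
            # disaster right now loses whatever hasn't shipped yet.

        def blocks_at_risk(self):
            return len(self.primary) - len(self.replica)

    sync_lun = ReplicatedLun(synchronous=True)
    async_lun = ReplicatedLun(synchronous=False)
    for block in range(5):
        sync_lun.write_block(block)
        async_lun.write_block(block)

    print("sync: blocks lost if the site dies now =", sync_lun.blocks_at_risk())   # 0
    print("async: blocks lost if the site dies now =", async_lun.blocks_at_risk()) # 5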
3. How many nodes will you put in your new cluster and what quorum model will you choose?
This is a very contentious issue and everyone has their own opinion. As always, it depends.
If your datacenter (DC) model is that one DC is primary and the other is only for DR, then you want your primary DC to win the "vote" if the link between the two DCs goes down. You don't want there to be a voting storm, and you never want the secondary site to think it can win the vote.
As far as the vote is concerned, the primary site needs to be able to win a majority. In a two-node shared-quorum model there are three votes in total (each node has a vote, and the quorum disk has a vote), so it takes two votes to win; whoever owns the quorum disk gets its vote.
In a three-node majority node set (MNS) model there is no shared quorum anymore, so it still takes two of the three votes to win. If you have two votes at DC1 and one vote at DC2, DC2 will never take primary on its own (although you can certainly force it). If you lose the link, your primary site should still be okay, which is what you'd want.
So if you went with a four-node MNS cluster, two nodes at each site, you can see that if you lose the link you'd need three votes for a majority... and neither side would EVER get it. In that case the cluster resources would all go offline, since no one can reach majority.
If you went with a five-node MNS, you'd still need three votes, but then you have the quandary of where to put the fifth node. You can put it at your primary site and be fine, but then you have to ask what adding the last two nodes really buys you (ignoring the question of Active/Active clusters).
In the best-case scenario you conceivably have a THIRD DC, and you put the fourth or fifth node (or twelfth, for that matter) at the third site with independent connections to both of the other datacenters. Then its vote always counts. But you still have to decide what should happen if any or all of the datacenters become isolated.
Your other option, rather than standing up a whole fifth node just to cast a vote, is to use what's called a file share witness (FSW) on a file server, which is simply a file share that has the ability to cast a vote. For voting purposes it can be treated the same as any other node. The vote math for all these scenarios is sketched below.
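Here's a small Python sketch of that vote arithmetic. The scenarios mirror the ones above; the site layouts (and the assumption that the FSW remains reachable from DC1) are illustrative assumptions:

    # A partition keeps the cluster running only if it sees a strict
    # majority of all votes.
    def majority(total_votes):
        return total_votes // 2 + 1

    def survives(votes_seen, total_votes):
        return votes_seen >= majority(total_votes)

    # (description, total votes, votes DC1's partition still sees
    #  after the WAN link between the sites is cut)
    scenarios = [
        ("3-node MNS: 2 nodes at DC1, 1 at DC2",   3, 2),
        ("4-node MNS: 2 nodes at each site",       4, 2),
        ("5-node MNS: 3 nodes at DC1, 2 at DC2",   5, 3),
        ("2 nodes per site + FSW visible to DC1",  5, 3),
    ]

    for desc, total, dc1 in scenarios:
        dc2 = total - dc1
        print(f"{desc}: need {majority(total)}/{total} votes -> "
              f"DC1 {'up' if survives(dc1, total) else 'OFFLINE'}, "
              f"DC2 {'up' if survives(dc2, total) else 'OFFLINE'}")

Running it shows DC1 staying up in every layout except the four-node split, where neither side can reach the three votes it needs, so everything goes offline.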

4. Your next question is how you want Windows to manage which node owns the disks in the cluster and who gets to make them active.
This is usually dictated by your replication software. You always have the option of doing it manually (i.e., bringing up the disks by hand in a failover scenario). In our case we were using EMC RecoverPoint, so we used EMC Cluster Enabler to manage the disks from the OS side.
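For illustration, here's a minimal Python sketch of the ordering constraint involved, whether the replication software handles it or you do it by hand: the replica LUN has to become writable at the DR site before the cluster can bring the disk resource online there. The classes are invented for the example; this is not Cluster Enabler's actual interface:

    class ReplicaLun:
        def __init__(self):
            self.writable_at_dr = False

        def promote(self):
            # In real life this is the replication software reversing the
            # copy direction (or you doing it by hand in a manual failover).
            self.writable_at_dr = True

    class ClusterDiskResource:
        def __init__(self, lun):
            self.lun = lun
            self.online = False

        def bring_online_at_dr(self):
            if not self.lun.writable_at_dr:
                raise RuntimeError("LUN is still read-only at DR: promote it first")
            self.online = True

    lun = ReplicaLun()
    disk = ClusterDiskResource(lun)
    lun.promote()               # step 1: storage layer flips the copy direction
    disk.bring_online_at_dr()   # step 2: the cluster can now own the disk
    print("disk online at DR site:", disk.online)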

As you can see, there are a lot of decisions to make when you want DR: how to create or convert your clusters, where to add nodes, and how to end up with both full HA and DR.