Borgs8472
asked on
Suitable Storage for SQL Cluster?
I've been asked by my boss to investigate building our own high-availability setup at our data center to avoid the increasing costs of managed hosting.
We currently have a couple of DNS servers, a few dedicated web/SQL servers, a few shared web servers and a few shared SQL servers. Since clustering (or more likely load balancing) the web servers poses rather different challenges, my question is about how best to build a small SQL cluster without a single point of failure.
Messing around in VMware I've set up a DC and two servers and begun to cluster them, and it seems I'm going to be able to do this. However, I achieved my setup by pointing both machines at the same virtual disk on the host OS. Good for my test environment, bad for working out what my production needs will be.
I have not found a definitive answer, but it seems that HA resource storage for one's cluster is never(?) done on the actual servers themselves, but rather on the somewhat cryptically mentioned "external disk arrays" wherever I look.
What I need is a decent solution for this. If one can store the SQL data on the clustered servers themselves (maybe with SQL replication?) then that's probably what I'm after.
However, I think I need a reliable external storage solution. This area is rather new to me as the administrator designing this system, so I've researched a number of options.
1. SCSI storage devices, presumably RAIDed. However, I've read these only do failover and don't support different machines writing to them. Also I'd need power redundancy.
2. iSCSI or Fibre Channel. I assume this means using some software or other to first create some file storage redundancy across a couple of servers, then addressing these devices over the network from the cluster.
3. NAS box. Opinion seems to be against using a NAS for clustered storage, but I'm keen because I could get a solution in a box if this were suitable.
Something like this with redundant everything, or would I still be putting all my eggs in one basket even so?
http://www.thomas-krenn.com/en/system-solutions/storage-solutions/nas-server/2u-sas.html
4. My own NAS. I have quite a few servers to play with, though no high-speed connectivity as of yet. Would building yet another cluster just to cluster a file server be the way to go? And would I need some fast cabling to make this viable in any case?
So in conclusion, I have no idea what kind of storage is required for the SQL cluster I'm researching. Could someone demystify the subject at all?
Windows Server 2003 Cluster with SQL 2000.
ASKER
Nor does iSCSI or fibre channel [support different machines writing to each hard drive], at least not in the sense that both machines can write to the same filesystem; that's why you use clustering: one server is active and one is passive.
===
Uh, unless I've missed an important fact about clustering, all machines in the cluster get used. I'm not particularly interested in a failover-only scenario.
==
If you're an HP house you could use an MSA500G2 for this; it works fine for clusters. It's not actually a shared SCSI bus, it just has SCSI attachment to the servers. There are SAS-attached boxes that do the same thing using the newer Serial Attached SCSI cables.
==
Oooh, shiny stuff. Paying a few grand as a one-off is potentially doable. However, the question I need answered: this is something that'd last about 3 years, and not something where I'd ever have to swap the whole lot out when it dies, right? (Just the hard disks or PSUs, I'd hope!) Such an investment would be a big commitment for our small company. So the SCSI cables come out of the back of the servers I'm clustering and go straight into this big box, eh?
And to build my own I'd need at least a couple of servers with multiple SCSI controllers, you say?
Looking up DoubleTake, it appears to offer no more than a fancy image-and-restore mechanism. Thing is, I'm not after faster DR, or even semi-automated DR; I need a next-to-zero-downtime idea.
Thanks for pointing out that I'm going to need SQL 2000 Enterprise Edition; I hadn't noticed that. (I forget which edition it is we use.)
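For my own reference, checking the edition is a one-liner in Query Analyzer (SERVERPROPERTY is standard in SQL Server 2000; nothing else is assumed about the instance):

    -- Report the edition, version and service pack of the current instance
    SELECT SERVERPROPERTY('Edition')        AS Edition,
           SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('ProductLevel')   AS ServicePackLevel

Clustering needs Enterprise Edition, so that's worth running before pricing anything.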
It looks like you have missed something important about clustering (at least MS clustering). Only one server is active for any given database at a time; the other is in standby. It is possible to have a SQL active/active cluster, but only by running multiple databases under more than one instance. Each instance is still active/passive. If one node fails, the other node takes up all the instances of SQL. It isn't instant; the downtime is roughly the same as if you killed the SQL process on a single server and then started it again.
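You can see this from inside SQL Server too; a minimal check using SQL 2000's standard SERVERPROPERTY values:

    -- IsClustered is 1 on a cluster. MachineName returns the virtual
    -- server name on a cluster, which survives a failover, so clients
    -- keep connecting to the same name whichever node is active.
    SELECT SERVERPROPERTY('IsClustered') AS IsClustered,
           SERVERPROPERTY('MachineName') AS VirtualServerName

That virtual name is why failover is transparent to connection strings, just not instantaneous.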
more to come, have to drive to work.
ASKER
Oh okay, this is news to me.
We're planning initially to consolidate our 2 shared SQL servers onto this system, with about 70 databases in total. Are you saying that I should run these under 2 instances if I want both servers to keep busy? That's possible. We also have another 70 databases with our managed hosts that we'd want to move back here eventually.
But that concerns me regarding scalability: if I have 3 physical machines, then in order to actually use them all (other than for failover) I'd need 3 instances? So effectively one instance per server? That's probably not much of a problem, other than the overhead of moving databases between instances every now and then to load balance them manually.
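The manual move itself looks simple enough; a sketch using SQL 2000's detach/attach procedures, with a hypothetical database name and file paths:

    -- On the source instance: detach the database
    EXEC sp_detach_db @dbname = 'CustomerDB'

    -- Copy CustomerDB.mdf / CustomerDB_log.ldf to the target instance's
    -- disk resource, then on the target instance re-attach:
    EXEC sp_attach_db @dbname    = 'CustomerDB',
                      @filename1 = 'E:\SQLData\CustomerDB.mdf',
                      @filename2 = 'E:\SQLData\CustomerDB_log.ldf'

So the rebalancing cost per database is mostly copy time plus a short outage for that one database.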
ASKER CERTIFIED SOLUTION
ASKER
Thanks for your help so far. I've been planning our future implementation as much as I can, and I'm learning at a great rate; unfortunately, a lot of what I'm learning is that I have a lot more to learn! I'm still keen on clustering; thanks for clearing up what it can and can't do. We'd probably aim for an active-active-active-active-active-passive-passive setup, or 2 active-active-passives.
Fast failover or not, I don't see us going back to 'just have a backup server'; it would require many gigs of storage with replication from all the other servers, and then that redundancy would go to waste when it was actually failed over to. Admittedly that approach saves on licensing costs, but let's put that issue aside and focus on fancy hardware.
I've worked out that we can run a redundant public network over two switches no problem, since our intranet is so woefully underutilized that we don't actually need it. I'm planning a backup firewall machine and getting an extra port from our ISP, so we CAN have a totally redundant network.
So I'm still left with the question of storage for this cluster. Every announcement I make to my bosses about the increased feasibility of rolling our own hardware solutions is met with more noises about being willing to invest more, so my requirement is this: looking up the price of 2 SAN switches, on top of what I know is already expensive SAN storage, has put me right off. Maybe one day, but we're aiming for a low-investment pilot scheme, not looking to move off our managed hosts tomorrow.
So my conclusion is that I'm going to need an iSCSI box, which sounds satisfactory to me. My final follow-up question: can one get such boxes so robust that it is safe to run one's cluster forever without worrying about its hardware failure? Or is that crazy talk, and should I be looking at 2 of whatever I get?
I'd invest in 1 fancy one if they're reliable, or 2 cheaper ones if that's the only way to potentially achieve that 100% uptime dream.
ASKER
Thanks for the help. I decided that though I could implement this, it's not our core business, and my company couldn't afford me (or to replace me) if we did it ourselves, so I recommended we move to 100% managed hosting.
Nor does iSCSI or fibre channel, at least not in the sense that both machines can write to the same filesystem; that's why you use clustering: one server is active and one is passive. If you're an HP house you could use an MSA500G2 for this; it works fine for clusters. It's not actually a shared SCSI bus, it just has SCSI attachment to the servers. There are SAS-attached boxes that do the same thing using the newer Serial Attached SCSI cables.
> iSCSI or Fibre Channel. I assume this means using some software or other to first create some file storage redundancy across a couple of servers, then addressing these devices over the network from the cluster.
Similar to shared SCSI boxes, but you can have more than 2 nodes in the cluster. Not cheap, since without dual controllers you are just transferring the single point of failure to the storage.
>3. NAS box
SQL needs block-level access; although there is limited support for offloading tables to NAS, you can't cluster SQL using NAS.
4. See 3: you can't cluster with NAS.
There's one option you've missed: rather than clustering, use local storage and replicate between the two servers using DoubleTake or similar. Much cheaper than buying SQL Enterprise, which you need for a true cluster.
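If you'd rather stay inside SQL Server than buy DoubleTake, the "SQL replication" idea from the question can give a similar warm standby. A minimal sketch of transactional replication in SQL 2000, assuming a distributor is already configured and using hypothetical database, publication and server names:

    -- On the publisher: enable the database for publication
    EXEC sp_replicationdboption @dbname = 'CustomerDB',
         @optname = 'publish', @value = 'true'

    -- Create a continuously-replicating publication and add a table to it
    EXEC sp_addpublication @publication = 'CustomerDB_pub',
         @repl_freq = 'continuous', @status = 'active'
    EXEC sp_addarticle @publication = 'CustomerDB_pub',
         @article = 'Orders', @source_object = 'Orders'

    -- Push a subscription to the standby server
    EXEC sp_addsubscription @publication = 'CustomerDB_pub',
         @subscriber = 'STANDBY01', @destination_db = 'CustomerDB',
         @subscription_type = 'push'

Like the DoubleTake route, failover is manual (you repoint clients at the standby), so it trades the shared-storage cost for a slower, hand-driven switchover.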