kenshaw

asked on

Attaching multiple external SCSI storage arrays to a two node cluster

OK, so I'm purchasing a two-node Win 2k3 cluster that will run SQL 2000 in an active/passive failover HA config.

I want to have an external SCSI storage array and have sourced the IBM EXP400 that will support 14 drives.

As I understand it - a single SCSI adapter can have 14 drives connected to it.

So in my external storage - one of the disks will be a quorum disk leaving me with 13 active drives for data.

What I want to know is whether I can have two SCSI adapters in each of my servers, and connect the two-node cluster to two external SCSI storage enclosures.  That way I'd have 13 active drives in the first array and 14 active drives in the second array.  Is there any restriction on doing this?  Can I have two SCSI adapters running in my servers?  Would there be any restrictions on this imposed by Win 2k3 or SQL Server 2000 (i.e. will they only recognize a limited total number of SCSI disks)?

Basically - I don't want to be limited by the 2TB of storage I can get out of one external storage array.  I want to get about 4TB, but I don't want to jump up to a SAN.  I also want to stick with the major hardware vendors, as a very high level of SLA is required - if there's a hardware failure I want the part replaced by the vendor within 10 hours or so.  That's why I'm looking at IBM, Dell, HP.
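(Sanity-checking my own arithmetic here - assuming 146GB drives, which is roughly what it takes to fill one 14-bay enclosure to 2TB; the exact drive size is my assumption, not something from the vendor specs:)

```python
drive_gb = 146            # assumed per-drive capacity: 14 x 146 GB is roughly 2 TB
active_drives = 13 + 14   # 13 data drives in array 1 (quorum takes one bay), 14 in array 2
total_gb = active_drives * drive_gb
print(total_gb)           # 3942 GB - just under the ~4 TB target
```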
Duncan Meyers

Yes, you can. I've been involved with a 2 node cluster with each node connected to 2 Dell Powervault 220S SCSI arrays via LSI/AMI RAID controllers (aka Dell PERC 3/DC).

The only restriction is a slightly ludicrous one - you run out of drive letters (you'll have 27 external discs connected)! I'd suggest using RAID controllers so that you also have some redundancy in your configuration. Run the external discs as RAID 1/0 sets or RAID 5 sets depending on what you intend.
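To put numbers on the drive-letter limit (a quick sketch, assuming each disc were presented to Windows as its own volume, with A:/B: reserved for floppies and C: holding the boot volume):

```python
import string

# A and B are traditionally reserved for floppy drives; C holds the
# boot volume. That leaves D..Z for data volumes.
reserved = {"A", "B", "C"}
available = [l for l in string.ascii_uppercase if l not in reserved]

external_discs = 13 + 14   # 27 discs across the two enclosures
print(len(available))              # 23 usable drive letters
print(external_discs > len(available))  # True - not enough letters to go round
```

Grouping the discs into a handful of RAID sets keeps the volume count well under 23, which is another reason to use hardware RAID here.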

Hope this helps.
BTW - if you go with a RAID controller, you'll need one that supports clustering - that is, you need to be able to change the controller's SCSI address. Not all RAID controllers will let you do this.
kenshaw

ASKER

OK... thanks for that.  When you say "if I go for a RAID controller" I'm a bit confused.  Won't my SCSI controller support RAID?  i.e. in my servers I'll have two adapters, and each adapter will be a SCSI/RAID controller, won't they?

Also - why do I need to be able to change the controller SCSI address for clustering?  To make sure that each node's SCSI cards are set up properly?  I don't get this bit.
>why do I need to be able to change the controller SCSI address for clustering?  To make sure that each node's SCSI cards are set up properly?  I don't get this bit.

Each node has to have different SCSI addresses on the controller, typically 7 for one node and 6 for the other. If you don't change the controller SCSI addresses you'll have an instant conflict and the cluster simply won't work. So you'd set the controller SCSI addresses on one node to 7 (the same on each SCSI card) and on the other node to 6 (again, the same on each SCSI card).
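A toy model of why this matters (purely illustrative - nothing here is real controller firmware, it just shows the rule that no two devices on a shared bus may claim the same ID):

```python
def bus_has_conflict(initiator_ids):
    """A shared SCSI bus breaks if any two devices claim the same ID."""
    return len(initiator_ids) != len(set(initiator_ids))

# Both nodes' controllers left at the factory default of 7: instant conflict.
print(bus_has_conflict([7, 7]))   # True

# Node A's controllers set to 7, node B's set to 6: the bus is usable.
print(bus_has_conflict([7, 6]))   # False
```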

>When you say if I go for a RAID controller I'm a bit confused.  Won't my SCSI controller support RAID?  i.e. in my servers I'll have two adapters, and each adapter will be a SCSI/RAID controller won't they?  

Not necessarily. Depends what you've got. For example, an Adaptec 39160 is a SCSI controller only - no RAID. If you want to set up RAID, you'd need to use software RAID within your OS (Dynamic discs in Disk Manager for Windows, software RAID in Disk Druid in Red Hat Linux, and so on). A Dell PERC 3/DC is both a SCSI and a hardware RAID controller. With this, you can group a number of discs into a RAID array - RAID 5, 1/0, 1 and so on.
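For what it's worth, the redundancy in RAID 5 comes from XOR parity. A minimal sketch of the maths (not how a real controller lays out stripes - just the principle that lets one failed disc be rebuilt):

```python
from functools import reduce

def parity(blocks):
    """RAID 5-style parity: XOR of all data blocks in a stripe."""
    return reduce(lambda a, b: a ^ b, blocks)

stripe = [0b1010, 0b0110, 0b1100]   # data blocks on three discs
p = parity(stripe)                  # parity block, stored on a fourth disc

# If the second disc fails, its block is rebuilt from the survivors + parity.
rebuilt = parity([stripe[0], stripe[2], p])
print(rebuilt == stripe[1])   # True
```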

Given what you've posted about your storage requirements, I'd strongly recommend you look at an entry-level SAN such as EMC's CX300. It gives you a greater degree of performance, flexibility and expandability than direct-attach storage. And it isn't *that* much more expensive. Apparently they start at around AUS$20K - which is not that much more than what you'll spend on the direct-attach anyway (I'd estimate AUS$15K-16K for what you're describing). The extra money buys you an extra *heap* of reliability and robustness.

FWIW. Sorry to complicate it for you, but it's far better to make the right decision now, rather than try and make a crappy system work well.
HP's DL380 packaged clusters can be used with redundant RAID controllers if you use the high-availability kit for the MSA500. You don't lose 2 SCSI addresses because it's not a shared SCSI bus, so you still get 14 or 28 drives. And because it has an internal RAID controller, you don't have to dedicate a disk to the quorum - you simply define a 1GB slice of an array for it.

Have you thought about using SQL replication rather than clustering, or using something like DoubleTake? For clustering you need Windows Enterprise and SQL Enterprise, and that can cost more than the hardware.
ASKER CERTIFIED SOLUTION
Lee W, MVP
IBM's unit is a shared SCSI bus; you have to use special Y cables to provide termination when one of the servers is powered off. That's why I like HP's offering: whether fibre- or SCSI-attached, each host SCSI bus is buffered onto a PCI bus in the RAID box, then through the internal RAID controller onto another SCSI bus for the drives. With dual-attached servers there's SecurePath software so that the OS doesn't see both sets of drives and think you've got twice as much storage as you really have.

Our firm sold a packaged cluster in all good faith as a stand-alone solution, so we had to implement a domainlet since there were no other servers to hold the Global Catalog etc. Got called out because it went wrong, and when I arrived on site the software developer said he didn't want a cluster! Made me take the shared drives out and put them in the servers instead, remove AD (which was what broke), and said he was going to use SQL replication instead. Now it's going to a co-lo hosting site.

Anyone want a 3 month old MSA 500, rack, UPS, KVM switch and rackmount keyboard? (joke, I know EE isn't eBay).