CompTechNS asked:

Advice for setup of Server 2008 R2 Enterprise with SQL Server 2008 R2 moving from Server 2003

We have a customer that we will be moving from Server 2003 Enterprise with SQL Server 2000 to Server 2008 R2 with SQL Server 2008 R2. There's a lot of back story as to how we got here, but we're finally completing what began a couple of years ago as a hardware upgrade that reused the original Windows Server and SQL Server installations (due to compatibility issues with their software). When we originally set this up, our options were limited because we had to move the old Windows installation to the new system, and because the server was also acting as the DHCP, DNS & domain controller at the time.

We have since installed an SBS 2011 system that has taken over all domain management roles, so we have the option to completely reconfigure this server prior to installing the new versions of Windows and SQL Server.

The server is Intel-based, with dual Xeon X5560s on an S5520HC motherboard and an LSI 8708EM2 RAID card (with the optional battery backup). It has 6 x 500GB hard drives in RAID 10 (the drives are SATA II Seagate Constellation ES - please don't lecture me about SAS, I could not get them to spend the money on SAS drives at the time this was built). As part of this upgrade we're also replacing the original memory with 48GB (6 x 8GB, for full 6-channel mode) of Registered ECC memory.

I'm looking for opinions on how best to set up the hard drives. The entire RAID array is currently configured as one drive with one partition. We're planning on at least recreating the partition during Server 2008 setup rather than leaving the current file system in place, and I was debating partitioning to separate the OS from SQL, which raised a number of questions. They currently have 4 x 25GB databases and about 30 users. One of the databases is used daily, with the other three kept as prior year-end snapshots for reference only as needed.

1) With the server only acting as an SQL & File server, I want to verify that it is in fact best to partition the drives, and I'm looking for suggestions as to how best to do so. We have about 1.3TB total to work with. I was thinking 3 partitions - System, SQL Storage, and a 3rd for file shares and SQL Server database backups.

2) For partitioning, should we leave the 6-drive RAID 10 array in place or reconfigure the setup? In either case, is it best to handle any partitioning in the RAID controller or to let Windows create the partition(s)?
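
As a sizing aid before carving up the 1.3TB, it can help to pull the current file sizes and locations off the existing SQL 2000 instance; a rough sketch (the database name is just a placeholder):

    -- List data/log files with sizes for one database (run on the existing SQL 2000 instance).
    USE MainDB;        -- placeholder name
    EXEC sp_helpfile;

    -- Or across all databases at once (size is stored in 8 KB pages):
    SELECT  DB_NAME(dbid)    AS database_name,
            name             AS logical_file,
            filename         AS physical_file,
            size * 8 / 1024  AS size_mb
    FROM    master.dbo.sysaltfiles
    ORDER BY database_name;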

I may be over-thinking things, but I figure that since we now have the chance to go the best route possible with this hardware, we might as well do so.
ASKER CERTIFIED SOLUTION
comnuts

CompTechNS (ASKER):

Comnuts,

Thanks for the info.

I think the amount of traffic to this database might surprise you (or I'll admit, I might not know enough about what's stored in transaction logs). We have log backups running every 2 hours between 8AM and 5PM (and a final one again at 7PM after end of business), and the 10AM transaction log backup frequently exceeds 20GB, with the rest throughout the day averaging 6GB to 8GB.
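
For reference, each of those scheduled jobs boils down to something along these lines (the database name and target path here are placeholders, not our real names):

    -- One scheduled transaction log backup (rough sketch).
    -- In practice each run writes a uniquely named file on the backup partition.
    BACKUP LOG MainDB
    TO DISK = N'E:\SQLBackups\MainDB_log_1000.trn'
    WITH INIT,        -- overwrite this file if it already exists
         STATS = 10;  -- progress message every 10%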

The article about single-drive RAID does make a really big point regarding IOPs and the fact that a spanned drive doesn't do anything for bottlenecking... I also read a posting on another site where someone commented that in this type of scenario, breaking the drives up into multiple RAID 1 pairs might be more beneficial than one big RAID 10.

I'm leaning toward this (a rough restore mapping is sketched after the layout):

Reconfigure the drives into 3 pairs, RAID1 @ 500GB

Array 1 - 3 Partitions:
    System @ 200GB
    SQL Main DATA @ 100GB
    SQL 2 DATA & TLOG @ ~200GB

Array 2 - 2 Partitions:
    SQL Main TLOG @ 100GB
    File Share & Some minor file archival @ ~400GB

Array 3 - 1 Partition:
    All Backups (Weekly ref. DATA DB's, Nightly Main DATA DB, Main DATA TLOGs)
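
Mapping the main database onto that layout during the migration restore would look roughly like this on the new 2008 R2 instance (database name, logical file names, paths, and drive letters are all placeholders - here D: stands in for the SQL Main DATA partition on Array 1, F: for the SQL Main TLOG partition on Array 2, and G: for wherever the full backup lands):

    -- Check the logical file names inside the SQL 2000 backup first.
    RESTORE FILELISTONLY FROM DISK = N'G:\Backups\MainDB_full.bak';

    -- Restore with the data file and the log file placed on separate arrays.
    RESTORE DATABASE MainDB
    FROM DISK = N'G:\Backups\MainDB_full.bak'
    WITH MOVE N'MainDB_Data' TO N'D:\SQLData\MainDB_Data.mdf',  -- SQL Main DATA partition (Array 1)
         MOVE N'MainDB_Log'  TO N'F:\SQLLogs\MainDB_Log.ldf',   -- SQL Main TLOG partition (Array 2)
         STATS = 10;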

Thoughts?

Thanks!
Ok, the transaction log size does surprise me. Looking at the backup sizes you have, I believe the database is in the full recovery model and the log is truncated after each backup.
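
You can confirm that quickly on either the old or the new instance (the database name is a placeholder):

    -- Confirm the recovery model and see current log size/usage.
    -- Both calls work on SQL Server 2000 and 2008 R2.
    SELECT DATABASEPROPERTYEX(N'MainDB', 'Recovery') AS recovery_model;  -- FULL / BULK_LOGGED / SIMPLE
    DBCC SQLPERF(LOGSPACE);  -- log size and percent used for every database on the instance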

On the arrays, performance in this configuration is directly related to the drives themselves. You might want to run SQLIO to check what kind of performance you will get (SATA might not give you much) and compare it with your existing setup. I think you can run it off 1 SATA disk to get the numbers, since all the hard disks should be the same. Secondly, I think the file share can move to Array 3 (if you can afford the space), as the transaction log of SQL Main might eat up all the IOPs that array has. Slowness of a few minutes every 2 hours for the file share is better than slowness every minute. But that really depends on what you can gather about the pattern of the transaction logs.
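
Once the databases are on 2008 R2 you can also read the per-file I/O stalls straight from the engine to see which files are actually waiting on disk; a rough sketch (this DMV is 2005 and later only, so it will not run on the current SQL 2000 box):

    -- Cumulative read/write counts and average latency per database file since the last restart.
    SELECT  DB_NAME(vfs.database_id)                              AS database_name,
            mf.physical_name,
            vfs.num_of_reads,
            vfs.num_of_writes,
            vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_ms,
            vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_ms
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN    sys.master_files AS mf
              ON mf.database_id = vfs.database_id
             AND mf.file_id     = vfs.file_id
    ORDER BY avg_write_ms DESC;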

On backups, do you intend to send these out to an offsite location, or at least to another server? They are not useful if the main server itself goes down (or is destroyed), as the backups may not be retrievable from that same server.

Lastly, this is definitely not a best practice, but if the backups can be sent offsite and you can tolerate a failure of Array 3, you might want to park 1 hard disk on Array 3 and configure RAID 5 for Array 2 for better IOPs. Keeping some spare HDDs on hand is still recommended.
Considering the backups going to Array 3 (aside from the main TLOG backups) all run during off-peak time, what about making Arrays 2 & 3 one big RAID 10, which could then be partitioned?

I am also willing to entertain the idea of a single drive in place of Array 2 and then a RAID 5 for Array 3, which could then be split between the file share and backups.
I just realized my second comment above does nothing for array 2 IOPs...

I'm going to look into what I have as far as additional drives on hand. We have the 4 drive expander in the chassis so I can add more.
I should also mention that the backup drive is just for storage of backups, which are then backed up to rotating external drives nightly. I have maintenance tasks set up in SQL Server to reorganize and check the integrity of the main database each evening and then run a backup to a folder. It does the same for the other databases on Sunday afternoons.
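
Roughly what those evening tasks amount to once they are recreated on the 2008 R2 instance (the maintenance plan generates its own version of this; the database, table, and path names are placeholders):

    -- Nightly sequence for the main database (rough sketch).
    USE MainDB;

    ALTER INDEX ALL ON dbo.SomeLargeTable REORGANIZE;  -- reorganize step, repeated per table

    DBCC CHECKDB (N'MainDB') WITH NO_INFOMSGS;         -- integrity check

    BACKUP DATABASE MainDB
    TO DISK = N'E:\SQLBackups\MainDB_full.bak'         -- backup-to-folder step
    WITH INIT, STATS = 10;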

The actual database files are excluded from the nightly backup to the external.
If you can utilize the 4-drive expander, consider a hot spare so that you can bring down the redundancy percentage and make use of more space. I would still keep a spare HDD on hand though; Murphy is always there.
In the interest of not making too many hardware changes to a system that might only last a couple more years, and knowing that a significant improvement should be seen regardless (just from going to a 64-bit OS & SQL with a huge memory upgrade), I decided to leave the hard drives physically as-is, with the RAID setup I mentioned plus a slight modification:

Array 1 - 3 Partitions:
    System @ 200GB
    SQL Main DATA @ 100GB
    SQL 2 DATA & TLOG @ ~200GB

Array 2 - 2 Partitions:
    SQL Main TLOG @ 100GB
    All Backups (Weekly ref. DATA DB's, Nightly Main DATA DB, Main DATA TLOGs)

Array 3 - 1 Partition:
    File Share & Some minor file archival @ ~400GB

I swapped the backup partition and the file share partition. This way, shared folder access doesn't affect SQL. As far as the backups are concerned, yes, for a few minutes every couple of hours TLOG file access will slow down while the log backs up, but otherwise that partition won't be accessed until after hours, so this minimizes extra traffic to that drive pair. Also, since the backups are further copied to an external drive within hours of being written to the backup partition, I'm not too worried about what happens in the case of a drive failure.

Thanks for your help!