HumbleITguy (United States of America) asked:
Which RAID config will give my SQL server best performance?

I know this has been asked before, and it seems to produce differing opinions, but all the threads I've read lack something specific enough for me to make a decision, so I offer this up. I want to know which route to take when configuring a RAID array for an SQL server.

My question is: what type of drive/RAID setup will give me the best performance for a SQL database? (I have options listed below.)

Details (in case they help):
SQL Server built on a Dell PowerEdge R620, Windows Server 2012 R2, SQL Server 2012 Standard SP2.
8x 10k SAS drives. Drive SPACE is not an issue.
32GB RAM.
HW RAID (PERC H710P) to take the load off the processor.
The SQL DB is 15GB, with potential growth to double its size. The software that uses the DB is probably NOT very WRITE INTENSIVE.
I'm aware that this server is probably loaded to the point of overkill, but other reasons have made it the best on-premises option for SQL. Plus, the server is critical to us and we want to configure it to last.

Info: I will add that I'm having a hard time giving up on the idea of using 1 or 2 of my 8 drives as global hot spares. I have seen them put on the chopping block by some tech advisors. I am a one-man IT shop and have been given a directive to make sure the server stays up while I'm out of the office 5 or 6 weeks a year. Beyond surviving one drive down, I need to know that I can survive the "lost 2 drives before you got back" scenario. I need to know that the server has something that will kick in even on the darkest day.

Also, because the current database is operating 'adequately' on Windows 2003 / SQL 2005 with C:\ on RAID1 and D:\ on RAID5 (DB and data files), I wonder whether any change in RAID config will even be felt. Still, I was about to build the server as follows:

OPTION1:
C:\OS (2 drives in a RAID1, for OS and SQL executables)
D:\DATA (4 drives in a RAID10, for the SQL database and data directories)
2 drives as global hot spares.

But I have found threads suggesting OPTION2:
C:\ OS and SQL (6-drive RAID10)
plus 2 global hot spares?

So before someone tells me to lose the hot spares, I'm curious which route would be suggested given the strengths/weaknesses of OPTION1 vs. OPTION2.

Is a 6-drive RAID10 potentially going to give me better SQL performance than splitting out the OS on RAID1 and DATA on RAID10?

To confuse my 6-drive question, I have seen comments such as: "RAID10 in this scenario should stay with 4 drives or 8 drives (NOT 6 drives). A 6-drive RAID10 introduces an inefficiency and a bottleneck."
I would love to know if there is merit to that point.

Finally, if the hot spares really are a detriment to the above thinking, I am open-minded. That would bring into play a potential
OPTION3: 8-drive RAID10

OPTION4:  C:\OS (4-drive RAID10) with D:\DATA (4-drive RAID10)

OPTION5: A 2-drive RAID1 each for OS, DATA, and LOGS, plus hot spares?
Some suggestions politely hint that the strengths of that approach aren't as strong nowadays.
So at this point I'm pretty confused about these small changes, and I lack the experience to know whether they make any perceptible difference.
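To put rough numbers on the options above, here is a back-of-the-envelope sketch. The ~150 random IOPS per 10k SAS spindle and the 300 GB drive size are my assumptions for illustration, not measured figures:

```python
# Rough usable capacity and random-IO potential for the candidate arrays.
# Assumptions: ~150 random IOPS per 10k SAS spindle, 300 GB per drive.

DRIVE_IOPS = 150      # assumed figure for a 10k SAS drive
DRIVE_GB = 300        # assumed drive size

def raid10(n_drives):
    """Usable GB and rough random read/write IOPS for an n-drive RAID10."""
    usable = n_drives // 2 * DRIVE_GB
    read_iops = n_drives * DRIVE_IOPS        # reads can be served by either mirror copy
    write_iops = n_drives * DRIVE_IOPS // 2  # each write lands on both mirrors
    return usable, read_iops, write_iops

def raid1():
    """A plain 2-drive mirror for the OS volume."""
    return DRIVE_GB, 2 * DRIVE_IOPS, DRIVE_IOPS

options = {
    "OPTION1 data array (4-drive RAID10)": raid10(4),
    "OPTION2 single array (6-drive RAID10)": raid10(6),
    "OPTION3 single array (8-drive RAID10)": raid10(8),
}
for name, (gb, r, w) in options.items():
    print(f"{name}: {gb} GB usable, ~{r} read IOPS, ~{w} write IOPS")
```

The pattern is simply that RAID10 read throughput scales with spindle count, so the whole-spindle differences between the options dwarf any OS/data split effect at this scale.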

The thing that won't change is that I need to configure this server and get it into production. I hope to protect against my worst-case scenario and also give SQL the best response time we've seen yet.
Reality has me suspecting I won't feel the difference, so it may be a waste of admin $$ to chase this much further. If I had testing time, I would use it. I have done what I can, and checking my facts/logic with all of you is the last/best step before I make a decision. Thank you in advance for your wisdom.
Frans Brouwer (Netherlands) replied:
OS on RAID1 with a hot spare.

SQL indexes on RAID1 with a hot spare.

SQL data on RAID10 with hot spares - the more spindles, the more performance in RAID10.

BTW, SQL likes lots of memory; the more the better.
Before I forget: with RAID1 solutions sharing a hot spare, that takes up 3 disks for the OS and index files.

Remember that rebuilding a RAID10 with large volumes takes a lot of time, with potential data loss if it is not finished before the next drive fails!!!!
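The rebuild-time exposure can be sketched roughly. The 50 MB/s sustained rebuild rate is purely an assumed figure (controllers often throttle rebuilds under production load), and 300 GB is the drive size given later in the thread:

```python
# Rough estimate of the exposure window while one mirror member rebuilds.
# Assumptions: 300 GB drive, ~50 MB/s sustained rebuild rate (real rates
# vary widely with controller settings and foreground load).

DRIVE_GB = 300
REBUILD_MB_S = 50  # assumed sustained rate

rebuild_hours = DRIVE_GB * 1024 / REBUILD_MB_S / 3600
print(f"~{rebuild_hours:.1f} hours exposed during the rebuild")
```

With small 300 GB spindles the window is short; the warning matters much more on multi-terabyte drives, where the same arithmetic stretches into days.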
Neilsr replied:

"I need to know that the server has something that will kick in even on the darkest day"
"Plus, the server is critical to us"

Then you WILL NEED 2 servers with load balancing and replication.

The darkest day is always the one you have not accounted for. Your scenario is fine IF the RAID controller does not fail and the server does not fail.

At the end of the day, the simple equation is this...

What's the cost of downtime vs. the cost of infrastructure?

If having the server OFF costs your company 10,000 £/$/€ per day, and additional hardware and software licences will cost you 40,000, then can you afford 4 days a year of downtime or not?
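Neilsr's equation, run with his example figures (reading the 10,000 as a per-day outage cost, which is my interpretation of his numbers):

```python
# Break-even point for buying redundancy, using the example figures above.
cost_per_day_down = 10_000   # assumed per-day outage cost (GBP/USD/EUR)
extra_infra_cost = 40_000    # second server plus licences

breakeven_days = extra_infra_cost / cost_per_day_down
print(f"Redundancy pays for itself if you expect more than "
      f"{breakeven_days:.0f} days of downtime over its life")
```

The point of the exercise is that the RAID layout is only one term in that equation; a controller or chassis failure takes the whole box down regardless of the array design.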
SOLUTION from Member_2_231077 (available to members only).
HumbleITguy (asker) replied:
Thank you to everyone who responded. I knew all the answers wouldn't come back identical; what I needed was a smattering of opinion more educated than my own. You've already done that for me, and I thank you.
I'm a double-checker. Quick replies for clarification:

@Frans Brouwer: With 8 drives, I must be misunderstanding your suggestion.
Please clarify. I assume you mean OS AND INDEX on one RAID1 (2 drives) with a hot spare (3 drives total), and SQL data on a 4-drive RAID10 (plus a hot spare) = 8 drives.

But the math isn't crystal clear to me. I'm not trying to hear only what I want to hear... but the math doesn't work out to have a RAID10 with a hot spare at the same time as having OS and INDEXES split out.
Can you clarify? I feel like I am misunderstanding you.


@Neilsr: Thank you for a sobering thought about where else failure might occur. With all the focus on the 'common' Dell standalone server options, I don't go deep enough on RAID controller options to actively see them on the radar as a point of failure.
I of course can't comply with 2 servers, but again, point well taken.

Question for you: In your sentence "Your scenario is fine IF the raid controller does not fail, the server does not fail", it looks like you do concur that my RAID1/RAID10 plan is functionally sound? Maybe it wasn't as resounding as I'd secretly heard in my head as I typed my note... LOL.
Just trying to make sure I understand everyone's inflection.
If not, please clarify.

I'm sensing my last question would be to ask each of you whether you see RAID10 as a 'fragile' RAID config in any sense. Because once you get past cost, it seems commonly seen as having the fastest response potential for SQL. But as Andy points out, it is sometimes not the first choice for data safety/server uptime hardening?

@andyalder: I think I've got my head around RAID6 well enough, but I will be reading up more today. I certainly want to be open-minded, and my first thoughts on seeing your suggestion have me assessing why I've been so focused on RAID10.
It appears that a decision to go RAID6 over RAID10 is simply one of sacrificing speed to mitigate a potential TOTAL loss.
How do you see RAID10's risk factor in comparison?
Is that because you see RAID10 as fragile in some sense (sorry, same question I threw at Neil)?

My understanding: with RAID10 I can also lose 2 drives and keep going
(yes, if it's the wrong 2 drives, I'm sunk),
but that's a reason for the hot spares - to minimize the chance of losing 2 drives so close together.
Point taken that I am vulnerable during a rebuild. Here we find a point of failure, or at least enhanced risk.
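The "wrong 2 drives" risk can be made concrete: once one drive in a RAID10 is down, a second random failure is fatal only if it hits the dead drive's mirror partner. A small sketch, assuming independent and equally likely failures (my simplifying assumption):

```python
# Chance that a second (random) drive failure kills an n-drive RAID10,
# given one drive is already down: fatal only if the next failure hits
# the dead drive's mirror partner.
from fractions import Fraction

def second_failure_fatal(n_drives):
    # n_drives - 1 survivors remain, and exactly 1 of them is the
    # dead drive's mirror partner.
    return Fraction(1, n_drives - 1)

for n in (4, 6, 8):
    p = second_failure_fatal(n)
    print(f"{n}-drive RAID10: {p} (~{float(p):.0%}) chance a 2nd failure is fatal")
```

So more spindles do not just add speed; an 8-drive RAID10 is also less likely than a 4-drive one to be killed by an unlucky second failure, on top of whatever the hot spares buy during the rebuild window.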

I understand this becomes philosophical at some point too, based purely on whether the person doing the install sees their major concern as speed versus safety. But I don't know where I fall yet. Hence my question to you guys.

Unless we satisfy Neil's comment about redundancy (sounds like a virtual conversation), there is something somewhere that is a risk. (Wow, enlightenment.)
Where am I drawing my line?
Thanks for the input.

I hope you'll give me one more round of insight by replying a little to my clarifiers above.
Frans Brouwer replied:

With 8 spindles - what size disks? - we can build a RAID1 with one hot spare (takes 3 spindles).

Now we have 5 disks remaining, where at least one disk is a hot spare, so we have 4 disks left to build a RAID array - we have 3 options here:

RAID5, RAID6, or RAID10.

It depends on how much storage you need. RAID10 will have the size of 2 disks.
Sorry - I realized I never mentioned that it is a total of 8x 300GB 10k SAS drives.
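With the 300 GB size now known, Frans's capacity point for the 4 remaining data disks works out as follows (a simple sketch of the standard usable-capacity formulas):

```python
# Usable space from 4 data disks under each candidate RAID level.
DRIVE_GB = 300
N = 4

usable = {
    "RAID5":  (N - 1) * DRIVE_GB,   # one disk's worth of parity
    "RAID6":  (N - 2) * DRIVE_GB,   # two disks' worth of parity
    "RAID10": N // 2 * DRIVE_GB,    # half the disks hold mirror copies
}
for level, gb in usable.items():
    print(f"{level} on {N} x {DRIVE_GB} GB: {gb} GB usable")
```

For a 15 GB database, even the smallest result leaves enormous headroom, which is why capacity drops out of the decision and performance versus rebuild risk remains.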
ASKER CERTIFIED SOLUTION (available to members only).
SOLUTION from Scott Pletcher (United States of America; available to members only).
Thank you, everyone. Sorry for the delay; I got caught up in upgrade/migration day and the subsequent laundry list that inevitably comes regardless of how well prepped you are. I hope I'm not misunderstanding Experts Exchange policy, but since I needed a smattering of educated opinion for my sounding board, and I received it, I thank each of you for your comments and wisdom and am splitting points evenly among you. Your wisdom opened my eyes and helped me look at the situation concretely.

I did indeed choose to go with:
a 2-drive RAID1: OS and SQL backup files,
and a 4-drive RAID10: SQL DB and data files,
all with 2 global hot spares (hot spares can be shared).

But the fact that I chose my initial route isn't because I didn't heed advice; many of you concurred that it is one of the 'sound' configs. As I thought things through to decide my demarcation between performance and safety, I took into account that I DO exercise "nightly backup and data in 3 places", and that I also have the server's DRAC reporting many hardware and HD warnings to me, and I realized I had achieved what I consider a comfortable level of risk.
That being so, I decided my RAID path was to be the one furthest from my current RAID1/RAID5 experience. So, RAID10 it is. But I did so with full knowledge of the other logical paths to take, and I will of course always be watching to see whether, the next time I'm here, I might choose another path.

In talking to the software tech for the SQL DB/program, the common move (if any) would be to put logs, data, and backups on separate arrays. With my DB size, and the relatively light SQL load the program bears, he agreed that not separating the logs wasn't going to have any crucial impact. We did put the DB and data files onto the RAID10 and are sending the backup .bak file to the C: drive, just to keep it on a different array from the RAID10.
I didn't LOVE that idea, but it seems sound, PLUS my network backup grabs that file within an hour anyway... so I have it on a second server that fast. I feel pretty good about this approach.

Thus far, I will say that searching the DB is noticeably faster than on the previous hardware/SQL version/OS. So by hook or by crook, it's always good when the userbase feels a difference (whether by my design or by accident, LOL).

Thank you all again.
Are the log files on the same drive set as the data files? Just be aware, if you do that, that the loss of the drive set means everything is gone and you'll be forced to go back to your last backup.

If necessary, though, you could take differential backups periodically and store them with the full backups to reduce the potential amount of data lost.
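The differential-backup suggestion changes the worst-case data-loss window from the full-backup interval down to the differential interval. A trivial sketch, where the 24-hour and 4-hour intervals are illustrative assumptions rather than figures from the thread:

```python
# Worst-case data loss (hours of work at risk) with a nightly full
# backup, with and without periodic differential backups in between.
def worst_case_loss_hours(diff_interval_hours=None):
    full_interval = 24  # assumed nightly full backup
    if diff_interval_hours is None:
        return full_interval          # full backups only
    return diff_interval_hours        # restore = last full + last differential

print(worst_case_loss_hours())   # nightly full only
print(worst_case_loss_hours(4))  # plus a differential every 4 hours
```

Since logs, data, and the .bak all ride on just two arrays here, a differential every few hours is a cheap way to shrink the window without touching the RAID layout.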