craig007
asked on
Win 2003 & SQL server 2005 disk configuration
I have a new server coming which needs installing. It consists of 10 x 73 GB disks, 2 quad-core CPUs and 2 RAID controllers, each of which can support 2 logical drives with different RAID levels. I want to get the maximum I/O performance and I was wondering how I should configure the disks.
The server needs Win 2003, SQL Server 2005 and IIS 6.0 installing.
I understand all the different types of RAID systems etc., but I am interested in other people's views on how they would allocate the disks and space, and which controller they would use for each RAID set.
On the first controller, you could put 2 x 73 GB drives in RAID 1 for the OS. You can either use the empty space not allocated to the OS as an application volume, or, on the same controller, mirror 2 more 73 GB drives, or use RAID 5 across 3 or more if you need more space for the MSSQL binaries. If you use a separate drive set for MSSQL in this fashion, you can use the leftover space on the first RAID 1 set to create a volume for the IIS installation and any other non-MSSQL applications, which keeps them separate from the OS drive without requiring separate spindles for performance.
Then on the other controller, you can create one volume/drive to hold all the data files for SQL and one for the transaction logs; those can be separate mirrors. If space is a concern, you can use the 2nd controller to make one big RAID 5 set out of the remaining drives not used for OS/applications and format two or more volumes to separate your data and log file(s) however you see fit. Some folks will even split each instance onto its own volume.
ASKER
I was thinking a setup like this would be a good solution. What does everyone else think?
Or would people just use RAID 5, not 10?
Controller 1
RAID 1 (2 physical disks): Tran log + Windows 2003 + IIS
RAID 1 (2 physical disks): Tempdb
Controller 2
Raid 10 (6 physical disks): databases
I would do:
RAID 10 (4 disks) for your DB Data Files
RAID 10 (4 disks) for your DB Log Files + TempDB
RAID 1 (2 disks) for your OS.
As for tempdb, I believe the best place for it would be on one of the 10's, but I don't have a definitive reason for why it should be on one vs. the other, although I know the facts exist.
Unless you are doing a really huge amount of Temp table stuff, I wouldn't bother to put TempDB on a separate volume. I would modify your idea thus:
Controller 1
RAID 1 (2 physical disks): Windows 2003 + IIS
RAID 1 (2 physical disks): Tran log
Controller 2
Raid 10 (6 physical disks): databases
MWVisa's ideas looked pretty good as well.
First, make SURE the disk partitions are properly aligned so that you don't do extra I/O. If you don't, that alone could cost you far more performance than picking the perfect config for the setup itself.
>> or would people just use Raid 5 not 10 <<
No, since you have the space, use 10, it will perform better overall.
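The reason RAID 10 wins for writes can be sketched with the standard write-penalty arithmetic. The per-spindle IOPS figure below is an assumption for illustration, not a measured number for these drives:

```python
SPINDLE_IOPS = 150   # assumed small-random-write IOPS for one 73 GB spindle
DISKS = 6            # the 6-disk set discussed above

# A small random write costs 4 disk I/Os on RAID 5
# (read data, read parity, write data, write parity)...
raid5_write_iops = DISKS * SPINDLE_IOPS / 4
# ...but only 2 disk I/Os on RAID 10 (one write per mirror copy).
raid10_write_iops = DISKS * SPINDLE_IOPS / 2

print(raid5_write_iops)    # -> 225.0
print(raid10_write_iops)   # -> 450.0
```

Reads are much closer between the two levels; RAID 5 mainly makes sense here if capacity matters more than write latency.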
Several good suggestions. For 2005, I would, however, put tempdb on a raid set separate from the dbs and logs (although it could share with the OS / apps, if necessary).
>>For 2005, I would, however, put tempdb on a raid set separate from the dbs and logs (although it could share with the OS / apps, if necessary).
Why? Isn't TempDB used only with #Temp tables?
Basically you want to align at 64K or 1M (1024K), with a 64K logical (allocation unit) size as well; that seems to be the best default overall for performance.
In DISKPART:
select disk <n>
create partition primary align=64    (or align=1024)
If the disks are already formatted, they will have to be reformatted.
But, again, this is VITAL to full performance for SQL Server.
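To see why the legacy 63-sector MBR offset costs extra I/O, here is a quick sketch of the arithmetic (the 64K stripe unit matches the align=64 advice above; the figures are illustrative):

```python
SECTOR = 512            # bytes per sector
STRIPE = 64 * 1024      # 64K RAID stripe unit

def stripes_touched(partition_start_sector, io_offset, io_size=64 * 1024):
    """How many stripe units a single I/O touches on the physical disks."""
    start = partition_start_sector * SECTOR + io_offset
    end = start + io_size - 1
    return end // STRIPE - start // STRIPE + 1

# Legacy MBR layout starts the partition at sector 63 (a 32256-byte offset),
# so every 64K I/O straddles a stripe boundary and hits two stripe units.
print(stripes_touched(63, 0))    # -> 2
# align=64 starts the partition at sector 128 (a 64K offset):
# each 64K I/O then maps onto exactly one stripe unit.
print(stripes_touched(128, 0))   # -> 1
```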
>> Why? Isn't TempDB used only with #Temp tables? <<
True, but SQL itself creates some of those temp tables as work areas. For example, SQL can use temp table/space for queries with GROUP BY and/or ORDER BY (esp.). It also creates temp tables when doing optimizations on large dbs/tables. And so on.
In 2005, it is used even more. For example, to hold pages for snapshots, as needed. There are other new uses as well.
So, tempdb can get quite busy on a SQL box; if you have the drives available, it's best to isolate tempdb.
*Technically* you don't even have to mirror it; if, for example, you had a disk with good pre-failure warnings and you really needed to save *every $* you could. Of course that's still much more viable for a test box than a prod one.
ASKER CERTIFIED SOLUTION
ASKER
I don't understand this:
Raid10 -tempdb (number of datafiles=CPU - sizes must be same more:
This server will have 2 quad-core CPUs.
Also, I was going to align my disks by only 1 block (512 bytes), because the master boot record is only 63 blocks and therefore the server needs to read 128 blocks for every 64 blocks of data.
Only one member has mentioned indexes. What do these normally come under? Are they not the same as the database files?
If you are only using one filegroup (PRIMARY) then indexes and everything else will always be on that filegroup. But if you add more filegroups (perhaps calling one indexes) then you can specify in your index creation statement that they be created on that filegroup.
RAID 10 - tempdb: the number of data files created for the tempdb database, as recommended by MS, needs to be equal to the number of CPUs used by SQL Server on the box; also, the sizes of the files must be the same. Please read more:
Working with tempdb in SQL Server 2005
http://www.microsoft.com/t
About the 64 blocks:
http://searchsqlserver.techtarget.com/tip/0,289483,sid87_gci1262122,00.html
>> Raid 1 + hot spare - OS - IIS (BTW: IIS is not recommended to be installed with SQL Server)
Raid 1 + hot spare - SQL Server binaries <<
The hot spares seem like overkill to me for only 30 G worth of data. Modern drives that warn ahead of time about potential failures, *and* having a cycle to automatically replace ageing servers and/or replace older hard drives every <n> years, would seem to me to be enough.
ASKER
So filegroups are really multiple databases on a system, is this right?
I would not depend on a system where the drive tells you it is going to fail. You should have some sort of notification system alert you when a drive fails. Either through event log monitoring or alerts through the controller software.
That being said, if you have space and an available drive it's a good idea.
The hot spare is meant to bridge the gap between a drive failure and a user being available to replace the failed drive.
If you are familiar with distributed file systems it KINDA works the same. It takes a single item (a database) and allows you to break it up into different files which may vary in purpose. Maybe one for indexes, one for high usage tables, one for the remainder of tables. Heck, you may decide to partition an extremely large table and put each partition on a different filegroup.
>> I would not depend on a system where the drive tells you it is going to fail. You should have some sort of notification system alert you when a drive fails. Either through event log monitoring or alerts through the controller software. <<
I wasn't saying, or even attempting to imply, that a drive that pre-alerts a failure would not also need some type of notification system if it actually failed. It's just that, from what I've been told by our disk people, it's far better to have the drive pre-notify you BEFORE it fails than to deal with an actual failure. But again, that is not to say that I believe actual failures should not be alerted as well.
If the drives are mirrored, and one fails, hopefully the system can still run until the other drive can be replaced. No, that is not an ideal situation, but just how much money is one to put into a small (30-50G) db server just for drive space?
We have literally thousands of Windows servers and have almost never had enough drives in a mirror/RAID go out to lose data on the server.
A RAID 5 array will operate at reduced performance in a failed and recovering state.
As for RAID 1 and 10, I would imagine that the performance hit would only be in the recovering state, but I have no experience with that.
Yes, the drive will operate more slowly during a recovery, but:
1) what are the odds of actually needing a recovery?
2) how much is response time really going to be an issue with only a 30G db and 2 quad-core processors?
craig007:
Please clarify: is it 10 x 73 GB drives inside the server, or what?
What is it: Dell, HP, IBM..? A SAN?
-----------------------
It was about the best configuration; we can go from the best down to good, for example:
Controller 1 (for OS + SQL Server install), let's say:
2 x 73 GB RAID 1 - C: drive - OS + SQL Server install
Controller 2:
4 x 73 GB RAID 5 - data
2 x 73 GB RAID 1 - Trans log
2 x 73 GB RAID 1 - TempDB
ASKER
The server is an X86 Intel model with 2 quad core Xeon processors, 6GB memory and 10x73GB disks with 2 raid controllers. (each raid can accommodate 2 logical drives of different raid types)
So will these cards support RAID 10 / 1+0?
ASKER
Yes 10 drive on server
ASKER
I like this configuration for the 2nd controller, but I am now confused. The spec for these controllers states that each controller can handle 2 logical drives, each with a different RAID system. But does this not need 3 logical drives creating?
2 controller
4x 73 GB Raid 5 - data
2x 73GB RAID 1 - Trans log
2 x 73 GB Raid 1 - TempDB
If you have HW limitations there is nothing you can do about it: do what the spec says, and/or contact the vendor, and/or buy better HW...
ASKER
I never ordered the hardware; it is just what I have inherited.
I don't really understand this.
I have two RAID controllers, each capable of supporting 2 logical drives, each with a different RAID type. Does this mean I can only have a maximum of 4 RAID sets, two on each controller, or what?
ASKER
Can I just confirm that logical drives are the same as logical partitions?
ASKER
I think this is going to be my final installation plan. What do people think?
Controller 1
2 x 73 GB RAID 1 - OS/SQL Server/IIS
2 x 73 GB RAID 1 - TempDB
Controller 2
4 x 73 GB RAID 10 - database
2 x 73 GB RAID 1 - Trans log
How did the interview go, Craig007?