Block and Stripe Size for iSCSI SAN Setup for Virtual Machines

I have a Hyper-V server with virtual machines stored locally on it, and I have two identical Hyper-V server machines. One is not in use because I don't have my SAN built yet. I want to set up failover with a heartbeat, so I need the SAN in place to do it.

I am building the SAN server. It has 16 SAS drives, 300GB each. It will be used to store my virtual server VHDs.

What is the best configuration?

Should I make all 16 drives into one big logical drive as RAID 60 (the server has one 16-port RAID card), or split it up?

What Stripe and Block Size should the Logical Drive Be?

What about Read Cache Policy?
Options Are: No Cache, Read Cache, or Read Ahead

What about Write Cache Policy?
Options Are: Write Back or Write Through
Asked by: lesterszurko
 
Duncan Meyers commented:
Stripe size is determined by the block size multiplied by the number of data drives. So if you use a 64k block and a RAID 1/0 set across your 16 drives (8 data drives), the stripe size will be 8 x 64 = 512kB.

Block size is best determined by your RAID controller and/or storage software. What are you running: Windows Storage Server, FreeNAS, or something else?

Read cache should be set to read ahead. Write cache policy should be determined by whether or not the RAID controller has battery backup for the write cache.
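
To make that arithmetic concrete, here is a minimal sketch in plain Python (not tied to any particular RAID controller) that computes the full-stripe size from the per-disk block size and the number of data-bearing disks. The RAID 60 case assumes two equal RAID 6 spans.

def full_stripe_kb(block_kb, total_disks, raid_level):
    """Full-stripe size in KB for a few common RAID levels."""
    if raid_level == "10":        # half the disks hold mirror copies
        data_disks = total_disks // 2
    elif raid_level == "5":       # one disk's worth of parity
        data_disks = total_disks - 1
    elif raid_level == "6":       # two disks' worth of parity
        data_disks = total_disks - 2
    elif raid_level == "60":      # assumed: two equal RAID 6 spans
        data_disks = total_disks - 4
    else:
        raise ValueError("unhandled RAID level: " + raid_level)
    return block_kb * data_disks

# Duncan's example: 64k block, 16 disks, RAID 1/0 -> 8 x 64 = 512kB
print(full_stripe_kb(64, 16, "10"))   # 512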
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
How much resilience do you want: one or two disk failures per RAID set? And how much storage do you need?

16 drives in a single RAID set will give you the best performance.
 
lesterszurko (Author) commented:
I am doing 16 drives in a single RAID. I'm not sure about disk failure; if there is one, I have two identical brand-new drives I can hot-swap in and rebuild.
 
lesterszurko (Author) commented:
I will use all 16 as a single RAID. There is only one RAID card, so if the card fails and I had two RAID sets, they would both go down anyway.

So what about the other settings: stripe size, block size, read cache, and write cache?

The storage is for virtual machine VHDs.
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
If you use RAID 10 (the best for performance), you can have a single drive fail.

RAID 5: single drive failure.

RAID 6 and RAID 50: two disk failures (for RAID 50, one per span).

Enable read and write cache, split roughly 25% read / 75% write.
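
For reference, a rough sketch of what those resilience figures look like for this 16-disk array, assuming RAID 50/60 are built as two spans of 8 disks. The first figure is the worst case (failures can land anywhere); the second is the best case when failures happen to land in different mirrors or spans.

# level: (guaranteed survivable failures, best case if spread out)
fault_tolerance = {
    "RAID 10": (1, 8),   # one disk from each of the 8 mirror pairs
    "RAID 5":  (1, 1),
    "RAID 6":  (2, 2),
    "RAID 50": (1, 2),   # one disk per RAID 5 span
    "RAID 60": (2, 4),   # two disks per RAID 6 span
}

for level, (worst, best) in fault_tolerance.items():
    print(f"{level}: guaranteed {worst}, up to {best} if spread across mirrors/spans")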
 
Duncan Meyers commented:
Write-back cache will provide better performance, but if the cache is not protected with battery backup, data integrity is at risk if there's a power outage or other failure on your storage server.
 
lesterszurko (Author) commented:
Which do you think gives better read and write performance: RAID 60 or RAID 10?
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
RAID 10 and RAID 60 read performance is similar if not the same.

RAID 10 write performance is better than RAID 60.
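
A back-of-the-envelope way to see why, using the common RAID write-penalty rule of thumb (each host write costs about 2 disk I/Os on RAID 10 and about 6 on RAID 6/60). The 150 IOPS-per-disk figure below is an assumption for 10k/15k SAS drives, not a measurement.

DISKS = 16
IOPS_PER_DISK = 150                      # assumed; adjust for your drives
WRITE_PENALTY = {"RAID 10": 2, "RAID 60": 6}

raw_iops = DISKS * IOPS_PER_DISK         # ~2400 back-end IOPS in total
for level, penalty in WRITE_PENALTY.items():
    print(f"{level}: ~{raw_iops // penalty} random write IOPS "
          f"(reads roughly {raw_iops} for either level)")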
 
lesterszurko (Author) commented:
Last question, more to clarify for myself or anyone else who may come across this post.

Do you think it is worth the storage capacity loss to use RAID 10? Is the write really that much faster, or is the difference minimal compared to RAID 60?

If I go with RAID 10 I can make one logical drive of about 500GB, or if I go with RAID 60 I think I can get around 800GB of space.

I plan on having this server as my iSCSI-connected SAN, using 1Gb LAN cards and a gigabit switch with jumbo frame support. It will hold my virtual server VHDs, and I will have two identical Hyper-V servers: one as the main and the second as failover should the first fail. I will eventually have 4-5 virtual servers on this SAN.
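
As a side note on the network part of that plan, here is a quick sanity check on how much iSCSI bandwidth a single 1GbE link leaves per VM. The ~90% payload efficiency is an assumption (jumbo frames help keep overhead low), not a measured figure.

LINK_GBPS = 1.0
EFFICIENCY = 0.90            # assumed iSCSI payload efficiency with jumbo frames
VMS = 5

usable_mb_s = LINK_GBPS * 1000 / 8 * EFFICIENCY   # ~112 MB/s per link
print(f"Usable iSCSI bandwidth per 1GbE link: ~{usable_mb_s:.0f} MB/s")
print(f"If all {VMS} VMs are busy at once: ~{usable_mb_s / VMS:.0f} MB/s each")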
 
Duncan Meyers commented:
If you go with one big RAID 1/0 set, you get 8 x 300GB = 2400GB available space. If you go with RAID 6, you'll get 14 x 300GB = 4200GB available space. But the write performance of RAID 6 is poor. I usually use RAID 1/0 for small block random I/O or RAID 5 for mixed workloads.
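
To put those capacity figures side by side, and to add RAID 60 (which the question asked about), a small sketch; the RAID 60 line assumes two RAID 6 spans of 8 drives each.

DISKS, SIZE_GB = 16, 300

capacity_gb = {
    "RAID 1/0": (DISKS // 2) * SIZE_GB,   # 8  x 300 = 2400 GB
    "RAID 6":   (DISKS - 2) * SIZE_GB,    # 14 x 300 = 4200 GB
    "RAID 60":  (DISKS - 4) * SIZE_GB,    # 12 x 300 = 3600 GB (assumed 2 spans)
}

for level, gb in capacity_gb.items():
    print(f"{level}: {gb} GB usable (before formatting overhead)")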
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
It depends on your workload, what VMs you require, and how much performance versus storage capacity you need.
 
Duncan Meyers commented:
Thanks! Glad I could help.
Question has a verified solution.
