I have a silly yet very puzzling question.
I've done a reasonable amount of research on the subject, but I can't seem to get my head round it. Since most of that research comes from forums, a lot of the questions are situation-specific. What I really want is a general answer.
Anyway: what is the deal with RAID stripe size and Windows (NTFS) cluster size, and what is the best configuration? Some sites say it depends on your I/O pattern, while others say striping at 64 KB up to 256 KB is old school and that nowadays it's all about striping at 1 MB or even 2 MB.
When the RAID controller stripes the data, as I understand it, it divides the data into chunks and writes them across the disks. But NTFS clusters are 4 KB, so why should 1 MB versus 64 KB even make a difference? It still has to spread everything over 4 KB clusters. Then you have arguments saying the stripe size should be around the size of the files you are going to store on the disks?
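To make my mental model concrete, here's a rough sketch of how I picture a plain RAID 0 layout (my own made-up numbers and function names, not any controller's actual logic, so correct me if the model itself is wrong):

    # My mental model of a plain RAID-0 layout (made-up numbers, not any
    # controller's actual firmware logic). Each strip is a contiguous chunk
    # on one disk; consecutive strips rotate across the disks.
    STRIP_SIZE = 64 * 1024      # 64 KB strip per disk
    NUM_DISKS = 4
    CLUSTER_SIZE = 4 * 1024     # NTFS 4 KB cluster

    def locate(byte_offset):
        """Map a volume byte offset to (disk, offset within that disk)."""
        strip_index = byte_offset // STRIP_SIZE   # which strip overall
        disk = strip_index % NUM_DISKS            # which disk holds it
        return disk, byte_offset % STRIP_SIZE

    # All 16 clusters of one 64 KB strip land on the SAME disk:
    for cluster in range(STRIP_SIZE // CLUSTER_SIZE):
        print(cluster, locate(cluster * CLUSTER_SIZE))

If that picture is right, the 4 KB clusters don't get scattered one by one; sixteen of them sit together inside a single 64 KB strip. Is that the part I'm getting wrong?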
Then you have smaller stripe sizes recommended for RAID 5 because it has to calculate parity? But it still needs to cut everything down to 4 KB clusters? If the file is 10 MB, doesn't it still need the same reads? I mean, is it reading the 10 MB file in 1 MB chunks?
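Here's how I picture the parity side, just XOR over the data strips in one stripe row (simplified on purpose; I'm ignoring how the parity strip rotates between disks):

    from functools import reduce

    # How I picture RAID-5 parity for one stripe row (simplified: the
    # parity rotation across disks is ignored). Parity is the XOR of
    # the data strips.
    def parity(strips):
        """XOR the strips together byte by byte."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

    d0 = bytes([1, 2, 3, 4])
    d1 = bytes([5, 6, 7, 8])
    d2 = bytes([9, 10, 11, 12])
    p = parity([d0, d1, d2])

    # A lost strip can be rebuilt from the survivors plus parity:
    assert parity([d1, d2, p]) == d0

If that's right, then a small write inside one strip means the controller has to read the old data and old parity to recompute the parity (the read-modify-write people talk about), which I guess is why stripe size matters more for RAID 5?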
Is all this to save reads and writes? I can't make sense of it. When it's reading in 1 MB chunks, it still has to read from 4 KB clusters, so why is a 1 MB stripe size beneficial? Same with writes?
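Here is the back-of-envelope arithmetic behind my confusion (my own numbers, please correct me if the model is wrong):

    # Back-of-envelope numbers for the 10 MB example above.
    FILE_SIZE   = 10 * 1024 * 1024   # 10 MB file
    CLUSTER     = 4 * 1024           # NTFS cluster
    STRIP_SMALL = 64 * 1024          # "old school" strip
    STRIP_BIG   = 1024 * 1024        # 1 MB strip

    print("clusters:     ", FILE_SIZE // CLUSTER)      # 2560 either way
    print("64 KB strips: ", FILE_SIZE // STRIP_SMALL)  # 160 strips to touch
    print("1 MB strips:  ", FILE_SIZE // STRIP_BIG)    # 10 strips to touch

The file is 2560 clusters no matter what, so what actually changes between touching 160 strips and touching 10?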
I understand write-back, write-through, read-ahead and all those technologies.
I just can't see how these stripe sizes and cluster sizes fit together. Doesn't it all depend on the file structure? What am I missing?
Can someone explain how all this works?
I'm sure it's simple; I just need a helping hand.