Caching RAID controllers - environments they are most useful for

There has been a debate among a few of my colleagues about what can be gained by having a larger cache on RAID controllers. All kinds of 'tests' can be performed showing that a larger cache helps with this or that specific type of I/O on a test basis. The other half of the question concerns a very large enterprise - hundreds of users, with a wide mix of processing going on: a great deal of word processing/Excel on relatively small files, some line-of-business app with some kind of SQL database, e-mail, the occasional large file transfer. The argument is that in that environment the users will, in general, not see any difference.

I would appreciate feedback on this, especially if it's backed up by real-world stats, as opposed to 'simulation' stats produced by testing software, or theories/logical analysis of why one alternative or another should make a difference without actual real-world numbers to back it up.
lineonecorp asked:
andyalder commented:
Without battery-backed write cache you risk the RAID 5 write hole rearing its ugly head: a power failure occurs at one time, then months later a disk fails and is replaced, and the user experience is "oh crap, that is not what I wrote in my letter" when they read the file long after the two seemingly unrelated failures. But you say you know the theory, so you already know that writes are meant to be atomic in theory but can't always be in the real world.
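To make the write-hole scenario above concrete, here is a toy sketch (not any controller's actual firmware logic) of a 3-disk RAID 5 stripe where power fails between updating a data block and updating its parity. The later rebuild then silently corrupts a block that was never touched:

```python
from functools import reduce

def parity(blocks):
    # RAID 5 parity is the byte-wise XOR of the data blocks in a stripe
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving_blocks, par):
    # A lost block is recovered by XOR-ing the parity with the survivors
    return parity(surviving_blocks + [par])

# A 3-disk stripe: two data blocks plus one parity block
d0, d1 = b"AAAA", b"BBBB"
p = parity([d0, d1])

# Power fails mid-write: d0 is updated on disk, parity is not (the write hole)
d0_new = b"CCCC"
stale_p = p  # parity still describes the old stripe contents

# Months later disk 1 dies; the controller rebuilds it from d0_new + stale parity
rebuilt_d1 = rebuild([d0_new], stale_p)
print(rebuilt_d1 == d1)  # False: d1 comes back corrupted, though it was never written
```

Battery-backed cache closes this hole by letting the controller replay the interrupted data+parity update after power returns, so the stripe never stays half-written.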

Also, just Google for "<controller name> slow battery" and you'll get pages and pages of people bemoaning the horrendous performance of RAID 5 without write cache. Of course you may be using RAID 10; in that case BBWC isn't anywhere near as important.
Aaron Tomosky (SD-WAN Simplified) commented:
The cache on the RAID controller really just helps to smooth out the disk I/O. It also helps with IOPS, since the controller can report that data is written before the disk head ever actually writes it.
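That "report written before it's actually written" behavior can be sketched as a toy model (names and capacities are illustrative, not any vendor's design): writes are acknowledged the moment they land in the battery-backed cache, and the slow disk write happens later, unless the cache is full and the caller has to wait:

```python
import collections

class WriteBackCache:
    """Toy model of a controller's battery-backed write cache."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.dirty = collections.OrderedDict()  # lba -> data awaiting flush
        self.disk = {}                          # the 'platters'

    def write(self, lba, data):
        if len(self.dirty) >= self.capacity:
            self.flush_one()       # cache full: this write now waits on the disk
        self.dirty[lba] = data
        return "ACK"               # acknowledged long before the platter sees it

    def flush_one(self):
        lba, data = self.dirty.popitem(last=False)
        self.disk[lba] = data      # the slow mechanical write happens here

    def flush_all(self):
        while self.dirty:
            self.flush_one()

cache = WriteBackCache(capacity=4)
for lba in range(6):
    cache.write(lba, f"block{lba}")  # first four ACK instantly; the rest wait
cache.flush_all()
```

The model also shows why a bigger cache only helps bursty workloads: once sustained writes exceed the flush rate, every write stalls on `flush_one()` regardless of cache size.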

If you want a real cache, look to ZFS. I'm still a relative noob, but it's definitely the direction I'm going.
lineonecorp (Author) commented:
Please re-read my question. I specifically asked for stats or real-world experience; I was looking for an expert, not a noob. I am not a noob in this area: I know the theory of caching, as do my colleagues, and we have all had experience with caching controllers. I wanted someone who knew even more than us and had more facts to contribute, so as to help resolve the discussion at a level beyond our circle of networking expertise. I gave a very specific case/scenario to deal with. I am not sure why you chose to answer, as nothing you wrote matches up with what I asked, but the upshot is that I will probably have to request attention to get somebody else to answer, or just delete the question and repost. Either way it's a hassle.
ravenpl commented:
In my noob opinion, a disk array controller (DAC) with cache is very handy if:
- the application uses direct I/O (like enterprise databases, but MySQL can do that too)
- the application syncs files/filesystems often (like after every mail queued/saved)
- (note: the two cases above involve bypassing the OS buffer cache somehow)
- hardware RAID is in use and the actual data to write is much larger than what is kept in cache; otherwise the OS would have to use more memory
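The second case in the list above - syncing after every item, as a mail queue does - can be sketched as follows (the path and record format are hypothetical, just to illustrate the pattern). Each `fsync()` is a durability barrier: with no controller write cache, every one of them waits on a physical platter write, while BBWC can acknowledge it from cache:

```python
import os
import tempfile

# Sketch of the "sync after every item queued" pattern, e.g. a mail queue.
path = os.path.join(tempfile.mkdtemp(), "queue.log")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
for i in range(3):
    os.write(fd, f"message {i}\n".encode())
    os.fsync(fd)   # force this record to stable storage before accepting the next
os.close(fd)

with open(path) as f:
    print(f.read().count("message"))  # 3: every record survived its sync point
```

This is also why such workloads benefit from controller cache far more than the "word processing on small files" workload in the original question, which mostly goes through the OS buffer cache and syncs rarely.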
andyalder commented:
I do not think that was worth a C grade; I'll ask CS to take a look.