Write cache and performance hit.
Posted on 2011-09-09
My ML110 G4 server has a built-in RAID controller, and RAID 1 is configured using two Seagate ST3160812AS SATA drives.
Recently I had issues with a bad spot on the array, which I assume was the cause of two BSOD incidents so far.
I ran a RAID verify, which picked up and fixed 107 errors.
In the process I discovered that the server had been running with write cache enabled on the RAID controller for a long time (two years or more). However, a BIOS warning at POST recommends turning write cache off, because the controller apparently cannot handle it without risking data integrity.
After disabling the write cache, server performance has dropped significantly, to the point that the machine is almost unusable: it takes ages to boot and CPU usage goes through the roof.
The other thing to note is that, for a long time (two years or more), one of the Seagate drives has shown up at POST as 3.0Gb/s and the other as 1.5Gb/s.
I have not been able to work out why, or whether this is having a major impact on performance.
Please advise on possible fixes.
How can I improve performance?
Should I just leave the write cache enabled and ignore the warning message at POST?
Should I try to get both drives running at 3.0Gb/s, and if so, how?