How to monitor drive health for a HW RAID 1 array under Linux
Posted on 2004-09-25
I recently set up a Linux server running RH9. The server includes a SATA RAID controller. I set up a RAID 1 array with two drives in the BIOS, and I used 'linux dd' with an aarich driver disk to install the OS.

Everything seems to be working OK, but I would like to know if there is any way to monitor the health of the array. For example, if one of the drives fails, how will I know that it failed? I assume that the computer will continue to function normally (with the RAID array in degraded mode), but I would like the machine to notify me that one of the drives has failed so that I can replace the drive and rebuild the array. It would also be nice if the log contained enough information to tell me which of the two drives had failed.

Does anyone have any experience with this setup? I have included a copy of the output I receive at boot time for the RAID controller.
scsi0 : Vendor: ADAPTEC Model: AAR-ICHx Version: 2.01.016
Vendor: ADAPTEC Model: RAID 1 Rev: 1.0
Type: Direct-Access ANSI SCSI revision: 00
Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
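In the meantime, here is a rough log-watching sketch I was thinking of running from cron until I find something better. The log path and the grep patterns are guesses on my part; I don't actually know what the aarich driver writes to syslog when a member drive fails, so the patterns would need adjusting once a real failure message is seen:

```shell
#!/bin/sh
# Sketch: scan the kernel log for RAID/SCSI error lines and report them.
# The log path and search patterns below are assumptions -- adjust them to
# match whatever the aarich driver actually logs when a drive drops out.
LOGFILE=${1:-/var/log/messages}

# Look for lines mentioning the controller or generic SCSI I/O errors.
ERRORS=$(grep -iE 'aarich|scsi.*error|I/O error' "$LOGFILE" 2>/dev/null)

if [ -n "$ERRORS" ]; then
    echo "Possible RAID/drive problem found in $LOGFILE:"
    echo "$ERRORS"
    # From cron, the same text could be piped to 'mail -s "RAID alert" root'
else
    echo "No RAID/drive errors found in $LOGFILE"
fi
```

Since the boot output attaches the array as a single disk (sda), I suspect the log lines are the only place the individual member drives would show up, which is why I'm leaning on syslog rather than querying the device directly.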
Thanks in advance.