So I think I have a pretty good grasp on what I need to do here, but I thought I'd double check because I'm not as used to dealing with RAID1. I'm dealing with an old Dell PowerEdge 2850, PERC4e/di RAID controller here.
Two virtual disks:
Virtual Disk 0: RAID1 (physical disks 0:0 and 0:1, both 73GB Seagate Cheetah ST373307LC drives).
Virtual Disk 1: RAID5 (physical disks 0:2, 0:3, 0:4, and 0:5).
The OS (Server 2003 R2) is on the RAID1 set; the data is on the RAID5. I noticed the other day that drive 0:1 was blinking amber. Looked that up, and apparently that means the drive has failed (a "predictive failure" would have been blinking amber+green, while this was a straight amber light blinking rapidly).
Dell OMSA shows Virtual Disk 0 with a status of "degraded" and Virtual Disk 1 with a status of "online." So Virtual Disk 0 has issues because that drive failed, but Virtual Disk 1 is happy.
Virtual Disk 0 only shows one disk as a member; there's no "missing" or "failed" entry for the other drive. It just doesn't acknowledge that a second drive in that virtual disk ever existed at all.
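In case it helps anyone comparing notes, the same state can be checked from OMSA's command line. This is just how I've been looking at it; the controller ID of 0 is an assumption from my setup, so adjust for yours:

```shell
# List virtual disks and their states (VD0 shows "Degraded" here, VD1 is fine)
omreport storage vdisk controller=0

# List physical disks on the controller; the failed drive may show as
# "Failed"/"Removed", or (as in my case) simply be absent from the VD's member list
omreport storage pdisk controller=0

# Show only the members of Virtual Disk 0
omreport storage pdisk controller=0 vdisk=0
```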
I went ahead and pulled the drive with the blinking amber light. The server still runs and boots fine (just with the blinking amber light and the degraded status on that one virtual disk). Am I correct to assume that, since this is RAID1, I can just replace the failed drive with one of equal or greater capacity, give it some time, and let it do its thing?
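My working plan, assuming the usual PERC behavior (hedging here, since I haven't done this on a PERC4e/di specifically): insert the replacement in slot 0:1 and the controller should start rebuilding on its own. If it doesn't, my understanding is that OMSA's omconfig can kick the rebuild off manually:

```shell
# Manually start a rebuild on the replacement drive in slot 0:1
# (should only be needed if the controller doesn't rebuild automatically)
omconfig storage pdisk action=rebuild controller=0 pdisk=0:1

# Then watch the physical disks; the new drive's state should show
# "Rebuilding" with a percentage complete
omreport storage pdisk controller=0
```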
We run regular backups, and the important data is on the RAID5 set (which, again, is Virtual Disk 1, the "happy" one). The OS and one important application are the only things on the RAID1 set. I'd still like to avoid being forced into reloading the OS if possible. I'm assuming that just sticking a good drive in that slot will work (most of the documentation I've been able to find seems to indicate as much), but I thought I'd see if anyone else has opinions first.