Are drives in a Failed state a lost cause, or is there hope?
Posted on 2006-06-07
I just had a server go down. It has a RAID 10 array with a hot spare. There was a bad storm and the UPS didn't appear to do its job - or something. Anyway, the controller is an LSI MegaRAID 300 SATA 8xLP. I had 5 x 500 GB Seagate SATA drives: 4 in the RAID 10 and one as a hot spare. Currently, I show:
Port 0: A1: online 426837 MB
Port 1: A1: online: not responding
Port 3: A2: failed: not responding
Port 4: A2: failed: 476837 MB
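(Side note on the sizes above, my own sanity check rather than anything from the controller docs: the 476837 MB figure is just a "500 GB" drive expressed in binary megabytes, since drive makers count decimal bytes and the controller counts 2^20-byte megabytes.)

```python
# Sanity check on the reported capacity: a "500 GB" drive is sold as
# 500 * 10^9 bytes, but the controller reports binary megabytes
# (1 MB = 2^20 = 1048576 bytes), which is why it shows as 476837 MB.
advertised_bytes = 500 * 10**9          # marketing "gigabytes"
binary_mb = advertised_bytes // 2**20   # what the controller displays
print(binary_mb)  # 476837
```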
I haven't done anything tonight - I guess I want tech support to hold my hand... Anyway, would a rescan or something possibly reactivate these drives enough for me to boot the server? It's interesting that these drives show as failed: out of the 15 or so computers here, the only one to have problems during the storm was the server. (I also notice Port 2 is missing from the list - I assume it held an original array member and the hot spare on Port 4 jumped in to take its place...?)
Also, could this be caused by a bad controller? When adding batteries to the UPS, the technician shorted out the UPS and the server went down hard. Ever since then, a cold reboot would pop up a prompt from the LSI RAID controller asking for configuration information; selecting the configuration stored on the disks allowed the server to boot just fine. (A replacement controller has been ordered.) But now I have beeping and the failed drives shown above. Is there any chance these drives can be recovered (or at least one of the drives in A2)? We have made some backups, but haven't been that diligent, so I hope we haven't lost everything... Comments and suggestions welcome.