NVRAID Degraded RAID1 Please Help

Hi, we have a Tyan server with an nForce chipset running Windows 2003 Server. Our RAID1 array became degraded after the server locked up and we had to power cycle it. When I logged in to the server console, it appeared we had two separate hard drives with volumes C:, D: and F:, G:. Under normal circumstances I would remove one of the drives from the RAID array using the MediaShield utility and initiate the rebuild; however, this time MediaShield is not showing any RAID volumes, making it impossible to rebuild this RAID1 array.

Furthermore, it appears Windows is booting from the F: volume. If I remove the disk that has the C: and D: volumes from the server and attempt to boot, I cannot log in to the Windows console. I suspect the system path drive letter changed after the RAID became degraded. My guess is the second disk is in good shape, but since the MediaShield utility is not showing anything, I cannot rebuild this RAID1 array.

Help is appreciated, thanks.
ihostAsked:
gikkelCommented:
Enter the RAID BIOS and delete one of the drives from the array. Reboot. Make sure your single-drive (degraded) array is set to boot. If it doesn't pick up the other drive and start the rebuild, see if it will get into Windows. Check MediaShield (what version are you using?). If unsuccessful, remove the additional drive. If you still get the BSOD, you have some other options...
 
 
gikkelCommented:
NVRAID is horrible; I feel your pain. You can still get into Windows, correct? If the drive that is still marked as C: still lets you get into Windows and everything is OK corruption/file-wise, update your device drivers and restart. If your array is still not showing up, just format the other drive and recreate the array from drive 0. Make sure you have a backup image on separate media...
 
ihostAuthor Commented:
Both physical drives are OK. I connected each drive to a different machine and ran chkdsk. Both appear to be in good condition.

Here is what's happening when I leave only one drive in place.
Drive 0 (C:, D:) - Windows boots up and allows me to log in, but before the desktop is rendered it BSODs and reboots.

Drive 1 (F:, G:) - Windows boots up, but I cannot log in. I suspect the system path and other environment variables are pointing to the wrong volume.

If I leave both disk drives in place, I can boot up and log in. However, there are now four disk volumes and MediaShield is not listing anything at all.
 
DavidPresidentCommented:
Chkdsk is not a hardware diagnostic. All it checks is whether each file is assembled properly. If, for example, only 10% of a disk is partitioned, then chkdsk will only look at that 10%. The other 90% of the drive could be a molten pile of crud, and as long as the partitioned 10% responds to the limited number of READ commands chkdsk sends out, the program will cheerfully tell you that the drive is just fine.


Run real hardware diagnostics to make sure that the disk is sound. Did it occur to you that the server locked up because there was a drive problem and your crappy RAID controller just gave up? That is the most likely scenario.
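The distinction above can be sketched in a few lines: a surface scan reads every block of the raw device and logs the offsets that fail, instead of only the regions a filesystem occupies. This is a hypothetical Python illustration, not what chkdsk or any vendor diagnostic actually does; the path here is a plain file, whereas a real scan would open a raw device (e.g. \\.\PhysicalDrive1 on Windows) with administrator rights and get its size from the OS rather than os.path.getsize:

```python
import os

def surface_scan(device_path, block_size=1024 * 1024):
    """Read every block of a device (or file) sequentially and
    return the byte offsets of blocks that could not be read.
    Unlike a filesystem check, this touches the whole device,
    not just the portion a partition covers."""
    bad_offsets = []
    size = os.path.getsize(device_path)  # for a raw device, query the OS instead
    with open(device_path, "rb", buffering=0) as dev:
        offset = 0
        while offset < size:
            try:
                dev.seek(offset)
                dev.read(min(block_size, size - offset))
            except OSError:
                # A read error here means the medium itself failed,
                # which a file-level check would never report.
                bad_offsets.append(offset)
            offset += block_size
    return bad_offsets
```

An empty result only means every block was readable once; it says nothing about reallocated-sector counts or SMART health, which is why dedicated vendor diagnostics are still the better answer.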

I would put in a fresh, known-good disk and let it rebuild rather than trust that other drive. Think of it this way: if the drive really did fail, you have to replace it anyway. Your system is now one drive failure away from total loss. Is it worth the $100 or so to risk losing everything? The RAID controller doesn't trust that disk drive. Why should you?

 
ihostAuthor Commented:
@dlethe I agree with you 100%; we have many spare HDs in storage, so replacing the drive is not an issue. My concern is the odd behavior of the MediaShield utility: I cannot add or remove disks from the array using it. Currently we are backing up data to an external HD.
 
gikkelCommented:
Check your BIOS settings: is RAID enabled? When you enter the RAID configuration BIOS (F10 when it splashes during boot), what is shown? If you're not up to date with MediaShield/drivers, do that first. Can you post shots of the storage configuration screen? Is "Rebuild Array" available from the management menu (advanced menu)? What about "Synchronize Array"?
I'm assuming there are four volumes because there were originally two, correct (if not, that would just be weird, haha)? Try removing the drives and deleting the array from the RAID BIOS. When you reinsert them, it should find the foreign configuration and allow you to import it.
 
DavidPresidentCommented:
Well, to be blunt, MediaShield and the RAID firmware in the nForce chipset are bottom-of-the-barrel. Google "mediashield" together with "bugs" or "problems" and you get lots of hits from people who just had to blow it all away and start over.

It likely doesn't have ECC-protected metadata, or tolerance for drive roaming, device reattachment, or restoring metadata. One way to get your data back, assuming the hardware really is OK, is to use something like Runtime's Reconstructor (runtime.org).
 
ihostAuthor Commented:
@gikkel NVRAID is enabled in the BIOS. During POST, the RAID array shows both drives and a flashing orange "DEGRADED" status. Both drives are visible to the controller and in the hardware manager. The MediaShield utility only has the "Hot Plug Array" option under system tasks; otherwise it is completely empty.

You are correct: there are four volumes currently because we originally had two, C:\ and D:\. After the array became degraded, Windows suddenly split them into four volumes.

I'm familiar with NVRAID; we have 38 servers just like this one with RAID1 enabled. Like I said, in the past all we did was remove one of the drives from the system and rebuild the array. However, this time MediaShield is not showing anything.
 
gikkelCommented:
You won't need RAID Reconstructor with your RAID1 array, especially since you can currently access your drives and have a backup. Like I said, NVRAID is horrible and you should really look for another solution, preferably hardware RAID. However, if you want to get your array back up and running, I'm here to help :). Let me know how you're progressing.
 
ihostAuthor Commented:
@dlethe The server is running, although in degraded mode. We have daily backups and are in the process of moving everything to a separate external disk just in case. I just want to make sure I have all my bases covered before we commit to rebuilding the entire server from backups, which is a time-consuming process.

We have high-end servers with dedicated 3ware RAID controllers, but to keep costs low we also operate farms of low-cost 1U servers with chipset RAID controllers for specific roles. I understand it is not optimal to use NVRAID; I hope we can just rebuild this array and move on with life.
 
DavidPresidentCommented:
While the Reconstructor is not usually necessary with RAID1, the reason I mentioned it is that it can be used to strip off the metadata and copy the remaining blocks to another disk, which you can then use as a source device. There are more elegant techniques if you know the internals and only need specific data, but Reconstructor has its place as a general tool whenever ANY RAID configuration is critical.

But in your case, I interpreted "I can access my drives" as "the disk is 100% operational, there are no bad blocks, and 100% of the blocks on the disk are accessible." A reconstructor will report unreadable blocks, and you could very well have unreadable blocks on both disks in the RAID1; a reconstructor can deal with this situation. Just copying one of the disks over to another device will result in partial data loss if any blocks are unreadable.
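The "copy what is readable, record what is not" behavior described above can be sketched in Python. This is an illustrative sketch of the general technique (the way tools like GNU ddrescue work), not Reconstructor's actual implementation, and a real run would target raw devices rather than ordinary files:

```python
import os

def rescue_copy(src_path, dst_path, block_size=64 * 1024):
    """Copy a failing disk block by block. Blocks that cannot be
    read are zero-filled in the destination and their offsets are
    recorded, so the copy never silently drops data the way a
    naive whole-disk copy can."""
    unreadable = []
    size = os.path.getsize(src_path)  # for a raw device, query the OS instead
    with open(src_path, "rb", buffering=0) as src, \
         open(dst_path, "wb") as dst:
        offset = 0
        while offset < size:
            length = min(block_size, size - offset)
            try:
                src.seek(offset)
                data = src.read(length)
            except OSError:
                data = b"\x00" * length  # placeholder for lost sectors
                unreadable.append(offset)
            dst.write(data)
            offset += block_size
    return unreadable
```

A non-empty return value is the whole point: you know exactly which regions were lost instead of discovering the holes later, after the source disk is gone.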
 
gikkelCommented:
Last time I checked, RAID Reconstructor doesn't handle RAID 1; the metadata should be strippable by the controller anyway. What really matters is determining which disk was marked as failed and rebuilding from the good disk. With NVRAID degraded arrays, the failure is typically due to the controller, not the HDD. You should really just be able to remove the drive from the array and boot; typical boot issues occur when the driver for the RAID controller is installed but the controller is set to IDE emulation mode.
 
ihostAuthor Commented:
@gikkel I removed one of the drives in the NVRAID BIOS (F10). I managed to boot into safe mode and am currently using MediaShield to rebuild the array. It's at 7%; I'm keeping my fingers crossed.
 
ihostAuthor Commented:
Good suggestion that saved the day. Thanks.
Question has a verified solution.
