bkebbay (United Kingdom)

asked on

Exchange 2007: Select "F2" to accept data loss and to re-enable logical drive(s), Select "F1" to continue with logical drive(s) disabled

Hi Guys,

I was doing a memory upgrade this morning. As soon as I booted up my Exchange server I got the following error message:

779-Slot 3 Drive array - Replacement drive(s) detected OR previously failed drive(s) now appear to be operational:
SCSI Port 1: SCSI IDs 0,1,2,3,4
Logical drive(s) disabled due to possible data loss
Select "F1" to continue with logical drive(s) disabled
Select "F2" to accept data loss and to re-enable logical drive(s)

I have not done anything yet because I am afraid of causing more damage. Please can you advise?

I am unable to mount my Exchange databases because the affected drive is where they are located.

Thanks for helping.
David (United States)

This is one of the most poorly written diagnostic screens I have ever seen, so you need to safely assess what the heck this all means.

You must NOT boot the O/S. Do the F2 thing, stay in the BIOS, and then run a data consistency check, or media validation, or whatever that stupid controller of yours calls an operation where it reads and verifies parity. You do not want to repair it (yet); repairing involves writes. You want a read-only test. If the read-only test comes back clean, or with a small number of errors, then everything is OK.

If it comes back with lots of errors, then you need to get a data recovery firm involved if you want to save the data. Or you can try the RAID recovery product at runtime.org, which will let you clone the RAID onto a scratch drive and assess what is going on. (But you will need a JBOD controller and a scratch drive big enough to hold the reconstructed array.)

No guarantees that the runtime.org product will ultimately work, because you did not supply enough info. Is this RAID 5, and did you lose 2 drives? Or did you lose one drive, run it for a while, and then the drive came back, or maybe another drive failed during reconstruction? Lots of possible scenarios here. If you just don't know what happened, call a professional recovery firm. You don't know whether the hardware is even healthy.
Furthermore, my response is only valid if you have a RAID level that incorporates redundancy. If this is RAID 0, then call in a professional recovery firm and don't waste time with runtime.org; it won't help you.
SOLUTION
Member_2_231077

Remove the new memory and reinstall the old and see if the issue goes away.
Member_2_231077

I doubt it is RAM related; it's just the reboot. Probably a drive fell asleep previously.

As dlethe says, it is a horribly misleading error message. It means that if you press F2 you will lose any data on the drive that it wants to build back into the working array.

Say, for example, you have a 5-disk RAID 5 array, you power it off, take one drive out and put a second-hand one in. Pressing F2 will wipe the data off the second-hand replacement. It won't do anything to the production data.
I would also NOT power off. Any cheesy RAID controller that gives out such a message probably has a volatile event log (if it has one at all). Is there an event log? Post all the information you can provide. Your data depends on it.
It was not an instruction to switch it off, just an example of when the F1/F2 boot message appears. HP/Compaq Smart Array controllers aren't 'cheesy', although some of the English messages get lost in translation.
At this point, I believe I would grab an HP SmartStart CD and reboot the server from the CD: make the CD-ROM drive the first boot device and press F1 at that prompt.

You want to run the HP Array Configuration Utility so you can see which logical drive(s) are disabled, and run the Array Diagnostic Utility to check for any anomalies.

Re-enable the drives from there once you are satisfied.

Once you understand what's going on (or what's not going on), you can decide whether to re-enable the drives from the Array Configuration Utility or press F2 during boot, and attempt to boot from the array.
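
For anyone who prefers a command line over the GUI, similar status information can be pulled with the HP ACU CLI once you can safely reach an OS or the SmartStart CLI environment. A minimal sketch follows (Python wrapping the hpacucli tool); it assumes hpacucli is installed and that the controller is the one reporting Slot 3 in the 779 POST message, so adjust the slot to match your hardware.

# Minimal sketch (assumption: the hpacucli tool is installed and the
# controller is the one reporting "Slot 3" in the 779 POST message).
# It only reads status -- it does not change the array configuration.
import subprocess

CONTROLLER_SLOT = "3"  # adjust to match your controller's slot

def run_acu(args):
    """Run an hpacucli query against the controller and return its output."""
    cmd = ["hpacucli", "ctrl", "slot=" + CONTROLLER_SLOT] + args
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    # Overall controller / cache / battery status
    print(run_acu(["show", "status"]))
    # Every logical drive -- look for anything that is not "OK"
    print(run_acu(["ld", "all", "show", "status"]))
    # Every physical drive behind the controller
    print(run_acu(["pd", "all", "show", "status"]))

Anything that does not report OK against a logical or physical drive is worth understanding before you let Exchange touch that volume again.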


If your array has drive redundancy you will _probably_ be OK, but better safe than sorry.

Do you have backups of your data?

Where did the author mention which RAID controller he is using? It would be nice if he had, but I just don't see any mention of anything specific about the config.
He did not. I would recognize that particular 'diagnostic message' from a mile away.

Even if you don't, the "779-Slot 3 Drive array" should be a dead giveaway.

I am assuming this is probably an HP DL380; these are the main ones that use the 77x Smart Arrays.
ASKER CERTIFIED SOLUTION
That is cool. After it was back and working, did you check the array diagnostic utility to make sure all drives show as online?

You may want to kick off a manual array verification. There is a significant chance of an inconsistency or loss of redundancy (drives possibly out of sync), which could leave you in bad shape later, or if a drive ever fails. It is possible you are out of the woods, but it is also possible you have opened a unique opportunity for silent data corruption that could rear its ugly head in the future.

Unfortunately, it may not be possible to know exactly what or where. It could be anything from no corruption, to a small corruption in free space or in some unimportant text file nobody cares about, to a clobbered 512-byte sector of filesystem metadata or a critical database file that won't be detected until months or years later.


Of course, if you keep and verify frequent backups of the primary database along with separate transaction log backups, this may not really be an issue at all (aside from the possibility of unexpected downtime, especially should this turn out to be an ongoing hardware issue).
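
On the Exchange side, one way to get some assurance after an event like this is to check each database header for a Clean Shutdown state and, during a maintenance window, run a read-only checksum pass with eseutil. A rough Python sketch follows; the eseutil path and database path are examples only and need to be pointed at your own installation.

# Minimal sketch (assumptions: default Exchange 2007 eseutil location and an
# example database path -- substitute your own). /mh only dumps the header;
# /k is a read-only checksum pass, but the database must be dismounted and it
# can take a long time on large files.
import subprocess

ESEUTIL = r"C:\Program Files\Microsoft\Exchange Server\Bin\eseutil.exe"
DATABASES = [
    r"D:\ExchangeDatabases\SG1\Mailbox Database.edb",  # example path only
]

def header_state(edb_path):
    """Dump the database header (eseutil /mh) and return its State line."""
    out = subprocess.run([ESEUTIL, "/mh", edb_path],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "State:" in line:            # e.g. "State: Clean Shutdown"
            return line.strip()
    return "State line not found"

def checksum_pass(edb_path):
    """Run a read-only page checksum (eseutil /k); returns the exit code."""
    return subprocess.run([ESEUTIL, "/k", edb_path]).returncode

if __name__ == "__main__":
    for db in DATABASES:
        print(db, "->", header_state(db))
        # Uncomment for the full (slow) checksum verification:
        # print("checksum exit code:", checksum_pass(db))

If the header reports Dirty Shutdown, or the checksum pass reports errors, that is the point to stop and look at log replay or a restore from backup rather than forcing a mount.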