I have a question I need answering for educational purposes.
HP ML110 G4 server (little tower that I have crammed with RAM for virtual tests).
Now basically it comes with an onboard HP Embedded RAID controller made by Adaptec.
Here is the deal...
I created a mirror between two 74 gig Raptors (let's call them A and B). Both work fine and the Windows Server 2008 installation sees them as one. All good.
First test... I run Windows, simply unplug a drive (A) to simulate a failure, and Windows keeps running perfectly.
I reboot and Windows runs off the other drive (B), as it should. Obviously in a degraded state.
I shut the server down and do the following:
I plug the old drive back in to simulate the return of drive (A) and unplug drive (B).
Windows CANNOT start... Now what I want to know is WHY? And is this by nature even with the most expensive Smart Array P400 or Dell PERC 5/i (and the other big boys)?
I 'BELIEVE' it's because once drive A is removed, the controller drops its record from the array, so when B goes and A returns, A no longer has a place in the array?
Something to do with B becoming the new and final drive?
Once I rebuild the array in the BIOS (driver issues are why I can't do it in Windows), I can fail either of the two again while Windows continues working.
I just can't plug the failed one back in and fail (unplug) the opposite one in the hope Windows will boot up... It starts, then blue screens.
Is this by RAID design, or is it because this controller is just a cheap embedded SATA controller?
When you shut down, switch drives, and restart, then as far as the RAID controller is concerned you have two failed drives.
They will both be considered failed until you rebuild the array.
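A rough way to picture why the returning drive counts as "failed": many RAID implementations (mdadm's event counter is the well-documented example) stamp each member's metadata with a counter that the controller bumps whenever the array's state changes. A drive that missed an update carries stale metadata and is rejected. This is only an illustrative sketch, not real Adaptec firmware logic; the `Drive`, `nvram_event`, and `assemble` names are all made up for the example:

```python
# Toy model (NOT real controller firmware) of stale-metadata rejection
# in a RAID-1 mirror. Each member drive stores an event count; the
# controller's NVRAM remembers the newest count it has written.

from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    event_count: int      # copy of the array event counter at last write
    present: bool = True

def assemble(drives, nvram_event):
    """Boot-time check: only present drives whose metadata matches the
    controller's latest event count are usable array members."""
    return [d for d in drives if d.present and d.event_count == nvram_event]

# Mirror created: both drives and NVRAM agree on event 1
a = Drive("A", event_count=1)
b = Drive("B", event_count=1)
nvram_event = 1

# Drive A is unplugged; the controller keeps writing to B and bumps
# the counter. A's metadata is now frozen at the old value.
a.present = False
nvram_event += 1
b.event_count = nvram_event

# A returns, B is unplugged: A's metadata is stale (1 != 2), so the
# controller has zero trustworthy members and cannot start the array.
a.present = True
b.present = False
print([d.name for d in assemble([a, b], nvram_event)])   # -> []
```

Under this model the controller isn't being cheap, it's being safe: drive A's contents silently diverged from B's the moment it was pulled, so trusting A without a rebuild could hand the OS stale data.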
I 'BELIEVE' that's by RAID design, but I haven't used a wide variety of controllers.
On the other hand, I've never used -any- embedded controllers; they've all been add-in cards.