This is just a basic sanity check because I've never actually done it before.
I'm putting together a server - it's essentially just a high-end consumer PC: a regular ASUS motherboard, an Intel Core i7 CPU, 8 GB of non-ECC DDR3 memory, etc. Nothing crazy, and I'm trying to avoid the cost of buying an enterprise-grade server - BUT I want it to have full, *proper* RAID storage.
Here is my plan for setting up the RAID:
ASUS P8Z68-V LE motherboard
Intel (SRCSATAWB) 8 Ports SATA RAID Controller PCIe x4
(+ whatever cables are necessary)
8x Western Digital RE4 500 GB SATA hard drives
I then plan to configure the drives as a RAID 6 or RAID 10 array.
So here are my questions:
1) How do I configure the RAID array prior to booting the OS?
Enterprise servers typically let you configure the RAID from the BIOS by pressing a key at some point during boot... is it the same for my custom-built server?
Does the motherboard somehow know to offer me the option to enter the RAID controller's configuration utility while it is POSTing? Or is there some other way I will need to configure it?
2) Will Ubuntu Server 11.04 Natty support this RAID controller natively?
Once the RAID is configured I want to install Ubuntu Server 11.04 Natty... will Ubuntu detect the logical disk on its own, or am I going to have a nightmare of a time getting the drivers working before I can even install Ubuntu? If I later booted into an Ubuntu Live CD for repairs, would the Live CD detect the storage without a problem?
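For what it's worth, my rough plan was to boot a Live CD after installing the card and check whether the kernel sees the controller and has bound a driver to it. This is just my guess at how to verify it - the `megaraid` module name is an assumption on my part for this particular card, and obviously the output depends on the hardware:

```shell
# Detection check I'd run from a live CD once the card is installed:
# is the controller visible on the PCI bus, and is a kernel driver bound?
# (the "megaraid" module name is a guess for this particular card)
if command -v lspci >/dev/null 2>&1; then
    lspci -nn | grep -i raid || echo "no RAID-class PCI device found"
    lspci -k | grep -i -A 2 raid || true    # -k lists "Kernel driver in use"
fi
if command -v lsmod >/dev/null 2>&1; then
    lsmod | grep -i megaraid || echo "no megaraid module loaded"
fi
detect_check_done=yes
```

Is that a sensible way to confirm out-of-the-box support, or is there a better test?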
3) Is there a more compatible RAID controller card I should be considering instead?
If Ubuntu won't support this card out of the box, what brand/model of 8-port SATA RAID controller do you recommend that will work out of the box? I've been looking at some LSI MegaRAID cards but haven't decided yet. Please make a recommendation!
4) What software do I need on Ubuntu to monitor and manage the RAID array?
How can I have Ubuntu check the health of the drives, and report and log drive errors, etc.? Is there built-in software to handle this, or is it proprietary to the vendor?
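My current guess is that smartmontools' smartd daemon could poll the individual drives even behind the controller, with entries like these in /etc/smartd.conf - but the `megaraid,N` device type is an assumption, and I haven't verified that this card supports SMART passthrough:

```
# /etc/smartd.conf -- hypothetical entries; the megaraid,N addressing
# assumes the controller passes SMART through (unverified for this card)
/dev/sda -d megaraid,0 -a -m root@localhost
/dev/sda -d megaraid,1 -a -m root@localhost
```

Would something like that work here, or do hardware RAID cards require the vendor's own tools?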
When a failure does happen, how will I know which physical disk needs replacing? Are there LEDs I can set up somehow to indicate the health of each disk, like on an enterprise-grade server, or do I need to set something else up?
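In case it helps frame an answer: my fallback plan for a poor man's health check was to script around smartctl's exit status, which (per its man page) is a bitmask where bit 3 (value 8) means the disk reports FAILING and bits 4/5 mean attributes are at or below threshold. A rough sketch of how I'd interpret it - the status values are passed in directly here instead of actually running smartctl:

```shell
# Interpret smartctl's exit-status bitmask.
# In real use: smartctl -H /dev/sdX >/dev/null; interpret_status $?
interpret_status() {
    status=$1
    if [ $(( status & 8 )) -ne 0 ]; then       # bit 3: disk reports FAILING
        echo "FAILING"
    elif [ $(( status & 48 )) -ne 0 ]; then    # bit 4 or 5: attributes near threshold
        echo "WARNING"
    else
        echo "ok"
    fi
}
interpret_status 0    # prints "ok"
interpret_status 8    # prints "FAILING"
```

That still wouldn't tell me *which* bay to pull a disk from, though, which is the part I'm most unsure about.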
Thank you for your insights!