• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1535

Installing OpenSuse 11.0 on RAID Array

I want to install OpenSuse 11.0 on a RAID array for redundancy. I have four 750GB SATA drives connected to a HighPoint RocketRAID 2300 series RAID controller. HighPoint's web site for this model controller shows that it is supported up to OpenSuse 10.3 (see http://www.highpoint-tech.com/USA/bios_rr2300.htm).

How do I get the OpenSuse installer to recognize the RAID array set up with the RocketRAID BIOS utility? At present the array is created in the BIOS utility, but the OS installer only sees the four individual drives and then suggests the default partitioning scheme on the first drive only. I understand that I probably need to load a RAID driver for Suse, but I don't know how to do this given that I have no floppy drive available on the server, nor do I have a precompiled driver for Suse 11.0.

HighPoint provides a generic open source driver that has been tested with kernel 2.6.25, which is what shipped with OpenSuse 11.0 if I am not mistaken. The README that comes with the open source driver download talks about compiling the driver module for the kernel, but I cannot do this during the installation, can I?
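From what I can tell, the README describes a fairly standard out-of-tree module build, roughly along these lines (the tarball and module names below are only my guess for the 2300 series, so check the README for the real ones):

# unpack the HighPoint open source driver and build it against the running kernel
tar xzf rr232x-linux-src-*.tar.gz
cd rr232x-linux-src-*    # or whatever directory the README says to build from
make                     # needs the kernel headers for the running kernel
insmod rr232x.ko         # load the freshly built module (as root) for a quick test

Obviously that only works on a system that is already installed and running.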

The alternative is to change my strategy entirely: add one more drive, install Suse 11.0 on it, and then compile the RAID driver afterward so I can use the array later. But that means that if my primary Suse drive fails, I have to redo practically everything to get the machine up and running again, which kind of defeats the purpose of using a RAID array, methinks.

1. Does anybody have any ideas about how to get the RAID array working so the installer can use it as a single installation destination?
2. Can I install the operating system on a RAID array, or should I rather keep it on a dedicated drive and only use the RAID array to store the more important data? In the latter case, the way I understand it, after an OS drive failure my data would be OK, but I would have to rebuild the operating system and remount the data array once that's done?
3. Is there a better way to do this?

Any ideas?
Asked by Ravelstaff

1 Solution
 
SysExpert commented:
I would start with a single disk for the OS. There are plenty of cloning and backup tools that can put the backup data on the RAID, so that if the first drive fails, you can easily restore.

Also, if at some point you do figure out how to get the RAID enabled, you can back up the OS and data to external media, wipe everything, reinstall the OS, and restore the data.

Don't be in a rush, and see if you can find a spare small drive just for the OS if needed.


I hope this helps!
 
 
Ravelstaff (Author) commented:
Thanks for the response, SysExpert.

So you are suggesting that I install the OS on a dedicated disk, then compile the RocketRAID driver after the install, get the RAID array up and running, and use it to hold all the critical data. Then I'd use a disk cloning utility to clone the OS drive to a file on an external drive for backup, from which I could recreate the OS drive on a new HDD if the original fails?
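Something like the following is what I have in mind for the cloning part (device and mount-point names are just examples, to be checked with fdisk -l before running anything):

# clone the dedicated OS disk to an image file on an external/backup drive
dd if=/dev/sda of=/mnt/backup/os-disk.img bs=4M conv=noerror,sync

# after replacing a failed OS disk, write the image back to the new drive
dd if=/mnt/backup/os-disk.img of=/dev/sda bs=4M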
 
alextoft commented:
I would not take SysExpert's approach.

The fact that your OS can see the individual drives connected to a raid controller means it's not a proper hardware raid controller at all! A hardware raid completely abstracts away the individual drives, presenting a single logical volume to the OS. Many modern "raid controllers" are nothing of the kind: they are designed for Windows and are simply software raid pretending to be hardware. Don't forget, Windows has no real software raid capability of its own, so these cheap cards let it pretend that it does.

I would simply use the excellent built-in Linux RAID tools. Software raid will actually give you better performance (think about it; how many raid cards have Core 2 Duo chips?).

With 4 discs you can easily create yourself a RAID5 array (i.e. the capacity of 3 discs, with 750GB used for parity). An example configuration could be:

/dev/sda: 256MB, 740GB, 2GB
/dev/sdb: 256MB, 740GB, 2GB
/dev/sdc: 256MB, 740GB, 2GB
/dev/sdd: 256MB, 740GB, 2GB

...all as "Linux RAID" partitions. Then create 3 RAID5 arrays: sda1, sdb1, sdc1 & sdd1 form md0; sda2, sdb2, sdc2 & sdd2 form md1; and so on with md2.
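If you prefer the command line over yast, the mdadm equivalent would be roughly as follows (device names as per the example layout above):

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3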

Format md0 and md1 as ext3 and mount them as /boot and / respectively, then designate md2 as swap. All this is trivially easy using the yast setup wizard.
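Done by hand instead of through yast, those steps would look something like:

mkfs.ext3 /dev/md0      # /boot
mkfs.ext3 /dev/md1      # /
mkswap /dev/md2         # swap
swapon /dev/md2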

Once you're all installed, simply use grub-install to ensure each disc is bootable and you're sorted.
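That is, assuming the standard grub-install tool and the same device names as above:

grub-install /dev/sda
grub-install /dev/sdb
grub-install /dev/sdc
grub-install /dev/sdd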
 
Ravelstaff (Author) commented:
Thanks for the post, alextoft. I agree that your approach makes much more sense. I suspect my problem is basically down to a crappy "RAID" controller; I'll swap that out sometime. In the meantime I tried going the way you suggested and got the install to go through without a hitch.

Thanks for the input.
:o)
