How do I set up Linux software RAID 5?

jasonsfa98
I am building a 1U server that will run SUSE Linux Enterprise 10 SP2. The server has 3 hot-swap drive bays and on-board Intel RAID (fake RAID). Linux does not appear to support these RAID devices, and because of certain add-on cards that I MUST have, I do not have room for a hardware RAID card.

I want to know how best to set up Linux software RAID 5 across 3 separate disks. As I write this I am testing a 3-drive setup in VMware Workstation 6.5 and it appears to work. However, I was warned that I did not create a swap partition and that the "/" (root) file system is on a software RAID array and may not boot. I also worry that the lack of a swap partition may cause performance problems. Speed is a serious issue for us.

My goals are to use the 3 drives for redundancy and to be able to replace them easily if a failure occurs. To achieve that, how should I set up my partitions?
FYI, you cannot rely on software RAID for redundancy.
The onboard Intel Matrix RAID solution is fake RAID, as you have already mentioned.

Both LVM and software RAID give you the ability to grow your partitions dynamically, but redundancy cannot be expected from either solution.

You would need a dedicated hardware RAID card to ensure redundancy.

Commented:
Linux software RAID is not as good as real hardware RAID.
The two key issues are:
1) It may not boot in some failure scenarios.
2) Lower performance (the CPU has to do all the parity calculations, and there is no controller cache).

But if you have no choice I would recommend the following setup:

/dev/md0: a RAID 1 (!) mirror across all three disks. Only about 100 MB, used as /boot. Install the boot sector on md0. This replicates your boot configuration across the three disks.

/dev/md1: RAID 5 with an LVM physical volume on it.

Volume group A: root and swap partitions, plus your desired data partitions.

And don't forget to configure mdadm/smartd to inform you when a failure occurs.

I'm not really familiar with the SuSE setup procedure, but I've tested this layout with CentOS 5.x and it works well.
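
For reference, here's a rough sketch of how that layout could be built by hand with mdadm and LVM (the device names, sizes and volume group name are my assumptions; the installer can do the equivalent for you):

  # Partition 1 on each disk (~100 MB, type 0xFD): RAID 1 mirror for /boot
  mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

  # Partition 2 on each disk (rest of the space): RAID 5 for everything else
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

  # LVM on top of the RAID 5 array: one volume group, then root + swap
  pvcreate /dev/md1
  vgcreate vg0 /dev/md1
  lvcreate -L 2G -n swap vg0
  lvcreate -L 8G -n root vg0
  mkswap /dev/vg0/swap
  mkfs.ext3 /dev/vg0/root

  # Failure notification: put a MAILADDR line in /etc/mdadm.conf,
  # e.g. "MAILADDR root@localhost", then run the monitor as a daemon
  mdadm --monitor --scan --daemonise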

Commented:
WTF?

Firstly, performance is not an issue. How many RAID controller cards have quad-core Xeon chips? Exactly... the CPU overhead is negligible. Make sure EVERYTHING is striped across the discs: /boot (if you use one), / and swap. Red Hat assumes you want LVM by default, although personally I suggest you research whether that suits your environment before you go for it.

You can quite happily boot from a software RAID with a dead disc. Simply ensure that GRUB is installed on all discs in the array (so the BIOS can try each disc as a separate boot device) and that the boot device in the GRUB config is the RAID device name. Redundancy can ABSOLUTELY be expected from a software RAID. I've seen scores of discs, ranging from cheap IDE to $1000+ fibre-channel, fail in software arrays; it has never taken the box down or prevented booting. You simply need to keep a few basic factors in mind when designing your system.
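
To illustrate the replacement process (a sketch only; the array and partition names are assumptions): you mark the dead member failed, remove it, swap the drive, and re-add it, all while the box stays up:

  cat /proc/mdstat                            # check array health
  mdadm --manage /dev/md2 --fail /dev/sda3    # mark the dead member as failed
  mdadm --manage /dev/md2 --remove /dev/sda3  # pull it out of the array
  # ...swap the hot-plug drive and recreate its partition table, then:
  mdadm --manage /dev/md2 --add /dev/sda3     # rebuild onto the new disc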

Author

Commented:
Alex,

That is what I've been reading when it comes to performance, which is why I have gone ahead and accepted it as a solution, assuming I can get it up and running. What I am having problems with is /boot. If I do not create a separate, non-RAID partition and assign it to /boot, I get "No operating system found" after the install reboots to continue the installation.

Is there a trick to getting /boot, swap, and / all set up on RAID?

Commented:
I've never had a problem when using SuSE; in fact it was so easy I had to check a server config to remind myself what I did, as I AutoYaSTed the other boxes...

In the YaST partitioning wizard I'd create /dev/sda1, /dev/sdb1 and /dev/sdc1 as Linux RAID partitions, then the same with partitions 2 and 3... After that, create a RAID 5 volume as md0 using sda1, sdb1 and sdc1, then the same with the 2s for swap (md1) and the 3s for root (md2).

Make sure the GRUB config has the filesystem root configured for each disc. By default it will set up "root (hd0,0)" on the first disc only; the copy of GRUB in each disc's MBR needs adjusting to refer to that disc's own identifier, (hdDISC,PARTITION).

Here's a link which explains the GRUB install process for additional discs:
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch-p2

It's for Debian, but the instructions are generic in this instance.
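
In short, it boils down to something like this in the grub shell for each additional disc (device names are assumptions):

  grub> device (hd0) /dev/sdb    # temporarily map the second disc as hd0
  grub> root (hd0,0)             # its /boot partition
  grub> setup (hd0)              # write GRUB to its MBR
  grub> quit

Repeat with /dev/sdc so any surviving disc can boot the box.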

Commented:
Here is a good benchmark of hardware vs. software RAID. It shows that each solution has advantages, depending on the application:

http://linux.com/news/hardware/servers/8222-benchmarking-hardware-raid-vs-linux-kernel-software-raid

Author

Commented:
Alex,

What exactly do you mean by "Make sure the GRUB config has the filesystem root configured for each disc. By default it will set up "root (hd0,0)" on the first disc only; the copy of GRUB in each disc's MBR needs adjusting to refer to that disc's own identifier, (hdDISC,PARTITION)."

Understand that I am doing this all on fresh installs. I poked around in the partitioner but couldn't find any more options.

I attached a screenshot of what I am doing. The setup pictured does not work. I get "no operating system found" on reboot.
sles-raid.jpg
Commented:
The partition setup looks fine.

What I was getting at was the scenario where a drive fails. Say sda dies: the array will boot just fine, but only if one of the other drives is in the boot order and has GRUB correctly installed on it.

I would check a couple of things:

1. Boot off the install CD, enter repair mode, mount the array (you may need to modprobe raid5) and check /boot/grub/menu.lst for the correct GRUB boot config.

2. Check that /etc/sysconfig/kernel contains the raid5 module in the INITRD_MODULES line. If it doesn't, add it, chroot to the installed system and run mkinitrd.
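
Roughly, from the repair shell that would look like this (a sketch; the device names match the layout discussed above but are assumptions):

  modprobe raid5                    # make sure the kernel has the RAID 5 driver
  mdadm --assemble --scan           # bring up the /dev/md* arrays
  mount /dev/md2 /mnt               # root array
  mount /dev/md0 /mnt/boot          # /boot array
  less /mnt/boot/grub/menu.lst      # 1. verify the boot entries
  grep INITRD_MODULES /mnt/etc/sysconfig/kernel   # 2. should include raid5
  chroot /mnt mkinitrd              # rebuild the initrd if you had to add it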
