Please Help! RAID 5 on Red Hat Linux 6.1

I'm running Red Hat 6.1 with 5 hard disks.

How can I create RAID 5 fault tolerance across the 5 hard disks?

What happens if one hard disk fails?

How do I recover the RAID 5 array by replacing the failed disk with a new one?

Please give me the detailed procedure. Thanks a lot!


 
kevinc73 asked:
pops120297 commented:
Sounds like iharding was answering your question.  He should have submitted it as an answer.  I currently run software RAID 5 on my Slackware system.  Follow iharding's instructions and you should be OK setting up the array and getting it running.  He did leave out a few things I'd like to add, though.

If you actually install 6 drives, you can list a drive as a hot spare in your /etc/raidtab file, and the RAID will automagically start using the spare in the event of a disk failure.
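As a sketch, the spare is declared in the same raiddev section of /etc/raidtab with a spare-disks count and a spare-disk entry (the device name /dev/sdf1 for the sixth drive is an assumption; use whatever your system calls it):

```
# Added to the existing "raiddev /dev/md0" section of /etc/raidtab
nr-spare-disks  1

device          /dev/sdf1    # sixth drive -- assumed name, adjust for your system
spare-disk      0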

If you don't have 6 drives, you can do as iharding suggested: use the raidhotremove command to remove the failed device, then with a little-known SCSI command remove the SCSI device from the chain, unplug the drive, plug in a new one, and use the raidhotadd command to re-add the replacement drive to the array.

The commands to remove and add SCSI devices are as follows:
echo "scsi remove-single-device <host> <channel> <id> <lun>" > /proc/scsi/scsi

echo "scsi add-single-device <host> <channel> <id> <lun>" > /proc/scsi/scsi

e.g., for a drive with SCSI ID 1 on host 0, channel 0, LUN 0:
echo "scsi remove-single-device 0 0 1 0" > /proc/scsi/scsi

Then cat /proc/scsi/scsi to verify that the drive is no longer listed.

I have in fact used this procedure and found that it is much better to replace the drive while the machine is running than to shut down and replace it.  The RAID code seems much more willing to re-add the drive while the machine is still running.
iharding commented:
You will need to create a RAID 5 device (e.g. /dev/md0).  You will also need free space on all 5 disks.

1 - Make sure you have kernel RAID support and the latest raidtools.  If you installed the stock RH 6.1, you should be OK.

2 - Make a partition on each drive; make sure you set the System ID (partition type) for each partition to "fd" (Linux raid autodetect).

3 - Edit /etc/raidtab.  Use the syntax documented in /usr/doc/raidtools-0.90.

4 - "/sbin/mkraid /dev/md0"

5 - "cat /proc/mdstat" to check the status.

6 - When a drive fails, replace the disk, create a partition on the new drive the same as above, and run /sbin/raidhotadd /dev/???.

I would recommend you play with the recovery procedures on a test system before you use it in production... ;-)
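For step 3, an /etc/raidtab for a 5-disk RAID 5 array might look something like the sketch below.  This assumes SCSI partitions /dev/sda1 through /dev/sde1 (adjust for your actual devices) and typical raidtools-0.90 settings; double-check against the docs in /usr/doc/raidtools-0.90 before running mkraid:

```
raiddev /dev/md0
    raid-level              5
    nr-raid-disks           5
    nr-spare-disks          0
    persistent-superblock   1
    parity-algorithm        left-symmetric
    chunk-size              64       # KB per stripe chunk; a common default
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
    device                  /dev/sdc1
    raid-disk               2
    device                  /dev/sdd1
    raid-disk               3
    device                  /dev/sde1
    raid-disk               4
```

After mkraid finishes and /proc/mdstat shows the array as up, you still need a filesystem on it (e.g. mke2fs /dev/md0) before you mount it.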