Please Help! RAID 5 in Red Hat Linux 6.1

I'm running Red Hat 6.1 with 5 hard disks.

How can I create RAID 5 fault tolerance across the 5 hard disks?

What happens if one hard disk fails?

How do I recover the RAID 5 setup by replacing the failed disk with a new one?

Please give me the detailed procedure. Thanks a lot!


 
kevinc73 asked:

iharding commented:
You will need a RAID5 device.  You will also need free space on all 5 disks.

1 - Make sure you have Kernel support and the latest RAIDTOOLS.  If you installed the stock RH6.1, you should be OK.

2 - Create a partition on each drive, and make sure you set the system ID of each partition to "fd" (Linux raid autodetect).

3 - Edit /etc/raidtab. Use the syntax documented in /usr/doc/raidtools-0.90 (see the example raidtab after this list).

4 - "/sbin/mkraid /dev/md0"

5 - "cat /proc/mdstat" to check the status.

6 - When a drive fails, replace the disk, create a partition on the new drive the same way as above, and run /sbin/raidhotadd /dev/???.

I would recommend you play with the recovery procedures on a test system before you use it in production... ;-)
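For reference, here is a minimal sketch of steps 2 through 5 put together. The partition names (/dev/sdb1 through /dev/sdf1), the chunk size and the mount point are assumptions for illustration only, so adjust them to match your actual disks and the raidtools-0.90 documentation:

  # Step 2: on each of the five disks, create one partition and set its type to "fd"
  # (repeat for sdb through sdf; in fdisk: n to create, t to set type fd, w to write)
  fdisk /dev/sdb

  # Step 3: example /etc/raidtab for a five-disk RAID 5 (raidtools 0.90 syntax)
  raiddev /dev/md0
      raid-level              5
      nr-raid-disks           5
      nr-spare-disks          0
      persistent-superblock   1
      parity-algorithm        left-symmetric
      chunk-size              32
      device                  /dev/sdb1
      raid-disk               0
      device                  /dev/sdc1
      raid-disk               1
      device                  /dev/sdd1
      raid-disk               2
      device                  /dev/sde1
      raid-disk               3
      device                  /dev/sdf1
      raid-disk               4

  # Steps 4-5: build the array, check that it is initializing, then put a filesystem on it
  /sbin/mkraid /dev/md0
  cat /proc/mdstat
  mke2fs /dev/md0
  mkdir /mnt/raid
  mount /dev/md0 /mnt/raid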
pops120297 commented:
Sounds like iharding was answering your question.  He should have submitted it as an answer.  I currently run a software RAID 5 on my Slackware system.  Follow iharding's instructions and you should be OK in setting up the RAID and getting it running.  He did leave out a few things I'd like to add on to, though.

If you actually install 6 drives you can list a drive as a hotspare in your /etc/raidtab file and the raid will automagically start using the spare in the event of a disk failure.
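As a sketch, turning that sixth drive into a hot spare only takes a couple of extra lines in the /etc/raidtab from the example above (the device name /dev/sdg1 is an assumption, use whatever your sixth disk really is):

      nr-spare-disks          1
      # ... the five device / raid-disk entries as before ...
      device                  /dev/sdg1
      spare-disk              0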

If you don't have 6 drives, you can do as iharding suggested: use the raidhotremove command to remove the failed device, use a little-known SCSI command to remove the SCSI device from the chain, unplug the drive, plug in a new drive, and use the raidhotadd command to re-add the replacement drive to the array.

The commands to remove and add SCSI devices are as follows:
echo "scsi remove-single-device host channel ID LUN " > /proc/scsi/scsi

echo "scsi add-single-device host channel ID LUN " > /proc/scsi/scsi

For example, for a drive with SCSI ID 1:
echo "scsi remove-single-device 0 0 1 0" > /proc/scsi/scsi

Then cat /proc/scsi/scsi to see that the drive is no longer listed.

I have in fact used this procedure and found that it is much better to replace the drive with the machine running than it is to shut down and replace the drive.  The RAID code seems to be much more willing to re-add the drive while the machine is still running.
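Putting that procedure together, a complete hot replacement might look like the sketch below. The names are assumptions for illustration: /dev/md0 is the array, /dev/sdc1 is the failed member, and the dead drive sits at host 0, channel 0, SCSI ID 2, LUN 0. Check dmesg, /proc/mdstat and /proc/scsi/scsi to find the real values on your system.

  # 1. Drop the failed partition from the array
  /sbin/raidhotremove /dev/md0 /dev/sdc1

  # 2. Detach the dead drive from the SCSI chain and confirm it is gone
  echo "scsi remove-single-device 0 0 2 0" > /proc/scsi/scsi
  cat /proc/scsi/scsi

  # 3. Physically swap the drive, then tell the kernel about the new one
  echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi

  # 4. Create a single type "fd" partition on it and re-add it to the array
  fdisk /dev/sdc
  /sbin/raidhotadd /dev/md0 /dev/sdc1
  cat /proc/mdstat    # watch the rebuild progress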