Set up RAID on CentOS 5 without losing data?

Hi Guys,

I have a question about software RAID on CentOS 5. About a year ago I set up a CentOS 5 server and installed the DirectAdmin web hosting control panel. During the CentOS install I thought I had set up the software RAID function, but it was my first time doing it and I misread part of the instructions. I don't have a hardware RAID controller at the moment, but I can add one if that is the better solution.

The server has four 500 GB hard drives and I would like to have a mirrored RAID across all of them.

Is there a way to do this without losing all the current data on the server (databases, files, emails, ...)?
Would a hardware RAID controller make this easier to achieve?

I'm looking for a solution with minimal server downtime that is as easy as possible.

I've attached an image of the current file system (I think one hard drive is still unallocated). I followed the /tmp /var /usr /home /boot layout suggested in the DirectAdmin documentation, but I obviously messed up the RAID step, as it ended up as a plain partition.

This is the current partitioning of the system.
Thank you for all your help.
 
xNejX Asked:


 
jools Commented:
Sorry, forgot to include the disk partitioning as an example...
diskpart.txt
 
arnold Commented:
For software RAID, you seem to have only one array, /dev/md0.

Software RAID has to be set up in two parts: you first have to create a partition of about 100 MB, which will be /boot, and the remaining space then goes into a second array.

Hardware RAID is better. Note that some SATA/ATA built-in RAID is referred to as "fakeraid", since it relies on the system's CPU for the processing.

http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-raid-config.html
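For reference, you can check what the existing software RAID looks like with the standard mdadm tools (a quick sketch; /dev/md0 is the array name from the partitioning shown above):

cat /proc/mdstat          # lists the active md arrays and their member partitions
mdadm --detail /dev/md0   # shows the level, state and member devices of the array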
 
xNejX (Author) Commented:
If I do this, will I lose all the files on it? How can I make a proper and complete backup of the system?

 
arnold Commented:
Yes, if you do this you will lose the files.

One option, if you have space left, is to create a RAID group on the free space of each drive, optionally put LVM on it, and mount it over /mnt.
Then you go into /boot and run find . | cpio -pdvmu /mnt.
This lets you point grub at the new location for /boot.
You would then repeat the same thing for each group to complete the transition.

You would essentially copy one partition at a time with cpio and then alter /etc/fstab to mount the new device instead of the old one.
For example, instead of
/dev/sda1 /boot
you would have
/dev/md0 /boot
and instead of
/dev/mapper/VolGroup00-LogVol00 /
you would have
/dev/mapper/VolGroup01-LogVol00 /

But this process requires that you have enough free space to go through it. The same applies to the swap partition.

At this point you have 900 GB for /home.

If you can trim/resize it down to 150 GB, that would leave you enough maneuvering room to create the various RAID groups while freeing up the used space.

Note: make sure that you can boot the system with the RAID groups as the mounted partitions before deleting the old partitions to free up space.

http://www.linuxquestions.org/questions/linux-newbie-8/setting-up-software-raid-1-after-install-296557/
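As a rough sketch of one such transition step (assuming /dev/md0 has been created from the freed space as the new RAID 1 device for /boot; the device names are examples only):

mkfs.ext3 /dev/md0           # new filesystem on the RAID 1 array
mount /dev/md0 /mnt
cd /boot
find . | cpio -pdvmu /mnt    # copy preserving paths, permissions and mtimes
umount /mnt
# then edit /etc/fstab so /boot mounts from /dev/md0 instead of /dev/sda1,
# and verify the system still boots before removing the old partition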

 
jools Commented:
This is just offered as a suggestion; arnold's approach is equally valid.

> I don't have a hardware RAID controller at this moment but I can implement it if it is a better solution.
Some would say so. However, software RAID is easy to manage and good enough for most server uses; it really depends on what you are using the server for. Under heavy I/O it may suffer.

If it were me (and I've just reorganised my server), I'd arrange some maintenance downtime, do a full backup to another hard disk, and then reinstall from scratch. I found (at least in my case) that it was easier to reinstall and restore than to move chunks of data about ad hoc.
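As a hedged example of such a backup (the spare-disk mount point /backup and the use of mysqldump/tar are my assumptions; adjust paths to your setup):

mysqldump --all-databases > /backup/all-databases.sql   # dump the databases separately (assumes MySQL credentials are set up)
tar -czpf /backup/full-backup.tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/backup /  # archive the filesystem, skipping virtual filesystems and the backup itself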

You would need to create two RAID arrays: md0 at about 200 MB using partitions on two disks, and then another RAID 1 array using the remaining space. If you want the flexibility of LVM, create a RAID 1 array on the other disks as well and then create an LVM volume group on the RAID array.
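Something like this, as a sketch only (the partition and volume names are placeholders; on a live system you would run this against freed partitions, not the disk you are booted from):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # small RAID 1 for /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # RAID 1 over the remaining space
pvcreate /dev/md1                          # put LVM on top of the big array
vgcreate VolGroup01 /dev/md1
lvcreate -L 20G -n LogVol00 VolGroup01     # e.g. /
lvcreate -L 2G -n LogVol01 VolGroup01      # e.g. swap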

An example layout from one of the systems I use is attached; I only have 2 x 500 GB disks there, but you should see what it's doing.

raidinfo.txt
 
xNejX (Author) Commented:
Thank you for your advice.