InterWorks

asked on

Remove MD RAID

I have a server running CentOS 5 with 3 physical disks. The history of the server is a little weird, but the thing I need to do is get rid of a RAID-1 set on /dev/md0 and leave a good working single partition.

What would my process be? I was thinking of removing the second disk from /dev/md0, and then working on the first disk with a Rescue CD... mainly changing the partition type to 83, modifying fstab, grub.conf, and reinstalling grub.

Will this work and leave me with a properly mountable drive? Is there something else that should be done in the process?

ASKER CERTIFIED SOLUTION
marmata75 (Italy)
arnold
Before going the above suggested route could you describe the setup?
I.e. you have a pair of drives that make up md0 as a RAID 1.
You have one remaining Drive.
Better yet, post the fstab before proceeding to trying anything suggested below!

An option could be to break the md0 RAID while leaving the system running, i.e. remove one of the drives.
If /dev/sda and /dev/sdb make up the md0 device, detaching /dev/sdb from the array might be an option while letting the system continue to operate:
mdadm --manage /dev/md0 --fail /dev/sdb
mdadm --manage /dev/md0 --remove /dev/sdb

Now you can repartition /dev/sdb as you need,
make new filesystems on the /dev/sdb partitions created above,
mount each partition one at a time on /mnt,
then run this on the /dev/md0 partition:
find . -print | cpio -pdvmu /mnt
Once you have the data from /dev/md0 copied to /dev/sdb*,
you can modify the fstab to point the appropriate partitions to the new location.
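The fstab change amounts to swapping the device column for the affected mount points. A hypothetical before/after (the device names, filesystem type, and options here are assumptions, not taken from the actual server):

```
# before: root filesystem on the RAID device
/dev/md0     /     ext3     defaults     1 1

# after: root filesystem directly on the partition
/dev/sdb1    /     ext3     defaults     1 1
```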

If something does not work, you'll still be able to boot the OS in single-user mode and revert the fstab to its prior state.
Avatar of InterWorks
InterWorks

ASKER

I'll give that a whirl in our upcoming maintenance window (1/16), thanks for the speedy response, and I'll close the question based on the outcome.

One final question before then: after running "mdadm --remove /dev/md0", will I still be able to reconstruct the original set from one of my two disks? I was thinking of leaving one of the mirrors untouched while working on the first one.
If you remove /dev/md0, you can rebuild the setup starting from the untouched disk. Once the disk is zeroed (i.e. without even the RAID superblock) you shouldn't have any problem booting from it. After that, you just create a new RAID 1 array with a missing disk (the slot for the disk you're going to use to rebuild the array) and an empty disk:

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

then copy the contents from the existing disk to the RAID, reinstall grub, etc. After all is set, you can re-add the disk to the array to have a fully functional mirror.
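The rebuild sequence above can be sketched roughly as follows. This is a hedged outline, not a tested recipe: the device names /dev/sda1 and /dev/sdb1 and the staging path /olddata are assumptions, the commands need root, and mdadm will destroy data if pointed at the wrong disks.

```shell
# Create a degraded RAID-1 with one slot deliberately left "missing";
# the untouched source disk stays out of the array until the data is copied.
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

mkfs.ext3 /dev/md0        # new filesystem on the degraded array
mount /dev/md0 /mnt
(cd /olddata && find . -print | cpio -pdmu /mnt)   # copy from the existing disk

# After fstab/grub are updated and the system boots from the array,
# add the second disk and let the mirror resync:
mdadm --manage /dev/md0 --add /dev/sda1
cat /proc/mdstat          # watch the resync progress
```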
Most of the steps are outlined here:

https://alioth.debian.org/frs/download.php/668/rootraiddoc.97.html

Cheers,
]\/[arco
Once the array is stopped, you can't remove it; --remove is for failed member disks, not the entire RAID set.

On top of this, I changed the partition types from fd back to 83 to avoid boot-time errors from attempts to reconstruct the array.

Thanks for the help!