Adding an IDE hard drive to existing RH linux 6.2 box

I have an existing RH Linux 6.2 server that I inherited.  The box has two IDE HDDs that are, I assume, mirrored. Here is the drive structure when running a df -h command:

Filesystem            Size  Used Avail Use% Mounted on
/dev/hda1             4.8G  798M  3.8G  17% /
/dev/hdc5             2.1G  1.9G   31M  98% /usr
/dev/hdc1             2.9G  495M  2.3G  18% /var
/dev/md0               23G   20G  2.2G  90% /storage

The best I can determine is that hda1, hdc1, & hdc5 are partitions on one of the HDDs.  The md0 partition, I think, is mirrored to the second HDD.

Here is my plan:
I have purchased a 200GB HDD that I want to install and move all of the md0 (/storage) data to.  Then I want to un-mirror the md0 partition and reclaim that 23GB for use.  The existing two HDDs will remain in place and I will be adding the new 200GB HDD.  The /storage data is mission critical and cannot be deleted.  I will probably have to do this procedure on a weekend so the system will not be interrupted during working hours.  No mirroring will be required on the new 200GB HDD.
Can this be accomplished?  If so, what steps do I need to perform?  I have been using Linux for about a year, but I still consider myself a novice.  I 'cut my teeth' on Windows Server.
Thanks in advance!

My guess, based on the output of df, is that the IDE configuration looks like:

Primary Master    - disk (hda)
Primary Slave     - CD-ROM
Secondary Master  - disk (hdc)

The md0 device is probably a slice on hda & hdc that is either mirrored or concatenated. You'd have to look at /etc/raidtab to find out which.
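To see what to look for, here is a minimal sketch that pulls the raid-level out of a raidtools-style config. The sample file below is illustrative only (modeled on a typical RAID-1 raidtab); on the real box you'd read /etc/raidtab directly:

```shell
# Sketch: inspect a raidtools-era config to see whether md0 is
# mirrored or striped. Sample config is illustrative; on the real
# system substitute /etc/raidtab for /tmp/raidtab.sample.
cat > /tmp/raidtab.sample <<'EOF'
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
device                  /dev/hda6
raid-disk               0
device                  /dev/hdc6
raid-disk               1
EOF

# raid-level 1 means RAID-1 (mirrored); 0 would mean striping/concat.
level=$(awk '/^raid-level/ {print $2}' /tmp/raidtab.sample)
echo "md0 raid level: $level"
```

On a running 2.2 kernel, `cat /proc/mdstat` will also show the active md devices and their member partitions.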

So it appears that you could add the disk as a Secondary Slave and the process would look something like:

1) Configure and connect the drive

2) Create a single partition on the disk with fdisk

3) Create a file system (mke2fs /dev/hdd1)

4) Mount the new file system and transfer the data with:

  # mkdir /mnt/disk
  # mount /dev/hdd1 /mnt/disk
  # cd /storage
  # tar cf - . | (cd /mnt/disk; tar xvpf -)

5) Dismount /storage (umount /storage)

6) Edit /etc/fstab to change the device for /storage to /dev/hdd1 and re-mount it.

7) Rename /etc/raidtab to /etc/no-raidtab and re-boot.

The space used by md0 will then be available for other uses.
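The data copy in step 4 and the fstab edit in step 6 can be rehearsed safely on scratch files before touching the real devices. This sketch uses temporary directories and a one-line stand-in for fstab; the device names (/dev/md0, /dev/hdd1) are the ones assumed in the steps above:

```shell
# Rehearsal of steps 4 and 6 on scratch files -- no real devices touched.
# On the real system /tmp/storage would be /storage and /tmp/disk would
# be the newly mounted /mnt/disk (/dev/hdd1).
mkdir -p /tmp/storage/docs /tmp/disk
echo "important data" > /tmp/storage/docs/file1

# Step 4: the tar pipe preserves permissions and ownership (the p flag).
(cd /tmp/storage && tar cf - .) | (cd /tmp/disk && tar xpf -)

# Step 6: point /storage at the new partition in a copy of fstab.
cat > /tmp/fstab.test <<'EOF'
/dev/md0                /storage                ext2    defaults        0 0
EOF
sed 's|^/dev/md0|/dev/hdd1|' /tmp/fstab.test > /tmp/fstab.new
cat /tmp/fstab.new
```

Once the rehearsal looks right, the same tar pipe and the same one-line fstab change apply to the live paths.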


EaglePressAuthor Commented:
Looks like this is the solution I need.  It will be about a week before I have time to add the new drive (got to wait until next weekend).  I will let you know the outcome.  Thanks.

If you just rename the raidtab file and don't futz with the disk partitions on hda & hdc you'll have a fallback position if problems develop with the new disk. Once you are satisfied that the system runs fine with the larger disk you can recover the space used by the RAID volume. I'd recommend running it at least a week or two before making any further changes.

EaglePressAuthor Commented:
So you are saying...
renaming /etc/raidtab to /etc/no-raidtab will leave the RAID volume intact if the new disk has problems? Also, here is the cat of the /etc/raidtab file if it helps:
[root@eagle /etc]# cat raidtab
# Sample raid-1 configuration
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
nr-spare-disks          0
chunk-size              4
persistent-superblock   1
device                  /dev/hda6
raid-disk               0

device                  /dev/hdc6
raid-disk               1
And the /etc/fstab file:
[root@eagle /etc]# cat fstab
/dev/hda1               /                       ext2    defaults        1 1
/dev/cdrom              /mnt/cdrom              iso9660 noauto,owner,ro 0 0
/dev/hdc5               /usr                    ext2    defaults        1 2
/dev/hdc1               /var                    ext2    defaults        1 2
/dev/fd0                /mnt/floppy             auto    noauto,owner    0 0
none                    /proc                   proc    defaults        0 0
none                    /dev/pts                devpts  gid=5,mode=620  0 0
/dev/hda5               swap                    swap    defaults        0 0
/dev/md0                /storage                ext2    defaults        0 0
Please explain what you mean when you say don't futz with the disk partitions on hda & hdc.  Are you saying I should skip a step until I verify that the new disk functions correctly?

And lastly, what are the procedures for regaining the used RAID space, assuming all is well after the upgrade?

Sorry for all of the questions. I do not want to make any mistakes and end up with deleted data or an unusable system.  BTW, I will perform a data backup before I attempt this.

The raidtab shows that md0 is a RAID 1 (mirrored) device using a partition on hda & hdc, which is pretty much what I thought you'd find.

My reference to not messing with the partitions on hda & hdc until you are certain that the new disk is behaving properly and has run long enough to get past the "infant mortality" stage is to provide a "fall back" position. Should a problem develop you could revert to the existing configuration. Renaming the raidtab should stop the system from attempting to use the RAID volume. The configuration would still exist, it just wouldn't be in use. However, since you'll edit /etc/fstab to mount a different device on /storage it really doesn't matter whether the RAID device comes up or not.

Once you are satisfied with the new disk you can remove the raidtab file and use fdisk to change the partition type on hda6 & hdc6 to "Linux". Then you can create a new file system on those partitions to be able to use that space.  What I'd do, given the shortage of space in /usr, would be to re-layout the system using a partition scheme like:

hda1    100MB       /boot
hda2    5000MB      /
hda3    2000MB      swap
hda5    2000MB      /var
hda6    free space  /home

That layout would allow upgrading to a later version of Red Hat without needing to repartition. Given the hardware I'd accomplish the task like:

1) Convert hda6 to a linux partition, make a file system, and copy everything from hdc5 to hda6. Then edit fstab to mount hda6 as /usr.

2) Completely repartition hdc with the new layout and make file systems and swap space on those partitions.

3) Transfer the contents of hda to the appropriate partitions on hdc.

4) Swap hdc with hda, boot from floppy or rescue mode, and make the new hda bootable.

EaglePressAuthor Commented:
Thanks so much for the advice and guidance.  I will use these notes and give this a try next weekend, Sunday November 8.  I'll let you know how things are going and, if all goes well, accept the answer and give you the points. You have been very helpful.
EaglePressAuthor Commented:
Hello jlevie,
I was able to attempt to install/configure the new HDD this AM.  I began following the procedures that you suggested and ran into a problem.
When using fdisk I attempted to create a partition on the new 200GB HDD.  The drive will be called /dev/hdb as it is cabled as the primary slave.
So I did:
fdisk /dev/hdb
n to create a new partition.
p to make it a primary partition
chose partition 1
default start 1
default end 16530
w to write the partition table

Did a df -h which listed the new drive but with only 7.0GB of available space instead of 200GB.
Have I missed a step some where?
Does the system BIOS support large disks (i.e. it has LBA support)? After creating the partition did you run 'mke2fs /dev/hdb1'?
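A quick arithmetic check of the geometry fdisk reported: assuming the common 255-head, 63-sector CHS translation (an assumption; the actual translation can differ), a default end of 16530 cylinders works out to roughly 136GB, not 7GB, so the partition table itself looks plausible and the df figure is suspect until a file system has actually been made:

```shell
# Size implied by fdisk's reported geometry for /dev/hdb.
# Assumes the common 255-head x 63-sector CHS translation; this is
# pure arithmetic, nothing here touches the disk.
cylinders=16530
sectors_per_cyl=$((255 * 63))                  # 16065 sectors per cylinder
bytes=$((cylinders * sectors_per_cyl * 512))
echo "$bytes bytes (~$((bytes / 1000000000)) GB)"
```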
EaglePressAuthor Commented:
I will have to wait until tomorrow AM to shut down the server and check the BIOS.
I did not attempt to mke2fs.
You do need LBA mode enabled in the system BIOS for a drive like this. Most recent motherboards (in the last few years) automatically enable LBA. Older ones, say 4-5 years old, may have an option to enable LBA for a drive. Much older than that and you'll probably find that the motherboard doesn't support LBA at all.

If the drive is still connected to the system you might try running mke2fs and see if it is able to use the entire partition.
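One safe way to rehearse that check: mke2fs will also run against an ordinary file, and tune2fs -l then reports the block count it used, so you can see the pattern without touching a real disk. The image path below is just for illustration; on the real drive the equivalent is mke2fs /dev/hdb1 followed by tune2fs -l /dev/hdb1:

```shell
# Safe rehearsal: mke2fs works on a plain file (-F forces it), so you
# can see its block-count accounting without touching a real disk.
dd if=/dev/zero of=/tmp/test.img bs=1024 count=8192 2>/dev/null
mke2fs -F -q -b 1024 /tmp/test.img

# tune2fs -l prints the file system's total block count; on the real
# partition, this number tells you whether the whole size was used.
blocks=$(tune2fs -l /tmp/test.img | awk '/^Block count:/ {print $3}')
echo "file system spans $blocks blocks"
```

If the block count on /dev/hdb1 corresponds to only ~7GB, the partition table (or BIOS translation) is the problem rather than the file system.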
EaglePressAuthor Commented:
I went into the BIOS setup this AM and the new 200GB drive was set to AUTO instead of LBA mode.  So I made that change.  However, the BIOS listed the new drive as only 138GB instead of 200GB.  I figured the BIOS large drive limit was 138GB.  I might need to update the BIOS.  Afterwards I ran through the rest of the procedures that you had outlined and all went accordingly.  After I update the BIOS I will probably go through the steps again to see if I can regain the full 200GB.  I will keep you informed.  Thanks.
Yeah, it sounds like you need a BIOS update. But even without that, 138GB is significantly better than the 23GB you have access to now.
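The 138GB figure fits the 28-bit LBA addressing limit of the older ATA specs: 2^28 addressable sectors of 512 bytes each is about 137GB (decimal), which BIOSes commonly display as 137 or 138GB. The arithmetic:

```shell
# The 28-bit LBA limit: 2^28 addressable sectors of 512 bytes each.
sectors=$((1 << 28))
bytes=$((sectors * 512))
echo "$bytes bytes"                      # 137438953472
echo "$((bytes / 1000000000)) GB (decimal)"
```

Seeing the whole 200GB drive requires 48-bit LBA support, which on hardware of this era would mean a BIOS update (and kernel support), consistent with the advice above.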