What is the procedure for replacing 18GB drives with 73GB drives in a JBOD that is mirrored on a UNIX system?

I have a UNIX high-availability system in which a JBOD holds the database for our application (a hospital patient demographics database), and the system is due for an upgrade. The upgrade mandates larger drives; I have six 73GB drives that will replace the six 18GB drives presently in use. I know that I will need to halt the system, break the mirror, transfer the data to another storage device (DDS tape or RAID), pull the drives, replace them, reload the data, and then remirror the six drives. I am also aware that some unmounts/mounts and logical volume management have to happen along the way. The volume group I will have to concern myself with is vgwise; I mention it only so that anyone responding can use that label in their explanation. Thanks much.
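
For what it's worth, the "transfer the data to tape" step I have in mind looks roughly like the lines below; the tape device file and paths are only my guesses for this box, so please correct anything that is off:

       # Backup-to-DDS sketch, run with the database and application down
       # so the files are quiescent.  /dev/rmt/0m is assumed to be the DDS
       # drive here; adjust the include path to whatever must go to tape.
       fbackup -f /dev/rmt/0m -i /wisedb -I /tmp/wise.index

       # Read the index file afterwards to confirm that everything expected
       # actually made it onto the tape before any disks are pulled.
       more /tmp/wise.index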
Cyber-Dude Commented:
The generic steps for a mirror (RAID 1) drive upgrade (a command-level sketch follows the list):

-400. Back up everything!!!!

1. Halt the system.
2. Break the mirror: leave the source disk in place and install the first new drive in the destination slot.
3. Rebuild the mirror onto the new drive.
4. Halt the system once again.
5. Break the mirror again, relocate the new (destination) drive so that it becomes the source, and insert the next new 73GB drive as the destination.
6. Rebuild the mirror and extend the partitions as you see fit.
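
Since you mention a vgwise volume group, I am guessing this is HP-UX LVM with MirrorDisk/UX. If so, the per-disk part of steps 2 through 6 maps roughly onto the commands below for each disk you swap. Treat this strictly as a sketch: lvol1 and the c#t#d# device files are placeholders, I have not seen your layout, and every option should be checked against your own man pages before you run it.

       # Remove the mirror copy that lives on the disk being replaced,
       # then pull that disk out of the volume group.
       lvreduce -m 0 /dev/vgwise/lvol1 /dev/dsk/c11t15d0    # repeat for each lvol on that disk
       vgreduce /dev/vgwise /dev/dsk/c11t15d0

       # Physically swap the 18GB disk for a 73GB disk, then confirm the
       # new disk is seen and has device files.
       ioscan -fnC disk
       insf -e

       # Initialise the new disk for LVM and add it back into the volume group.
       pvcreate -f /dev/rdsk/c11t15d0
       vgextend /dev/vgwise /dev/dsk/c11t15d0

       # Re-establish the mirror of each logical volume onto the new disk;
       # MirrorDisk/UX resynchronises the copy in the background.
       lvextend -m 1 /dev/vgwise/lvol1 /dev/dsk/c11t15d0    # repeat for each lvol

       # Save the updated LVM configuration.
       vgcfgbackup /dev/vgwise

       # Once both halves of a mirror sit on 73GB disks you can grow the
       # logical volumes (lvextend -L) and then the filesystems on top of them.

The point of doing it one mirror half at a time is that a complete copy of the data stays on the old disks until the new disk has finished resynchronising.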

caseybea Commented:

What flavor of Unix are you running?

Are the disks SCSI or IDE?

If you can give a bit more detail about the environment- we can get you some good solid answers.
rhodson (Author) Commented:

Responding to caseybea's request for additional info... The UNIX flavor is HP-UX 11.0, and our setup is two L-Class 9000 servers which are clustered (something I should have added late last night, but only two brain cells were working). The first server runs our package called WISE, which holds the database with the patient demographics. The second server runs our IMS4 package, which handles image importing from the X-ray department. The actual layout of /dev/vgwise/lvol 1 through 9 is as follows:

                     kbytes    used    avail %used Mounted on
                    4096000 2981960 1044420   74% /wisedb/data
                     131072    3183  119903    3% /wisedb/redo1
                     131072   69581   57655   55% /wisedb/system
                     786432  247061  505668   33% /wisedb/temp
                    8388608 6628452 1650154   80% /wisedb/index
                     131072    3183  119903    3% /wisedb/redo2
                     786432   47029  693198    6% /wisedb/rbs
                    8388608 1421582 6751016   17% /wisedb/arch
                    8388608 4586986 3686276   55% /wisedb/backup
                     524288    7992  487155    2% /var/opt/sectra/ha/wise

The disks associated with this volume group are:

   primary     c5t15d0    c11t8d0    c5t4d0     c11t2d0    c5t1d0     c11t0d0
   alternate   c11t15d0   c5t8d0     c11t4d0    c5t2d0     c11t1d0    c5t0d0
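
If the full output of any of the usual LVM status commands would help, I can post it; for example:

       bdf | grep wise                    # the filesystem usage shown above
       vgdisplay -v /dev/vgwise           # volume group details, physical and logical volumes
       lvdisplay -v /dev/vgwise/lvol1     # which physical volumes hold each mirror copy (one per lvol)
       ioscan -fnC disk                   # hardware paths and device files for the disks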

I hope that this is enough. Knowing that my level of competency would make more than a few people howl with laughter is something I hope to improve to the point where I can be on the answering side of these forums, but for now the sum total of my ignorance is a quality appreciated only by those who know more... Thank you for the response.
