rhodson

asked on

What is the procedure for replacing 18GB drives with 73GB drives in a JBOD which is mirrored on a UNIX system?

I have a UNIX High Availability system in which the JBOD holds a database for our application (a hospital patient demographics database), and the system is due for an upgrade.  The upgrade mandates larger drives, of which I have six 73GB drives that will replace the six 18GB drives presently in use.  I know that there is a need to halt the system, break the mirror, transfer the data to another storage device (DDS tape or RAID), and then pull the drives, replace them, reload the data, and re-mirror the six drives.  I am also aware that there are some unmounts/mounts and logical volume management steps that have to happen. The volume group I will have to concern myself with is vgwise; I add this only so that anyone responding can use that label in their explanation.  Thanks much.
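The vgwise naming suggests HP-UX LVM. If that is right and MirrorDisk/UX is providing the mirroring, one mirror copy can usually be replaced at a time without a full data reload, since the surviving copy keeps serving the data. A minimal sketch for one disk, using a hypothetical device name c1t0d0 that you would substitute with your own:

    vgcfgbackup /dev/vgwise                        # save the current LVM configuration first

    # drop the mirror copy held on the old disk (repeat for every lvol mirrored there)
    lvreduce -m 0 /dev/vgwise/lvolora1 /dev/dsk/c1t0d0
    vgreduce /dev/vgwise /dev/dsk/c1t0d0           # remove the old disk from the volume group

    # ...physically swap the 18GB drive for the 73GB drive, then:

    pvcreate -f /dev/rdsk/c1t0d0                   # initialize the new disk as an LVM physical volume
    vgextend /dev/vgwise /dev/dsk/c1t0d0           # add it back into the volume group
    lvextend -m 1 /dev/vgwise/lvolora1 /dev/dsk/c1t0d0   # re-mirror; LVM resynchronizes automatically

    vgcfgbackup /dev/vgwise                        # save the new configuration

Note that the extra capacity of the 73GB drives only becomes usable if the lvols are later grown with lvextend (and the filesystems with extendfs, or fsadm if OnlineJFS is installed), or if new lvols are carved from the free extents.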
Caseybea

What flavor of Unix are you running?

Are the disks SCSI or IDE?

If you can give a bit more detail about the environment, we can get you some good solid answers.
ASKER CERTIFIED SOLUTION
Cyber-Dude
rhodson

ASKER


Responding to Caseybea's request for additional info...  The UNIX flavor is HP-UX 11.0, and our setup is two L-Class HP 9000 servers which are clustered (something that I should have added late last night, but only two brain cells were working).  On the first server is our package called WISE, which holds the database with the patient demographics.  On the second server is our IMS4 package, which handles the image-importing function from the X-ray dept. The actual layout of /dev/vgwise/lvolora1 through lvolora9 (plus lvolwise1) is as follows:

Filesystem              kbytes     used    avail %used Mounted on
/dev/vgwise/lvolora1   4096000  2981960  1044420   74% /wisedb/data
/dev/vgwise/lvolora2    131072     3183   119903    3% /wisedb/redo1
/dev/vgwise/lvolora3    131072    69581    57655   55% /wisedb/system
/dev/vgwise/lvolora4    786432   247061   505668   33% /wisedb/temp
/dev/vgwise/lvolora5   8388608  6628452  1650154   80% /wisedb/index
/dev/vgwise/lvolora6    131072     3183   119903    3% /wisedb/redo2
/dev/vgwise/lvolora7    786432    47029   693198    6% /wisedb/rbs
/dev/vgwise/lvolora8   8388608  1421582  6751016   17% /wisedb/arch
/dev/vgwise/lvolora9   8388608  4586986  3686276   55% /wisedb/backup
/dev/vgwise/lvolwise1   524288     7992   487155    2% /var/opt/sectra/ha/wise
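Before touching anything, it is worth confirming exactly which physical volumes each of those lvols is mirrored across; a quick check, assuming the volume group is active:

    vgdisplay -v /dev/vgwise              # lists every PV and lvol in the group
    lvdisplay -v /dev/vgwise/lvolora1     # "Mirror copies" and the LE-to-PE map show
                                          # which disk holds which copy of this lvol
    vgcfgbackup /dev/vgwise               # keep a restorable copy of the LVM config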

The disks associated with this volume group are:

primary:    c5t15d0   c11t8d0   c5t4d0    c11t2d0   c5t1d0    c11t0d0
alternate:  c11t15d0  c5t8d0    c11t4d0   c5t2d0    c11t1d0   c5t0d0
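One caution about those pairs: on HP-UX, a primary/alternate pairing of the same target ID on two controllers (c5 versus c11) is often a pair of PV links, i.e. two SCSI paths to the same physical disk, rather than two mirror halves; lvdisplay -v will tell you which it is. If they are alternate links, each physical swap has to remove and re-add both paths. A sketch for the first disk, assuming c5t15d0 and c11t15d0 address the same spindle:

    # drop the mirror copies held on this disk (repeat for each lvol that
    # lvdisplay shows mirrored here), then pull both paths out of the group
    lvreduce -m 0 /dev/vgwise/lvolora1 /dev/dsk/c5t15d0
    vgreduce /dev/vgwise /dev/dsk/c11t15d0      # alternate link first
    vgreduce /dev/vgwise /dev/dsk/c5t15d0       # then the primary link

    # ...swap the drive, then re-add both paths and re-mirror:
    pvcreate -f /dev/rdsk/c5t15d0
    vgextend /dev/vgwise /dev/dsk/c5t15d0       # primary link
    vgextend /dev/vgwise /dev/dsk/c11t15d0      # alternate link
    lvextend -m 1 /dev/vgwise/lvolora1 /dev/dsk/c5t15d0

And since this is a ServiceGuard cluster, remember that the second node keeps its own record of vgwise: after the swap, refresh it with vgexport -p -s -m mapfile on the changed node and a matching vgimport on the other, so both nodes agree on the new layout.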

I hope that this is enough.  I know my level of competency would make more than a few people howl with laughter; I hope to raise it to a level that would permit me to be on the answering side of these forums, but for now the sum total of my ignorance is a quality appreciated only by those who know more...  Thank you for the response.