AIX vary off/vary on on cluster node

We need to install a VCS cluster on AIX servers, but first we need to do some pre-testing.

If we map disks from the VIO server to 2 LPARs, on the first node you create the VG, add those disks to it, vary on the VG, and mount the FS.

If you go to the second node, you just need to import the VG from the disks, right? After unmounting and varying off on node 1?
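In other words, roughly this flow (all VG, LV, disk and mount point names below are just examples):

# On node 1
mkvg -y datavg hdisk2                  # create the VG on the mapped disk (mkvg varies it on)
mklv -y datalv -t jfs2 datavg 10       # example size: 10 logical partitions
crfs -v jfs2 -d datalv -m /data -A no  # don't auto-mount a shared FS at boot
mount /data

# Hand over: on node 1
umount /data
varyoffvg datavg

# On node 2
cfgmgr                                 # make sure the mapped disk is visible
importvg -y datavg hdisk2              # disk name as seen on node 2; importvg varies the VG on
mount /data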
mokkan Asked:
 
woolmilkporc Commented:
Not quite right.

Of course you'll have to get rid of the filesystems and logical volumes on the hdisks first.
On node A remove the LVs and FSs and run reducevg, then run importvg -L on all other nodes.
The above makes the disks "free" and, after unmapping them from the VIOS, you can run

rmdev -dl hdiskN

on every node for every disk you want to remove.

Just "cfgmgr" is not sufficient!
 
woolmilkporc Commented:
Right!

Please take care to create the PVIDs on the VIOS already (chdev -dev hdiskX -attr pv=yes) before mapping the disks, instead of letting the PVIDs be created automatically during mkvg/extendvg on the LPAR.

If VCS supports concurrent-mode VGs (I only know PowerHA), then in the future (i.e. once both nodes know the VG) you just need to run "importvg -L ..." on the second node after making changes/adding disks on the first node. No more varyoff required.
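On the VIOS side that would look roughly like this (hdisk10 and vhost0 are just example names):

# On the VIOS, as padmin
chdev -dev hdisk10 -attr pv=yes        # write the PVID before mapping the disk
mkvdev -vdev hdisk10 -vadapter vhost0  # map the disk to the client LPAR

# Later, after adding disks/LVs on the first node, on the second node:
importvg -L datavg hdisk2              # datavg/hdisk2 as in the examples above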
 
mokkan (Author) Commented:
Thank you for the update. Do you need to import the disks every time, or can you just vary on?

 
woolmilkporc Commented:
When a disk has been added to the VG on node A, you must run on node B

importvg -L <vgname> <hdiskname>

where <hdiskname> is the name of a disk already known to node B, not the name of the new disk!

The volume group must be varied on on node A and thus must not be varied on on node B (where the "importvg -L ..." command is run).

Notes:

- Take care to run "cfgmgr" on node B before running "importvg", so that the new disk is actually known to the OS of node B
- It might be necessary to unlock the volume group's disks on node A to run "importvg -L ..." successfully on node B. To unlock, run on node A: "varyonvg -b -u <vgname>". This can be done while the VG is varied on. The VG and its disks will remain active and usable.

- I made a mistake in my first comment: "importvg -L ..." also works on non-concurrent VGs, sorry!

VGs varied on in concurrent mode require a cluster process to monitor VG/LV states and accesses. This process will also make new hdisks and LVs known to all nodes in the cluster, provided that the disks have been made available and known to all nodes via "cfgmgr" before running "extendvg" on the primary node.

"importvg -L ..." is then only needed to make new filesystems known to the respective node. At least under PowerHA the cluster process ("gsclvmd") will not see new filesystems.
 
mokkan (Author) Commented:
Thank you. Once you have imported the VG, the OS should know about the disks, and you can just vary on the VG and mount the FSs, right? You don't need to import every time?
 
woolmilkporc Commented:
If nothing has changed (no new disks, LVs or filesystems!) you can just varyoff the VG on node A and varyon on node B. No import required, just mounting the FSs.
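I.e. just (example names again):

# Node A
umount /data
varyoffvg datavg

# Node B
varyonvg datavg
mount /data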
 
mokkan (Author) Commented:
Awesome, thank you. If you want to remove an hdisk from both nodes, you can unmap it from the VIOS, and if you run cfgmgr it should go away, right?