sminfo asked:
Concurrent / Enhanced-concurrent / Non-concurrent on PowerHA.

OK.. I have read about the differences, but I'm not sure how they are used in a PowerHA cluster. In our case we have 3 nodes (two sites replicating data with GLVM), and we have set up all shared VGs like this:
(node3):[root] /usr/local/bin -> lsvg vg101
VOLUME GROUP:       vg101                    VG IDENTIFIER:  0009eaca0000d40000000142bd84ed5a
VG STATE:           active                   PP SIZE:        8 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      2542 (20336 megabytes)
MAX LVs:            256                      FREE PPs:       1754 (14032 megabytes)
LVs:                8                        USED PPs:       788 (6304 megabytes)
OPEN LVs:           7                        QUORUM:         1 (Disabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Non-Concurrent
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off

If you look at the Concurrent field it says the VG is enhanced-concurrent capable, but if you look at the VG Mode field it says Non-Concurrent... question: is it concurrent or not? ;)

Other VGs on the same cluster do not show the fields above, as you can see:

(node3):[root] /usr/local/bin -> lsvg vg102
VOLUME GROUP:       vg102                    VG IDENTIFIER:  0009eaca0000d40000000142bd85241c
VG STATE:           active                   PP SIZE:        8 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      2542 (20336 megabytes)
MAX LVs:            256                      FREE PPs:       992 (7936 megabytes)
LVs:                6                        USED PPs:       1550 (12400 megabytes)
OPEN LVs:           5                        QUORUM:         1 (Disabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no

So, I'm confused about when I have to use concurrent / non-concurrent / enhanced-concurrent VGs, and how I can check this on our VGs..
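A minimal ksh sketch for checking this on all currently varied-on VGs (assuming the shared VGs are varied on where you run it; lsvg -o lists only varied-on VGs):

for vg in $(lsvg -o); do
    echo "== $vg =="
    lsvg $vg | grep -i concurrent    # shows the "Concurrent:", "Auto-Concurrent:" and "VG Mode:" lines
done                                 # no output for a VG means it is a plain, non-concurrent-capable VG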

Thanks..
SOLUTION by woolmilkporc (Germany)
sminfo (ASKER):
Hi wmp... nice to meet you too!! ;)

I'm still confused with all this and our environment. Let's start with your comments:

-- OK, so there are two types: enhanced-concurrent and non-concurrent (a simple, normal VG running on only one node).

FOR Enhanced-concurrent:
-- Enhanced-concurrent can be (VG Mode field of the lsvg command):
1- VG Mode:            Non-Concurrent
2- VG Mode:            Concurrent

In a cluster of two nodes:
-- Enhanced-concurrent in Concurrent mode is when the VG is varied on (varyonvg) on both nodes (one active R/W and the other passive RO)... so lspv on both nodes shows something like this:
hdisk2          0009eacabd8523c6                    vg102           concurrent
hdisk3          0009eacasdfghertf4                    vg103           concurrent

-- Enhanced-concurrent in Non-Concurrent mode: I don't really know what that means, because the VG says it is enhanced-concurrent but at the same time it is not.
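For illustration, the same enhanced-concurrent-capable VG ends up in one VG Mode or the other depending on how it is varied on (a sketch using vg101; varyonvg -c requires the cluster's group services to be running):

varyoffvg vg101      # the VG must be varied off before switching the mode
varyonvg vg101       # normal varyon: lsvg then shows "VG Mode: Non-Concurrent"
varyoffvg vg101
varyonvg -c vg101    # concurrent varyon: lsvg then shows "VG Mode: Concurrent"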

FOR non-concurrent:
-- I think it's a normal VG, like in this output:
(aix1):[root] / -> lsvg tsm01vg
VOLUME GROUP:       tsm01vg                  VG IDENTIFIER:  0000e4720000d9000000011c9bb30f1a
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      11764 (188224 megabytes)
MAX LVs:            256                      FREE PPs:       2522 (40352 megabytes)
LVs:                180                      USED PPs:       9242 (147872 megabytes)
OPEN LVs:           111                      QUORUM:         2 (Enabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      non-relocatable
MIRROR POOL STRICT: super
PV RESTRICTION:     none                     INFINITE RETRY: no

But our cluster is built on 3 nodes and two sites replicating data with GLVM.
In a PMR, IBM told us that in our environment we MUST use the VGs with GLVM as enhanced-concurrent. So I changed all VGs using smitty cspoc / Storage / Volume Groups / Enable a Volume Group for Fast Disk Takeover or Concurrent Access. Before doing this, the output of all VGs looked like lsvg vg102 in the original question, but after the change to EC the output of all VGs looks like lsvg vg101 in the original question.
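For reference, the low-level AIX command for that conversion is chvg -C; in a cluster the C-SPOC menu is the right path because it keeps all nodes in sync, so this is only a single-node sketch, with vg101 as an example:

chvg -C vg101                       # mark vg101 as Enhanced Concurrent Capable
lsvg vg101 | grep -i concurrent     # should now report "Concurrent: Enhanced-Capable"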

So, I'm not sure if they are on EC or not...;(

So, what do you think of all this?
SOLUTION
sminfo (ASKER):
I think I have something wrong in our cluster config... I have set all VGs to EC, but when I stop/start the cluster it does not use -c when it runs varyonvg on the VGs..

Not sure if I must export/import all VGs on all nodes.. and start cluster again...

Let me check the cluster logs.. maybe it's complaining about the '-c' when running varyonvg.
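A quick way to see which varyonvg flags the event scripts actually used (a sketch; the hacmp.out location differs by release, commonly /var/hacmp/log/hacmp.out on current PowerHA and /tmp/hacmp.out on older levels):

grep varyonvg /var/hacmp/log/hacmp.out | tail -20    # look for whether -c was passed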
sminfo (ASKER):
wmp.. I have read about 'Fast Disk Takeover'.. what is it? Is it what PowerHA uses to varyonvg/varyoffvg when a node goes down?

With concurrent VGs the group services on all nodes are always aware of possible changes in the VG structure (changes in LV size, new LVs - but not new filesystems!), and there is no longer any need to break reserves (because there are none).

That's what they call Lazy Update or Fast Disk Takeover, though.

You can of course (on the passive nodes) run "importvg -L ..." to make the change known there. Best run "varyonvg -b -u ..." beforehand on the node which has the VGs varied on to release possible locks. On the active node "importvg -L ..." is not possible. You must deactivate the cluster and run a normal importvg there.

A preliminary exportvg should not be required, as far as I know.
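Put together as commands, that update procedure looks like this (a sketch; vg101 is an example and hdiskX stands for one of the VG's member disks on the passive node):

# On the node that currently has the VG varied on (active):
varyonvg -b -u vg101          # release possible locks/reserves, as described above
# On each passive node:
importvg -L vg101 hdiskX      # re-read the changed VG definition from one of its disks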
sminfo (ASKER):
mm.. why do you say "but not new filesystems!"? How can I synchronize new filesystems in PowerHA?

There's no doubt I have problems here.. the cluster varies on the EC VGs, but with VG Mode: Non-Concurrent. But if I manually varyoffvg and then varyonvg -c, it shows:

(node3):[root] /usr/local/bin -> lspv
hdisk1          000e052abdc66ea3                    vg101           concurrent
remhdisk1       0009eacabd84ed0b                    vg101           concurrent

and lsvg output:
(node3):[root] /usr/local/bin -> lsvg vg101
VOLUME GROUP:       vg101                    VG IDENTIFIER:  0009eaca0000d40000000142bd84ed5a
VG STATE:           active                   PP SIZE:        8 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      2542 (20336 megabytes)
MAX LVs:            256                      FREE PPs:       1754 (14032 megabytes)
LVs:                8                        USED PPs:       788 (6304 megabytes)
OPEN LVs:           0                        QUORUM:         1 (Disabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Concurrent
Node ID:            3                        Active Nodes:
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no

The active node is thus OK; what remains is importvg -L on the passive nodes, followed by varyonvg -c -p ...

Once you have the VGs on the active node in active/read-write/concurrent and on the passive nodes in active/passive-only/concurrent a following cluster restart should vary on the VGs that way.
If it doesn't we will indeed have to go through the logs (hacmp.out and maybe clstrmgr.debug).
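To confirm the state after such a restart, something like this on each node (a sketch, with vg101 as the example VG):

lspv | grep -i concurrent           # the EC VGs' disks should be listed as "concurrent"
lsvg vg101 | grep -i concurrent     # expect "Enhanced-Capable" and "VG Mode: Concurrent"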
ASKER CERTIFIED SOLUTION
sminfo (ASKER):
you're the best!!

I understand it now.. and my problem is that when the cluster tries to varyonvg the VGs, the remote disks are not available and it fails, so after some retries it runs varyonvg with option -n (don't synchronize the data between the disks)... I don't know if you understand me.. remember all VGs have two disks, one local and the other remote (RPVs).
I'll open a PMR with IBM regarding this.
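For completeness, once the remote RPVs are reachable again, the stale copies left behind by such a non-syncing varyon can be checked and resynchronized like this (a sketch, with vg101 as the example VG):

lsvg vg101 | grep -i stale     # STALE PVs / STALE PPs counters
lsvg -p vg101                  # per-PV state; a "missing" PV is the unreachable RPV
varyonvg vg101                 # re-running varyonvg on an active VG brings a returned PV back online
syncvg -v vg101                # then resynchronize the stale partitions for the whole VG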

Thanks tons!