Solved

Concurrent / Enhanced-concurrent / Non-concurrent on PowerHA

Posted on 2014-02-06
10
2,996 Views
Last Modified: 2014-02-07
OK.. I have read about the differences, but I'm not sure how they are used in a PowerHA cluster. In our case, we have 3 nodes (two sites replicating data with GLVM) and we have set up all shared VGs like this:
(node3):[root] /usr/local/bin -> lsvg vg101
VOLUME GROUP:       vg101                    VG IDENTIFIER:  0009eaca0000d40000000142bd84ed5a
VG STATE:           active                   PP SIZE:        8 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      2542 (20336 megabytes)
MAX LVs:            256                      FREE PPs:       1754 (14032 megabytes)
LVs:                8                        USED PPs:       788 (6304 megabytes)
OPEN LVs:           7                        QUORUM:         1 (Disabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Non-Concurrent
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off


If you look at the "Concurrent" field it says the VG is Enhanced-Capable, but if you look at the "VG Mode" field it says Non-Concurrent... question: is it concurrent or not? ;)

Other VGs on the same cluster do not show the fields above, as you can see:

(node3):[root] /usr/local/bin -> lsvg vg102
VOLUME GROUP:       vg102                    VG IDENTIFIER:  0009eaca0000d40000000142bd85241c
VG STATE:           active                   PP SIZE:        8 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      2542 (20336 megabytes)
MAX LVs:            256                      FREE PPs:       992 (7936 megabytes)
LVs:                6                        USED PPs:       1550 (12400 megabytes)
OPEN LVs:           5                        QUORUM:         1 (Disabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no


So, I'm confused about when to use concurrent / non-concurrent / enhanced-concurrent VGs and how to check which mode ours are in..

Thanks..
Question by:sminfo
10 Comments
 
LVL 68

Assisted Solution

by:woolmilkporc
woolmilkporc earned 500 total points
ID: 39839532
OK, here we go again. Nice to meet you!

The "Concurrent:" field indicates whether a VG has been defined as concurrent-capable, meaning it can be varied on in concurrent mode.

The "VG Mode:" field indicates whether a concurrent-capable VG has actually been varied on in concurrent mode.

"Enhanced-Capable" is the only available "concurrent" capability. The old "Capable" thing has long gone (along with the 32-bit kernel).

With PowerHA you should create your VGs Enhanced-Concurrent capable. This setting can speed up failover processing quite a lot.
PowerHA will vary on such VGs in Concurrent mode if at all possible.

Was your cluster active the moment you took the above snapshots?
If so, then it's a bit strange that a concurrent-capable VG has been varied on non-concurrent by the cluster group services.

Or did you vary it on by hand? To vary on a VG in concurrent mode you must use the "-c" flag of "varyonvg", and PowerHA (Group Services) must be up and running.

If PowerHA varied on the VG non-concurrent you should check the relevant logs (cspoc.log, clstrmgr.debug, clutils.log and errpt).

By the way, lsvg doesn't show the related fields for VGs not configured as concurrent capable (your vg102 for example).

"Auto-Concurrent" has no meaning under PowerHA/HACMP, as far as I know. It's a bit confusing, but as I said, PowerHA/HACMP will vary on a concurrent-capable VG in concurrent mode, if possible, regardless of the "Auto-Concurrent" setting.
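The two fields can be checked together. Here is a small sketch (my addition, not part of the original answer) that reads `lsvg` output on stdin and summarizes the two concurrency fields; on an AIX node you would pipe `lsvg vg101` into it. The field names match the listings above.

```shell
# check_vg_mode: read the output of `lsvg <vg>` on stdin and summarize
# the two concurrency fields. On an AIX node you would run:
#     lsvg vg101 | check_vg_mode
check_vg_mode() {
    awk '
        /^Concurrent:/ { capable = $2 }   # "Enhanced-Capable" if EC-capable
        /^VG Mode:/    { mode = $3 }      # "Concurrent" or "Non-Concurrent"
        END {
            if (capable == "")             print "not concurrent-capable"
            else if (mode == "Concurrent") print "enhanced-capable, varied on CONCURRENT"
            else                           print "enhanced-capable, varied on NON-concurrent"
        }'
}
```

For your vg101 listing above this reports "enhanced-capable, varied on NON-concurrent", which is exactly the combination you are asking about.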

wmp
 

Author Comment

by:sminfo
ID: 39841417
Hi wmp... nice to meet you too!! ;)

I'm still confused by all this and our environment. Let's start with your comments:

-- OK, so there are two types: enhanced-concurrent and non-concurrent (a simple, normal VG running on only one node).

FOR Enhanced-concurrent:
-- Enhanced-concurrent can be (per the "VG Mode" field of the lsvg command):
1- VG Mode:            Non-Concurrent
2- VG Mode:            Concurrent


In a cluster of two nodes:
-- Enhanced-concurrent in Concurrent mode is when the VG is varied on on both nodes (one active R/W and the other passive RO)... so, lspv on both nodes shows something like this:
hdisk2          0009eacabd8523c6                    vg102           concurrent
hdisk3          0009eacasdfghertf4                    vg103           concurrent


-- Enhanced-concurrent in Non-Concurrent mode: I don't really know what this means, because the VG says enhanced-concurrent but at the same time it is not varied on concurrently.

FOR non-concurrent:
-- I think it's a normal VG, like in this output:
(aix1):[root] / -> lsvg tsm01vg
VOLUME GROUP:       tsm01vg                  VG IDENTIFIER:  0000e4720000d9000000011c9bb30f1a
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      11764 (188224 megabytes)
MAX LVs:            256                      FREE PPs:       2522 (40352 megabytes)
LVs:                180                      USED PPs:       9242 (147872 megabytes)
OPEN LVs:           111                      QUORUM:         2 (Enabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      non-relocatable
MIRROR POOL STRICT: super
PV RESTRICTION:     none                     INFINITE RETRY: no


But our cluster is built on 3 nodes and two sites replicating data with GLVM.
In a PMR, IBM told us that in our environment we MUST make the GLVM VGs enhanced-concurrent. So I changed them using smitty cspoc / Storage / Volume Group / Enable a Volume Group for Fast Disk Takeover or Concurrent Access for all VGs. Before doing this, the output of all VGs was like lsvg vg102 in the original question, but after the change to EC, the output of all VGs is like lsvg vg101 in the original question.

So, I'm not sure if they are on EC or not...;(

So, what do you think of all this?
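For reference, the command-line counterpart of that C-SPOC menu should be `chvg -C` (converting the VG to enhanced-concurrent capable); this is only a sketch under that assumption, with a DRYRUN switch so the command is printed instead of executed:

```shell
# Sketch: make a VG enhanced-concurrent capable from the command line
# (assumed equivalent of the C-SPOC menu above). With DRYRUN=1 the
# command is printed rather than run.
run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

make_ec_capable() { run chvg -C "$1"; }   # e.g. make_ec_capable vg101
```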
 
LVL 68

Assisted Solution

by:woolmilkporc
woolmilkporc earned 500 total points
ID: 39841434
That's quite normal.

Just making a VG enhanced-concurrent capable doesn't change its varyon mode.

You must vary off the VG to then vary it on in concurrent mode (the -c flag of varyonvg).

On the non-active nodes you can use the (quasi undocumented) varyonvg flag -p in addition to -c to vary on the VG in concurrent mode but in passive state.

Or take down the whole cluster and start it up again. The cluster manager should notice the change in VG capabilities and vary them on appropriately on all nodes.
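In script form, the sequence described above might look like this (a sketch only; varyoffvg/varyonvg are AIX commands, so a DRYRUN switch is used to preview the commands instead of running them):

```shell
# Re-varyon sequence: the VG must be varied off, then varied on with -c.
# With DRYRUN=1 the commands are printed instead of executed.
run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# On the active node: vary off, then vary on in concurrent mode
reactivate_concurrent() {
    run varyoffvg "$1"
    run varyonvg -c "$1"
}

# On each passive node: concurrent mode, but passive state
passive_concurrent() {
    run varyonvg -c -p "$1"
}
```

E.g. `reactivate_concurrent vg101` on the active node, then `passive_concurrent vg101` on the others.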

Author Comment

by:sminfo
ID: 39841458
I think I have something wrong in our cluster config... I have set all VGs to EC, but when I stop/start the cluster it does not use -c when varying on the VGs..

Not sure if I must export/import all VGs on all nodes.. and start the cluster again...

Let me check the cluster logs.. maybe it's complaining about the '-c' on varyonvg
 

Author Comment

by:sminfo
ID: 39841462
wmp.. I have read about 'Fast Disk Takeover'.. what is it? Is it what PowerHA uses to varyonvg/varyoffvg when a node is down?
 
LVL 68

Expert Comment

by:woolmilkporc
ID: 39841502
With concurrent VGs the group services on all nodes are always aware of possible changes in the VG structure (changes in LV size, new LVs - but not new filesystems!) and there is also no need anymore to break reserves (because there are none).

By the way, that's what they call Lazy Update or Fast Disk Takeover.

You can of course (on the passive nodes) run "importvg -L ..." to make the change known there. Best run "varyonvg -b -u ..." beforehand on the node which has the VGs varied on to release possible locks. On the active node "importvg -L ..." is not possible. You must deactivate the cluster and run a normal importvg there.

A preliminary exportvg should not be required, as far as I know.
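As a sketch, the propagation steps above could be scripted like this (again with a DRYRUN preview, since these are AIX commands; `hdisk2` is just a placeholder disk name):

```shell
# Propagate LVM changes to passive nodes.
# With DRYRUN=1 the commands are printed instead of executed.
run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# On the node that has the VG varied on: release possible locks first
release_locks() { run varyonvg -b -u "$1"; }

# On each passive node: learn the changed VG structure
learn_changes() { run importvg -L "$1" "$2"; }   # VG name, then a disk of the VG
```

E.g. `release_locks vg102` on the active node, then `learn_changes vg102 hdisk2` on each passive node.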
 

Author Comment

by:sminfo
ID: 39841519
mm.. why do you say "but not new filesystems!"? How can I synchronize new filesystems on PowerHA?

There's no doubt I have problems here.. the cluster varies on the EC VGs with VG Mode: Non-Concurrent. But if I manually varyoffvg and then varyonvg -c, it shows:

(node3):[root] /usr/local/bin -> lspv
hdisk1          000e052abdc66ea3                    vg101           concurrent
remhdisk1       0009eacabd84ed0b                    vg101           concurrent

and lsvg output:
(node3):[root] /usr/local/bin -> lsvg vg101
VOLUME GROUP:       vg101                    VG IDENTIFIER:  0009eaca0000d40000000142bd84ed5a
VG STATE:           active                   PP SIZE:        8 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      2542 (20336 megabytes)
MAX LVs:            256                      FREE PPs:       1754 (14032 megabytes)
LVs:                8                        USED PPs:       788 (6304 megabytes)
OPEN LVs:           0                        QUORUM:         1 (Disabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Concurrent
Node ID:            3                        Active Nodes:
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no

 
LVL 68

Expert Comment

by:woolmilkporc
ID: 39841530
The active node is thus OK; what remains is importvg -L on the passive nodes, followed by varyonvg -c -p ...

Once you have the VGs on the active node in active/read-write/concurrent and on the passive nodes in active/passive-only/concurrent a following cluster restart should vary on the VGs that way.
If it doesn't we will indeed have to go through the logs (hacmp.out and maybe clstrmgr.debug).
 
LVL 68

Accepted Solution

by:
woolmilkporc earned 500 total points
ID: 39841539
As for the new filesystems - when the group services detect a change they will update the ODM, but they will not touch /etc/filesystems, neither will they create mountpoints.

But "importvg -L ..." can do all this, so after creating a new FS on the active node the underlying LV will already be known on the passive nodes, and an "importvg -L ..." there will do the rest.
Attention: "importvg -L ..." will error out in case of LV name clashes, whereas a normal importvg will rename the conflicting LVs.

If you don't do the importvg -L ...  a failover will work nonetheless - but the cluster will have to export/import the VG - no more Fast Takeover/Lazy Update in this case.
 

Author Closing Comment

by:sminfo
ID: 39841605
you're the best!!

I understand now.. and my problem is that when the cluster tries to vary on the VGs, the remote disks are not available and it fails, so after some retries it varies them on with option -n (don't synchronize the data between disks)... don't know if you understand me.. remember all VGs have two disks, one local and the other remote (RPVs).
I'll open a PMR with IBM regarding this.

Thanks tons!
