Solved

Concurrent / Enhanced-concurrent / Non-concurrent on PowerHA

Posted on 2014-02-06
Last Modified: 2014-02-07
OK... I have read about the differences, but I'm not sure how they can be used on a PowerHA cluster. In our case we have 3 nodes (two sites replicating data with GLVM), and we have set up all shared VGs like this:
(node3):[root] /usr/local/bin -> lsvg vg101
VOLUME GROUP:       vg101                    VG IDENTIFIER:  0009eaca0000d40000000142bd84ed5a
VG STATE:           active                   PP SIZE:        8 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      2542 (20336 megabytes)
MAX LVs:            256                      FREE PPs:       1754 (14032 megabytes)
LVs:                8                        USED PPs:       788 (6304 megabytes)
OPEN LVs:           7                        QUORUM:         1 (Disabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Non-Concurrent
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off

If you look at the Concurrent field it says the VG is enhanced-concurrent, but if you look at the VG Mode field it says Non-Concurrent... question: is it concurrent or not? ;)

Other VGs on the same cluster do not show the fields above, as you can see:

(node3):[root] /usr/local/bin -> lsvg vg102
VOLUME GROUP:       vg102                    VG IDENTIFIER:  0009eaca0000d40000000142bd85241c
VG STATE:           active                   PP SIZE:        8 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      2542 (20336 megabytes)
MAX LVs:            256                      FREE PPs:       992 (7936 megabytes)
LVs:                6                        USED PPs:       1550 (12400 megabytes)
OPEN LVs:           5                        QUORUM:         1 (Disabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no

So, I'm confused about when I have to use concurrent/non-concurrent/enhanced-concurrent, and how I can check this on our VGs.

Thanks..
Question by:sminfo
 
Assisted Solution by:woolmilkporc (earned 500 total points)
OK, here we go again. Nice to meet you!

The "Concurrent:" value indicates whether a VG has been defined as concurrent-capable or not, which means it can be varied on in concurrent mode.

The "VG Mode:" value indicates whether a concurrent-capable VG has actually been varied on in concurrent mode.

"Enhanced-Capable" is the only available "concurrent" capability. The old "Capable" thing has long gone (along with the 32-bit kernel).

With PowerHA you should create your VGs Enhanced-Concurrent capable. This setting can speed up failover processing quite a lot.
PowerHA will vary on such VGs in Concurrent mode if at all possible.
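
A quick way to check both values at once (vg101 taken from your output; the grep is just a sketch):

(node3):[root] /usr/local/bin -> lsvg vg101 | grep -E 'Concurrent|VG Mode'
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Non-Concurrent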

Was your cluster active the moment you took the above snapshots?
If so, then it's a bit strange that a concurrent-capable VG has been varied on non-concurrent by the cluster group services.

Or did you vary it on by hand? To vary on a VG in concurrent mode you must use the "-c" flag of "varyonvg", and PowerHA (Group Services) must be up and running.

If PowerHA varied on the VG non-concurrent you should check the relevant logs (cspoc.log, clstrmgr.debug, clutils.log and errpt).

By the way, lsvg doesn't show the related fields for VGs not configured as concurrent-capable (your vg102, for example).

"Auto-Concurrent" has no meaning under PowerHA/HACMP, as far as I know. It's a bit confusing, but as I said, PowerHA/HACMP will vary on a concurrent-capable VG in concurrent mode, if possible, regardless of the "Auto-Concurrent" setting.

wmp
 

Author Comment by:sminfo
Hi wmp... nice to meet you too!! ;)

I'm still confused by all this and our environment. Let's start with your comments:

-- OK, so there are two types: enhanced-concurrent and non-concurrent (a simple, normal VG running on only one node).

For enhanced-concurrent:
-- An enhanced-concurrent VG can be in one of two states (VG Mode field of the lsvg command):
1- VG Mode:            Non-Concurrent
2- VG Mode:            Concurrent


In a cluster of two nodes:
-- Enhanced-concurrent in Concurrent mode is when the VG is varied on on both nodes (one active R/W and the other passive RO)... so lspv on both nodes shows something like this:
hdisk2          0009eacabd8523c6                    vg102           concurrent
hdisk3          0009eacasdfghertf4                    vg103           concurrent


-- Enhanced-concurrent with VG Mode Non-Concurrent: I don't really know what that means, because the VG says enhanced-concurrent but at the same time it is not.

For non-concurrent:
-- I think it's a normal VG, like this output:
(aix1):[root] / -> lsvg tsm01vg
VOLUME GROUP:       tsm01vg                  VG IDENTIFIER:  0000e4720000d9000000011c9bb30f1a
VG STATE:           active                   PP SIZE:        16 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      11764 (188224 megabytes)
MAX LVs:            256                      FREE PPs:       2522 (40352 megabytes)
LVs:                180                      USED PPs:       9242 (147872 megabytes)
OPEN LVs:           111                      QUORUM:         2 (Enabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      non-relocatable
MIRROR POOL STRICT: super
PV RESTRICTION:     none                     INFINITE RETRY: no


But our cluster is built on 3 nodes and two sites replicating data with GLVM.
On a PMR, IBM told us that in our environment we MUST use enhanced-concurrent for the VGs with GLVM. So I changed them using smitty cspoc / Storage / Volume Groups / Enable a Volume Group for Fast Disk Takeover or Concurrent Access, for all VGs. Before doing this, the output of all VGs was like lsvg vg102 in the original question, but after the change to EC the output of all VGs is like lsvg vg101 in the original question.
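
(For reference, I believe the plain-LVM equivalent of that C-SPOC panel is chvg -C, run against each VG; vg101 here is just an example:)

(node3):[root] /usr/local/bin -> chvg -C vg101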

So, I'm not sure if they are in EC mode or not... ;(

So, what do you think of all this?
 
Assisted Solution by:woolmilkporc (earned 500 total points)
That's quite normal.

Just making a VG enhanced-concurrent capable doesn't change its varyon mode.

You must vary off the VG to then vary it on in concurrent mode (the -c flag of varyonvg).

On the non-active nodes you can use the (quasi undocumented) varyonvg flag -p in addition to -c to vary on the VG in concurrent mode but in passive state.
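
Sketched out (vg101 is just an example name; -p is the quasi-undocumented flag mentioned above, so treat this as a sketch rather than gospel):

# on the node that should hold the VG active (Group Services must be up):
varyoffvg vg101
varyonvg -c vg101      # active varyon, concurrent mode

# on the passive nodes:
varyonvg -c -p vg101   # concurrent mode, passive state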

Or take down the whole cluster and start it up again. The cluster manager should notice the change in VG capabilities and vary them on appropriately on all nodes.
 

Author Comment by:sminfo
I think I have something wrong in our cluster config... I have set up all VGs as EC, but when I stop/start the cluster it does not use the -c flag when running varyonvg on the VGs.

Not sure if I must export/import all VGs on all nodes and then start the cluster again...

Let me check the cluster logs... maybe they're complaining about the '-c' on varyonvg.
 

Author Comment by:sminfo
wmp... I have read about 'Fast Disk Takeover'... what is it? Is it what PowerHA uses to varyonvg/varyoffvg when a node is down?
 
Expert Comment by:woolmilkporc
With concurrent VGs the group services on all nodes are always aware of possible changes in the VG structure (changes in LV size, new LVs - but not new filesystems!), and there is no longer any need to break reserves (because there are none).

That's what they call Lazy Update or Fast Disk Takeover.

You can of course (on the passive nodes) run "importvg -L ..." to make the change known there. It's best to run "varyonvg -b -u ..." beforehand on the node which has the VGs varied on, to release possible locks. On the active node "importvg -L ..." is not possible; you must deactivate the cluster and run a normal importvg there.

A preliminary exportvg should not be required, as far as I know.
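
A sketch of that sequence (vg101 and hdisk1 are placeholders for your VG and one of its PVs):

# on the node that has the VG varied on: release possible locks
varyonvg -b -u vg101

# on each passive node: re-read the VGDA and update the local ODM
importvg -L vg101 hdisk1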
 

Author Comment by:sminfo
Hmm... why do you say "but not new filesystems!"? How can I synchronize new filesystems on PowerHA?

There's no doubt I have problems here... the cluster varies on the EC VGs, but with VG Mode: Non-Concurrent. Yet if I manually varyoffvg and varyonvg -c, it shows:

(node3):[root] /usr/local/bin -> lspv
hdisk1          000e052abdc66ea3                    vg101           concurrent
remhdisk1       0009eacabd84ed0b                    vg101           concurrent


and the lsvg output:
(node3):[root] /usr/local/bin -> lsvg vg101
VOLUME GROUP:       vg101                    VG IDENTIFIER:  0009eaca0000d40000000142bd84ed5a
VG STATE:           active                   PP SIZE:        8 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      2542 (20336 megabytes)
MAX LVs:            256                      FREE PPs:       1754 (14032 megabytes)
LVs:                8                        USED PPs:       788 (6304 megabytes)
OPEN LVs:           0                        QUORUM:         1 (Disabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Concurrent
Node ID:            3                        Active Nodes:
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no

 
Expert Comment by:woolmilkporc
The active node is thus OK; what remains is importvg -L on the passive nodes, followed by varyonvg -c -p ...

Once you have the VGs on the active node in active/read-write/concurrent and on the passive nodes in active/passive-only/concurrent, a subsequent cluster restart should vary on the VGs that way.
If it doesn't, we will indeed have to go through the logs (hacmp.out and maybe clstrmgr.debug).
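
To verify afterwards, something like this should do on a passive node (vg101 is a placeholder; the expected states are the ones named above):

lsvg vg101 | grep -E 'PERMISSION|VG Mode'
# expected:
# VG PERMISSION:      passive-only
# VG Mode:            Concurrent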
0
 
Accepted Solution by:woolmilkporc (earned 500 total points)
As for the new filesystems: when the group services detect a change they will update the ODM, but they will not touch /etc/filesystems, nor will they create mount points.

But "importvg -L ..." can do all this, so after creating a new FS on the active node the underlying LV will already be known on the passive nodes, and an "importvg -L ..." there will do the rest.
Attention: "importvg -L ..." will error out in case of LV name clashes; a normal importvg would rename the conflicting LVs.

If you don't do the importvg -L ...  a failover will work nonetheless - but the cluster will have to export/import the VG - no more Fast Takeover/Lazy Update in this case.
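
A sketch of the new-filesystem workflow (all names and the size are placeholders, not taken from this thread):

# on the active node: create a new jfs2 filesystem in the shared VG
crfs -v jfs2 -g vg101 -m /newfs -a size=1G

# on each passive node: pick up the /etc/filesystems entry and mount point
importvg -L vg101 hdisk1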
 

Author Closing Comment by:sminfo
you're the best!!

I understand it now... and my problem is that when the cluster tries to varyonvg the VGs, the remote disks are not available and it fails, so after some tries it runs varyonvg with the -n option (don't synchronize stale partitions between the disks)... don't know if you understand me... remember all VGs have two disks, one local and the other remote (RPVs).
I'll open a PMR with IBM regarding this.

Thanks tons!