Solved

Expanding a VG (concurrent) on PowerHA 6.1

Posted on 2013-02-06
2,440 Views
Last Modified: 2013-02-07
Hi wmp,

We know that 'chvg -g VG' works great on non-concurrent VGs after you increase the size of their disks, but what about VGs that are varied on in concurrent mode under PowerHA 6.1? We also know that shutting down PowerHA and doing an exportvg/importvg works, but is it possible to do this without downtime?
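For context, the offline procedure we've used so far looks roughly like this (just a sketch; sharedvg and hdisk2 stand in for our real names, and the VG is varied on in normal, non-concurrent mode):

# after the LUN has been grown on the storage side
cfgmgr                      # let AIX rediscover the disk and its new size
bootinfo -s hdisk2          # confirm the new size (in MB)
chvg -g sharedvg            # non-concurrent VG: pick up the additional space

# or, with PowerHA stopped (downtime):
varyoffvg sharedvg
exportvg sharedvg
importvg -y sharedvg hdisk2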

What about this command:
/usr/es/sbin/cluster/sbin/cl_chvg -cspoc -nnode1,node2 -an -Qn  -g VG-Concurrent

Note the "-g" on the command.

Thanks much!
Question by:sminfo
9 Comments
 
LVL 68

Accepted Solution

by:
woolmilkporc earned 500 total points
ID: 38859437
Hi,

"-g" is used to indicate the resource group.

So your above command will not run, because the parameter to the "-g" flag is missing.

And no, you can't grow disks in VGs which are varied on in concurrent mode. Not even PowerHA's CSPOC tools can do this.
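For the record, going by the cl_chvg usage string, the two documented forms would look something like this (only a sketch; RG_db and sharedvg are made-up names):

/usr/es/sbin/cluster/sbin/cl_chvg -cspoc "-g RG_db" -an -Qn sharedvg
/usr/es/sbin/cluster/sbin/cl_chvg -cspoc "-n node1,node2" -an -Qn sharedvg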

wmp
 

Author Comment

by:sminfo
ID: 38859485
To tell you the truth, we altered the command (taken from cspoc.log)

/usr/es/sbin/cluster/sbin/cl_chvg -cspoc -nnode1,node2 -an -Qn  VG-Concurrent

into this:

/usr/es/sbin/cluster/sbin/cl_chvg -cspoc -nnode1,node2 -an -Qn  -g VG-Concurrent

We added the -g before the VG name, and my boss says it worked some time ago. But I wanted to ask you before running a test. ;)

I have searched IBM's documentation without success.
 
LVL 68

Assisted Solution

by:woolmilkporc
woolmilkporc earned 500 total points
ID: 38859544
Why don't you just run the command to find out who's right?

What am I supposed to say?

Try

/usr/es/sbin/cluster/sbin/cl_chvg

The command will complain about "Missing command line arguments." and display usage info. Check for yourself!

 

Author Comment

by:sminfo
ID: 38859593
Take it easy, wmp.. I haven't run this command because I don't have any test cluster right now. I have full trust in you ;)

I'll run some tests and let you know.
 

Author Closing Comment

by:sminfo
ID: 38863107
you're right!!

Thanks wmp... (tons of snow here) ;)
 

Author Comment

by:sminfo
ID: 38864609
Hi again wmp. After you told me it's impossible, IBM confirmed this on a PMR we opened with them. But today we created a two-node cluster, tested the command, and it worked. Here I'm sending you part of the PMR we opened yesterday:
The test: with the cluster running and one VG on a 2 GB disk, we increased the disk to 5 GB, and with the 'famous' command the cluster was able to expand the VG in concurrent mode.

Information before the test:
two nodes running AIX 6.1 TL7 SP6
PowerHA 6.1 SP8
one concurrent disk, 2 GB in size
one concurrent VG named VGnodes
two filesystems in the VGnodes VG
Sorry about the differing hdisk numbers on the two nodes; we haven't had time to make the hdisk numbers the same on both.

On node1:
######
(node1):[root] / -> bootinfo -s hdisk0
2048

(node1):[root] / -> lsvg -o
VGnodes
rootvg

(node1):[root] / -> lsvg VGnodes
VOLUME GROUP:       VGnodes                  VG IDENTIFIER:  0009eaca0000d4000000013cb4954415
VG STATE:           active                   PP SIZE:        4 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      494 (1976 megabytes)
MAX LVs:            256                      FREE PPs:       301 (1204 megabytes)
LVs:                3                        USED PPs:       193 (772 megabytes)
OPEN LVs:           3                        QUORUM:         2 (Enabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Concurrent
Node ID:            1                        Active Nodes:       2
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no

(node1):[root] / -> lspv
hdisk0          0009eacab4949f8d                    VGnodes         concurrent
hdisk6          0009eaca5315b469                    rootvg          active

On node2:
#######
(node2):[root] / -> lspv
hdisk0          0009eaca5315b469                    rootvg          active
hdisk6          0009eacab4949f8d                    VGnodes         concurrent
(node2):[root] / -> bootinfo -s hdisk6
2048
(node2):[root] / -> lsvg -o
rootvg
(node2):[root] / -> lspv
hdisk0          0009eaca5315b469                    rootvg          active
hdisk6          0009eacab4949f8d                    VGnodes         concurrent


Now we have increased our disk from 2 GB to 5 GB on our DS storage.
After that, we ran cfgmgr on both nodes and checked that the disk now reports 5 GB:
(node1):[root] / -> cfgmgr
(node1):[root] / -> bootinfo -s hdisk0
5120

(node2):[root] / -> cfgmgr
(node2):[root] / -> bootinfo -s hdisk6
5120

Here we go: with all HACMP services running on node1, we ran the 'famous' command:
(node1):[root] / -> /usr/es/sbin/cluster/sbin/cl_chvg -cspoc -nnode1,node2 -an -Qn  -g VGnodes
node1: 0516-1804 chvg: The quorum change takes effect immediately.

Now we have checked that the VGnodes VG has 5 GB total size:
(node1):[root] / -> lsvg VGnodes
VOLUME GROUP:       VGnodes                  VG IDENTIFIER:  0009eaca0000d4000000013cb4954415
VG STATE:           active                   PP SIZE:        4 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      1262 (5048 megabytes)
MAX LVs:            256                      FREE PPs:       1069 (4276 megabytes)
LVs:                3                        USED PPs:       193 (772 megabytes)
OPEN LVs:           3                        QUORUM:         1 (Disabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Concurrent
Node ID:            1                        Active Nodes:       2
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      non-relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no


So it worked on node1. To test it on node2, we issued a 'halt -q' on node1 to move the resource group to node2.
(node1):[root] / -> halt -q

Now on node2, we ran lsvg VGnodes to see if it shows 5 GB:
(node2):[root] /var/hacmp/clcomd -> lsvg VGnodes
VOLUME GROUP:       VGnodes                  VG IDENTIFIER:  0009eaca0000d4000000013cb4954415
VG STATE:           active                   PP SIZE:        4 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      1262 (5048 megabytes)
MAX LVs:            256                      FREE PPs:       1069 (4276 megabytes)
LVs:                3                        USED PPs:       193 (772 megabytes)
OPEN LVs:           3                        QUORUM:         1 (Disabled)
TOTAL PVs:          1                        VG DESCRIPTORS: 2
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         1                        AUTO ON:        no
Concurrent:         Enhanced-Capable         Auto-Concurrent: Disabled
VG Mode:            Concurrent
Node ID:            2                        Active Nodes:
MAX PPs per VG:     32768                    MAX PVs:        1024
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      non-relocatable
MIRROR POOL STRICT: off
PV RESTRICTION:     none                     INFINITE RETRY: no


Finally, we want to know whether this is permitted or not. We'll wait for an answer from you. We have attached all log files and a snapshot of the cluster.
Please also find additional information from both nodes below:
(node1):[root] /var/hacmp -> oslevel -s
6100-07-06-1241
(node1):[root] /var/hacmp -> prtconf
System Model: IBM,7778-23X
Machine Serial Number: 659EACA
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat
Number Of Processors: 1
Processor Clock Speed: 4204 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 8 NODE1
Memory Size: 2048 MB
Good Memory Size: 2048 MB
Platform Firmware level: EA350_054
Firmware Version: IBM,EA350_054
Console Login: enable
Auto Restart: true
Full Core: false

Network Information
        Host Name: node1
        IP Address: 172.17.32.101
        Sub Netmask: 255.255.255.240
        Gateway: 172.17.32.110
        Name Server: 172.20.5.11
        Domain Name: bibm.net

Paging Space Information
        Total Paging Space: 8448MB
        Percent Used: 1%

Volume Groups Information
==============================================================================
Active VGs
==============================================================================
VGnodes:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            1262        365         00..00..00..112..253
==============================================================================

rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk6            active            803         99          06..00..00..00..93
==============================================================================

INSTALLED RESOURCE LIST

The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
*   = Diagnostic support not available.

  Model Architecture: chrp
  Model Implementation: Multiple Processor, PCI bus

+ sys0                                                              System Object
+ sysplanar0                                                        System Planar
* vio0                                                              Virtual I/O Bus
* ent0             U7778.23X.659EACA-V8-C4-T1                       Virtual I/O Ethernet Adapter (l-lan)
* vscsi0           U7778.23X.659EACA-V8-C2-T1                       Virtual SCSI Client Adapter
* hdisk6           U7778.23X.659EACA-V8-C2-T1-L8200000000000000     Virtual SCSI Disk Drive
* cd1              U7778.23X.659EACA-V8-C2-T1-L8100000000000000     Virtual SCSI Optical Served by VIO Server
* vsa0             U7778.23X.659EACA-V8-C0                          LPAR Virtual Serial Adapter
* vty0             U7778.23X.659EACA-V8-C0-L0                       Asynchronous Terminal
+ fcs0             U7778.23X.659EACA-V8-C5-T1                       Virtual Fibre Channel Client Adapter
+ fscsi0           U7778.23X.659EACA-V8-C5-T1                       FC SCSI I/O Controller Protocol Device
* sfwcomm0         U7778.23X.659EACA-V8-C5-T1-W0-L0                 Fibre Channel Storage Framework Comm
+ fcs1             U7778.23X.659EACA-V8-C6-T1                       Virtual Fibre Channel Client Adapter
+ fscsi1           U7778.23X.659EACA-V8-C6-T1                       FC SCSI I/O Controller Protocol Device
* hdisk0           U7778.23X.659EACA-V8-C6-T1-W202900A0B8478BA6-L0  MPIO DS5100/5300 Disk
* sfwcomm1         U7778.23X.659EACA-V8-C6-T1-W0-L0                 Fibre Channel Storage Framework Comm
+ L2cache0                                                          L2 Cache
+ mem0                                                              Memory
+ proc0                                                             Processor
(node1):[root] /var/hacmp -> /usr/es/sbin/cluster/utilities/halevel -s
6.1.0 SP8

(node2):[root] /var/hacmp ->  oslevel -s
6100-07-06-1241
(node2):[root] /var/hacmp -> prtconf
System Model: IBM,7778-23X
Machine Serial Number: 659EACA
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat
Number Of Processors: 1
Processor Clock Speed: 4204 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 7 NODE2
Memory Size: 2048 MB
Good Memory Size: 2048 MB
Platform Firmware level: EA350_054
Firmware Version: IBM,EA350_054
Console Login: enable
Auto Restart: true
Full Core: false

Network Information
        Host Name: node2
        IP Address: 172.17.32.102
        Sub Netmask: 255.255.255.240
        Gateway: 172.17.32.110
        Name Server: 172.20.5.11
        Domain Name: bibm.net

Paging Space Information
        Total Paging Space: 8448MB
        Percent Used: 1%

Volume Groups Information
==============================================================================
Inactive VGs
==============================================================================
VGnodes
==============================================================================
Active VGs
==============================================================================
rootvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk0            active            803         99          06..00..00..00..93
==============================================================================

INSTALLED RESOURCE LIST

The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
*   = Diagnostic support not available.

  Model Architecture: chrp
  Model Implementation: Multiple Processor, PCI bus

+ sys0                                                              System Object
+ sysplanar0                                                        System Planar
* vio0                                                              Virtual I/O Bus
* ent0             U7778.23X.659EACA-V7-C4-T1                       Virtual I/O Ethernet Adapter (l-lan)
* vscsi0           U7778.23X.659EACA-V7-C2-T1                       Virtual SCSI Client Adapter
* cd0              U7778.23X.659EACA-V7-C2-T1-L8200000000000000     Virtual SCSI Optical Served by VIO Server
* hdisk0           U7778.23X.659EACA-V7-C2-T1-L8100000000000000     Virtual SCSI Disk Drive
* vsa0             U7778.23X.659EACA-V7-C0                          LPAR Virtual Serial Adapter
* vty0             U7778.23X.659EACA-V7-C0-L0                       Asynchronous Terminal
+ fcs0             U7778.23X.659EACA-V7-C5-T1                       Virtual Fibre Channel Client Adapter
+ fscsi0           U7778.23X.659EACA-V7-C5-T1                       FC SCSI I/O Controller Protocol Device
* hdisk6           U7778.23X.659EACA-V7-C5-T1-W202800A0B8478BA6-L0  MPIO DS5100/5300 Disk
* sfwcomm0         U7778.23X.659EACA-V7-C5-T1-W0-L0                 Fibre Channel Storage Framework Comm
+ fcs1             U7778.23X.659EACA-V7-C6-T1                       Virtual Fibre Channel Client Adapter
+ fscsi1           U7778.23X.659EACA-V7-C6-T1                       FC SCSI I/O Controller Protocol Device
* sfwcomm1         U7778.23X.659EACA-V7-C6-T1-W0-L0                 Fibre Channel Storage Framework Comm
+ L2cache0                                                          L2 Cache
+ mem0                                                              Memory
+ proc0                                                             Processor
(node2):[root] /var/hacmp ->  /usr/es/sbin/cluster/utilities/halevel -s
6.1.0 SP8
 

Author Comment

by:sminfo
ID: 38864622
Of course, we don't know whether it's safe or not; still waiting for IBM's final word.. ;)
 
LVL 68

Expert Comment

by:woolmilkporc
ID: 38864920
Since I don't know this famous way to use the command, I obviously can't tell you whether it's safe or not.

I don't have a PowerHA 6.1 cluster running here.

My production clusters are 7.1, and that's what I get when running your command without arguments:

cl_chvg: Missing command line arguments.
Usage: cl_chvg -cspoc "[-f] [ -g ResourceGroup | -n NodeList ]" [-a {n|y}] [-Q {n|y}] [-C] VolumeGroup

I have an old 5.5 cluster, and get the same result:

cl_chvg: Missing command line arguments.
Usage: cl_chvg -cspoc "[-f] [ -g ResourceGroup | -n NodeList ]" [-a {n|y}] [-Q {n|y}] [-C] VolumeGroup
 

Author Comment

by:sminfo
ID: 38865737
OK wmp.. just wanted to share our PMR with you.. let's wait for IBM... it sounds interesting because they said the same as you.. :) I'll let you know.... have a nice day...
