sminfo asked:
Expanding a concurrent VG on PowerHA 6.1
Hi wmp,
We know that 'chvg -g VG' works great on non-concurrent VGs after you increase the size of their disks, but what about concurrent VGs running on PowerHA 6.1? We also know that if we shut down PowerHA, exportvg/importvg works too, but is it possible to do this without downtime?
What about this command:
/usr/es/sbin/cluster/sbin/cl_chvg -cspoc -nnode1,node2 -an -Qn -g VG-Concurrent
Note the "-g" in the command.
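For reference, the non-concurrent procedure we already use looks roughly like this (a sketch; 'datavg' and 'hdisk1' are placeholder names, not from our cluster):

# Non-concurrent case after growing the LUN on the storage side (sketch).
# 'datavg' and 'hdisk1' are placeholders.
cfgmgr                 # rediscover the resized disk
bootinfo -s hdisk1     # confirm the new size (in MB)
chvg -g datavg         # grow the VG to use the new disk size
lsvg datavg            # check that TOTAL PPs went up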
Thanks much!
ASKER
Take it easy, wmp. I haven't run this command because I don't have any test cluster right now. I have full trust in you ;)
I'll make some test and let you know.
ASKER
you're right!!
Thanks wmp... (tons of snow here) ;)
ASKER
Hi again wmp. After you told me it was impossible, IBM confirmed this on a PMR we opened with them. But today we created a two-node cluster, tested the command, and it worked. Here I'm sending you part of the PMR we opened yesterday:
The test: with the cluster running and one VG on a 2 GB disk, we increased the disk to 5 GB, and with the 'famous' command the cluster was able to expand the VG in concurrent mode.
Information before the test:
two nodes, AIX 6.1 TL7 SP6
PowerHA 6.1 SP8
one 2 GB concurrent disk
one concurrent VG named VGnodes
two filesystems in the VGnodes VG
Sorry about the differing hdisk numbers on the two nodes; we haven't had time to make the hdisk numbers the same on all nodes.
On node1:
######
(node1):[root] / -> bootinfo -s hdisk0
2048
(node1):[root] / -> lsvg -o
VGnodes
rootvg
(node1):[root] / -> lsvg VGnodes
VOLUME GROUP: VGnodes VG IDENTIFIER: 0009eaca0000d4000000013cb4954415
VG STATE: active PP SIZE: 4 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 494 (1976 megabytes)
MAX LVs: 256 FREE PPs: 301 (1204 megabytes)
LVs: 3 USED PPs: 193 (772 megabytes)
OPEN LVs: 3 QUORUM: 2 (Enabled)
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: no
Concurrent: Enhanced-Capable Auto-Concurrent: Disabled
VG Mode: Concurrent
Node ID: 1 Active Nodes: 2
MAX PPs per VG: 32768 MAX PVs: 1024
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable
MIRROR POOL STRICT: off
PV RESTRICTION: none INFINITE RETRY: no
(node1):[root] / -> lspv
hdisk0 0009eacab4949f8d VGnodes concurrent
hdisk6 0009eaca5315b469 rootvg active
On node2:
#######
(node2):[root] / -> lspv
hdisk0 0009eaca5315b469 rootvg active
hdisk6 0009eacab4949f8d VGnodes concurrent
(node2):[root] / -> bootinfo -s hdisk6
2048
(node2):[root] / -> lsvg -o
rootvg
(node2):[root] / -> lspv
hdisk0 0009eaca5315b469 rootvg active
hdisk6 0009eacab4949f8d VGnodes concurrent
Now we have increased our disk from 2 GB to 5 GB on our DS storage.
After that, we ran cfgmgr on both nodes and checked that the disk now shows 5 GB:
(node1):[root] / -> cfgmgr
(node1):[root] / -> bootinfo -s hdisk0
5120
(node2):[root] / -> cfgmgr
(node2):[root] / -> bootinfo -s hdisk6
5120
Here we go: with all HACMP services running on node1, we ran the 'famous' command:
(node1):[root] / -> /usr/es/sbin/cluster/sbin/cl_chvg -cspoc -nnode1,node2 -an -Qn -g VGnodes
node1: 0516-1804 chvg: The quorum change takes effect immediately.
Now we have checked that the VGnodes VG has a total size of 5 GB:
(node1):[root] / -> lsvg VGnodes
VOLUME GROUP: VGnodes VG IDENTIFIER: 0009eaca0000d4000000013cb4954415
VG STATE: active PP SIZE: 4 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 1262 (5048 megabytes)
MAX LVs: 256 FREE PPs: 1069 (4276 megabytes)
LVs: 3 USED PPs: 193 (772 megabytes)
OPEN LVs: 3 QUORUM: 1 (Disabled)
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: no
Concurrent: Enhanced-Capable Auto-Concurrent: Disabled
VG Mode: Concurrent
Node ID: 1 Active Nodes: 2
MAX PPs per VG: 32768 MAX PVs: 1024
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: non-relocatable
MIRROR POOL STRICT: off
PV RESTRICTION: none INFINITE RETRY: no
So, it worked on node1. To test it on node2, we did a 'halt -q' on node1 to move the resource group to node2.
(node1):[root] / -> halt -q
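(Aside: 'halt -q' is a hard stop and was used here only because this is a test cluster; normally one would move the resource group gracefully. A sketch, where 'RG1' is a placeholder resource group name and the flags should be verified against clRGmove's usage message on your level:)

# Graceful alternative to 'halt -q' for moving a resource group (sketch).
# 'RG1' is a placeholder name; verify the flags via clRGmove's usage first.
/usr/es/sbin/cluster/utilities/clRGmove -g RG1 -n node2 -m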
Now, on node2, we ran lsvg VGnodes to see if it shows 5 GB:
(node2):[root] /var/hacmp/clcomd -> lsvg VGnodes
VOLUME GROUP: VGnodes VG IDENTIFIER: 0009eaca0000d4000000013cb4954415
VG STATE: active PP SIZE: 4 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 1262 (5048 megabytes)
MAX LVs: 256 FREE PPs: 1069 (4276 megabytes)
LVs: 3 USED PPs: 193 (772 megabytes)
OPEN LVs: 3 QUORUM: 1 (Disabled)
TOTAL PVs: 1 VG DESCRIPTORS: 2
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 1 AUTO ON: no
Concurrent: Enhanced-Capable Auto-Concurrent: Disabled
VG Mode: Concurrent
Node ID: 2 Active Nodes:
MAX PPs per VG: 32768 MAX PVs: 1024
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: non-relocatable
MIRROR POOL STRICT: off
PV RESTRICTION: none INFINITE RETRY: no
Finally, we want to know whether this is permitted or not. We'll wait for an answer from you. We have attached all log files and a snapshot of the cluster.
Please also find further relevant info from both nodes:
(node1):[root] /var/hacmp -> oslevel -s
6100-07-06-1241
(node1):[root] /var/hacmp -> prtconf
System Model: IBM,7778-23X
Machine Serial Number: 659EACA
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat
Number Of Processors: 1
Processor Clock Speed: 4204 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 8 NODE1
Memory Size: 2048 MB
Good Memory Size: 2048 MB
Platform Firmware level: EA350_054
Firmware Version: IBM,EA350_054
Console Login: enable
Auto Restart: true
Full Core: false
Network Information
Host Name: node1
IP Address: 172.17.32.101
Sub Netmask: 255.255.255.240
Gateway: 172.17.32.110
Name Server: 172.20.5.11
Domain Name: bibm.net
Paging Space Information
Total Paging Space: 8448MB
Percent Used: 1%
Volume Groups Information
==============================
Active VGs
==============================
VGnodes:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 1262 365 00..00..00..112..253
==============================
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk6 active 803 99 06..00..00..00..93
==============================
INSTALLED RESOURCE LIST
The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
* = Diagnostic support not available.
Model Architecture: chrp
Model Implementation: Multiple Processor, PCI bus
+ sys0 System Object
+ sysplanar0 System Planar
* vio0 Virtual I/O Bus
* ent0 U7778.23X.659EACA-V8-C4-T1 Virtual I/O Ethernet Adapter (l-lan)
* vscsi0 U7778.23X.659EACA-V8-C2-T1 Virtual SCSI Client Adapter
* hdisk6 U7778.23X.659EACA-V8-C2-T1-L8200000000000000 Virtual SCSI Disk Drive
* cd1 U7778.23X.659EACA-V8-C2-T1-L8100000000000000 Virtual SCSI Optical Served by VIO Server
* vsa0 U7778.23X.659EACA-V8-C0 LPAR Virtual Serial Adapter
* vty0 U7778.23X.659EACA-V8-C0-L0 Asynchronous Terminal
+ fcs0 U7778.23X.659EACA-V8-C5-T1 Virtual Fibre Channel Client Adapter
+ fscsi0 U7778.23X.659EACA-V8-C5-T1 FC SCSI I/O Controller Protocol Device
* sfwcomm0 U7778.23X.659EACA-V8-C5-T1-W0-L0 Fibre Channel Storage Framework Comm
+ fcs1 U7778.23X.659EACA-V8-C6-T1 Virtual Fibre Channel Client Adapter
+ fscsi1 U7778.23X.659EACA-V8-C6-T1 FC SCSI I/O Controller Protocol Device
* hdisk0 U7778.23X.659EACA-V8-C6-T1-W202900A0B8478BA6-L0 MPIO DS5100/5300 Disk
* sfwcomm1 U7778.23X.659EACA-V8-C6-T1-W0-L0 Fibre Channel Storage Framework Comm
+ L2cache0 L2 Cache
+ mem0 Memory
+ proc0 Processor
(node1):[root] /var/hacmp -> /usr/es/sbin/cluster/utilities/halevel -s
6.1.0 SP8
(node2):[root] /var/hacmp -> oslevel -s
6100-07-06-1241
(node2):[root] /var/hacmp -> prtconf
System Model: IBM,7778-23X
Machine Serial Number: 659EACA
Processor Type: PowerPC_POWER6
Processor Implementation Mode: POWER 6
Processor Version: PV_6_Compat
Number Of Processors: 1
Processor Clock Speed: 4204 MHz
CPU Type: 64-bit
Kernel Type: 64-bit
LPAR Info: 7 NODE2
Memory Size: 2048 MB
Good Memory Size: 2048 MB
Platform Firmware level: EA350_054
Firmware Version: IBM,EA350_054
Console Login: enable
Auto Restart: true
Full Core: false
Network Information
Host Name: node2
IP Address: 172.17.32.102
Sub Netmask: 255.255.255.240
Gateway: 172.17.32.110
Name Server: 172.20.5.11
Domain Name: bibm.net
Paging Space Information
Total Paging Space: 8448MB
Percent Used: 1%
Volume Groups Information
==============================
Inactive VGs
==============================
VGnodes
==============================
Active VGs
==============================
rootvg:
PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION
hdisk0 active 803 99 06..00..00..00..93
==============================
INSTALLED RESOURCE LIST
The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
* = Diagnostic support not available.
Model Architecture: chrp
Model Implementation: Multiple Processor, PCI bus
+ sys0 System Object
+ sysplanar0 System Planar
* vio0 Virtual I/O Bus
* ent0 U7778.23X.659EACA-V7-C4-T1 Virtual I/O Ethernet Adapter (l-lan)
* vscsi0 U7778.23X.659EACA-V7-C2-T1 Virtual SCSI Client Adapter
* cd0 U7778.23X.659EACA-V7-C2-T1-L8200000000000000 Virtual SCSI Optical Served by VIO Server
* hdisk0 U7778.23X.659EACA-V7-C2-T1-L8100000000000000 Virtual SCSI Disk Drive
* vsa0 U7778.23X.659EACA-V7-C0 LPAR Virtual Serial Adapter
* vty0 U7778.23X.659EACA-V7-C0-L0 Asynchronous Terminal
+ fcs0 U7778.23X.659EACA-V7-C5-T1 Virtual Fibre Channel Client Adapter
+ fscsi0 U7778.23X.659EACA-V7-C5-T1 FC SCSI I/O Controller Protocol Device
* hdisk6 U7778.23X.659EACA-V7-C5-T1-W202800A0B8478BA6-L0 MPIO DS5100/5300 Disk
* sfwcomm0 U7778.23X.659EACA-V7-C5-T1-W0-L0 Fibre Channel Storage Framework Comm
+ fcs1 U7778.23X.659EACA-V7-C6-T1 Virtual Fibre Channel Client Adapter
+ fscsi1 U7778.23X.659EACA-V7-C6-T1 FC SCSI I/O Controller Protocol Device
* sfwcomm1 U7778.23X.659EACA-V7-C6-T1-W0-L0 Fibre Channel Storage Framework Comm
+ L2cache0 L2 Cache
+ mem0 Memory
+ proc0 Processor
(node2):[root] /var/hacmp -> /usr/es/sbin/cluster/utilities/halevel -s
6.1.0 SP8
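To sum up the procedure we tested (commands consolidated from the transcripts above; remember that IBM has not yet confirmed this is supported, so treat it as at-your-own-risk):

# Consolidated sketch of the tested procedure, names taken from this cluster.
# Prerequisite: the LUN has already been grown on the storage side.

# On both nodes: rediscover the disk and confirm the new size.
cfgmgr
bootinfo -s hdisk0        # hdisk6 on node2; expect the new size in MB

# On one node, with cluster services active, grow the concurrent VG via C-SPOC:
/usr/es/sbin/cluster/sbin/cl_chvg -cspoc -nnode1,node2 -an -Qn -g VGnodes

# Verify on each node that TOTAL PPs reflects the new disk size:
lsvg VGnodes

Note that besides the trailing -g, the -an -Qn flags also turn auto-varyon and quorum off; you can see the QUORUM field change from '2 (Enabled)' to '1 (Disabled)' in the lsvg output above.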
ASKER
Of course, we don't know whether it's safe or not; we're still waiting for IBM's final word. ;)
Since I don't know this 'famous' way of using the command, I obviously can't tell you whether it's safe or not.
I don't have a PowerHA 6.1 cluster running here.
My production clusters are 7.1, and this is what I get when running your command without arguments:
cl_chvg: Missing command line arguments.
Usage: cl_chvg -cspoc "[-f] [ -g ResourceGroup | -n NodeList ]" [-a {n|y}] [-Q {n|y}] [-C] VolumeGroup
I have an old 5.5 cluster, and get the same result:
cl_chvg: Missing command line arguments.
Usage: cl_chvg -cspoc "[-f] [ -g ResourceGroup | -n NodeList ]" [-a {n|y}] [-Q {n|y}] [-C] VolumeGroup
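Reading that usage string, note that the -g it documents (inside the -cspoc quotes) selects a ResourceGroup; the trailing -g in your 'famous' command sits outside the C-SPOC part and is presumably passed straight through to the underlying chvg as its 'grow' flag. That's my assumption, not something I can confirm. For comparison:

# Documented form: '-g' inside the -cspoc string names a resource group.
cl_chvg -cspoc "-n node1,node2" -a n -Q n VGnodes

# The 'famous' form: the trailing '-g' apparently falls through to chvg
# as its grow flag (an assumption; it is not in the documented usage).
cl_chvg -cspoc -nnode1,node2 -an -Qn -g VGnodes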
ASKER
OK wmp, just wanted to share our PMR with you. Let's wait for IBM; it sounds interesting because they said the same as you. :) I'll let you know... have a nice day!
ASKER
/usr/es/sbin/cluster/sbin/cl_chvg -cspoc -nnode1,node2 -an -Qn VG-Concurrent
into this:
/usr/es/sbin/cluster/sbin/cl_chvg -cspoc -nnode1,node2 -an -Qn -g VG-Concurrent
We added the -g before the VG name, and my boss says it worked some time ago. But I wanted to ask you before making a test. ;)
I have searched IBM's site without success.