Need help patching Solaris 10 with Live Upgrade (lu) on SVM + ZFS file systems with zones

Hello,
I need help understanding how Live Upgrade (lu) can work on Solaris 10 on this server. I can detach the mirrored SVM metadevices, but the zpool layout looks confusing: which mirror should I break?
server-app01 # : |format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
          /pci@0/pci@0/pci@2/scsi@0/sd@0,0
       1. c0t1d0 <SUN300G cyl 46873 alt 2 hd 20 sec 625>
          /pci@0/pci@0/pci@2/scsi@0/sd@1,0
       2. c0t2d0 <SEAGATE-ST930003SSUN300G-0868-279.40GB>
          /pci@0/pci@0/pci@2/scsi@0/sd@2,0
       3. c0t3d0 <SEAGATE-ST930003SSUN300G-0868-279.40GB>
          /pci@0/pci@0/pci@2/scsi@0/sd@3,0
Specify disk (enter its number):

server-app01 # zpool status -v
  pool: z
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        z             ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t0d0s7  ONLINE       0     0     0
            c0t1d0s7  ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0

errors: No known data errors
server-app01 # metastat -p
d5 -m d8 d9 1
d8 1 1 c0t0d0s5
d9 1 1 c0t1d0s5
d1 -m d4 d6 1
d4 1 1 c0t0d0s1
d6 1 1 c0t1d0s1
d0 -m d2 d3 1
d2 1 1 c0t0d0s0
d3 1 1 c0t1d0s0

server-app01 # zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
z                              452G  68.0G     1K  legacy
z/export                       379M  68.0G   379M  /export
z/shared                      1.76G  3.24G  1.76G  /export/zones/shared
z/swap2                          1G  68.5G   479M  -
z/swap3                          2G  69.8G   152M  -
z/zones                        447G  68.0G    38K  /export/zones
z/zones/pgpi-factory1        6.65G  13.3G  6.37G  /export/zones/pgpi-factory1
z/zones/pgpi-factory1/var     288M  9.72G   288M  legacy
z/zones/pgpi-oradb1           153G  68.0G   153G  /export/zones/pgpi-oradb1
z/zones/pgpi-oradb1/var       242M  9.76G   242M  legacy
z/zones/pgpi-pin1            21.5G  8.51G  19.1G  /export/zones/pgpi-pin1
z/zones/pgpi-pin1/var        2.38G  7.62G  2.38G  legacy
z/zones/pgpi-webserv1        14.4G  5.60G  14.0G  /export/zones/pgpi-webserv1
z/zones/pgpi-webserv1/var     423M  5.60G   423M  legacy
z/zones/pgpj-factory1        7.50G  12.5G  7.50G  /export/zones/pgpj-factory1
z/zones/pgpj-factory1_local   190M  68.0G   190M  legacy
z/zones/pgpj-factory1_var     293M  9.71G   293M  legacy
z/zones/pgpj-oradb1           191G  39.0G   191G  /export/zones/pgpj-oradb1
z/zones/pgpj-oradb1_local     198M  68.0G   198M  legacy
z/zones/pgpj-oradb1_var      8.09G  1.91G  8.09G  legacy
z/zones/pgpj-pin1            22.7G  7.26G  22.7G  /export/zones/pgpj-pin1
z/zones/pgpj-pin1_local       307M  68.0G   307M  legacy
z/zones/pgpj-pin1_var        3.87G  6.13G  3.87G  legacy
z/zones/pgpj-webserv1        14.9G  5.11G  14.9G  /export/zones/pgpj-webserv1
z/zones/pgpj-webserv1_local   198M  68.0G   198M  legacy
z/zones/pgpj-webserv1_var    1.81G  8.19G  1.81G  legacy
server-app01 #
server-app01 # zpool iostat -v
                 capacity     operations    bandwidth
pool           used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
z              449G  78.6G      5      4   233K  18.1K
  mirror       233G  17.5G      3      1   170K  6.89K
    c0t0d0s7      -      -      1      0   109K  6.93K
    c0t1d0s7      -      -      1      0   112K  6.93K
  mirror       217G  61.1G      2      2  63.6K  11.2K
    c0t2d0        -      -      0      1  51.1K  11.2K
    c0t3d0        -      -      0      1  51.0K  11.2K
------------  -----  -----  -----  -----  -----  -----

server-app01 #


My plan is to run lucreate and then install the patch cluster on the alternate disk, but how should I include the ZFS file systems in this?
lucreate -c "Solaris-10" -m /:/dev/dsk/c0t1d0s0:ufs -m -:/dev/dsk/c0t1d0s1:swap -m /var:/dev/dsk/c0t1d0s5:ufs -n "Solaris_10_Patch_BE" -l /var/log/lucreate.log
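
If I go the raw-slice route, my understanding (please correct me if I am wrong) is that I first have to release the c0t1d0 submirrors so SVM no longer owns those slices. A rough sketch, using the d-numbers from my metastat -p output above:

metadetach d0 d3      # / submirror on c0t1d0s0
metadetach d1 d6      # swap submirror on c0t1d0s1
metadetach d5 d9      # /var submirror on c0t1d0s5
metaclear d3 d6 d9    # release the one-way concats so the raw slices are free
metastat -p           # confirm only the c0t0d0 submirrors remain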


Abhishek Singh asked:
arnold (accepted solution) commented:
I think you are referencing http://www.oracle.com/technetwork/server-storage/solaris10/solaris-live-upgrade-wp-167900.pdf

Breaking an existing redundancy setup increases risk, all the more so with aging drives.

What that document suggests, if you have the option, is to add another drive or two so you have a clone of the existing environment on which to perform the update, and then test.

My experience here is not recent; hit "Request Attention" to bring in additional people who may have more recent or better insight.

Which version are you on, and to which version are you going?
 
arnold commented:
I have not done this recently enough to have the details fresh. See if the following helps:
http://sysadmin-tips-and-tricks.blogspot.com/2012/07/solaris-using-live-upgrade-to-upgrade.html

Are you looking to break the mirrors as a means to recover in case of a failure or other issues?

Do you have a similar system in a test environment on which you can make these attempts?

Usually the main (global) environment was updated first, and then the zones. Check the Oracle site; I think they have guides and references that could be useful.
 
Abhishek Singh (Author) commented:
I have two doubts about this:

1. Am I detaching the correct disks with the commands below? That is, I am not sure whether the two mirror vdevs are striped or concatenated, and I want to be sure I am breaking one side of each mirror:
zpool detach z c0t1d0s7
zpool detach z c0t3d0
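
For what it's worth: if this box is on Solaris 10 9/10 (update 9) or newer, I believe zpool split could be used instead of zpool detach, since (as I understand it) a detached device is no longer importable as a copy of the pool, while split writes a new pool label on the devices it takes. zbackup below is just a placeholder name:

zpool split z zbackup c0t1d0s7 c0t3d0
zpool import          # the new pool 'zbackup' should show up as importable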


2. I can see the command below in a few docs, but my setup is different (SVM + ZFS). Do I need to include the z pool, and if so, what would I add to the command?
lucreate -c "Solaris-10" -m /:/dev/dsk/c0t1d0s0:ufs -m -:/dev/dsk/c0t1d0s1:swap -m /var:/dev/dsk/c0t1d0s5:ufs -n "Solaris_10_Patch_BE" -l /var/log/lucreate.log
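
From what I have read so far, my understanding (please confirm) is that the non-root ZFS datasets count as shareable file systems, so they are not listed with -m and get shared between the two BEs rather than copied. After lucreate finishes, I was planning to verify with:

lustatus                        # both BEs listed, the new one complete but not yet active
lufslist Solaris_10_Patch_BE    # file systems that belong to the new BE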



 
arnold commented:
My recollection here is not clear. Do you still have a maintenance contract with Oracle so you can consult their documentation?

I am not sure what the purpose of the separation is.

It seems as though breaking the mirrors is an attempt to retain a version of the system prior to the patch attempt, i.e. in the event of a failure you'll have a path back.

Do you have backups?

Do you have a test system?
 
Abhishek Singh (Author) commented:
You are right. I am installing a patch cluster. Before applying it, I want to break the mirrors so I have a backup; if a bad patch causes a failure, I should be able to revert to the previous state.
I am trying to find the correct commands in the Oracle documentation.
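
As far as I understand, Live Upgrade itself also gives me a rollback path: if the patched BE misbehaves after the reboot, I should be able to reactivate the old one. Roughly (untested on this box):

luactivate Solaris-10    # re-activate the unpatched BE
init 6                   # Live Upgrade wants init/shutdown, not reboot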

This is not a test system, but it is not Production either :-)
 
Abhishek Singh (Author) commented:
It will stay the same release, but the kernel patch level will change. It is a patch cluster (a bundle of 300+ patches) that will bring many patches up to their latest revision.
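
The rough sequence I have in mind for applying the bundle to the alternate BE is below; the unpack path is a placeholder, and I still need to confirm the exact invocation (I have also seen mention that newer Recommended patchsets ship an installcluster script that can target an alternate BE):

luupgrade -t -n Solaris_10_Patch_BE -s /var/tmp/10_Recommended <patch IDs from the cluster's patch_order file>
luactivate Solaris_10_Patch_BE
init 6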
 
arnold commented:
As noted, my experience with patching is not recent enough for me to be comfortable advising in detail.

The issue with severing a mirror on a functional, live system is capturing the OS in a consistent state and then confirming that the detached copy is actually bootable and functional.