sunhux
asked on
accidentally ran "rm -r *" on one of the VxVM mirror disks (which includes root)
I'm practically green with VxVM & was trying to detach a rootdg
disk member to take a flash archive & blundered.
These are what I did on that fateful Solaris 8 while trying to
follow some of the suggestions in the link below :
http://www.sunmanagers.org/pipermail/sunmanagers/2001-June/004650.html
Both / & /la are mirrored VxVM partitions :
# ufsdump 0uf /dev/rmt/0n /
# ufsdump 0uf /dev/rmt/0n /la
(the other partitions are just swap, /var/run, /proc)
# vxprint -g rootdg -hmvps (hit ENTER at this point - did not complete the line)
# vxprint -g rootdg -hmvps rootvol rootvol-01> /var/tmp/rootdg.txt
# mkdir /rut
# mount /dev/dsk/c0t0d0s2 /rut
# cd /rut
# rm -r * (meant to do "rm -r 8_Recommended")
The server crashed to maintenance mode at this point, so in this mode,
I tried to clone back from the other good disk :
# dd if=/dev/rdsk/c0t8d0s2 of=/dev/rdsk/c0t0d0s2
& it's been 15 hrs & the "dd" is still running.
Should I stop the "dd" & attempt to restore back from the
tape using ufsrestore? As I'm completely green with VxVM,
I'd appreciate it if someone could supply the actual commands
/steps to recover from this (whether ufsrestore or any other
method).
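On the 15-hour "dd": with no bs= argument, dd copies a raw device 512 bytes at a time, so a clone of this size can easily take that long. A small sketch using ordinary files (hypothetical /tmp paths; the same bs= and verification idea applies to the real /dev/rdsk devices):

```shell
# dd defaults to 512-byte blocks; on a raw disk that means one tiny I/O
# per sector. A large bs= does the same copy in far fewer, larger I/Os.
dd if=/dev/urandom of=/tmp/src.img bs=1048576 count=4 2>/dev/null

# Clone with 1 MiB blocks (on the real disks this would be
# dd if=/dev/rdsk/c0t8d0s2 of=/dev/rdsk/c0t0d0s2 bs=1048576):
dd if=/tmp/src.img of=/tmp/dst.img bs=1048576 2>/dev/null

# Verify the clone before trusting it: matching checksums mean the
# copy is bit-for-bit identical.
cksum /tmp/src.img /tmp/dst.img
```

Whatever the recovery route, checksumming (or cmp-ing) source and destination after a clone is cheap insurance before rebooting from it.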
The configuration which I've managed to capture earlier :
# vxprint rootdg | more
Disk group: rootdg
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg rootdg rootdg - - - - - -
# vxprint -g rootdg | more
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg rootdg rootdg - - - - - -
dm disk01 c0t8d0s2 - 35363560 - - - -
dm rootdisk c0t0d0s2 - 35368271 - - - - erase this!!
sd rootdiskPriv - ENABLED 4711 - - - PRIVATE
v rootvol root ENABLED 10243888 - ACTIVE - -
pl rootvol-01 rootvol ENABLED 10243888 - ACTIVE - -
sd rootdisk-B0 rootvol-01 ENABLED 1 0 - - Block0
sd rootdisk-02 rootvol-01 ENABLED 10243887 1 - - -
pl rootvol-02 rootvol ENABLED 10243888 - ACTIVE - -
sd disk01-01 rootvol-02 ENABLED 10243888 0 - - -
ASKER
The various cylinder/partition information is still intact - I compared
the 2 disks' partition tables (partition, print) using "format" & the start/end
cylinders for every slice on both disks are still the same.
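For the record, that comparison can be scripted instead of eyeballing format's partition/print screens - a sketch using prtvtoc, the standard non-interactive Solaris tool for dumping a disk label (device names taken from this thread):

```
# Dump both VTOCs, dropping the comment header (lines starting with *,
# which contain the device path and would always differ):
prtvtoc /dev/rdsk/c0t0d0s2 | grep -v '^\*' > /tmp/vtoc.t0
prtvtoc /dev/rdsk/c0t8d0s2 | grep -v '^\*' > /tmp/vtoc.t8
# Empty diff output = identical slice tables (start/size of every slice)
diff /tmp/vtoc.t0 /tmp/vtoc.t8
```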
So I used ufsrestore to restore /dev/dsk/c0t0d0s0 & then dd to clone
it to /dev/dsk/c0t8d0s0 (they are Veritas mirrors of each other) :
a) I loaded the tape (which we took a 'ufsdump 0uf' to last Sat)
b) booted up the server from the Solaris CDROM
c) mount /dev/dsk/c0t0d0s0 /mnt
d) cd /mnt
e) ufsrestore rvf /dev/rmt/0n
f) cd
g) umount /mnt
h) dd if=/dev/rdsk/c0t0d0s0 bs=2097152 of=/dev/rdsk/c0t8d0s0
i) rebooted the server
But the boot process still hits an error & I have to press Ctrl-D &
it comes up in a sort of single-user mode.
Should I disable something in /etc/mdd.conf or rebuild the Veritas
rootdg mirror?
Appreciate any help to build back to its previous settings, esp Veritas commands
The problem now is that after booting up, it appears to be looking for
/dev/vx/vol/rootdg but fails. So the bootup process stops
at the point where it asks :
enter root password to go into maintenance mode or Ctrl-D to continue normal boot
If I press Ctrl-D, it boots up without mounting all the correct partitions/slices, but
/dev/dsk/c0t0d0s0 is mounted.
ASKER
/etc/vfstab :
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
#/dev/dsk/c1d0s2 /dev/rdsk/c1d0s2 /usr ufs 1 yes -
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/vx/dsk/swapvol - - swap - no -
/dev/vx/dsk/rootvol /dev/vx/rdsk/rootvol / ufs 1 no logging
/dev/vx/dsk/sla /dev/vx/rdsk/sla /sla ufs 2 yes -
swap - /tmp tmpfs - yes -
#NOTE: volume rootvol (/) encapsulated partition c0t0d0s0
#NOTE: volume swapvol (swap) encapsulated partition c0t0d0s1
#NOTE: volume la (/la) encapsulated partition c0t0d0s6
/etc/system :
* For ORACLE:
* 1 meg = 1048576
* 200 meg = 209715200
* 4 gig = 4294967295
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=50
set semsys:seminfo_semmni=512
set semsys:seminfo_semmsl=500
set semsys:seminfo_semmns=2000
set semsys:seminfo_semopm=100
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_semmnu=500
* end uidata
*
* vxfs_START -- do not remove the following lines:
*
* VxFS requires a stack size greater than the default 8K.
* The following values allow the kernel stack size
* for all threads to be increased to 16K.
*
set lwp_default_stksize=0x4000
* vxfs_END
* vxvm_START (do not remove)
forceload: drv/vxdmp
forceload: drv/vxio
forceload: drv/vxspec
forceload: drv/sd
forceload: drv/scsi
forceload: drv/pci
forceload: drv/ssd
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
* vxvm_END (do not remove)
set c2audit:audit_load = 1
set abort_enable = 0
* Attempt to prevent and log stack-smashing attacks
set noexec_user_stack = 1
set noexec_user_stack_log = 1
* Require NFS clients to use privileged ports
set nfssrv:nfs_portmon = 1
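Given the two files above, the usual way to get such a box booting again without VxVM (so the mirrors can be rebuilt afterwards) is to point root back at the physical slices recorded in the vfstab NOTE lines. A sketch of the edits, done after booting from CD and mounting the root slice - hedged, and worth confirming against the Veritas root-disk unencapsulation procedure:

```
# /etc/system - comment out the two lines that make the kernel look
# for the VxVM root volume (this is what sends boot hunting for
# /dev/vx/... and fails when the volumes are gone):
* rootdev:/pseudo/vxio@0:0
* set vxio:vol_rootdev_is_volume=1

# /etc/vfstab - replace the /dev/vx entries with the encapsulated
# slices from the NOTE lines (s0 = /, s1 = swap, s6 = the /la fs):
/dev/dsk/c0t0d0s1  -                   -    swap  -  no   -
/dev/dsk/c0t0d0s0  /dev/rdsk/c0t0d0s0  /    ufs   1  no   logging
/dev/dsk/c0t0d0s6  /dev/rdsk/c0t0d0s6  /la  ufs   2  yes  -
```

Once the system boots cleanly from the plain slices, the VxVM side (re-encapsulating and re-attaching the c0t8d0 mirror) can be tackled separately.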
ASKER
The normal boot (or 'ok boot disk:a') required Ctrl-D to continue booting
& after bootup, 'df -k' is as follows :
# cd etc
# pwd
/mnt/etc
/mnt/etc # df -k
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/rootvol 5043518 4852058 141025 98% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
swap 1356120 16 1356104 1% /var/run
swap 1356248 144 1356104 1% /tmp
/dev/dsk/c0t8d0s4 5043518 4847919 145164 98% /mnt
ASKER
The "rm -r *" was done on one of the VxVM mirror disk members as shown above.
Should I attempt to "dd ..." to clone back from the other member disk,
or should I now try to restore using the ufsdump that I took earlier,
before the blunder took place?