Solved

Solaris (ZFS): Backup failed

Posted on 2013-01-09
1,121 Views
Last Modified: 2013-01-15
Hello Experts,

When I try to run a backup on a Sun server (Solaris 10 with ZFS), the backup hangs and then fails after I type the zfs send command.

I thought the problem was hardware-related, so I replaced the tape drive with a brand-new one, but I still get the same errors.

errors

zfs send rpool/ROOT/S5@backup20130108x8100                                                                                                                                                  
warning: cannot send 'rpool/ROOT/S5@backup20130108x8100': I/O error    


dmesg
[ID 107833 kern.warning] WARNING: /pci@0/pci@0/pci@8/pci@0/pci@8/pci@0/scsi@8/st@0,0 (st3):
    Error for Command: write file mark         Error Level: Fatal
[ID 107833 kern.notice]     Requested Block: 5974873                   Error Block: 5974873
[ID 107833 kern.notice]     Vendor: HP                                 Serial Number:    9   $DR-1
[ID 107833 kern.notice]     Sense Key: Volume Overflow
[ID 107833 kern.notice]     ASC: 0x0 (end of partition/medium detected), ASCQ: 0x2, FRU: 0x0
[ID 107833 kern.notice]     End-of-Media Detected



zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT        
rpool                                 71.0G  63.3G    97K  /rpool            
rpool/ROOT                            58.7G  63.3G    21K  legacy            
rpool/ROOT/S5                       58.7G  63.3G  58.0G  /                 
rpool/ROOT/S5@backup20121210x27192  98.0M      -  57.9G  -                 
rpool/ROOT/S5@backup20121210x220     103M      -  57.9G  -                 
rpool/ROOT/S5@backup20130102x21444   233M      -  57.9G  -                 
rpool/dump                            2.00G  63.3G  2.00G  -                 
rpool/swap                             528M  63.9G    16K  -                 
rpool/var                             9.76G  63.3G    21K  /var              
rpool/var/opt                         9.76G  63.3G    21K  /var/opt          
rpool/var/opt/fds                     9.76G  63.3G  9.76G  /var/opt/fds      
   



zpool status                                                  
 
 pool: rpool                                                                
 state: ONLINE                                                               
 scrub: scrub completed after 0h35m with 0 errors on Tue Jan  8 03:35:04 2013
config:                                                                      
                                                                             
        NAME          STATE     READ WRITE CKSUM                             
        rpool         ONLINE       0     0     0                             
          mirror-0    ONLINE       0     0     0                             
            c1t0d0s0  ONLINE       0     0     0                             
            c1t1d0s0  ONLINE       0     0     0                             
        spares                                                               
          c1t3d0s0    AVAIL                                                  
          c1t2d0s0    AVAIL                                                  
                                                                             
errors: No known data errors


Question by:cismoney
9 Comments
 
LVL 47

Expert Comment

by:dlethe
ID: 38761847
You simply ran out of tape.  Nothing is wrong with the source; the problem is the destination and the methodology.  You don't use zfs send for a tape target.


http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Recommendations_for_Saving_ZFS_Data
 
LVL 22

Expert Comment

by:blu
ID: 38762686
What happened to the snapshot you are trying to send? It isn't on the list from "zfs list".
 
LVL 38

Accepted Solution

by:
Aaron Tomosky earned 400 total points
ID: 38763218
zfs send/receive is best used for sending a snapshot to another ZFS box somewhere. I personally use a script on the source box that uses ssh to trigger the receive on the other; it runs hourly, daily, and weekly. The point of tape is to store a point-in-time snapshot for a file system that doesn't have snapshot capabilities.
That said, you can pipe a zfs send into a tar archive, put that on external storage, and then receive from it on another box or at a later date. However, if you have ever tried to restore from tape, you know your chances of getting every bit back are slim, and the receive won't work unless the stream is completely intact.
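For illustration only, a minimal sketch of that send/receive-over-ssh approach (the host name backuphost, the target pool backuppool, and the snapshot name are placeholders, not names from this system):

# take a snapshot on the source box
zfs snapshot rpool/ROOT/S5@nightly

# stream it over ssh and receive it into a dataset on the second ZFS box
zfs send rpool/ROOT/S5@nightly | ssh backuphost zfs receive backuppool/S5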
 
LVL 47

Assisted Solution

by:dlethe
dlethe earned 100 total points
ID: 38763662
Aarontomosky makes another good point ... even if it did work (which it will, as long as there is enough room on the tape), the odds are you'll never be able to restore from it.  UNIX is great in the sense that it lets people do pretty much whatever they want and never asks are-you-sure.

But zfs send is the wrong tool for streaming straight to a tape.   You have to save the stream to a file on disk first, THEN break it up and write it to tape with tar, which is designed to deal with tape headers and end-of-media.   (Remember, tar stands for Tape ARchive.)
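As a rough sketch of that two-step approach (the staging path /backup is made up here, and if the stream is larger than one tape a multi-volume-capable tar such as GNU tar with -M would be needed):

# 1. save the send stream to a file on disk first
zfs send rpool/ROOT/S5@backup20130108x8100 > /backup/S5.zfs

# 2. then write that file to the tape device with tar
tar cvf /dev/rmt/0cn /backup/S5.zfs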
 
LVL 38

Expert Comment

by:Aaron Tomosky
ID: 38763780
How about we start with a question: from a high-level perspective, what are you trying to accomplish here?
 

Author Comment

by:cismoney
ID: 38766323
@aarontomosky, a customer has been using a script for many years to do the daily backups, but the backup hasn't succeeded for the last few days.

@dlethe, the tape I use has a capacity of 72 GB.

 

backup log
more backuplog11
Backup started at: Friday, January 11, 2013 02:00:00 AM GMT

Backing up with method: local_tape
ToC generated, the following sessions will be on tape

     0  ToC
     1  TimesTen Database Backup
     2  rpool Filesystem Backup
     3  rpool/ROOT Filesystem Backup
     4  rpool/ROOT/S5 Filesystem Backup
     5  rpool/var Filesystem Backup
     6  rpool/var/opt Filesystem Backup
     7  rpool/var/opt/fds Filesystem Backup
     8  Backup Log
tar: write error: unexpected EOF
tar: close error: I/O error



TimesTen backup started @ Friday, January 11, 2013 02:00:16 AM GMT
Backup started ...
Backup complete
Backing up TimesTen Database
tar: write error: unexpected EOF
a ./ 0 tape blocks
a ./s_db.0.bac55207 114144 tape blocks
TimesTen backup ended @ Friday, January 11, 2013 02:02:02 AM GMT

+ [[ -z /dev/rmt/0cn ]]
+ RVAL=0
+ IGNOREFILE=/.backup_ignored
+ ZIGNOREFILE=/.backup_ignored_zfs
+ FDSDISK=/bin/false
+ grep ^/var/opt/fds
+ /sbin/mount
+ 1> /dev/null 2>& 1
+ FDSDISK=/bin/true
+ date
+ echo Filesystem backup started @ Friday, January 11, 2013 02:02:02 AM GMT
Filesystem backup started @ Friday, January 11, 2013 02:02:02 AM GMT
+ true
+ + date +%Y%m%d
ZSNAPSHOTNAME=backup20130111x13948
+ + cut -d/ -f1
+ awk {if($2=="/") print $1} /etc/mnttab
RP=rpool
+ awk {if(($1 == "rpool" || substr($1, 0, length("rpool")+1) == "rpool/") && $3 == "filesystem") print $1}
+ zfs get type
+ grep ^rpool$ /.backup_ignored_zfs
+ 1> /dev/null 2>& 1
+ echo Backing up rpool file system
Backing up rpool file system
+ zfs snapshot rpool@backup20130111x13948
+ zfs send rpool@backup20130111x13948
+ 1> /dev/rmt/0cn
+ zfs destroy rpool@backup20130111x13948
+ grep ^rpool/ROOT$ /.backup_ignored_zfs
+ 1> /dev/null 2>& 1
+ echo Backing up rpool/ROOT file system
Backing up rpool/ROOT file system
+ zfs snapshot rpool/ROOT@backup20130111x13948
+ zfs send rpool/ROOT@backup20130111x13948
+ 1> /dev/rmt/0cn
+ zfs destroy rpool/ROOT@backup20130111x13948
+ grep ^rpool/ROOT/S5$ /.backup_ignored_zfs
+ 1> /dev/null 2>& 1
+ echo Backing up rpool/ROOT/S5 file system
Backing up rpool/ROOT/S5 file system
+ zfs snapshot rpool/ROOT/S5@backup20130111x13948
+ zfs send rpool/ROOT/S5@backup20130111x13948
+ 1> /dev/rmt/0cn
warning: cannot send 'rpool/ROOT/S5@backup20130111x13948': I/O error
+ RVAL=1
+ zfs destroy rpool/ROOT/S5@backup20130111x13948
+ grep ^rpool/var$ /.backup_ignored_zfs
+ 1> /dev/null 2>& 1
+ echo Backing up rpool/var file system
Backing up rpool/var file system
+ zfs snapshot rpool/var@backup20130111x13948
+ zfs send rpool/var@backup20130111x13948
+ 1> /dev/rmt/0cn
+ zfs destroy rpool/var@backup20130111x13948
+ grep ^rpool/var/opt$ /.backup_ignored_zfs
+ 1> /dev/null 2>& 1
+ echo Backing up rpool/var/opt file system
Backing up rpool/var/opt file system
+ zfs snapshot rpool/var/opt@backup20130111x13948
+ zfs send rpool/var/opt@backup20130111x13948
+ 1> /dev/rmt/0cn
+ zfs destroy rpool/var/opt@backup20130111x13948
+ grep ^rpool/var/opt/fds$ /.backup_ignored_zfs
+ 1> /dev/null 2>& 1
+ echo Backing up rpool/var/opt/fds file system
Backing up rpool/var/opt/fds file system
+ zfs snapshot rpool/var/opt/fds@backup20130111x13948
+ zfs send rpool/var/opt/fds@backup20130111x13948
+ 1> /dev/rmt/0cn
warning: cannot send 'rpool/var/opt/fds@backup20130111x13948': I/O error
+ RVAL=1
+ zfs destroy rpool/var/opt/fds@backup20130111x13948
+ zfs get type
+ awk {if(!($1 == "rpool" || substr($1, 0, length("rpool")+1) == "rpool/") && $3 == "filesystem") print $1}
+ date
+ echo Filesystem backup ended @ Friday, January 11, 2013 02:04:57 AM GMT
Filesystem backup ended @ Friday, January 11, 2013 02:04:57 AM GMT
+ exit 1


All sessions written (except the log) at: Friday, January 11, 2013 02:04:57 AM GMT

Backup failed at: Friday, January 11, 2013 02:07:24 AM GMT



 script.txt
 
LVL 47

Expert Comment

by:dlethe
ID: 38766467
I don't care how long the customer has been doing it incorrectly ... zfs send to tape is incorrect.  The proof is that it has now failed.
 

Author Comment

by:cismoney
ID: 38766539
How can I modify the script to make it work?
 
LVL 38

Expert Comment

by:Aaron Tomosky
ID: 38768355
What else can you back up to? How large is your pool? The best choice, IMO, is to set up another ZFS box (ZFSguru is quick, easy, and stable) and send to that. I've got the commands for doing inclusive incremental sends so you keep your snapshot history on the second box.
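Not those exact commands, but a rough sketch of what an incremental send chain can look like (backuphost and backuppool are placeholder names, the snapshot names are examples, and the base snapshot is assumed to already exist on the receiving side):

# one-time full send of the oldest snapshot
zfs send rpool/ROOT/S5@backup20130101 | ssh backuphost zfs receive backuppool/S5

# later: send everything between the old and the new snapshot;
# -I also replicates the intermediate snapshots, so the history is kept,
# and -F on the receive rolls the target back to its last received snapshot first
zfs send -I rpool/ROOT/S5@backup20130101 rpool/ROOT/S5@backup20130111 \
    | ssh backuphost zfs receive -F backuppool/S5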
