Solved

Dump from Linux to Windows NAS is failing at 2 GB

Posted on 2011-10-20
Medium Priority
520 Views
Last Modified: 2012-06-27
I am running this dump command:

 FSYS=`df -k | grep '^/dev/.*hd' | awk '{print $1}' | sort`          # Create list of filesystems
 DUMP=/mnt/mynas
 for fs in $FSYS ; do
  dump 0fuM - $fs | gzip > $DUMP/`echo $fs | sed -e 's!/!_!g'`.dump.gz
 done

But the last filesystem I am trying to back up is bigger than 2 GB, and I get this error:

 DUMP: 69.68% done at 4929 kB/s, finished in 0:06
  DUMP: Broken pipe
  DUMP: The ENTIRE dump is aborted.
./dumpme: line 5:  6605 Exit 3                  dump 0fua - $fs
      6606 File size limit exceeded| gzip >$DUMP/`echo $fs | sed -e 's!/!_!g'`.dump.gz

I also modified my script to try to split the dump file:

FSYS=`df -k | grep '^/dev/.*hd' | awk '{print $1}' | sort`          # Create list of filesystems
 DUMP=/mnt/mynas/sdq1
 for fs in $FSYS ; do
  dump 0fuMa - $fs | split --bytes=1500m | gzip > $DUMP/`echo $fs | sed -e 's!/!_!g'`.dump.gz
 done

But I get this error:

  DUMP: Volume 1 started with block 1 at: Thu Oct 20 12:13:26 2011
  DUMP: dumping (Pass III) [directories]
  DUMP: dumping (Pass IV) [regular files]
split: xac: No space left on device
  DUMP: Broken pipe
  DUMP: The ENTIRE dump is aborted.

Apparently I am running out of space on the Linux side, since I have 600 GB free on the NAS side. With no output prefix, split writes its pieces (xaa, xab, ...) into the current directory on the Linux box rather than onto the NAS.
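
In case it matters, here is a sketch of how I think the pipeline would have to be rearranged so that split writes its pieces straight onto the NAS mount (untested; the last argument to split is just an output prefix):

 FSYS=`df -k | grep '^/dev/.*hd' | awk '{print $1}' | sort`
 DUMP=/mnt/mynas
 for fs in $FSYS ; do
  # Compress first, then split the compressed stream; "-" makes split read
  # stdin, and each 1500 MB piece (...aa, ...ab, ...) lands on the NAS
  dump 0uf - $fs | gzip | split --bytes=1500m - $DUMP/`echo $fs | sed -e 's!/!_!g'`.dump.gz.
 done

Restoring would then mean concatenating the pieces back together: cat ...dump.gz.* | gunzip | restore -rf -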

How do I back up a filesystem that is larger than 2 GB to a share on the NAS? If the filesystem is smaller than 2 GB, it backs up to the NAS just fine...


Question by:someITGuy

13 Comments
 
LVL 68
Expert Comment by:woolmilkporc
ID: 37003374
Try to increase the filesize limit with

ulimit -f unlimited

(provided your hard limit allows it)

"man ulimit" for more.

wmp

Author Comment by:someITGuy
ID: 37003612
I tried ulimit -f unlimited, but got this error:


 DUMP: 42.95% done at 4555 kB/s, finished in 0:13
  DUMP: 66.46% done at 4699 kB/s, finished in 0:07
  DUMP: Broken pipe
  DUMP: The ENTIRE dump is aborted.
./dumpme: line 5:  7531 Exit 3                  dump 0fua - $fs
      7532 File size limit exceeded| gzip >$DUMP/`echo $fs | sed -e 's!/!_!g'`.dump.gz

LVL 68
Expert Comment by:woolmilkporc
ID: 37003712
So the problem is with the NAS mount.

Try mounting your NAS share with the lfs (large file support) option.

If you're using smbmount:

smbmount //Hostname/sharename /local/mountpoint -o username=username,password=password,lfs

If you're using cifs mount (maybe because smbmount is not available on your system anymore) try

mount -t cifs //Hostname/sharename /local/mountpoint -o user=username,password=password,lfs
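
Once remounted, a quick sanity check before rerunning the dump is to try writing a file bigger than 2 GB to the share (bigfile.test is just a scratch name):

 # Write ~3 GB of zeros; if this aborts around 2 GB with "File too large"
 # or "File size limit exceeded", the mount or the ulimit is still capped
 dd if=/dev/zero of=/local/mountpoint/bigfile.test bs=1M count=3000
 rm /local/mountpoint/bigfile.test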

LVL 7
Expert Comment by:icenick
ID: 37004709
How about posting the output of the following, please:

 df -hT
 mount
 more /etc/fstab
 

Author Comment by:someITGuy
ID: 37022137

[root@sdm1 nas-backup]# df -hT
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/hda2     ext2    7.7G  2.6G  4.8G  36% /
/dev/hda1     ext2     79M  9.6M   66M  13% /boot
/dev/hda5     ext2     66G  6.9G   55G  12% /usr2
none         tmpfs    220M     0  220M   0% /dev/shm
sdq1:/         nfs    3.9G  827M  2.9G  23% /sdq1
sdq1:/usr2     nfs     69G  7.5G   58G  12% /sdq1/usr2
sdm1o:/        nfs    415M  285M  109M  73% /sdm1o
sdm1o:/usr2    nfs    3.6G  983M  2.4G  29% /sdm1o/usr2
sdq0:/         nfs    7.7G  798M  6.6G  11% /sdq0
sdq0:/usr2     nfs     66G  7.4G   55G  12% /sdq0/usr2


The one I am having an issue with is /dev/hda5, which is 66 GB (55 GB of it free)...



[root@sdm1 nas-backup]# mount
/dev/hda2 on / type ext2 (rw,noatime)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/hda1 on /boot type ext2 (rw)
/dev/hda5 on /usr2 type ext2 (rw,noatime)
none on /dev/shm type tmpfs (rw)
sdq1:/ on /sdq1 type nfs (rw,soft,rsize=8192,wsize=8192,addr=10.0.50.17)
sdq1:/usr2 on /sdq1/usr2 type nfs (rw,soft,rsize=8192,wsize=8192,addr=10.0.50.17)
sdm1o:/ on /sdm1o type nfs (rw,soft,rsize=8192,wsize=8192,addr=10.0.50.14)
sdm1o:/usr2 on /sdm1o/usr2 type nfs (rw,soft,rsize=8192,wsize=8192,addr=10.0.50.14)
sdq0:/ on /sdq0 type nfs (rw,soft,rsize=8192,wsize=8192,addr=10.0.50.16)
sdq0:/usr2 on /sdq0/usr2 type nfs (rw,soft,rsize=8192,wsize=8192,addr=10.0.50.16)


[root@sdm1 nas-backup]# more /etc/fstab
LABEL=/                 /                       ext2    defaults,noatime  1 1
LABEL=/boot             /boot                   ext2    defaults        1 2
LABEL=/usr2             /usr2                   ext2    defaults,noatime  1 2
none                    /dev/pts                devpts  gid=5,mode=620  0 0
none                    /proc                   proc    defaults        0 0
none                    /dev/shm                tmpfs   defaults        0 0
/dev/hda3               swap                    swap    defaults        0 0
/dev/fd0                /mnt/floppy             auto    noauto,owner,kudzu 0 0
/dev/cdrom      /mnt/cdrom      iso9660 noauto,ro,user  0 0

sdq0:/      /sdq0       nfs     soft,rw,rsize=8192,wsize=8192   0 0
sdq0:/usr2  /sdq0/usr2  nfs     soft,rw,rsize=8192,wsize=8192   0 0

sdq1:/      /sdq1       nfs     soft,rw,rsize=8192,wsize=8192   0 0
sdq1:/usr2  /sdq1/usr2  nfs     soft,rw,rsize=8192,wsize=8192   0 0

sdm1o:/      /sdm1o     nfs     soft,rw,rsize=8192,wsize=8192   0 0
sdm1o:/usr2  /sdm1o/usr2        nfs     soft,rw,rsize=8192,wsize=8192   0 0


So how do I dump a 55 GB filesystem to an EMC NAS?

Author Comment by:someITGuy
ID: 37022569
Actually, how do I back up 6.6 GB over to an EMC NAS running CIFS?

LVL 68
Expert Comment by:woolmilkporc
ID: 37022819

I don't see any CIFS mount above, neither in fstab nor in the mount output.

Anyway, I already gave you the command to mount your NAS via CIFS:

mount -t cifs //Hostname/sharename /local/mountpoint -o user=username,password=password,lfs

Mount the share as shown above, then point your DUMP variable at the new mountpoint ("/local/mountpoint" is just an example!).
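
Put together, the sequence would look something like this (hostname, share, credentials and mountpoint are placeholders):

 # Mount the share via cifs; the cifs client should not need the lfs
 # option that smbfs required for files over 2 GB
 mount -t cifs //Hostname/sharename /local/mountpoint -o username=username,password=password
 # Point the existing dump script at the new mountpoint
 DUMP=/local/mountpoint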

Author Comment by:someITGuy
ID: 37025088
Mounting the filesystem with the lfs option does not seem to work; I am still getting:

DUMP: write error 2097170 blocks into volume 1: File too large


LVL 68
Expert Comment by:woolmilkporc
ID: 37025160
2097170 blocks (1 KB each) is just 18 KB over 2 GiB (2 GiB = 2,097,152 one-KB blocks; 2,097,170 - 2,097,152 = 18).

This must be a ulimit problem!

Please double check ulimit -f!

Author Comment by:someITGuy
ID: 37025307
[root@sdq0 backup]# ulimit -f
unlimited

Author Comment by:someITGuy
ID: 37025401
It turns out I was mounting with smbmount, which mounts as smbfs. When I try to explicitly mount as cifs, I get this error:

[root@sdq1 etc]# mount -t cifs //mynas/TeleInfo-backups  /mnt/mynas -o username=someusername/domain,password=somepassword,lfs
mount error 13 = Permission denied
Refer to the mount.cifs(8) manual page (e.g.man mount.cifs)

LVL 68
Accepted Solution by:woolmilkporc (earned 2000 total points)
ID: 37025653
Format is

mount -t cifs //mynas/TeleInfo-backups  /mnt/mynas -o username=someusername,domain=domain,password=somepass

or

mount -t cifs //mynas/TeleInfo-backups  /mnt/mynas -o username=domain/someusername,password=somepass
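
As an aside, mount.cifs can also read the credentials from a file, which keeps the password off the command line and out of the shell history (the path /root/.smbcred is just an example):

 # /root/.smbcred (chmod 600) would contain:
 #   username=someusername
 #   password=somepass
 #   domain=domain
 mount -t cifs //mynas/TeleInfo-backups /mnt/mynas -o credentials=/root/.smbcred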

Author Comment by:someITGuy
ID: 37026306
Still having issues with cifs.

We decided to take a different tack: we created and mounted an NFS filesystem on my EMC NAS.

So far, running the dump command works on my CentOS 4.4 box; it is failing on my White Box Enterprise Linux 3.0 box on the big filesystem (hda5). I am now trying it on my CentOS 3.9 box and my Red Hat 6.2 box.

Here are the commands I am running inside a script:

dump -0ua -f /mnt/mynas/sdq1-hda2-backup /dev/hda2
dump -0ua -f /mnt/mynas/sdq1-hda1-backup /dev/hda1
dump -0ua -f /mnt/mynas/sdq1-hda5-backup /dev/hda5
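
For what it's worth, a quick way to verify each finished image is to list its table of contents with restore:

 # -t lists the archive contents, -f names the dump file to read
 restore -tf /mnt/mynas/sdq1-hda2-backup | head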