Solved

Linux backup on FAT32 network drive

Posted on 2006-05-14
Last Modified: 2013-11-15
I want to back up the whole of my Linux system to a network drive which is formatted using FAT32. I want to produce a single .tgz file for my backup, and I have worked out the way to do it in Linux. The problem is that FAT32 has a maximum single-file size of ~4GB, and my backup is about 30GB. At the same time I have noticed that the MS Windows Backup Utility can produce backup files on a FAT32 disk which have no size limitation. How does MS Backup overcome the FAT32 size limitation, and is there a way to overcome this limitation through Linux?

Thanks
Question by:natuk
24 Comments
 
LVL 97

Expert Comment

by:Lee W, MVP
ID: 16677842
This is NOT true.  The Windows backup utility CANNOT make files larger than 4GB on a FAT32 partition - there have been several instances where people have had problems because of the FAT32 limitation and had their backups fail.  There is no way around this other than to use a different file system, or to manually back up folders in Linux, keeping each one UNDER 4GB.
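
For instance, a minimal sketch of the per-folder approach (the mount point and the directory list are placeholders; you would still have to verify that no single archive grows past 4GB):

#!/bin/bash
# Back up each top-level directory into its own .tgz on the FAT32 share,
# so no single archive has to hold the whole 30GB system.
for DIR in etc home usr var; do
        tar czf /mnt/nethdd/backup-${DIR}.tgz -C / ${DIR}
done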
 
LVL 25

Expert Comment

by:Cyclops3590
ID: 16678205
I've never tried it, but it seems there is a "-L" or "--tape-length" switch for tar where you can specify the length of the "tape" or file in this case.  I use dump and split volumes at a 2GB limit on my old linux boxes that don't support >2GB files and it works great.

I also recommend using dump for backups (my personal preference anyway) because you are given 10 levels of backups to play with.  
level 0 - full backup
level 1 - differential backup of everything that changed since last level <1
level 2 - differential backup of everything that changed since last level <2
and so on
Plus you can specify an archive file, so you can run a restore -i and interactively look through the dump archive file to specify which files you want to extract from the dump backup file.

At least that's how I recommend getting around the file limit problem; alternatively, move to backing up to a different fs that supports larger files, like leew mentioned.
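
A minimal sketch of the dump approach described above, assuming the dump/restore package is installed and the share is mounted at /mnt/nethdd (both placeholders):

# Level 0 (full) dump of the /home filesystem to the mounted share.
# -u records the dump in /etc/dumpdates so later levels know what changed;
# -B caps each volume at 2GB (counted in 1kB blocks) - dump prompts for
# the next volume name when the limit is reached.
dump -0u -B 2097152 -f /mnt/nethdd/home0.dump /home

# Later: a level 1 dump picks up everything changed since the last
# lower-level dump.
dump -1u -f /mnt/nethdd/home1.dump /home

# Interactively browse an archive and mark which files to extract.
restore -i -f /mnt/nethdd/home0.dump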
 
LVL 19

Expert Comment

by:jools
ID: 16681380
What type of network hard drive do you have?

I've used the iomega drives, and mounting them under samba limits the file size to 2GB. I need to try mounting as cifs to see if I get files up to 4GB, but I've not been able to check it out as yet.

I have this entry in my fstab file to mount the drive;
     //d250/nethdd          /shared/backup          smbfs   guest,rw        0 0

And you can use this to back up the system; substitute the variables to match information relating to your setup.
Note that I had problems if using gzip as this did not like spanned volumes;
     tar --create --same-permissions --exclude-from=${EXCLUDEFILE} --one-file-system --blocking-factor ${BLOCKSIZE} --multi-volume --new-volume-script ${FILECHANGER} --tape-length ${TAPELENGTH} --totals --label ${VOLLABEL} --file "${NETHDD}/${ARCHFILE}" ${MOUNTPOINT} >> ${LOGFILE} 2>&1

Some environment variables used are as follows;
     HOSTNAME="myhostname"
     DATE=`date +%d%m%y`
     NETHOST="d250"
     NETHDD="/shared/backup"
     BACKUPTMP=${NETHDD}"/tmp"
     LOGFILE="${NETHDD}/backup-${DATE}.log"
     BACKUPLIST="${BACKUPTMP}/backuplist.tmp"
     BLOCKSIZE="2048"
     TAPELENGTH="2045952"    # Just under the 2GB limit...  
     FILECHANGER="/secure/scripts/change_backup_file.sh"

I also had to use a filechanger script to rename the archive files if the filesystem was over 2GB (most are).

Note that the scripts were written ages ago and there may well be a more efficient way of doing this, I just can't be bothered to look at the script again because it works.

#!/bin/bash
# Change the name of the archive file so it does not get overwritten.
# ARCHFILE, NETHDD and LOGFILE are exported by the calling backup script.
# Find the basename of the archive file.
FILE=`basename ${ARCHFILE} .tar`
echo "New tar archive required - Checking for available filename..." >> ${LOGFILE}
for NUM in 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
do
# If the candidate archive file already exists, do nothing and loop on to the next one.
        if [[ ! -e ${NETHDD}/${FILE}${NUM}.tar ]]
        then
        # We now have an unused filename, so move the existing archive
        # to the new archive name/number and exit the loop. tar expects
        # an exit status of 0 here.
                echo "Moving ${FILE}.tar to ${FILE}${NUM}.tar" >> ${LOGFILE}
                mv ${NETHDD}/${ARCHFILE} ${NETHDD}/${FILE}${NUM}.tar
                exit 0
        elif [[ ${NUM} -eq 25 ]]
        then
        # We've tried all the numbers and still haven't found an unused
        # filename: write an error and make sure we exit with a non-zero status.
                echo "FATAL - All archives used"
                echo "${NETHDD}/${FILE}${NUM}.tar exists, no more archives left!"
                echo "Continuing the archive will result in an incomplete backup!"
                exit 99
        fi
done

I have a full script I'm happy to post here if you like; you can edit it to match your own needs if you wish.
 

Author Comment

by:natuk
ID: 16682014
Dear leew,

I am absolutely certain that the Windows backup utility produces files larger than 4GB. I am not sitting at the computer right now, but I can post a screenshot with the file size, maybe tonight. I was as amazed as you are about how this is possible. I must have inadvertently set it up in a strange mode - I don't know.
 
LVL 97

Expert Comment

by:Lee W, MVP
ID: 16682059
I never said it didn't - I said it doesn't on FAT32 drives.  On NTFS drives it'll make the file 100GB if the drive is large enough.  It is simply not possible to create a file larger than 4GB on a FAT32 partition.  Sorry.
 

Author Comment

by:natuk
ID: 16682065
The network drive is an iomega drive. And yes, the maximum file size I managed to get from Linux was 2GB.
 
LVL 19

Expert Comment

by:jools
ID: 16682426
It sounds like the type I have (160GB and 250GB). I back up to it as well using the method above. It works, but backing up 160GB over the network takes all day; this may, however, have something to do with the network switch I use and the fact that it's a FULL SYSTEM backup.

You could try mount -t cifs <options> <share> <mountpoint>. I can't do it on my system at the moment because I have a script that shuts down the drive every night to stop the noise.

J

 

Author Comment

by:natuk
ID: 16694251
Dear leew,

You are right. I am wrong. The documentation I read said FAT32. When I contacted iomega, they said that the file system is samba ext, hence the large files. Which means, of course, that Windows can mount the network drive as NTFS, taking advantage of the unlimited file size, whereas Linux mounts the drive as samba (and not samba ext), which means that there is a 4GB limitation. Which brings us to the question: how do I mount the network drive in Linux as samba ext?
 
LVL 19

Expert Comment

by:jools
ID: 16701925
What model drive?

All the Iomega network HDDs I've used have been FAT32.

If it really is an ext filesystem, what makes you think you could mount an ext filesystem as NTFS?

J
 

Author Comment

by:natuk
ID: 16703076
It's the iomega StorCenter 250GB. I was convinced it was FAT32, too. When I check the properties of the networked drive in Windows Explorer, it appears as NTFS. Does this make sense? How do I mount it as ext in Linux, though? Shouldn't it be as simple as mount //network/location /mount/path ?
 
LVL 25

Expert Comment

by:Cyclops3590
ID: 16703495
Well, you're going to need to reformat the StorCenter as ext3 in order to do your backups.

Although you can load an NTFS fs module onto your system, the only way Linux can write (safely and reliably, anyway) is through the smb protocol. The NTFS fs module for Linux will allow reading and that's it (safely, anyway).

If you need to access the StorCenter via Windows, then reformat it as FAT32 and use the switch on the tar command to break up the tar files.
 
LVL 19

Expert Comment

by:jools
ID: 16704308
If you want it available in Linux, leave it as FAT32; I use both my drives in this way (see posts above).

As Cyclops said, NTFS support in Linux is limited to read-only, which is no good for you. You do not need to reformat as ext; in fact, I don't think you can!

Have you tried my posts from the 15th?

From the command line;
# mount -t smbfs //ipaddress_or_netbios_name/NetHDD /mount/point
<you may be prompted for a password so just press enter>

or....
# mount.cifs //ipaddress_or_netbios_name/NetHDD /mount/point

to backup, use the tar example I posted above...

J
 

Author Comment

by:natuk
ID: 16706686
Iomega support said that the drive's filesystem is smb ext. It's not FAT32. As you said, jools, because it is a network drive there is no (obvious) way to reformat it in FAT32 (or indeed anything else). The web interface utility allows reformatting but it doesn't let you choose the filesystem. And I wouldn't want to mess with that anyway. So to recap:

The network drive is smb ext and cannot change.
Windows mounts the network drive as NTFS (and I can read and write) - that's fine.
Linux, according to jools, will mount the network drive as smb, but there will always be a limitation on the file size, is that right?
That limitation is overcome with the tar utility which splits the backup files.

I am happy with that. I am not at my system at the moment so I can't try jools' and Cyclops' suggestions for a few days, but I will do it when I am back; forgive me for the delay.

Isn't it strange, though, that Windows can mount an essentially Linux filesystem with no filesize limitation, but Linux can't do that?
 
LVL 19

Expert Comment

by:jools
ID: 16706882
Mounting as cifs should allow 4GB, but as yet I haven't tested this out; I'll be trying this during the day and will let you know.

Also, I found this: http://tinyurl.com/mr3sm and I have support emails stating that it's FAT32. I guess they have either had a change of firmware or you've been given duff information.

Either way, their site said up to 4GB file size and in reality it was only 2GB for Linux/samba systems. I tested it on a Windows system (98 and 2000) and was able to get the 4GB file limit OK. I had a moan but they could do nothing about it, so I wrote the backup script to split at 2GB.

I believe the StorCenter models use gigabit Ethernet, so as long as you can get a switch that negotiates at full duplex you should get a good transfer rate.

I'll post the results of using cifs later today or tomorrow sometime.

J



 
LVL 19

Expert Comment

by:jools
ID: 16706903
BTW,

Can you mount the drive up on your system and post the output of the `mount` command?

J
 
LVL 19

Expert Comment

by:jools
ID: 16707424
I have a backup using cifs running and it can now create files up to 4GB. I did the following;

Create a directory.
# mkdir /shared/backup

Mount the NetHDD.
# mount.cifs //d250/NetHDD /shared/backup

Backup using tar. Note that the exclude file must exist and the file changer script must also work! I've used the long option names so I can remember what the options are more easily if I need to update it.

# tar --create --same-permissions --exclude-from=/scripts/pictures.excl --one-file-system --blocking-factor 2048 --multi-volume --new-volume-script /scripts/new_file_changer.sh  --tape-length 4000000 --totals --label pictures --file /shared/backup/pictures.tar /shared/pictures

My directory listing looks a bit like the following. Note that pictures.tar is still being written to and the pictures(x).tar files are the completed volumes; the pictures filesystem is currently 17GB used, so I guess I should have 5 files once it's complete.
     4096786432  /shared/backup/homer/pictures0.tar
     483094528    /shared/backup/homer/pictures.tar
     4096786432  /shared/backup/homer/pictures1.tar
     4096786432  /shared/backup/homer/pictures2.tar

Remember that I had problems if using gzip with spanned files; I vaguely remember it backing up OK but recovery was a problem. This may have been an old version of tar I was using, though.
 
LVL 4

Expert Comment

by:jack_p50
ID: 16715405
natuk, why don't you use split (man 1 split)?

cd /windoze/backup/directory
tar c ..... /dirs/to/backup .... | gzip | split -b 4000000000 -
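
For reference, a filled-in sketch of that pipeline (the source directory, mount point and archive prefix are placeholders), together with the matching restore:

# Back up /home into pieces just under 4GB, named backup.tgz.aa, .ab, ...
tar czf - /home | split -b 4000000000 - /mnt/nethdd/backup.tgz.

# Restore: concatenate the pieces in order and unpack.
cat /mnt/nethdd/backup.tgz.* | tar xzf -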
 

Author Comment

by:natuk
ID: 16790689
I haven't forgotten this. I am still away but will be back in roughly two weeks. And no, it's not holidays...

Thanks for your patience.
 
LVL 19

Expert Comment

by:jools
ID: 16790782
:-) enjoy whatever it is you're doing!
 

Author Comment

by:natuk
ID: 16976541
Hello again,

I am sorry for the delay; I am back for a day and will be away again for a couple of weeks, so I have tried most of the above. The tar -L command is a solution, but it is too complicated, and I can't understand how Windows can mount the drive with no limit whereas Linux cannot. Further searching online indicated that the drive is formatted with the ext2 filesystem.

I am using this command to mount:
mount -t ext2 //iomega/nethdd /media/nethdd

and the error I get is:
special device //iomega/nethdd does not exist

However when I mount with samba, I get no errors.

Any more ideas?
 
LVL 19

Expert Comment

by:jools
ID: 16976768
Mount as cifs using the mount command I showed above on the 18th May.

Why do you think the -L is too complicated? Keep it in a script and there is no complication.

I'm convinced the drive is NOT ext2 but FAT32 or NTFS. The website indicated FAT32 is the default setup and I have emails which also show this. Your original posting even mentioned FAT32. If you convert the drive to NTFS, this will mean that you cannot mount the drive r/w in Linux (I think r/w NTFS support is still beta). For this reason I left my drives set up as FAT32. Can you check the drive configuration in the web configuration page to see if it is set up as NTFS or FAT32?

There is a difference between samba and cifs. If I mount with samba (I have the same type of drives, remember) I can only create a 2GB file; if I mount with cifs I can go up to 4GB and use tar to split the volume. I tried the split command but it takes *ages*.

I backup to this unit regularly and have no problems. I can even recover the whole system from these backups.

Post back the results of my May 18th posting - the commands typed and the results on the system, including df -k and mount output.

It would be good to get to the bottom of this one.

J


 

Author Comment

by:natuk
ID: 16982386
Dear jools,

The reason why I think it is not FAT32 is that Windows can write files bigger than 4GB - my Windows backup on the drive is a file of about 40GB. Also, I found this:

http://www.tomsnetworking.com/2006/02/20/review_storcenter250/page8.html

which says that the format is ext2 (with the old firmware, which I haven't changed). FAT32 is used for any additional USB drives, as the device has two USB ports on the back, but not for the device itself.

Also, my problem with the tar -L option is that if you have, say, 100GB of data to back up, this will result in 25 or 50 different files, and then directories or even single files are divided between tar files - it's not simple enough and things can go wrong.

When I am back, I will post all the results from the mount commands. As I said, I am away again for a couple of weeks, so again bear with me. This has been dragging on for a while but I'll do my best.

Thanks for your patience.

natuk
 
LVL 19

Accepted Solution

by:
jools earned 600 total points
ID: 16984277
OK, it would appear that they have indeed made some mods, perhaps they should update their support site! :-)

I think the confusion may be that although the drive itself may be ext2, it is being shared by samba (on the actual unit), so you won't be able to mount it as ext2: the Linux OS would expect an ext2 filesystem to be a locally attached (IDE/SCSI) type drive, not something over the network. The StorCenter docs say to use mount -t smbfs but, as we both know, there is a 2GB file limit when doing this.

I found some information relating to using large file support (lfs) when using smbfs, but this does not want to work on my system; maybe you could try;
     mount -t smbfs -o username=guest,password=password,lfs //iomega/nethdd /my/mountpoint

to see if it works for you...
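
If the lfs option does work for you, a hypothetical fstab entry in the same style as the one I posted earlier (share name and mount point are placeholders) would make the mount permanent:

     //iomega/nethdd        /media/nethdd           smbfs   guest,lfs,rw    0 0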

Catch you when you return.

J



 

Author Comment

by:natuk
ID: 17068808
Hurrah! At last, it worked! I mounted the drive using:
mount -t smbfs -o lfs <share> <directory>
and I am backing up the whole disk using tar. The tar file is already 5GB on the network drive and counting...
Some notes: I managed to get it to work under Ubuntu Breezy. I tried the same command a week ago on Ubuntu Dapper and it did not work. It might have to do with the updates (plenty of updates come out for Breezy although Dapper has been out for a while) but I haven't been following it closely enough to be certain. These updates seem to have affected unicode filenames on Samba mounts, but I don't mind since I am using tar - and it is a different issue anyway.
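
As a sanity check before trusting a long backup to a mount like this, a quick test along these lines (the test filename is a placeholder) confirms the share really accepts files past the 4GB mark:

# Write a 5GB test file; without large file support this fails at the
# 2GB or 4GB boundary instead of completing.
dd if=/dev/zero of=/media/nethdd/bigfile.test bs=1M count=5120
ls -l /media/nethdd/bigfile.test
rm /media/nethdd/bigfile.test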
