krisdigitx asked:
Using a Bacula NFS mount to a NAS server

Hi,

I just wanted to get some opinions on configuring this setup. I have a 7TB server which I mount as an NFS share on a Bacula server (Ubuntu 8.10) for backups. The NFS/NAS server is connected to the Bacula server back-to-back by an Ethernet connection.

The NAS server (Debian) is a software RAID 5 array of 6 x 1.5TB IDE/SATA disks.

Are there any constraints in the long run? Will there be performance issues, and which areas would need to be upgraded?
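For context, the export/mount side of a setup like this would typically look along these lines (hostnames, IPs and paths here are placeholders, not my exact config):

# on the NAS (Debian), /etc/exports
/export/backup  192.168.10.2(rw,sync,no_subtree_check)

# on the Bacula server (Ubuntu 8.10), /etc/fstab
192.168.10.1:/export/backup  /mnt/backup  nfs  rw,hard,intr  0  0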

Any suggestions would be helpful.

Cheers!
nabeelmoidu replied:
I think the main points would be the cache on the device, your controller speed, and network connectivity. If you can give more details about your hardware, we can help you out. It's not very likely you'll find people here who have actually worked with Bacula.
Copying 7TB over a gigabit wire will take about 15 hours.
Reading 7TB off a 50MB/s disk (common) will take closer to 40 hours.
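Back-of-the-envelope arithmetic behind those figures, taking 7TB as 7,000,000MB and gigabit as roughly 125MB/s:

# transfer-time estimates for a 7TB backup
echo "scale=2; 7000000 / 125 / 3600" | bc    # gigabit wire at ~125MB/s -> ~15.5 hours
echo "scale=2; 7000000 / 50 / 3600"  | bc    # single 50MB/s disk       -> ~39 hours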
Striping the disks (RAID 0) in the backup server is an absolute must; software RAID is fine for this.
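As a sketch, a striped scratch array could be built with mdadm roughly like this (device names and mount point are placeholders, and the create command destroys whatever is on the member disks):

# build a 2-disk RAID 0 array and mount it as the backup spool
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext3 /dev/md0
mount /dev/md0 /backup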

A benchmark like iozone will tell you more.

gzip -1 compression can help
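In Bacula that is normally set per FileSet; a minimal sketch, with the FileSet name and path made up for illustration:

FileSet {
  Name = "WebData"
  Include {
    Options {
      signature = MD5
      compression = GZIP1   # lowest gzip level, cheapest on CPU
    }
    File = /var/www/vhosts
  }
}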
Teaming network adapters is an absolute must if the NFS server has to keep serving clients while it is being backed up (otherwise expect the backup window to double).
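On Debian that usually means installing ifenslave and adding a bond stanza to /etc/network/interfaces; a rough sketch (interface names and address are placeholders, and exact option spellings vary between ifenslave versions):

auto bond0
iface bond0 inet static
    address 192.168.10.1
    netmask 255.255.255.0
    bond-slaves eth0 eth1    # the two teamed NICs
    bond-mode balance-rr     # round-robin; 802.3ad needs switch support
    bond-miimon 100          # link monitoring interval in ms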

A backup does not benefit from the disk cache: it reads all of the physical media, so it is very likely to hit disk errors. Use smartmontools religiously.
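A minimal smartmontools routine might look like this (device names are placeholders):

# one-off health check and a long self-test on a member disk
smartctl -H /dev/sda
smartctl -t long /dev/sda

# /etc/smartd.conf: monitor everything, short self-test nightly at 02:00,
# long test on Saturdays at 03:00, mail root on trouble
DEVICESCAN -a -s (S/../.././02|L/../../6/03) -m root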

You might need to rebuild gzip for your CPU so that it runs at full speed.
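Rebuilding gzip for the CPU is just a source build with tuned compiler flags; a sketch with an illustrative version number and flags:

# rebuild gzip tuned for Opteron (version and flags are examples only)
tar xzf gzip-1.3.12.tar.gz && cd gzip-1.3.12
./configure CFLAGS="-O2 -march=opteron"
make && make install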

PS: Bacula is most likely to be seen in academic settings.
krisdigitx (ASKER) replied:
This is the server configuration; smartmontools is already installed and running:
lf01:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 65
model name      : Dual-Core AMD Opteron(tm) Processor 2214
stepping        : 3
cpu MHz         : 2211.442
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips        : 4426.26
clflush size    : 64
power management: ts fid vid ttp tm stc

processor       : 1
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 65
model name      : Dual-Core AMD Opteron(tm) Processor 2214
stepping        : 3
cpu MHz         : 2211.442
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips        : 4422.70
clflush size    : 64
power management: ts fid vid ttp tm stc

processor       : 2
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 65
model name      : Dual-Core AMD Opteron(tm) Processor 2214
stepping        : 3
cpu MHz         : 2211.442
cache size      : 1024 KB
physical id     : 1
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 2
initial apicid  : 2
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips        : 4422.71
clflush size    : 64
power management: ts fid vid ttp tm stc

processor       : 3
vendor_id       : AuthenticAMD
cpu family      : 15
model           : 65
model name      : Dual-Core AMD Opteron(tm) Processor 2214
stepping        : 3
cpu MHz         : 2211.442
cache size      : 1024 KB
physical id     : 1
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 3
initial apicid  : 3
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 1
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy
bogomips        : 4422.72
clflush size    : 64
power management: ts fid vid ttp tm stc

lf01:~#


lf01:~# free
             total       used       free     shared    buffers     cached
Mem:       3632608     301664    3330944          0       1180     191116
-/+ buffers/cache:     109368    3523240
Swap:      3903752          0    3903752
lf01:~#


dmesg is of more help.

If you are running an amd64 system, it is possible that gzip is already optimal.

Could you please run iozone to see whether the disks are faster than the network and whether you would gain from compression?
I ran this test over the NFS link.

/var/www/vhosts is mounted from the NFS server (the AMD 7TB server):
        Iozone: Performance Test of File I/O
                Version $Revision: 3.327 $
                Compiled for 32 bit mode.
                Build: linux

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

        Run began: Mon Jan 11 11:11:26 2010

        Excel chart generation enabled
        Record Size 4 KB
        File size set to 102400 KB
        Command line used: /opt/iozone/bin/iozone -R -l 10 -u 10 -r 4k -s 100m -F /var/www/vhosts/f1 /var/www/vhosts/f2 /var/www/vhosts/f3 /var/www/vhosts/f4 /var/www/vhosts/f5 /var/www/vhosts/f6 /var/www/vhosts/f7 /var/www/vhosts/f8 /var/www/vhosts/f9 /var/www/vhosts/10 output.txt
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 10
        Max process = 10
        Throughput test with 10 processes
        Each process writes a 102400 Kbyte file in 4 Kbyte records

        Children see throughput for 10 initial writers  =  915439.45 KB/sec
        Parent sees throughput for 10 initial writers   =   24475.38 KB/sec
        Min throughput per process                      =     573.60 KB/sec
        Max throughput per process                      =  414805.00 KB/sec
        Avg throughput per process                      =   91543.95 KB/sec
        Min xfer                                        =     420.00 KB

        Children see throughput for 10 rewriters        = 1085968.31 KB/sec
        Parent sees throughput for 10 rewriters         =   73246.33 KB/sec
        Min throughput per process                      =       0.00 KB/sec
        Max throughput per process                      =  690697.44 KB/sec
        Avg throughput per process                      =  108596.83 KB/sec
        Min xfer                                        =       0.00 KB

        Children see throughput for 10 readers          =  110983.26 KB/sec
        Parent sees throughput for 10 readers           =  105681.77 KB/sec
        Min throughput per process                      =    9486.93 KB/sec
        Max throughput per process                      =   15482.62 KB/sec
        Avg throughput per process                      =   11098.33 KB/sec
        Min xfer                                        =   64336.00 KB

        Children see throughput for 10 re-readers       = 2314355.01 KB/sec
        Parent sees throughput for 10 re-readers        = 1982440.32 KB/sec
        Min throughput per process                      =    8974.01 KB/sec
        Max throughput per process                      = 1037539.81 KB/sec
        Avg throughput per process                      =  231435.50 KB/sec
        Min xfer                                        =     924.00 KB

        Children see throughput for 10 reverse readers  = 1539350.92 KB/sec
        Parent sees throughput for 10 reverse readers   = 1482131.59 KB/sec
        Min throughput per process                      =      34.76 KB/sec
        Max throughput per process                      =  786095.81 KB/sec
        Avg throughput per process                      =  153935.09 KB/sec
        Min xfer                                        =       4.00 KB

        Children see throughput for 10 stride readers   = 1288892.79 KB/sec
        Parent sees throughput for 10 stride readers    = 1216024.05 KB/sec
        Min throughput per process                      =   28681.06 KB/sec
        Max throughput per process                      =  659254.19 KB/sec
        Avg throughput per process                      =  128889.28 KB/sec
        Min xfer                                        =    4544.00 KB

        Children see throughput for 10 random readers   = 1522220.33 KB/sec
        Parent sees throughput for 10 random readers    = 1467732.92 KB/sec
        Min throughput per process                      =     724.64 KB/sec
        Max throughput per process                      =  825160.56 KB/sec
        Avg throughput per process                      =  152222.03 KB/sec
        Min xfer                                        =      92.00 KB

        Children see throughput for 10 mixed workload   = 1137702.12 KB/sec
        Parent sees throughput for 10 mixed workload    =   26290.57 KB/sec
        Min throughput per process                      =    1133.19 KB/sec
        Max throughput per process                      =  684931.06 KB/sec
        Avg throughput per process                      =  113770.21 KB/sec
        Min xfer                                        =     172.00 KB


        Children see throughput for 10 random writers   =  564308.42 KB/sec
        Parent sees throughput for 10 random writers    =    1153.03 KB/sec
        Min throughput per process                      =     464.65 KB/sec
        Max throughput per process                      =  456997.94 KB/sec
        Avg throughput per process                      =   56430.84 KB/sec
        Min xfer                                        =     436.00 KB

        Children see throughput for 10 pwrite writers   =  559548.02 KB/sec
        Parent sees throughput for 10 pwrite writers    =   26084.40 KB/sec
        Min throughput per process                      =    1613.10 KB/sec
        Max throughput per process                      =  339352.72 KB/sec
        Avg throughput per process                      =   55954.80 KB/sec
        Min xfer                                        =      40.00 KB

        Children see throughput for 10 pread readers    =  110567.23 KB/sec
        Parent sees throughput for 10 pread readers     =  104486.70 KB/sec
        Min throughput per process                      =    9584.21 KB/sec
        Max throughput per process                      =   16444.08 KB/sec
        Avg throughput per process                      =   11056.72 KB/sec
        Min xfer                                        =   59728.00 KB



"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 4 Kbytes "
"Output is in Kbytes/sec"

"  Initial write "  915439.45

"        Rewrite " 1085968.31

"           Read "  110983.26

"        Re-read " 2314355.01

"   Reverse Read " 1539350.92

"    Stride read " 1288892.79

"    Random read " 1522220.33

" Mixed workload " 1137702.12

"   Random write "  564308.42

"         Pwrite "  559548.02

"          Pread "  110567.23


iozone test complete.


-s should exceed all imaginable caches by a factor of 2-10.

-s 8G or more would look like a real hardware test.

On the other hand, the results are quite good for memory cache: they exceed gigabit network speed, which means there is no need for NFS tuning.
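For a run that measures the disks rather than RAM, something like the following could be used (the file path is a placeholder); the -I flag asks for O_DIRECT where the filesystem supports it:

# 8GB working set so it cannot fit in the ~3.5GB of RAM
/opt/iozone/bin/iozone -R -r 64k -s 8g -I -i 0 -i 1 -f /var/www/vhosts/bigtest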
So you mean Bacula would work seamlessly with this config?
ASKER CERTIFIED SOLUTION
gheist

I have configured Bacula to use the NFS mount from the storage server, and it works pretty well now. I will keep an eye on it.