How to reliably measure the performance of a FreeNAS zpool?

Asked by J Z (SysAdmin):

I created a few different zpool configs on ZFS using FreeNAS and I'm trying to figure out a way to reliably benchmark the performance.

I first tried it from a remote host connecting to an SMB share. From Windows I can't get any higher than 60 MBps, both using a normal file copy (one big file) and using various benchmarking tools. On Linux I tried writing a file to an SMB-mapped drive using a simple

    dd if=/dev/zero of=/mnt/cifs-share1/test1.img bs=1G count=5

There I get up to 106 MBps, at which point I guess I'm hitting the 1 Gbps Ethernet connection speed limit.
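One caveat I should probably account for: FreeNAS enables lz4 compression on new datasets by default, and a stream from /dev/zero compresses to almost nothing, so a zero-filled dd can overstate what the disks actually sustain. A minimal sketch of a less compressible variant (paths and sizes are placeholders; conv=fsync is a GNU dd option that includes the final flush in the timing):

    # generate ~5 GiB of incompressible data locally first,
    # so /dev/urandom speed doesn't limit the copy itself
    dd if=/dev/urandom of=/tmp/random5g.img bs=1M count=5120
    # copy it to the SMB mount, flushing before dd reports a rate
    dd if=/tmp/random5g.img of=/mnt/cifs-share1/test1.img bs=1M conv=fsync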

On to the host itself: so far I tried the default way using iozone, which ships with FreeNAS. But the results seem too high to be true, so the question is: how do I benchmark reliably, and how do I interpret the results?
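The usual suspect for too-good numbers is the ARC absorbing the writes. A sketch of a run that should be less cache-friendly, per the iozone documentation (the 16G per-process file size is an assumption; it should be scaled so the aggregate across processes is at least twice the installed RAM):

    # -i 0 -i 1 = write and read tests, -e = include fsync/flush in timing,
    # -r 128k matches the ZFS default recordsize
    iozone -i 0 -i 1 -e -r 128k -s 16G -l 5 -u 5 -R | tee -a /tmp/iozone_results.txt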

The result I'm getting is:

    sudo iozone -i 0 -R -l 5 -u 5 -r 4k -s 10G | tee -a /tmp/iozone_results.txt
        Excel chart generation enabled
        Record Size 4 kB
        File size set to 10485760 kB
        Command line used: iozone -i 0 -R -l 5 -u 5 -r 4k -s 10G
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Min process = 5
        Max process = 5
        Throughput test with 5 processes
        Each process writes a 10485760 kByte file in 4 kByte records

        Children see throughput for  5 initial writers  = 1224543.30 kB/sec
        Parent sees throughput for  5 initial writers   = 1158243.81 kB/sec
        Min throughput per process                      =  238010.19 kB/sec
        Max throughput per process                      =  249081.27 kB/sec
        Avg throughput per process                      =  244908.66 kB/sec
        Min xfer                                        = 10019684.00 kB

        Children see throughput for  5 rewriters        = 1195993.89 kB/sec
        Parent sees throughput for  5 rewriters         = 1123344.59 kB/sec
        Min throughput per process                      =  231760.47 kB/sec
        Max throughput per process                      =  243582.41 kB/sec
        Avg throughput per process                      =  239198.78 kB/sec
        Min xfer                                        = 9976868.00 kB

"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 4 kBytes "
"Output is in kBytes/sec"

"  Initial write " 1224543.30

"        Rewrite " 1195993.89 


Does that mean an average throughput of roughly 239 MBps per process? And that would be for writes only, since -i 0 covers just initial write and rewrite and produces no read numbers.

The other volume which is a raidz volume gives me more or less the same result.

While the test is running, a "zpool iostat volume1 2" is showing me values below 50 MBps:

                   capacity     operations    bandwidth
    pool        alloc   free   read  write   read  write
    ----------  -----  -----  -----  -----  -----  -----
    volume1     2.90G  1.61T      0     80      0   320K
    volume1     2.90G  1.61T      0      0      0      0
    volume1     2.90G  1.61T      0    193      0   952K
    volume1     2.90G  1.61T      0    274      0  4.94M
    volume1     2.90G  1.61T      0    515      0  22.7M
    volume1     3.03G  1.61T      0  1.28K      0  61.7M
    volume1     3.03G  1.61T      0  3.15K      0  29.2M
    volume1     2.91G  1.61T      0  6.44K      0  26.8M
    volume1     2.94G  1.61T      0  10.5K      0  47.0M
    volume1     2.91G  1.61T      0  13.0K      0  54.6M
    volume1     2.92G  1.61T      0  12.5K      0  54.8M
    volume1     2.94G  1.61T      0  16.6K      0  69.0M
    volume1     2.97G  1.61T      0  17.5K      0  72.7M
    volume1     2.95G  1.61T      0  13.6K      0  56.4M
    volume1     2.95G  1.61T      0      0      0      0
    volume1     2.95G  1.61T      0      0      0      0
    volume1     2.97G  1.61T      0     89      0   360K
    volume1     2.97G  1.61T      0      0      0      0


So what is the performance that the system really delivers?

As background information, this is the layout of my zpools:

    % zpool status
      pool: freenas-boot
     state: ONLINE
      scan: scrub repaired 0 in 0h1m with 0 errors on Mon Oct 30 03:46:11 2017
            NAME        STATE     READ WRITE CKSUM
            freenas-boot  ONLINE       0     0     0
              da25p2    ONLINE       0     0     0
    errors: No known data errors
      pool: volume1
     state: ONLINE
      scan: none requested
            NAME                                            STATE     READ WRITE CKSUM
            volume1                                         ONLINE       0     0     0
              mirror-0                                      ONLINE       0     0     0
                gptid/98f7028d-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/9c00c3d8-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
              mirror-1                                      ONLINE       0     0     0
                gptid/9eb0346f-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/a1665232-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
              mirror-2                                      ONLINE       0     0     0
                gptid/a41a8f63-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/a6deacd3-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
              mirror-3                                      ONLINE       0     0     0
                gptid/a9a301ce-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/ac795d8a-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
              mirror-4                                      ONLINE       0     0     0
                gptid/af3d96cf-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/b204d519-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
              mirror-5                                      ONLINE       0     0     0
                gptid/b4cef34e-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/b7830138-bbd8-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
    errors: No known data errors
      pool: volume2-raidz
     state: ONLINE
      scan: none requested
            NAME                                            STATE     READ WRITE CKSUM
            volume2-raidz                                   ONLINE       0     0     0
              raidz1-0                                      ONLINE       0     0     0
                gptid/11c9e37e-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/14913aab-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/1746d4bd-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/1a116763-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
              raidz1-1                                      ONLINE       0     0     0
                gptid/1ce3b132-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/1fd7fae1-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/22624372-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/251e482e-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
              raidz1-2                                      ONLINE       0     0     0
                gptid/27f70ec6-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/2adda23f-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/2dca82da-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
                gptid/308cea55-bbdc-11e7-a4c7-d8d3855d7106  ONLINE       0     0     0
    errors: No known data errors


Arnold (Distinguished Expert 2017):
SMB has its own handling overhead, as does NFS, though the two differ.

It depends on your needs: what average file size, and what quantity, do you expect to be transferred in and out of that location? That is how to gauge performance based on your actual utilization. I.e., if all the data stored is 60 kB to 1 MB files, testing throughput with a large 100 MB file tells you little.

Try this as a possible explanation: you have a cargo of 500 lbs/kg to transfer from point A to point B. In one case it is a single pallet; in the other, an assortment of boxes.

It will take you longer to load, transport, and unload the cargo as multiple boxes compared to the single pallet, even though the transport leg itself takes the same amount of time. A rough way to measure this effect over SMB is sketched below.
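
A minimal sketch of that comparison, assuming a Linux client with GNU dd and placeholder paths; compare the effective rate for the batch of small files against a single large copy:

    # write 500 files of 1 MiB each over SMB and time the whole batch
    mkdir -p /mnt/cifs-share1/smalltest
    time sh -c 'for i in $(seq 1 500); do
      dd if=/dev/urandom of=/mnt/cifs-share1/smalltest/f$i bs=1M count=1 status=none
    done'
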
J Z (SysAdmin):

OK Arnold, thank you. But what is your question, how big the files are going to be? Well, I assume some of it is going to be random (all kinds of things), and a part of it is going to be a lot of 13.2 MB files.

But the question was how to reliably test the expected throughput. I need to know that before I can be reasonably sure it makes sense to invest in 10 Gbps Ethernet. That's the main concern.
Arnold (Distinguished Expert 2017):
Gert, as you noted, throughput depends on the types of files and on how the data is accessed.

What accesses the data, and how, would be a better basis for deciding whether a 10 Gb network setup is worthwhile.

Since you are testing using SMB, this suggests a specific utilization: user access.
With multiple NICs, you could look into LAG (bonded/teamed links) to increase aggregate bandwidth; note that a single client connection still travels over one link, so bonding helps concurrent load more than any single copy.

The switch might be ....

I.e., if you test using a direct crossover cable, you can test raw NIC performance on the FreeNAS box.
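
To take the pool out of the picture entirely, a plain iperf run between the client and the FreeNAS box measures just the network path. A sketch, assuming iperf3 is present (FreeNAS has shipped iperf for a while; fall back to iperf 2 syntax if iperf3 is absent) and using a placeholder address:

    # on the FreeNAS box
    iperf3 -s
    # on the client: 30-second test with 4 parallel streams
    iperf3 -c 192.168.1.10 -t 30 -P 4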
