SaffronThePuppy

asked on

FIO test against an NFS mount

I am conducting a POC against a few popular NAS filer systems (the first example runs ZFS), and I wish to use FIO to test remote throughput, latency, etc. I'm relatively new to Linux.
I mounted the remote system as an NFS mount and am now attempting to run FIO against it with the following options:
"fio --directory=./ --rw=randwrite --bs=4k --rwmixread=100 --iodepth=1 --numjobs=1 --name=4ktestwrite --size=100M"

Are there any FIO experts out there who can walk me through some of these parameters and/or discuss some reasonable FIO tests against a ZFS NAS?

Thanks.
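A quick annotated reading of that command, going by fio's documented options (a sketch; note that --rwmixread only applies to mixed workloads such as --rw=randrw, so fio will ignore it with --rw=randwrite):

# --directory=./   : create the test file(s) in the given directory (here, the current one)
# --rw=randwrite   : pure random writes (use randrw for a mixed read/write workload)
# --bs=4k          : block size of each IO
# --iodepth=1      : IOs kept in flight per job (values above 1 only take effect with an async ioengine such as libaio)
# --numjobs=1      : number of parallel worker processes
# --size=100M      : size of the test file and the total amount of IO per job
# --name=...       : job name, also used to name the test file
fio --directory=./ --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --size=100M --name=4ktestwrite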
robocat

Benchmarking requires that you know the IO requirements of the application that you will run on the NAS.

E.g. databases or VMware require mainly random IO, while file sharing of large multimedia files requires mainly sequential IO. You also need to know the number of simultaneous users/processes accessing the NAS, the block size of the average IO request, and so on.

You can only perform a valid benchmark if you know this info. Then you can use a tool like FIO to mimic the real world usage and get results that have actual meaning.

If you can provide this type of info, I can help to translate this into FIO options.
Any thoughts?
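As a concrete illustration of mimicking a known workload (a hypothetical profile, not the asker's actual one): a large-file streaming share could be approximated with sequential 1M reads from a few concurrent jobs, for example:

# hypothetical example: 4 concurrent sequential readers, 1M blocks, 60 seconds,
# run against the NFS mount point used later in this thread
fio --directory=/mnt/nfs --name=seqread-profile --rw=read --bs=1M \
    --numjobs=4 --size=2G --runtime=60 --time_based --group_reporting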
SaffronThePuppy

ASKER

Here are the reqs I received. I'm still trying to understand exactly what they want to get from this but here we go.
The key is that this is for an NFS target.

Test 1
- 1 LUN, 16 threads, queue depth 16
- Report IOPS, average latency, max latency, and standard deviation at 100% read and at 100% write
- Block sizes (KB): 4, 8, 16, 32, 64, 128

Test 2
- 2 LUN sizes; report IOPS, average latency, max latency, and standard deviation at 70% read / 30% write
- Threads: 2, 4, 8, 16
- Queue depths at each thread count: 2, 4, 8, 16
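In fio terms, "threads" corresponds roughly to --numjobs (optionally with --thread so fio uses threads rather than processes) and "queue depth" to --iodepth with an asynchronous ioengine. One Test 1 data point (16 threads, QD 16, 100% random read, 4k blocks) might be approximated like this; a rough sketch of the canned spec, with the mount point, file size, and runtime as placeholders:

# hypothetical sketch of a single Test 1 data point; repeat per block size and for --rw=randwrite
fio --directory=/mnt/nfs --name=test1-randread-4k --rw=randread --bs=4k \
    --ioengine=libaio --iodepth=16 --numjobs=16 --size=1G --runtime=60 \
    --time_based --group_reporting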
robocat

I'm not sure where you got this from.

Testing LUNs with various queue depths etc. is a whole different story, and all of this makes no sense for the question you're asking.

The danger of synthetic benchmark testing is that you do it without knowing the real-world requirements. Some devices will perform well for one task and not for another, and vice versa; often there's no "best" choice in general.

So info needed:

- what's the application (RDBMS, file server, VMware, ...)
- average block size ("variable block sizes" is not the answer)
- average number of simultaneous users/processes
- average % of reads and writes
- average % of random versus sequential
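Each of those questions maps fairly directly onto a fio option; a rough correspondence, offered as a sketch:

# application type       -> overall access pattern and ioengine choice (e.g. libaio for database-like IO)
# average block size     -> --bs
# simultaneous users     -> --numjobs (plus --iodepth for outstanding IOs per job)
# % reads vs writes      -> --rwmixread, used with a mixed --rw type
# random vs sequential   -> --rw (read/write/rw vs randread/randwrite/randrw)
# hypothetical example: 8 users, 70% reads / 30% writes, random 8k IO
fio --directory=/mnt/nfs --name=mixed-8k --rw=randrw --rwmixread=70 --bs=8k \
    --ioengine=libaio --iodepth=4 --numjobs=8 --size=1G --runtime=60 --time_based --group_reporting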
SaffronThePuppy

ASKER

I understand what you're getting at, and thank you for responding. In this case it's a standardized/canned test that I was given, and I have no background knowledge of the customer or the specific application's IO profile.
I would settle for this:
How do I run fio against a directory rather than a device? I need to run it against /mnt/nfs as opposed to /dev/sdb or /dev/sdc.
If I can get this running, I can ask more intelligent questions.

The following syntax fails:

fio --filename=/mnt/NFS --rw=randrw --ioengine=libaio --bs=4k --rwmixread=1-- --iodepth=2 --numjobs=1 --runtime=2 --group_reporting --name=4ktest

It returns: "fio: pid=10709, err=21/file:filesetup.c:573, func=open(/mnt/NFS), error=Is a directory"
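That error is expected from fio's side: --filename must point at a file (or block device), not at a directory. Two common ways to target the mount instead (a sketch of standard fio usage, not necessarily what the accepted solution below says; the mix, size, and runtime values are placeholders):

# option 1: let fio create its own test files inside the mount
fio --directory=/mnt/NFS --name=4ktest --rw=randrw --ioengine=libaio --bs=4k \
    --rwmixread=70 --iodepth=2 --numjobs=1 --size=100M --runtime=60 --time_based --group_reporting

# option 2: point --filename at a regular file inside the mount
fio --filename=/mnt/NFS/fio-testfile --name=4ktest --rw=randrw --ioengine=libaio --bs=4k \
    --rwmixread=70 --iodepth=2 --numjobs=1 --size=100M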
ASKER CERTIFIED SOLUTION
robocat

This solution is only available to Experts Exchange members.
SaffronThePuppy

ASKER

Makes sense. And the following worked:
fio --rw=randrw --ioengine=libaio --bs=64 --rwmixread=30 --iodepth=6 --numjobs=5 --runtime=30 --name=4ktest --size=64 --output=out1

thanks robocat
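One caveat about that final command: fio interprets --bs and --size values without a unit suffix as bytes, so --bs=64 and --size=64 mean 64 bytes, not 64 KB, and --rwmixread=30 gives 30% reads / 70% writes, the inverse of the 70% read / 30% write mix in the canned spec. A corrected version might look like this (the block size, file size, and target directory are assumptions):

# hypothetical corrected run: 64k blocks, a 1G working set, 70% reads, files created on the NFS mount
fio --directory=/mnt/NFS --rw=randrw --ioengine=libaio --bs=64k --rwmixread=70 \
    --iodepth=6 --numjobs=5 --size=1G --runtime=30 --time_based --group_reporting \
    --name=64ktest --output=out1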