Drawbacks of small ZFS pools versus large

Mike R.
We are redesigning our current ZFS file server. It currently has only one pool of over 70TB.

I see this as a problem since, if there are enough disk failures within the pool to make it non-functional, we have to restore all 60TB of currently existing data.

A debate has arisen as to how to divide up the storage into smaller pools, and how big/small each of those pools should be.

As I see it, the only drawback of using pools that are "too small" is a lot of wasted disk space. We are using 4TB 7500 RPM spinning disks with 4TB SSD cache disks, so I'm leaning towards 20TB raidz vdevs. That means we get about 77% of the disk space as usable.
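For concreteness, a minimal sketch of one way such a layout could be created; the raidz level, pool name, and device names here are illustrative assumptions, not a recommendation:

```sh
# Hypothetical example: a pool built from one 6-disk raidz1 vdev of
# 4TB disks (5 data + 1 parity, roughly 20TB usable), with an SSD
# attached as L2ARC cache. Device names are placeholders.
zpool create tank \
  raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
  cache /dev/nvme0n1

# Capacity is grown later by adding whole vdevs of the same shape:
zpool add tank raidz1 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl

zpool status tank   # verify the resulting layout
```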

Q: Are there any other advantages/disadvantages to making the pool sizes too large/too small?
Aaron Tomosky, Director of Solutions Consulting
Commented:
The big thing missing here is your performance requirements. Personally, I feel disks are cheap, so I only do stripes of mirrors (basically RAID 10) at 50% capacity. I also snapshot all my pools and send the snapshots to a secondary server as my primary method of backup, with graduated retention.
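A rough sketch of that approach; the pool name, device names, snapshot names, and the target dataset are placeholders, not anything Aaron specified:

```sh
# Stripe of mirrors ("RAID 10"): each disk pair is a mirror vdev, and
# ZFS stripes across the mirror vdevs. Usable space is 50% of raw.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf

# Snapshot recursively, then replicate to a secondary server
# (assumes a pool named "backup" already exists on backuphost):
zfs snapshot -r tank@daily-2024-01-01
zfs send -R tank@daily-2024-01-01 | ssh backuphost zfs receive -F backup/tank

# Later runs send only the delta between the two snapshots:
zfs snapshot -r tank@daily-2024-01-02
zfs send -R -i tank@daily-2024-01-01 tank@daily-2024-01-02 \
  | ssh backuphost zfs receive -F backup/tank
```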
David Favor, Fractional CTO
Distinguished Expert 2018
Commented:
I'm with Aaron. Disks are super cheap. $500 for a 16TB drive. Cheaper if you buy in bulk.

Build up an array of many 16TB drives.

Also as Aaron said, the primary consideration relates to your entire infrastructure design + required performance.

For example, I run 100s of LXD containers housing 1000s of LAMP Stack sites.

All projects require high performance disk I/O.

ZFS performance inside containers, compared to EXT4 performance? There's no comparison. EXT4 is far more stable + much faster.

Stable meaning: ZFS is sometimes blazing fast + will then just stall for minutes, causing massive problems for high-traffic sites.

This might be related to LXD architecture. I'm unsure.

For my criteria - high performance LAMP Stacks running inside LXD containers - EXT4 is currently required.

So take Aaron's advice.

Start with your performance requirements, then start performance + load testing various filesystems to ensure you have the performance + stability required for your specific application.
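As one illustration of that kind of testing, a generic fio run like the one below can be repeated with the test directory on a ZFS dataset and then on an EXT4 mount, comparing the reported IOPS and latency. All the numbers here are placeholder values to be tuned to your actual workload, not recommendations:

```sh
# Hypothetical fio comparison run. Create the directory first
# (mkdir -p /tank/fiotest), run, then repoint --directory at an
# EXT4 mount and run again. Placeholder parameters throughout.
fio --name=mixed-rw \
    --directory=/tank/fiotest \
    --rw=randrw --rwmixread=70 \
    --bs=4k --size=2G \
    --numjobs=4 --iodepth=16 \
    --ioengine=libaio \
    --runtime=120 --time_based --group_reporting
```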
David Favor, Fractional CTO
Distinguished Expert 2018
Commented:
Aside: while it's not specifically a ZFS-related forum, many of the LXD developers do extensive work with ZFS.

Posting a question to https://discuss.linuxcontainers.org about best practices for tooling high-performance ZFS setups may provide you with some great info too.
Splitting a file server's volumes (files accessed over SMB/CIFS/NFS) has its pros and cons. It can be easier to have all files on one volume: any file move is then just a move, whereas moving between multiple volumes requires copying (much slower). But, as you are aware, the recovery time from a "disaster" can be significant, and it is possible to run out of space on one volume while there is still free space on other volumes.

For large file servers, restore time becomes a business continuity issue. To address this, it can be advantageous to have an online mirror. On the basis that the online mirror is only used when the main storage is in a DR state, the mirror could be built from lower-cost parts, such as 7.2k disks instead of 10k disks.

As ZFS is already in use, ZFS send/receive could be used for this, or other tools such as rsync.
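A minimal sketch of the rsync alternative, assuming hypothetical paths and a mirror host; the ZFS send/receive route would look like the incremental example earlier in the thread:

```sh
# Hypothetical nightly mirror job. -a preserves permissions, ownership,
# and timestamps; --delete keeps the mirror an exact copy by removing
# files that no longer exist on the source.
rsync -a --delete /tank/shares/ mirrorhost:/tank/shares/
```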
