ZFS config question using FreeNAS

I have a tower case with an Intel 64-bit CPU and 16GB of RAM, but my question is about the disk layout.

I have two 120GB Intel SSDs and 6 x 3TB Seagate 7200RPM disks.

I want the optimum layout for a general NAS server that I can do NFS reads and writes to, for backups and media storage.

The 3TB drives I was going to put in a ZFS RAIDZ2, and mirror the 120GB SSDs as a log device to improve writes. For the cache, I was hoping the 16GB of RAM would pick up that load.

I was also playing with the idea of mirroring the SSDs but splitting them, half for logs and half for cache. The thought was that the SSDs could take the load and the mirror would mitigate an SSD failure.
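
Roughly what I had in mind (the device names ada6/ada7 and the 60G split are just assumptions for illustration; FreeNAS would normally do the partitioning from its GUI):

# Sketch only -- assumes the two SSDs show up as ada6 and ada7
gpart create -s gpt ada6
gpart add -t freebsd-zfs -s 60G -l ssd0-log ada6
gpart add -t freebsd-zfs -l ssd0-cache ada6
gpart create -s gpt ada7
gpart add -t freebsd-zfs -s 60G -l ssd1-log ada7
gpart add -t freebsd-zfs -l ssd1-cache ada7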

I would have loved to configure ZFS as true tiered storage, where it would auto-move blocks between the SSDs and the faster and slower HDs (I have old disks). The tower can hold 10 HDs and 4 SSDs.

[root@freenas /]# uname -a
FreeBSD freenas.local 8.3-RELEASE-p7 FreeBSD 8.3-RELEASE-p7 #1 r249203M: Sat Apr  6 09:28:27 PDT 2013     root@build.ixsystems.com:/tank/home/jpaetzel/fn8.3/freenas/os-base/amd64/tank/home/jpaetzel/fn8.3/freenas/FreeBSD/src/sys/FREENAS.amd64  amd64
[root@freenas /]# dmesg | grep mem
real memory  = 17179869184 (16384 MB)
avail memory = 16425308160 (15664 MB)
[root@freenas /]# zpool status
  pool: ZFS-Raid-Z2
 state: ONLINE
  scan: none requested

      NAME                                            STATE     READ WRITE CKSUM
      ZFS-Raid-Z2                                     ONLINE       0     0     0
        raidz2-0                                      ONLINE       0     0     0
          gptid/5643058c-1198-11e3-9e5a-d43d7e35d587  ONLINE       0     0     0
          gptid/5699113f-1198-11e3-9e5a-d43d7e35d587  ONLINE       0     0     0
          gptid/571ddcf8-1198-11e3-9e5a-d43d7e35d587  ONLINE       0     0     0
          gptid/57aadd13-1198-11e3-9e5a-d43d7e35d587  ONLINE       0     0     0
        mirror-1                                      ONLINE       0     0     0
          gptid/57e90a29-1198-11e3-9e5a-d43d7e35d587  ONLINE       0     0     0
          gptid/57fc1f5e-1198-11e3-9e5a-d43d7e35d587  ONLINE       0     0     0

errors: No known data errors
[root@freenas /]#
Brian S (retired geek) asked:

Daniel Helgenberger commented:
I think you may be looking for ZFS Hybrid Storage Pools (HSP)?
This should in turn be quite easy using zpool add (a sketch follows below); please keep in mind there are some restrictions.
Also, for RAM caching, set swapfs_minfree in /etc/system.

If you use both, you will have 3-tier storage:
1. RAM cache
2. SSD pool
3. SATA disks
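
For example, adding a mirrored log and a pair of cache devices to an existing pool would look roughly like this (the pool name and the GPT labels are only placeholders for whatever your SSD partitions end up being called):

# Sketch only -- substitute your own pool name and partition labels
zpool add ZFS-Raid-Z2 log mirror gpt/ssd0-log gpt/ssd1-log
zpool add ZFS-Raid-Z2 cache gpt/ssd0-cache gpt/ssd1-cache
# Note: cache (L2ARC) devices cannot be mirrored; ZFS just stripes reads across them
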
Brian S (retired geek, author) commented:
I have rebuilt my ZFS to be:

freeNAS -- Hostname = [freenas.local]
  pool: ZFS-Raid-Z3
 state: ONLINE
  scan: none requested

      NAME                                            STATE     READ WRITE CKSUM
      ZFS-Raid-Z3                                     ONLINE       0     0     0
        raidz3-0                                      ONLINE       0     0     0
          gptid/8c883387-2c25-11e3-b2be-d43d7e35d587  ONLINE       0     0     0
          gptid/8d0787a7-2c25-11e3-b2be-d43d7e35d587  ONLINE       0     0     0
          gptid/8d9706b1-2c25-11e3-b2be-d43d7e35d587  ONLINE       0     0     0
          gptid/8de55100-2c25-11e3-b2be-d43d7e35d587  ONLINE       0     0     0
          gptid/8e550f3e-2c25-11e3-b2be-d43d7e35d587  ONLINE       0     0     0
          gptid/8ee253a0-2c25-11e3-b2be-d43d7e35d587  ONLINE       0     0     0
        mirror-1                                      ONLINE       0     0     0
          gptid/8f1af888-2c25-11e3-b2be-d43d7e35d587  ONLINE       0     0     0
          gptid/8f31465c-2c25-11e3-b2be-d43d7e35d587  ONLINE       0     0     0
        gptid/8f481f6d-2c25-11e3-b2be-d43d7e35d587    ONLINE       0     0     0

errors: No known data errors
NAME          SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
ZFS-Raid-Z3  16.2T   631G  15.6T     3%  1.10x  ONLINE  /mnt
This gives me about 8TB of disk space since I opted for RAIDZ3 (6 x 3TB with triple parity leaves three data disks, so roughly 9TB raw, or about 8TiB usable) -- this I think gives me the best striping and the best recovery if disks fail.

                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
ZFS-Raid-Z3   631G  15.6T     12  3.40K  40.8K  58.0M
ZFS-Raid-Z3   631G  15.6T     19  4.48K  28.6K  73.7M
ZFS-Raid-Z3   632G  15.6T     18  3.79K  27.6K  71.2M
ZFS-Raid-Z3   632G  15.6T     21  3.68K  30.0K  60.8M
ZFS-Raid-Z3   632G  15.6T     23  2.96K  34.8K  53.5M

Reads and writes seem to be OK. I see with iostat that the cache disk is being used and that the log disks are taking about 50% of the writes to the disks.
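
For reference, the per-vdev numbers (including the log and cache devices) can be watched with the verbose flag; the 5-second interval is arbitrary:

zpool iostat -v ZFS-Raid-Z3 5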

I'm currently moving data back onto the new ZFS filesystem via find/cpio commands.
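
Roughly along these lines (the source and destination paths are only placeholders):

# Pass-through copy: -d creates directories, -m preserves modification times, -v is verbose
cd /mnt/old-data
find . -depth -print | cpio -pdmv /mnt/ZFS-Raid-Z3/data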

My FreeNAS OS does not have a /etc/system file
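For what it's worth, /etc/system is Solaris-specific; on FreeBSD the ZFS RAM cache (the ARC) is normally capped with a loader tunable instead. The value below is only an illustration, and FreeNAS manages such tunables through its GUI rather than by editing the file directly:

# /boot/loader.conf -- illustrative value only (12 GiB)
vfs.zfs.arc_max="12884901888"
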
Reading through the guides I am a bit confused -- the best practices state that I should use entire drives, but other guides sound like I can cut them up into slices and then have smaller zvols.

Brian S (retired geek, author) commented:
Sadly, ZFS doesn't seem to add any extra value for true tiered storage.