ZFS: some practical questions

Hi,

I have some practical questions on ZFS.

1. Hanccocka mentioned he has NFS + iSCSI running. Why is that? Why both iSCSI AND NFS?

2. How do I configure iSCSI on the Nexenta side (the ESXi side is no problem)?

3. If you add a CIFS share, won't it impact the performance of the VMs? Or is this negligible?

4. Does the performance of ZFS improve after a while (once cache/logs are built up)?

5. How many VMs do you have running on a ZFS NAS? I would like to run all my VMs on it but wonder whether it could take that (will the cache be used more heavily so it CAN take it, or is it simply: fewer VMs = better performance)?
VMs I would like to run: a Windows 7 workstation, SQL Server 2008, Windows SCCM, Windows SCOM, vCenter 5, 2 domain controllers, 2 XenApp servers, and maybe also Windows 2012 and Windows 8 as a test drive.
=> Hence question 4: I had the impression performance got better after a while. I just want to know whether I would be over-taxing ZFS by running all these VMs (in which case I would run some on local SSDs in the ESX server) or whether I would be failing to take advantage of what it is capable of.

6. One more thing: what should I do with 3 NICs in the ZFS box? Use them all or only two of them, and how should I configure them?

Please advise.
J.
janhoedt asked:
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
1. NFS is easier and quicker to set up; there is no requirement for creating and assigning LUNs. Some workloads are better on iSCSI (it can be faster).

NFS is native among Linux and Unix, and it is easier to transfer files over it than over standard Windows Samba/CIFS.

If you can enable Jumbo Frames (JF), see my EE article:

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

Also, the best way to set up iSCSI on ESXi:

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0
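As a rough sketch of the ESXi-side commands (not taken from the article itself; the adapter name vmhba33 and the portal address are placeholders), enabling the software iSCSI initiator from the ESXi 5.x shell looks like:

```shell
# enable the software iSCSI initiator (ESXi 5.x esxcli syntax)
esxcli iscsi software set --enabled=true

# list adapters to find the software initiator's name (often vmhba33 or higher)
esxcli iscsi adapter list

# point dynamic discovery at the SAN's portal (address is a placeholder)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260

# rescan so the new LUNs show up
esxcli storage core adapter rescan --adapter=vmhba33
```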

2. See this Tutorial
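In case that tutorial is unavailable: Nexenta is OpenSolaris-based, so the iSCSI target side is COMSTAR. A minimal sketch, assuming a pool named tank and an illustrative zvol name (NexentaStor also exposes all of this through its web GUI):

```shell
# enable the COMSTAR framework and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default

# create a sparse zvol to export as a LUN; size and name are illustrative
zfs create -s -V 200G tank/esxi-lun0

# register the zvol as a SCSI logical unit; note the GUID it prints
stmfadm create-lu /dev/zvol/rdsk/tank/esxi-lun0

# expose the LU to all initiators (use host/target groups to restrict access)
stmfadm add-view <GUID-from-create-lu>

# create an iSCSI target for ESXi's software initiator to discover
itadm create-target
```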

3. Negligible, as long as you are not reading and writing to CIFS much. It is certainly not as busy as NFS or iSCSI.
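For reference, on a Solaris-derived ZFS appliance a CIFS share is just a dataset property; a minimal sketch (dataset and share names are illustrative):

```shell
# enable the in-kernel SMB server
svcadm enable -r smb/server

# create a dataset and share it over CIFS under the name "data"
zfs create tank/data
zfs set sharesmb=name=data tank/data

# check that the share is active
sharemgr show -vp
```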


4. Performance diminishes if you use up nearly all the disk space. Otherwise, performance should be roughly the same from the moment you have built the pool with logs and cache.
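The "logs and cache" referred to here are per-pool devices: a mirrored SLOG for synchronous writes (which NFS to ESXi generates a lot of) and an L2ARC device for reads. A minimal sketch, with illustrative device names:

```shell
# add a mirrored SSD pair as a separate intent log (SLOG)
zpool add tank log mirror c1t4d0 c1t5d0

# add a single SSD as an L2ARC read cache (nothing is lost if it fails)
zpool add tank cache c1t6d0

# confirm the layout
zpool status tank
```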

5. Different labs have different numbers of VMs: 40-50 on NFS, including domain controllers, SQL Server, vCenter, Operations Manager, VDI, Windows 2008 servers, Linux servers, and Solaris servers.

6. Team and trunk your NICs, at least for resilience (with a low-powered processor it is difficult to fully saturate even a single 1 GbE NIC).
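On the storage side, teaming is done with dladm; a sketch assuming two e1000g interfaces and a static placeholder address (older Nexenta builds use ifconfig rather than ipadm, so ifconfig is shown; LACP needs matching switch-side configuration):

```shell
# aggregate two physical links into one logical link (names are illustrative)
dladm create-aggr -l e1000g1 -l e1000g2 aggr0

# plumb the aggregated interface with a static address (placeholder IP)
ifconfig aggr0 plumb 192.168.1.10/24 up

# verify the aggregation and its ports
dladm show-aggr
```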
 
janhoedt (Author) commented:
1. iSCSI for "some" workloads, so you wouldn't recommend it by default (for performance reasons, just as you would use vmxnet3 rather than e1000 as the NIC in VMware machines)?

2. Great tutorial, thanks!

3. OK, I might consider bringing all the data from my Synology over to ZFS.

4 & 5. Wow, so performance is only impacted by lack of disk space, not by the number of VMs?
40 or 50 VMs on one ZFS box?

6. OK, so adding an extra NIC might not be worth it. However, if I were to put my data on it and back it up over the network, plus migrate VMs once in a while to the other NAS (Synology), it might still be a good option to add more NICs, right?
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
1. Again, whether you use iSCSI or NFS is up to you. Test, and see which is better for you. It's not a ZFS SAN question; it's the age-old question of NFS versus iSCSI.

2. No problem.

3. We use CIFS.

4 & 5. Yes, it performs well for us on NFS.

6. Two NICs are always better than one. But in our tests we have not been able to saturate a single 1 GbE NIC, because of the performance of the server's CPU.
Question has a verified solution.
