Solved

ZFS on VMware config & optimization check

Posted on 2014-01-27
Medium Priority
530 Views
Last Modified: 2014-12-11
Hi,

I have Nexenta ZFS deployed as a VM on my ESXi host (5.5; the server has only 16 GB of RAM), with NFS enabled for my VMs.

Practical questions:

*I only granted Nexenta 3 GB, but performance is terrible. What can I do to optimize VM performance ... besides granting more RAM?

*Does installing VMware Tools help performance? I installed it before (with a lot of hassle), but the VMXNET3 adapter is not supported.

*I wanted to install Nexenta on hardware-mapped laptop disks (mirror), but this didn't work. Is it a supported config / should it work ... should I?

*Any special configs on Nexenta for performance optimization?

*The Nexenta ZFS is on one SSD; can I back up the configs and use them for disaster recovery?

*Please check my config; it looks fine to me:

*Update: I raised the RAM to 8 GB and ran another VM (Windows 2012) from another machine; still terribly slow (still booting, 10 minutes in) ... What a disappointment.

nexenta

Question by:janhoedt
13 Comments
LVL 124
ID: 39812595
*I only granted Nexenta 3 GB, but performance is terrible. What can I do to optimize VM performance ... besides granting more RAM?

Increasing the amount of memory for the appliance will help, as it uses memory for cache.

Assign as much as you can; 8-16 GB is not uncommon for an appliance.

Assigning SSDs for the LOG (ZIL/SLOG) and L2ARC can also help, but you would probably see a bigger performance increase using SSDs alongside traditional rotational hard disks in ZFS.

Adding more drives to the pool also helps, e.g. four SSDs configured as two mirrored vdevs.
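As a sketch, a pool of four SSDs laid out as two mirrored vdevs (RAID 10 style) could be created like this; the pool name `tank` and the `cXtYdZ` device names are placeholders, check `format` or `zpool status` on your Nexenta box for the real device IDs:

```shell
# Create a pool striped across two mirrored vdevs (four SSDs total)
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

# Verify the layout
zpool status tank
```

Writes are striped across the two mirrors, roughly doubling IOPS over a single mirror while keeping single-disk redundancy in each vdev.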

*Does installing VMware Tools help performance? I installed it before (with a lot of hassle), but the VMXNET3 adapter is not supported.

VMXNET3 is a better virtualised network interface than the E1000, if you can get it to work!
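If VMware Tools does install cleanly, the adapter type can be switched in the VM's .vmx file (a sketch; `ethernet0` assumes the first NIC, and the change should be made with the VM powered off):

```
ethernet0.virtualDev = "vmxnet3"
```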

*I wanted to install Nexenta on hardware-mapped laptop disks (mirror), but this didn't work. Is it a supported config / should it work ... should I?

It should work.

*any special configs on Nexenta for performance optimization?

See above for performance tweaks (also enable jumbo frames end to end).
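Jumbo frames only help if they are enabled end to end: on the vSwitch, on the NFS VMkernel port, and inside the appliance. A sketch using the ESXi 5.5 esxcli; the vSwitch name, vmk interface, and NFS server IP are placeholders for your environment:

```shell
# On the ESXi host: set MTU 9000 on the vSwitch and the NFS VMkernel port
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000

# Verify end-to-end with a large, non-fragmenting ping to the NFS server
vmkping -d -s 8972 192.168.1.10
```

If the `vmkping` fails, something in the path (vSwitch, VMkernel port, or appliance NIC) is still at MTU 1500.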

*The Nexenta ZFS is on one SSD; can I back up the configs and use them for disaster recovery?

More disks would also give you more IOPS.
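On the disaster-recovery question: the pool layout travels with the disks themselves and can be re-attached on a rebuilt system with `zpool import`, and datasets can be replicated off-box with snapshots. A sketch; the pool, dataset, and host names are placeholders:

```shell
# Snapshot the dataset holding the VM data and replicate it to another box
zfs snapshot tank/nfs@backup1
zfs send tank/nfs@backup1 | ssh backuphost zfs receive backuppool/nfs

# After a rebuild, attach the surviving disks and re-import the pool
zpool import -f tank
```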
 

Author Comment

by:janhoedt
ID: 39812663
It's not just a little slow; Ctrl+Alt+Del takes 5 minutes! The VM is on the same machine as ZFS!
 

Author Comment

by:janhoedt
ID: 39812683
I can add as many disks or as much RAM as I want; this will never be acceptably fast. With all due respect, I really cannot understand why someone would suggest running ZFS virtualised. What a waste of my time.
 
LVL 124

Accepted Solution

by:
Andrew Hancock (VMware vExpert / EE MVE^2) earned 2000 total points
ID: 39812885
Is your ZFS server running on a VMDK?

If so, the I/O is virtualised. All the guides state you should present the storage controller directly to the VM using VMDirectPath I/O, or use RAW disks (RDMs).

If the ZFS appliance is virtual it will always take a virtualisation performance hit, but your biggest issue, again, is the low-performance host server:

The CPU is slow.
The memory is slow.
The bus is slow.
The CPU cannot push enough traffic through the network interfaces, even with jumbo frames supported.

But how much wattage is the MicroServer consuming? About 40 watts, so it's not bad for 40 watts of electricity!
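Where VMDirectPath is unavailable, a physical-mode RDM at least keeps the VMFS layer out of the data path. A sketch from the ESXi shell; the device ID and datastore path below are placeholders (list real devices with `ls /vmfs/devices/disks/`):

```shell
# Create a physical-mode (pass-through) RDM pointer file for a local disk,
# then attach the resulting .vmdk to the Nexenta VM as an existing disk
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE__DISK \
    /vmfs/volumes/datastore1/nexenta/disk0-rdm.vmdk
```

`-z` creates a pass-through RDM (SCSI commands go to the device); `-r` would create a virtual-mode RDM, which still intercepts some commands.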
 

Author Comment

by:janhoedt
ID: 39812915
It was performing OK as a physical ZFS box. I knew there would be performance loss, but this is not performance loss, this is no performance at all.
 
LVL 124
ID: 39813045
1. Is the Nexenta VM stored in a VMDK?

2. Did you present the storage controller to the VM via VMDirectPath?

3. Did you present the physical disk directly to the VM, e.g. as RAW?
 

Author Comment

by:janhoedt
ID: 39813142
1. Yes; raw mapping did not work.
2. As you know, the HP N40L does not support DirectPath, as it is a "budget server".
3. Yes, I posted several questions about it on this forum.
 

Author Comment

by:janhoedt
ID: 39813146
1. Raw mapping for the OS disks.
 
LVL 124
ID: 39813166
Poor server spec: rebuild, and convert the hypervisor host to a dedicated NAS using ZFS.

This looks like your best performance, given the server specification and power consumption.
 

Author Comment

by:janhoedt
ID: 39813570
I've requested that this question be closed as follows:

Accepted answer: 0 points for janhoedt's comment #a39812683

for the following reason:

Fair enough.
 

Author Comment

by:janhoedt
ID: 39813540
Sorry, I'm really frustrated here.
 
LVL 124
ID: 39813571
An answer has been provided here.
 
LVL 5

Expert Comment

by:Dawid Fusek
ID: 40493410
janhoedt,

I know this is quite an old, closed topic, but I have to shed some light on your case (with ZFS).

ZFS is a VERY demanding solution, so extremely poor performance is common when someone runs it as a VM without VMDP (VMDirectPath), because:
1. ZFS really needs direct access to the disks (especially spindle HDDs) and works very poorly without it (timeouts often exceeding 15 seconds!). ZFS is so dependent on direct HDD access that it is hard to get good performance even on a high-performance RAID controller with RAID 5/6 or even 10, because ZFS sees only a volume, not the HDDs, and tries to manage that volume as if it were a disk, which in most cases results in very poor or extremely poor performance.
2. ZFS needs a ZIL cache on SSD (two SSDs in a mirror for redundancy and stability), and these should be very fast, midrange or enterprise-level SSDs. Carefully selected cheaper SSDs can be used, but with great care; without a ZIL, virtualisation performance (untuned) is really poor. The ZIL is, roughly, a kind of write cache for synchronous writes.
3. ZFS needs a lot of RAM: in an SMB (or home lab) setting, a minimum of 8-16 GB. This RAM is used for certain internal ZFS operations, but mainly for the ARC cache, which speeds up reads of hot data (and helps writes a little). With less than 8 GB of RAM you will notice a dramatic reduction in performance for virtual environments and databases.
4. ZFS doesn't like cheap SATA HDDs; it tends to drop them from the RAID or report CRC errors. If you use cheap HDDs, test them for two months under heavy load in the lab until you are sure ZFS doesn't drop them or log CRC errors.
5. ZFS likes a fast CPU (4+ cores). A VM slows down access to the CPU, which visibly reduces ZFS speed.
6. The Nexenta CE appliance running as a VM will always consume one CPU core, even when totally idle; it's a "bug" by design.
7. The HP N40L is not an optimal "server" for ZFS under a VM (no VMDP, a small 16 GB RAM ceiling, and a slow dual-core CPU). It can work as a ZFS NAS on bare metal, but not as a VM; it can also run another NAS OS (one not using ZFS) and be dramatically faster than ZFS in a VM.
8. ZFS can be really fast, but it requires good dedicated hardware meeting best-practice minimums for a VE (virtual environment): at least 16 GB of RAM for four normal-IOPS VMs (so 8 GB + 2 GB/VM), a fast 4-core CPU, direct access to the HDDs (not through a hardware RAID controller; possible, but it requires really deep ZFS knowledge to be fast), and NAS, nearline, or enterprise HDDs. DON'T USE ZFS DEDUPLICATION for a VE (it is acceptable for backup only), and DON'T USE ZFS COMPRESSION on VE volumes holding databases and OS images.
9. ZFS likes NFS very much (it depends on the OS and daemons). iSCSI also works very well, but NFS is more flexible and native for Solaris-based ZFS solutions, so I recommend NFS with ZFS for SOHO/lab/SMB VE setups. Also, VMware ESXi (4.x to 5.x) handles losing the connection to an NFS datastore with running VMs far more gracefully than losing the connection to an iSCSI target with running VMs.

So ZFS is a good solution, but not always as cheap as many think.
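Points 2 and 8 above can be sketched in a few commands; the pool name `tank`, the dataset `tank/vmstore`, and the device names are placeholders for your own layout:

```shell
# Point 2: add a mirrored SLOG (ZIL on two SSDs) to an existing pool
zpool add tank log mirror c1t4d0 c1t5d0

# Point 8: keep dedup off for VE use, and compression off for DB/OS-image volumes
zfs set dedup=off tank/vmstore
zfs set compression=off tank/vmstore

# Confirm the pool layout and dataset properties
zpool status tank
zfs get dedup,compression tank/vmstore
```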

kind regards
NTShad0w

Question has a verified solution.