byt3

asked on
ZFS or RAIDZ / ZPool as iSCSI target for Windows computer

I was told by someone in a company that a ZFS volume can be set up as an iSCSI target for Windows. Windows would have to format it as NTFS. So is the ZFS volume being re-formatted as NTFS? This doesn't make sense.

The only thing I can think of is that the volume is just the RAID-Z / zpool layer and isn't formatted with ZFS. If that is the case, which ZFS features would be lost by not having the full ZFS filesystem in use?
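For concreteness, here is a minimal sketch of what the storage side of such a setup might look like (pool and volume names are made up, and the commands assume a typical OpenZFS-style CLI driven from Python). The key point is that the iSCSI LUN is a zvol, a raw block device carved out of the pool, so Windows formats it as NTFS while ZFS still applies its block-level features underneath.

import subprocess

def run(cmd):
    # Run a command, echo it, and raise if it fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Hypothetical pool/volume names -- adjust to your environment.
POOL = "tank"
ZVOL = POOL + "/hyperv-lun0"

# Create a 200 GB zvol: a block device, not a ZFS filesystem.
# Windows will see it over iSCSI as a raw disk and format it NTFS.
run(["zfs", "create", "-V", "200G", ZVOL])

# ZFS features still apply to the blocks underneath the NTFS layer.
run(["zfs", "set", "compression=lz4", ZVOL])
run(["zfs", "set", "dedup=on", ZVOL])   # dedup needs a lot of RAM; test before enabling

# Exporting the zvol as an iSCSI target is platform-specific
# (COMSTAR on illumos, ctld on FreeBSD, targetcli on Linux) and is not shown here.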
ASKER CERTIFIED SOLUTION
arnold
byt3

ASKER

Ultimately the plan is to run Hyper-V VMs off of it. I wanted remote storage with ZFS features to run the VMs on. The particular features I need are deduplication, compression, storage snapshots, and asynchronous replication to a remote site. I can do these things with really expensive SANs, but I was looking for an alternative to make it affordable.
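Storage snapshots and asynchronous replication map onto zfs snapshot plus incremental zfs send / zfs receive. A rough sketch of one replication step, assuming SSH access from the storage server to a remote pool (host, pool, and volume names are all hypothetical):

import subprocess
from datetime import datetime, timezone

ZVOL = "tank/hyperv-lun0"          # hypothetical local zvol backing the iSCSI LUN
REMOTE = "dr-site-host"            # hypothetical remote ZFS box reachable over SSH
REMOTE_DATASET = "tank/hyperv-lun0"

def replicate(previous_snap=None):
    # Take a new snapshot named after the current UTC time.
    snap = ZVOL + "@repl-" + datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # Send a full stream on the first run, an incremental one afterwards.
    if previous_snap:
        send_cmd = "zfs send -i {} {}".format(previous_snap, snap)
    else:
        send_cmd = "zfs send {}".format(snap)

    # Pipe the stream over SSH into the remote pool.
    subprocess.run(
        "{} | ssh {} zfs receive -F {}".format(send_cmd, REMOTE, REMOTE_DATASET),
        shell=True, check=True,
    )
    return snap

One caveat, which comes up below: a storage-level snapshot captures whatever state the running VMs happen to be in at that instant.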
byt3

ASKER

Are there notable performance issues with setting up an iSCSI target on a ZFS volume? Or is it no different than iSCSI to a device like a Nimble or EMC?
I'm unfamiliar with Nimble, but the raw handling on the server would be in a similar vein.

Though you should check whether the state of the Hyper-V VMs at the moment the snapshot is taken would cause issues ...
 
As far as the ZFS filesystem is concerned, all it will see is a blob of, say, 60 GB. It will not see any of the data within.
The Windows system will see the VHDX file for the VM's drive, but looking inside it takes an extra step; I think only Windows Server 2012/2012 R2 provide a way to mount/load the VHDX as it would a CD/DVD and see the contents within.
The same goes for SANs: replication at the SAN level has to be supported within the applications.

Others will point out if my point of reference is in error.
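On mounting a VHDX from the host to look inside it, something along these lines should work on Windows Server 2012 and later. This is a sketch driving PowerShell from Python; the file path is made up, and Mount-VHD assumes the Hyper-V module is installed.

import subprocess

VHDX = r"D:\VMs\guest01\guest01.vhdx"   # hypothetical path on the iSCSI-backed NTFS volume

def powershell(command):
    # Run a PowerShell command from Python and return its output.
    result = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Attach the VHDX read-only so its contents appear as a disk on the host,
# much like mounting an ISO.
print(powershell("Mount-VHD -Path '{}' -ReadOnly -Passthru | Get-Disk".format(VHDX)))

# Detach it again when done.
powershell("Dismount-VHD -Path '{}'".format(VHDX))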
byt3

ASKER

Aren't the snapshots done at block level? If so, then it doesn't need to be "supported", does it?
While it is done at block level, the state of the data therein is not that important when you are storing plain files, but it is important when you have VM data or application data whose state you do not know at the time the snapshot is taken; it is akin to taking a filesystem-level backup of database files that could be mid-transaction. ...

While the expensive SANs have SAN-level mirroring/copying, applications like SQL were not supportive of it unless replication to that location was off, and they ran into issues if an attempt was made to bring them up while the remote SAN had not finished synchronizing.

I am not sufficiently knowledgeable about the setup you wish to undertake either way.
Your consideration for the setup is the quick transition of the data to a new system. Check the Hyper-V recommended protocol and whether the Hyper-V backup would need to be run within the Windows server regardless of whether the VM data is on iSCSI, FC, or local storage, rather than capturing it at the storage level unless the VMs are stopped at the time of backup.
Or your ZFS iSCSI setup allocates two LUNs: one that is operational, and another onto which the Hyper-V host stores the VM snapshots/backups. The second LUN could be backed up/snapshotted at the ZFS storage level, since the data there is "somewhat static", i.e. the backup runs/executes after the Hyper-V process completes/concludes.
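A rough orchestration sketch of that two-LUN idea, assuming the Hyper-V host can reach the ZFS box over SSH and that the backup LUN is backed by a zvol (all names are hypothetical): export the VM onto the backup LUN first, then snapshot that zvol on the storage side once the export has finished.

import subprocess
from datetime import datetime, timezone

VM_NAME = "guest01"                      # hypothetical VM
EXPORT_PATH = r"E:\hyperv-backups"       # NTFS volume sitting on the second (backup) LUN
ZFS_HOST = "zfs-box"                     # storage server reachable over SSH
BACKUP_ZVOL = "tank/hyperv-backup-lun"   # zvol backing the backup LUN

def backup_vm_then_snapshot():
    # 1. Let Hyper-V produce a consistent copy of the VM onto the backup LUN.
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command",
         "Export-VM -Name '{}' -Path '{}'".format(VM_NAME, EXPORT_PATH)],
        check=True,
    )

    # 2. Only after the export completes is the data on that LUN "somewhat static",
    #    so a storage-level snapshot of it is safe to take.
    snap = BACKUP_ZVOL + "@export-" + datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    subprocess.run(["ssh", ZFS_HOST, "zfs", "snapshot", snap], check=True)
    return snap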
byt3

ASKER

I'm wondering then if the VM replication should be handled by Hyper-V rather than by the SAN.
Hyper-V snapshots have negative consequences on performance, including the complexity of getting rid of the snapshot.

At least based on server 2008.

Look at the application level, etc., before you evaluate the options available/used at the storage level.
byt3

ASKER

I think snapshots and replicas are different things. Replicas were added in Windows Server 2012. If I've got a 100 Mbps link, I figure that should be enough for the replication to happen. This might be the way to go instead.
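As a rough sanity check on the 100 Mbps link, what matters for Hyper-V Replica is the rate of change inside the VMs between replication cycles, not their total size. A back-of-envelope calculation (the change rate is just an assumed example figure):

LINK_MBPS = 100            # nominal link speed, megabits per second
USABLE_FRACTION = 0.7      # assume roughly 70% of the link is usable in practice
CYCLE_MINUTES = 5          # Replica sends accumulated changes every few minutes

CHANGE_GB_PER_HOUR = 1.5   # assumed example: VMs dirty ~1.5 GB of data per hour

link_gb_per_hour = LINK_MBPS * USABLE_FRACTION / 8 / 1000 * 3600
per_cycle_gb = CHANGE_GB_PER_HOUR * CYCLE_MINUTES / 60

print("Link can move roughly {:.1f} GB/hour".format(link_gb_per_hour))
print("Changes per {}-minute cycle: about {:.2f} GB".format(CYCLE_MINUTES, per_cycle_gb))
print("OK" if CHANGE_GB_PER_HOUR < link_gb_per_hour else "Link is likely too small")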