Avatar of Wangstaa


Permission issues with HyperV over Scaled out File Server

Hello All,

I have two major issues trying to implement Hyper-V virtual machines over Scale-Out File Server (SOFS). I would really appreciate it if anyone can help out with this.

The Setup

Scaled out File Server Name: SOFS
Test Share: \\SOFS\VM

The Issues

Creating Hyper-V VMs over SOFS Shares
- Creating a virtual machine on HYPERV1 from HYPERV1 works without any issues
- Creating a virtual machine on HYPERV2-4 from HYPERV1 results in a General access denied error (0x80070005)

Quick/Live Migration over SOFS Shares
- Migration has no issues if I grant EVERYONE Full permission on the share \\SOFS\VM
- Migration fails with a permission error if I only grant Full permission to HYPERV1$, HYPERV2$, HYPERV3$, HYPERV4$, HOSTCLUSTER$, SOFS$, SYSTEM, and Administrator
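For reference, machine-account permissions like these can be granted at both the share and NTFS levels from PowerShell. A minimal sketch, where the domain name and the CSV folder path backing \\SOFS\VM are placeholders, not taken from the thread:

```powershell
# Grant share-level Full Control to each Hyper-V node's machine account
# ("DOMAIN" is a placeholder -- substitute the actual domain)
$accounts = "DOMAIN\HYPERV1$", "DOMAIN\HYPERV2$", "DOMAIN\HYPERV3$", "DOMAIN\HYPERV4$"
foreach ($a in $accounts) {
    Grant-SmbShareAccess -Name VM -AccountName $a -AccessRight Full -Force
}

# NTFS permissions must match the share permissions; one way is icacls
# on the underlying CSV path (path is an assumption -- use the real one)
foreach ($a in $accounts) {
    icacls "C:\ClusterStorage\Volume1\VM" /grant "${a}:(OI)(CI)F"
}
```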

I have tried other methods, such as granting Kerberos delegation, and they didn't work either. THIS IS DRIVING ME CRAZY!! PLEASE HELP!!!
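For anyone following along: on Windows Server 2012 R2 and later, constrained delegation for SMB can be configured with the SmbShare module's delegation cmdlets rather than by hand in ADUC. A sketch, assuming the cluster and node names from the setup above and that the Active Directory PowerShell module is installed:

```powershell
# Configure constrained delegation so each Hyper-V node can access
# SMB shares on SOFS on behalf of remotely connected administrators
foreach ($node in "HYPERV1", "HYPERV2", "HYPERV3", "HYPERV4") {
    Enable-SmbDelegation -SmbServer SOFS -SmbClient $node
}

# Verify what was actually set
Get-SmbDelegation -SmbServer SOFS
```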
Avatar of Cliff Galiher
Cliff Galiher

That's a cluster build issue, not a permissions issue. NTFS is not multi-node aware, so the way Microsoft got around this is to layer CSVFS on top. And CSVs work by having a "coordinator node" handle all metadata changes. That includes creating VMs, changes in vhd(x) files, and such.

Which means it doesn't matter which node you "create" a VM on. It'll talk to the coordinator node, and the coordinator node will handle creating the files necessary. That is true with hyper-converged clusters, SOFS clusters, or "other."

The two most likely problems are: 1) You aren't using cluster manager to create your VMs. If you use Hyper-V Manager, it'll try to write to storage it (rightly) doesn't have permission to write to; those changes should go through the coordinator node, and *it* has permissions. All changes need to be done through cluster manager.

Or 2) you didn't create a CSV on your SOFS storage, which has the same problems as #1: different nodes fighting over writes.
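As a side note, which node is currently coordinating a given CSV can be checked from PowerShell on any cluster node. A quick sketch (the disk name is an assumption; use the names your cluster reports):

```powershell
# Show each Cluster Shared Volume and the node currently acting
# as its coordinator (the "OwnerNode")
Get-ClusterSharedVolume | Select-Object Name, OwnerNode, State

# Moving coordination to another node -- useful when testing whether
# the failure follows the coordinator
Move-ClusterSharedVolume -Name "Cluster Disk 1"
```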
Avatar of Wangstaa
1. I am using cluster manager to create my vms
2. I am using CSV, you actually can not create a SOFS Share without CSV volume
Then the behavior you describe makes no sense. You say you cannot create a VM on Hyper-V 2-4 from Hyper-V 1, but from a technical perspective, *all* of those changes would go through the coordinator node, whether made on Hyper-V 1 or Hyper-V 2-4. Functionally, all would be redirected, and the actual writes would come from the same node.

The fact that different nodes behave differently indicates that the coordinator node is not being used, which only occurs if the cluster is not aware of the CSV or if the nodes are not properly configured to use that storage. I'd strongly recommend re-validating your cluster at this point. Something has gone terribly wrong. As I said above, this isn't a permissions issue; it is an issue with how the nodes are communicating with the coordinator.

- All nodes are freshly formatted machines, patched to date
- Cluster validation passed without any errors or warnings

I cannot create a VM on HYPERV2-4 from HYPERV1; likewise, I also cannot create a VM on HYPERV1, 3, 4 from HYPERV2.

For issue #2, it has to be a permission issue, because once I get a VM going and grant EVERYONE full access to the SOFS share, everything works as intended: failover, migration, etc. I just don't want to grant EVERYONE full access, for obvious reasons.
That doesn't mean it is a permission issue. It means the nodes are writing directly to the storage and you've granted them permission to do so. Unfortunately, that is a great way to have two nodes end up trying to write to the same block at the same time, and that *will* cause data loss.

Yes, changing permissions appears to solve the issue, but that is a red herring. It is allowing something that shouldn't be allowed.

You've re-validated both clusters? Both the SOFS cluster *and* the Hyper-V cluster?  Can you post updated results of both validations?

I have re-validated YUHOSTCLUSTER. SOFS is not a cluster, it's a role; there is no validation option.
THAT'S the problem!!!

Very first bullet point in the SOFS requirements:

"Before you deploy Scale-Out File Server, you must first review the requirements and plan for the deployment. The information you should review includes:

•Failover Clustering requirements   Because a Scale-Out File Server is built on Failover Clustering, any requirements for Failover Clustering also apply to Scale-Out File Server."
SOFS *is* a cluster. It must *be* a cluster. Without the cluster, SOFS deployments can't coordinate writes and all hell breaks loose. That's why it is a listed requirement.

It is apparent to me that you do not have actual hands-on experience in this matter. I would appreciate it if you let other "experts" comment. Thank you.
Feel free to escalate the issue. But first, I suggest you look at my profile: how many questions I've answered, my history, my experience with clusters. Oh, and that I'm a Microsoft MVP and have hundreds of clients that I manage with *exactly* this technology. I must not have any hands-on experience. And I provided official Microsoft documentation *with* the requirements. They must be wrong too. You know better than all of us. Good luck with that.
Set-KCD is a script we use to set up our SOFS and Hyper-V clusters.

It sets FULL permissions for the HyperV$ and ClusterName$ accounts, runs Set-SmbDelegation, and then there is a command set for restricting the networks SMB uses.

If the KCD doesn't work correctly then enable ANY for Kerberos delegation and reboot the Hyper-V nodes.
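"ANY" here refers to the option on the ADUC Delegation tab, "Trust this computer for delegation to any service (Kerberos only)", which maps to the TrustedForDelegation attribute and can also be flipped from PowerShell. A sketch, assuming the AD module is available and the node names from this thread:

```powershell
# Enable unconstrained Kerberos delegation on each Hyper-V node's
# computer account (equivalent to the "any service" option in ADUC);
# the nodes must then be rebooted for the change to take effect
Import-Module ActiveDirectory
foreach ($node in "HYPERV1", "HYPERV2", "HYPERV3", "HYPERV4") {
    Set-ADComputer -Identity $node -TrustedForDelegation $true
}
```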

Is this the exact Set-KCD script you used?
This one: Setup-KCD.ps1 powershell script 2 setup Kerberos Constrained Delegation 4 HyperV

A bit of clean-up would be required in ADUC to remove references to self in the server list on the Delegation tab.

The script works really well.

I have run the script and set up all the *cifs* & *Microsoft Virtual System Migration Service* delegations, and the error persists.
Where is the cluster, OU-wise?

I suggest moving the entire lot into their own OU at the root of DOMAIN.Com, something like 2016-Cluster, then an OU for SOFS and an OU for Hyper-V.

Make sure the ClusterName$ account has Create Computer Objects permission on the OU they reside in.
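One way to grant that right from the command line is dsacls. A sketch; the OU path, domain, and account name are assumptions based on the suggestion above, so adjust to match your environment:

```powershell
# Grant the cluster name object the right to create computer objects
# (create-child of class "computer") in the cluster OU
dsacls "OU=2016-Cluster,DC=DOMAIN,DC=Com" /G "DOMAIN\ClusterName$:CC;computer"
```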

Hopefully the Default Domain Policy GPO hasn't been mucked about with. I suggest from an elevated CMD in a C:\Temp directory:
GPResult /h 2016-04-08-AdminNameGPR.html

Have a look at that to see if there are any restrictive settings.

Correct me if I am wrong, but does my configuration cause an SMB loopback?
Please post via TXT for each SOFS and Hyper-V node.

On all:
Get-SmbServerConfiguration | Select EnableMultichannel
Get-StorageEnclosure | FL

On Hyper-V:
Get-SmbClientConfiguration | Select EnableMultichannel
I missed this:

SOFS is not a cluster ...

Get-SmbDelegation is what we are after now.

Permissions = delegation problem.

SOFS should work in a single server setting too. It's all in the way delegation gets set up.
Well, it is rare that I disagree with Philip, but I do here. While Hyper-V will run fine with a single SMB3 server as storage, that isn't SOFS. The SOFS role is specifically designed for clusters, and the changes it makes are done to enable those features, such as continuously available shares. There is *no* benefit to the SOFS role on a single server, and there are legitimate drawbacks, including a not-supported topology. It will impact how permissions work.
The setup would be a single node cluster with JBOD and the intent to join a second node into the cluster. We've done that quite a bit on the Hyper-V side but not on the SOFS side. We've always deployed with two or three nodes out of the gate for SOFS.
Oh, no argument there. I've even done a couple of single-node cluster SOFS deployments. Rare, but I have done them. Notably, though, the failover cluster role is still installed and configured, as per the requirements specified in the SOFS documentation. Running it without a cluster of some sort, even a single-node cluster, underneath is still bad.
Agreed. We design to eliminate all SPFs (Single Points of Failure) in our cluster settings.
Avatar of Wangstaa

Huh? It would not dawn on me that this would be a problem as we'd never do this. ;)
Nor was it the topology originally described, so it'd never have struck me either. To each their own. At least it seems to be sorted for now.
Yes, we ended up splitting the cluster in two: one for SOFS, one for HYPERV.