Rick Goodman

asked on

Issue seeing Cluster Shared Volume mapped drive for Hyper-V Cluster

I have a 2012 R2 Hyper-V cluster that consists of 2 hosts. I configured a CSV using NetApp SnapDrive for Hyper-V to map a LUN from my NetApp as shared storage. It shows up as a mapped drive on one host. On the other host it shows up under the disk list in Failover Cluster Manager with the drive letter next to it, but it doesn't show up in Explorer. Under Disk Management it shows up as offline. When I try to bring it online, it errors and says it's managed by a Failover Clustering component and must be in Maintenance Mode to bring online. Is that normal? My understanding was that a CSV would show up as a mapped drive on both hosts to hold my VMs and VHDs, so I could run them on either host if needed. Any suggestions?
Philip Elder

Are both nodes connected to the iSCSI target simultaneously, and is the disk added to CSV? If yes, then you should see the root folder under C:\ClusterStorage\VolumeXX, where XX is the volume number shown in FCM.
Rick Goodman


It does show up like that. But shouldn't it show up as the mapped letter in Explorer on each host? See below: one host has the V: drive but the other doesn't. I thought I would be able to point the location of VMs to the V: drive on both hosts, but I can't do that if one doesn't have it.

[Screenshot attached]
Philip Elder

[Accepted solution hidden; members-only content]
OK, then it's working correctly. Thanks for clearing that up for me. Appreciate it.
No worries.

FYI: Our Shared Storage layout would be something like the following:
1.5GB Cluster Witness
350GB Common Files CSV (sum RAM of each node plus 100GB)
50/50% CSV for VHDX (split the remaining storage 50/50 into two CSVs, so each node can own one and distribute the load).
What do you mean by "50/50% CSV for VHDX (split the rest up 50/50 so each node can own to distribute the load)"? I'm not sure I understand. And should all VMs and VHDs be placed on the same CSV? Either host can access a VM or VHD from the CSV, correct?
After setting up our witness and common files CSV storage the leftover storage is for our VMs.

As an example:

2TB total shared storage.
1.5GB to Witness
350GB to Common Files

The balance left over is about 1,695GB.

We would split that up into two LUNs at 847GB each, add the storage in FCM, and then convert each to CSV. Each node should end up owning one of the CSVs. This distributes the IOPS between the two nodes.

If you create one big CSV, only one node owns it, so that node has to handle all of the IOPS for the CSV.

So, by splitting available storage into at least two LUNs/CSVs we end up with a better distribution of I/O.
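The arithmetic behind that split can be sketched quickly. This assumes "2TB" means 2,048GB; the post's figures of about 1,695GB and 847GB per CSV reflect slightly different rounding:

```python
# Worked version of the storage split described above.
# Assumption: 2TB is treated as 2,048 GB; the thread's numbers
# (~1,695 GB balance, 847 GB per CSV) round a little differently.
total_gb = 2048.0    # total shared storage
witness_gb = 1.5     # cluster witness LUN
common_gb = 350.0    # common-files CSV

balance_gb = total_gb - witness_gb - common_gb  # storage left for VM CSVs
per_csv_gb = balance_gb / 2                     # one CSV owned per node

print(f"balance: {balance_gb:.1f} GB, per CSV: {per_csv_gb:.2f} GB")
```

Each of the two resulting LUNs becomes a CSV, with one owned by each node so the I/O load is split between them.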
Oh, thanks. That makes sense. I really appreciate the info. Have a great day.