Rick Goodman asked:

Issue seeing Cluster Shared Volume mapped drive for Hyper-V Cluster

I have a 2012 R2 Hyper-V cluster that consists of 2 hosts. I configured a CSV using NetApp SnapDrive for Hyper-V to map a LUN from my NetApp as shared storage. It shows up as a mapped drive on one host. On the other host it shows up under the disk list in Failover Cluster Manager with the drive letter next to it, but it doesn't appear in Explorer. Under Disk Management it shows as offline, and when I try to bring it online I get an error saying it's managed by a Failover Clustering component and must be in Maintenance Mode to bring online. Is that normal? My understanding was that a CSV would show up as a mapped drive on both hosts to hold my VMs and VHDs, so I could run them on either host if needed. Any suggestions?

Philip Elder

Are both nodes connected to the iSCSI target simultaneously, and has the disk been added to CSV? If so, you should see the root folder under C:\ClusterStorage\VolumeXX, where XX is the volume number shown in Failover Cluster Manager.
Rick Goodman

It does show up like that. But shouldn't it show up as the mapped letter under Explorer on each host? See below: one host has the V: drive but the other doesn't. I thought I would be able to point the location of VMs to the V: drive on both hosts, but I can't do that if one doesn't have it.

Philip Elder

That's normal. A CSV isn't mounted with a drive letter on every node; the letter you see appears only on the node that currently owns the disk. Both nodes access the same volume through C:\ClusterStorage\VolumeXX, so point your VM and VHDX locations at that path rather than at a drive letter.
Rick Goodman

OK, then it's working correctly. Thanks for clearing that up for me. Appreciate it.
Philip Elder

No worries.

FYI: Our Shared Storage layout would be something like the following:
1.5GB Cluster Witness
350GB Common Files CSV (sum RAM of each node plus 100GB)
Remaining storage split 50/50 into two CSVs for VHDX files, so each node can own one and the load is distributed.
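The Common Files sizing rule above can be checked with a quick calculation (a sketch; the 125GB-per-node RAM figure is an assumed example that happens to reproduce the 350GB quoted):

```python
# Common Files CSV sizing rule: sum of RAM across all nodes, plus 100GB headroom.
# Two nodes with 125GB RAM each are hypothetical figures for illustration.
node_ram_gb = [125, 125]
common_files_csv_gb = sum(node_ram_gb) + 100
print(common_files_csv_gb)  # 350
```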
Rick Goodman

What do you mean by splitting the rest up 50/50 so each node can own one? I'm not sure I understand. Also, should all VMs and VHDs be placed on the same CSV? Either host can access a VM or a VHD from the CSV, correct?
Philip Elder

After setting up our witness and common files CSV storage the leftover storage is for our VMs.

As an example:

2TB total shared storage.
1.5GB to Witness
350GB to Common Files

The balance left over is about 1,695GB.

We would split that up into two LUNs of about 847GB each, add the storage in Failover Cluster Manager, and then convert each disk to a CSV. Each node should end up owning one of the CSVs, which distributes the IOPS between the two nodes.

If you create one big CSV instead, only one node owns it, so that node has to handle all of the IOPS for the CSV.

So, by splitting available storage into at least two LUNs/CSVs we end up with a better distribution of I/O.
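Putting those numbers together (a sketch of the arithmetic above; 2TB is taken as 2,048GB, so the result rounds slightly differently than the ~1,695GB and 847GB quoted):

```python
# Shared-storage split described above (sizes in GB; 2TB = 2,048GB).
total = 2 * 1024
witness = 1.5
common_files = 350

leftover = total - witness - common_files
per_csv = leftover / 2   # one CSV per node, so the I/O load is spread across both

print(leftover, per_csv)  # 1696.5 848.25
```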
Rick Goodman

Oh, thanks. That makes sense. I really appreciate the info. Have a great day.