DITGUY

asked on

Unmount option missing for datastore

I have a 5.1 vCenter with 3 clusters in it. One is a 4.1 U1 cluster, and the other two are 5.1 U2 clusters.

We have a number of NetApp VMFS3 datastores that are zoned to be seen by the 4.1 cluster and one of the 5.1 clusters. The goal was to migrate the VMs from the old to the new cluster which worked.

After migrating, the VMs are still on the datastores, but now I don't want the 4.1 cluster to see them anymore. On the 5.1 hosts I can go to the Configuration tab, click Storage, right-click the datastore, and choose Unmount; it shows a checklist of prerequisites.

But on the 4.1 cluster's storage page, I don't see the Unmount option when right-clicking the datastore.

Likewise, if I go to the Datastores view in vCenter instead of the hosts, I can see all the datastores. If I right-click a datastore I can choose Unmount, but in the list of hosts connected to it, the 4.1 hosts are missing.

Any ideas why the Unmount option is missing from the host > Configuration > Storage view, and why the 4.1 hosts don't show as connected in the Datastores view?
GG VP

Can you please perform a rescan/refresh of the storage and try?
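For reference, a rescan can also be driven from the ESX 4.1 service console rather than the vSphere Client. A minimal sketch; the adapter name below (vmhba1) is an assumption, so substitute the HBA shown in your Storage Adapters view:

```shell
# Rescan a single HBA for newly added or removed LUNs (adapter name is an example)
esxcfg-rescan vmhba1

# Refresh the list of VMFS volumes after the rescan
vmkfstools -V
```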
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)
VMware vSphere 4.x does not have an Unmount option. This was a new feature in 5.x.
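For comparison, this is roughly what the 5.x Unmount/Detach flow looks like from the ESXi 5.1 shell, i.e. the CLI equivalent of the right-click Unmount option. The datastore label and naa ID below are placeholders, not values from this thread:

```shell
# List mounted VMFS volumes with their labels and backing device IDs
esxcli storage filesystem list

# Unmount the datastore by label (placeholder name)
esxcli storage filesystem unmount -l MyDatastore

# Detach the underlying device so the host stops probing it (placeholder naa ID)
esxcli storage core device set --state=off -d naa.60a98000xxxxxxxxxxxx
```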
DITGUY

ASKER

Thanks, Andrew. So to decouple a datastore shared between a 4.1 host and a 5.1 host in separate clusters, do I need to do anything before I remove the zoning for it against the 4.1 host? (I just want it on the 5.1 host going forward.)

I don't want to delete it since it's still being used by the 5.1 host.
DITGUY

ASKER

GG VP - I tried this to no avail, which coincides with what Andrew said.
Technically, you should create a mask on the host so the LUN is no longer visible, then re-scan, and ensure the zones/mappings/IQNs or WWNNs are removed for that host.
DITGUY

ASKER

Hmmm. I'm not sure about the "mask" terminology in VMware. We don't want all the datastores the hosts can see to disappear, as some VMs are still using them. Is there a blog or something showing the steps you're referring to?
Yes, mask is the term used for a specific host!

Masking is the correct method and best practice to "unmount" a LUN from a host. Most VMware admins just pull the plug, thinking that unmapping on the SAN and re-scanning is the method (it's not, and it can lead to awful pauses in the storage stack!). The latter is common practice because it's believed to be plug and play!

It's the same analogy as removing a USB device in Windows: do you Eject/Safely Remove it first, or just rip it out?

Masking a LUN from ESX and ESXi using the MASK_PATH plug-in (1009449)
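The KB article's procedure boils down to a handful of esxcli commands on the 4.x host. A rough sketch of the sequence, where the rule number, adapter/channel/target/LUN values, and naa ID are all placeholders you would substitute from `esxcli corestorage claimrule list` and your own path listing:

```shell
# List existing claim rules to pick an unused rule number
esxcli corestorage claimrule list

# Add a MASK_PATH rule for the path to the LUN (all position values are examples)
esxcli corestorage claimrule add --rule 192 -t location -A vmhba2 -C 0 -T 0 -L 4 -P MASK_PATH

# Load the updated rule set into the VMkernel
esxcli corestorage claimrule load

# Unclaim the device from NMP so MASK_PATH can take over (placeholder naa ID)
esxcli corestorage claiming reclaim -d naa.60a98000xxxxxxxxxxxx

# Apply the loaded claim rules
esxcli corestorage claimrule run
```

Once the paths are masked and a rescan confirms the LUN is gone from the host, the zoning/igroup change on the array side can be made safely.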
DITGUY

ASKER

Wow. I've never seen that before. That's a lot of steps and verifications just to do what Unmount seems to do in 5.1.

It starts off by saying the following, which I'm not sure applies to us or not. We simply take the LUN in NetApp Data ONTAP and map it to the WWNs of the hosts we want to allow to see it. Then we rescan on the host and can add it as a storage device.


This procedure only applies to LUNs that are being managed by the VMware NMP multipathing service. If you are using 3rd party multipathing plugin such as Powerpath/VE then LUNs must be excluded from the 3rd party multipathing software and put under NMP control before they can be masked again properly.
ASKER CERTIFIED SOLUTION
Andrew Hancock (VMware vExpert PRO / EE Fellow/British Beekeeper)

DITGUY

ASKER

Yes, we are using Brocade FC switches (dual fabrics, not zoned together). We have igroups in NetApp containing the WWNs of the servers in each cluster. One admin did just remove the cluster WWNs from the igroup, and after a rescan the volumes disappeared, but one got stuck in the grayed-out "inactive" mode. Now we have to figure out how to get that out of there. Ugh.
DITGUY

ASKER

In the past that is. Not the current set of datastores.
Fibre Channel does seem better at handling LUN/datastore losses compared to iSCSI.