Many of you have probably thought of trying your hand at the "High Availability" feature of the Windows OS: clustering. Building a plain Windows cluster is easily achieved with any virtualization tool such as MS Virtual PC or VMware Workstation. The real question, though, is: does it actually work the way we expect? In other words, have we really managed to mimic Windows clustering as it behaves in a live environment?
To answer that question, let's take a small scenario: building a 2-node server cluster with a "shared physical disk" as one of the resources hosted on the cluster.
Now, a brief overview of how a "shared physical disk" works in a live environment: if Node1 is in control of the disk, Node2 cannot access it, and vice versa. The most important thing, however, is the integrity of the data written to the disk, i.e. if Node1 writes some bytes to the disk, the same bytes should be visible to Node2 when it gains control of it.
In a live environment using SAN and SCSI hardware, the above scenario is easily achieved. But in a virtual environment (or rather, a test lab on our home PCs) it doesn't come easily unless you either use 3rd-party software or apply the tweak explained in this article.
Note: From here on, this article refers to VMware Workstation as the virtualization tool, since MS Virtual PC doesn't support a 2-node cluster with a "shared" quorum disk. This article also assumes that the reader has a basic understanding of Windows clustering.
So let's consider the below setup:
* DC1 - Domain Controller/DNS
* Node1 - Member Server/Cluster node 1 having its own virtual HDD
* Node2 - Member Server/Cluster node 2 having its own virtual HDD
* HDD1 - A separate virtual HDD added to either of the nodes (Node1 for example in this case)
The HDD1 mentioned above is first added to the Node1 server. The same HDD is then added to the second node, Node2. Here is the catch: at any single point in time, only one virtual machine can hold a lock on the additional disk HDD1 in VMware. So if you power up Node1 and then try to power up Node2 (to which you have added the same additional HDD), VMware throws an error. To resolve this conflict, you need to add a line to the VMware configuration file (the .vmx file) of both servers.
disk.locking = "FALSE"
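For reference, the shared-disk portion of each node's .vmx file might end up looking roughly like the sketch below. The SCSI bus number, device slot, and .vmdk file name here are illustrative assumptions; adjust them to match how HDD1 was actually attached in your setup:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "HDD1.vmdk"
disk.locking = "FALSE"

Setting scsi1.sharedBus = "virtual" is a commonly used companion setting that marks the whole SCSI bus as shareable between virtual machines; the disk.locking line is the one discussed above.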
This allows both nodes to access the HDD simultaneously. However, the integrity and persistence of the data written by the nodes is still not retained: if Node1 writes anything to the disk, it won't be visible to Node2, and vice versa. To ensure that both nodes see the same data, we need to add another line to the configuration file of both nodes.
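The setting commonly used for this purpose (an assumption on my part, as the exact line is not shown in the original text) disables VMware's disk data cache, so writes go straight to the virtual disk instead of sitting in a per-VM cache:

diskLib.dataCacheMaxSize = "0"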
With that in place, the disk retains its integrity and behaves just like a normal "shared physical disk" resource in a live environment.
Apart from the above, you follow the standard process: add the disk and get it recognized using the Windows Disk Management tool, build the cluster, and finally add the physical disk as a resource in the cluster group.
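As a rough sketch, the disk-preparation step can also be done from the command line with diskpart instead of the Disk Management GUI. The disk number and the drive letter Q: below are illustrative assumptions; pick whatever matches your environment:

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> create partition primary
DISKPART> assign letter=Q
DISKPART> exit
C:\> format Q: /FS:NTFS /Q

Run this on the node that currently owns the disk (Node1 in our example); the other node will see the same partition and drive letter once it takes control.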