In one of my previous articles, I explained how to create a native Windows 2012 cluster. The method described in the linked article is fine as long as you don't want or need to manage your environment with the optional management product known as System Center Virtual Machine Manager (SCVMM).
SCVMM brings in another layer of management not available through standard clustering, including:
the ability to create machines from Templates or Profiles
the ability to store virtual machines in a library for redeployment later
the ability to allow self-service, so that administrators can give users or departments permission to manage and create their own machines
the ability to update virtual machines as well as virtual hosts
and even the ability to monitor software running inside virtual machines (Windows 2012 only)
Powerful stuff indeed - and I haven't even mentioned the ability to organize your infrastructure into data-centres and clouds, which SCVMM can also offer. SCVMM is a huge product - it can do a lot of things, probably more than you need it to, but it is a vital product to help manage your virtual infrastructure.
This article is not here to teach you everything about SCVMM, because it is a huge product. It is, however, here to help you install SCVMM, configure your Fabric, and create your first Hyper-V Cluster through SCVMM.
Installing SCVMM 2012 Tips
Please note - the current version (at the time of writing) is SCVMM 2012 SP1 - you should ensure you install this, or newer.
Since this article isn't here to show you how to install SCVMM, I will just give you a few tips for installing it, so that you don't come a cropper. Installing SCVMM itself is a relatively simple task, not really worth documenting. The tips are:
Have a separate SQL Server (2012 or 2008 R2) which has to be running under a Domain user account (i.e. not any built-in accounts)
Run SCVMM on a dedicated server (virtual or physical, does not matter)
Do not run SCVMM on one of your cluster hosts!
Ensure that your SCVMM 2012 server has a separate data partition, which you will use for the "library". The Library does not have to exist on the same physical server, but a library must be created in order for you to complete the installation.
When you install the software, it will create an Administrators role inside the application and add your account to it, as well as the Domain Admins group of the domain in which the SCVMM computer is a member. What it doesn't tell you, however, is that in order to use SCVMM, you also need read and write permissions on the database that it creates. This is important, because if you add people to the Administrator role in future, they will not necessarily be able to launch SCVMM, and will be given a completely irrelevant error message when they try to launch it.
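As a rough sketch, granting a new administrator read/write access on the SCVMM database could look like the following. Everything here is an assumption for illustration: the instance name and login are hypothetical, and VirtualManagerDB is only the default database name - yours may differ.

```powershell
# Hypothetical sketch - adjust the instance, database and login to your environment.
# ALTER ROLE requires SQL 2012; on SQL 2008 R2 use sp_addrolemember instead.
Invoke-Sqlcmd -ServerInstance "SQL01" -Database "VirtualManagerDB" -Query @"
CREATE USER [MGMT\jbloggs] FOR LOGIN [MGMT\jbloggs];
ALTER ROLE db_datareader ADD MEMBER [MGMT\jbloggs];
ALTER ROLE db_datawriter ADD MEMBER [MGMT\jbloggs];
"@
```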
There is other prerequisite software, like SIS (Single Instance Store), but this can be installed as the messages regarding it appear. I already have SCVMM running (otherwise I might have documented the entire installation for you), and this is what it looks like:
You can see, however, that I already have a cluster in there being managed - but fear not, because I'm going to give you the instructions you need to get you on your way.
Understanding Networking in Hyper-V
There are essentially two ways we can do networking in Hyper-V. The "legacy" way is to configure the teams at the OS level, and then make the teamed adapter a Virtual Switch inside Hyper-V. In this case, the Virtual Switch would be using a logical switch that acts and performs just like a normal network adapter would. This is what the team would look like inside your OS:
The issue with doing it this way is that we cannot apply any policies to the adapter to give different kinds of network traffic different Quality of Service. If, for example, we did this and then decided to do Hyper-V Replication to another node, there is a good chance you would suffer network loss to your virtual machines while the initial sync is being done. I've done this - it's not hard to do at all :D
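For reference, the "legacy" approach described above can be sketched with the in-box Windows 2012 cmdlets. The team, switch and adapter names below are hypothetical - substitute your own:

```powershell
# "Legacy" approach: team at the OS level, then bind a Virtual Switch to the team.
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "SW01 P11", "SW02 P11" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Make the teamed adapter a Hyper-V Virtual Switch, shared with the host OS.
New-VMSwitch -Name "Guests" -NetAdapterName "GuestTeam" -AllowManagementOS $true
```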
The preferred way to do it is to have a logical switch that utilizes the Windows Network Virtualization Filter driver along with the Hyper-V Extensible Virtual Switch, to give us the greatest amount of flexibility and reconfigurability in our cluster. Your teamed adapter, while having the same icon, would look slightly different:
One of the main differences you will see immediately is that this interface is not used for communication - there is no IP Bound to it. This interface acts as a virtualization layer for the physical adapters, and thanks to the filter driver, we can impose a lot more rules to it.
Don't worry about creating this team now - SCVMM is going to do it for us :)
Before you implement something as big as Hyper-V Clustering, you want to ensure that you do it right. There's nothing more frustrating than having a Hyper-V Cluster hosting 500 virtual machines with a bad configuration that requires re-doing. It is therefore best to start off with the networking design. You have to ask yourself one question in particular - what networks do I need? The answers I came up with are as follows:
The Guests network is pretty self-explanatory. This network will contain all my Virtual Machines. Sure, we can have more than one Guest network - and these may be identified by different VLANs - but essentially you only really need one, regardless of how many VLANs you require. The reason is that you only need to assign one policy for guests. In rare cases you may require more Guest networks - if, for example, you wanted to give one set of virtual machines higher availability via the network.
Failover Clustering relies on something known as a "Cluster Heartbeat" to determine whether or not hosts are alive and kicking. It is best practice to have a separate, dedicated network for your cluster heartbeat, so that if your main network goes down, your cluster hosts can still communicate with each other and realise they are all still up and running. If you had 3 cluster nodes and the network went down, each node would lose communication with the others, and each one would think that IT'S the bad one - and commit hara-kiri, effectively taking your entire cluster down. How marvellous. We are therefore going to create a dedicated Cluster network for this task.
Live Migration is the process of moving virtual machines from one node to another - while the machines are still operational. This is achieved by copying the running memory from the source node to the target node over and over again until it's almost identical, then stopping the CPU on the source node, registering the MAC address on the target node, and starting the CPU on the target node after one final memory copy to get the last bit. Naturally, a virtual machine running 32 GB of RAM is going to take a little time to transfer, and the faster we can do this, the better. It's best practice to have a dedicated network for this, so that it has all of its own resources.
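Outside of SCVMM, restricting live migration to a dedicated network can be sketched with the plain Hyper-V cmdlets. The subnet here is hypothetical:

```powershell
# Hypothetical sketch: enable live migration and pin it to a dedicated subnet.
Enable-VMMigration
Add-VMMigrationNetwork "192.168.20.0/24"
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
```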
Finally, I've decided that the last network I need is a Management network, so that I can talk to the cluster nodes and manage them. I *could* double up on the Guests network for this, but management is more than just being able to ping my server and communicate with it - it's also the adapter through which I am going to back up my Virtual Machines. I don't want to overload the guest network by backing up my virtual machines over it, so my Management network will be used for this.
Now that I have designed my required networks, I count the interfaces on my server and see that I only have 6. I want redundancy, so effectively I can only make 3 networks - and my requirement is 4. Do I go and buy another dual-port network adapter? That's of course an option, but hang on a moment - these networks don't actually have to be physically separate from each other. Thanks to the fact that we're using a filtering driver, we can combine some of them on the same physical adapters, because the filtering driver will apply QoS based on something known as "Port Classifications".
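As an illustration of the kind of QoS the filter driver makes possible, here is a hedged sketch using the in-box weight-based bandwidth mode. The switch, team and vNIC names and the weights are all hypothetical, and the host vNICs are assumed to already exist:

```powershell
# Hypothetical sketch: a switch in weight-based QoS mode, with host vNICs
# sharing the same physical team but guaranteed different minimum bandwidth.
New-VMSwitch -Name "HostSwitch" -NetAdapterName "HostTeam" -MinimumBandwidthMode Weight

Set-VMNetworkAdapter -ManagementOS -Name "Management"     -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"        -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Live Migration" -MinimumBandwidthWeight 40
```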
What I've ended up with then is the following design:
One thing we have to remember, however, is that our host needs to be contactable during all of our configuration. This is easy for me because the 4 ports I have dedicated for my cluster are all configured as trunks with a native VLAN, which is the VLAN I've dedicated for Management. My Cisco switch configuration for all 4 ports is something like this:
description mgmt49 ft/lb team A
switchport trunk encapsulation dot1q
switchport trunk native vlan 11
switchport mode trunk
spanning-tree portfast trunk
spanning-tree bpdufilter enable
This means that any traffic NOT tagged with a VLAN ID is put onto VLAN 11 - so while the switch port is in trunk mode, it effectively also acts as an access port for VLAN 11. Not all switches support this configuration! Some switches may only allow trunking, in which case all traffic must be tagged with the correct VLAN ID. If yours is one of those, then you are going to have to be careful about the order in which you configure your networking and your teaming, because if SCVMM is no longer able to talk to the host, it can no longer continue to configure the host.
The configuration I have on my switches is the most convenient, because I don't have to worry about configuring the VLAN for my management network - it's there by default.
Without further ado, let's go ahead and implement a new cluster!
Configuring your fabric
We're going to tell SCVMM what networks we intend to use in our cluster, as well as what kind of priority each network is going to have. First, open up SCVMM and go to the Fabric tab. All of our configuration in this section will be under the "Networking" section. Now, let's follow these steps:
1. Create all your logical networks
Under "Logical Networks", create all the logical networks that your hosts/clusters will support. You will note that I also have IP Pools - these are not mandatory, but recommended for networks where you might not have any DHCP. SCVMM does not use DHCP for these adapters, but allocates IPs on a first-come, first-served basis, which helps you keep track of certain networks. These logical networks are the same ones we came up with in our design earlier. Because I don't want to manage my own Live Migration network or my Cluster Communication network, I'll let SCVMM do it for me.
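The GUI steps above can also be sketched in the VMM PowerShell console. Treat this as a hedged outline rather than a recipe - the names, subnet, VLAN and pool range are all hypothetical:

```powershell
# Hypothetical sketch using the VMM cmdlets (run in the SCVMM PowerShell console).
$ln = New-SCLogicalNetwork -Name "Live Migration"
$hg = Get-SCVMHostGroup    # scope the network to all host groups, as in the GUI

# Define the subnet/VLAN this logical network carries, then an IP pool for it.
$sv  = New-SCSubnetVLan -Subnet "192.168.20.0/24" -VLanID 20
$def = New-SCLogicalNetworkDefinition -Name "Live Migration_0" -LogicalNetwork $ln `
           -VMHostGroup $hg -SubnetVLan $sv
New-SCStaticIPAddressPool -Name "LM Pool" -LogicalNetworkDefinition $def `
    -Subnet "192.168.20.0/24" -IPAddressRangeStart "192.168.20.10" `
    -IPAddressRangeEnd "192.168.20.200"
```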
You will notice that I have a check-box checked here - and this is "Allow new VM networks created on this logical network to use network virtualization". This property is required on at least one of the Logical Networks in order to allow Virtual Machines on different hosts to be able to communicate with each other. If you take SCVMM out of the equation, it would be the equivalent of adding the ms_netwnv component to the team, using a command similar to the following:
Enable-NetAdapterBinding -InterfaceDescription "Hyper-V Logical Switch92c5a205-405b-40bc-8d97-f4f0ab4ea7b8" -ComponentID "ms_netwnv"
NOTE: Don't type that in, I'm just explaining what checking this box will do for us :) ONLY check this box for networks which are intended to be used for Guest Machines - do not enable this for the Live Migration, Cluster Communication or Management networks.
If you do - well, it won't be a bad thing :)
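If you ever want to verify what SCVMM has done here, you can inspect the binding state on the host - this is read-only and safe to run:

```powershell
# Check whether the Windows Network Virtualization filter is bound to each adapter.
Get-NetAdapterBinding -ComponentID "ms_netwnv" |
    Format-Table Name, DisplayName, Enabled -AutoSize
```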
You do NOT need to define all your VLANs up front - goodness knows you'll want to add more VLANs to various networks later. What is important to note is that you are going to tell SCVMM which hosts the networks are valid for. Unless you want a complicated network strategy (which is of course possible to achieve), you'll probably want to just select the root node like I have done.
One more screen-shot I want to show you - is the Management Network, and this is as follows:
As previously discussed, the Management network is the network on which my cluster hosts will reside, and through which I can talk to and manage them. Due to my network configuration, my management network is VLAN 11, as defined by the native VLAN on my switch. You will see in this screen-shot that my VLAN here is 0 - this is the equivalent of saying "the switch's default VLAN" (which in my case is 11). If your switch only permitted trunk mode, you would specify your management VLAN here - for me it would be 11. I could also have used the literal value of 11 in my screen-shot above.
Of course, if you don't have to worry about VLANs at all - i.e. your "management" network is a non-trunked standard access port - then you would leave it as 0 here, since it has no VLAN. It's quite normal to have a dedicated NIC or team for management purposes only - in fact, I'll probably switch to that when I get my 10 Gigabit Ethernet, as the Live Migration and Cluster Communication could move to that network.
2. Create Uplink Profile
Under "Native Port Profiles", you should create a new Hyper-V Uplink Profile. NOTE: When creating it, you choose Create -> Native Port Profile, and then use the "Uplink Port Profile" radio button to choose the right kind of profile. Profiles are used to tell the switch what kind of load balancing it should do, and what kind of traffic it will be responsible for. Since we are going to have two logical switches, I'm going to create a port profile for each switch.
The best load-balancing algorithm to use in a Hyper-V environment is HyperVPort, since it will evenly balance all your Hyper-V interfaces across the teamed adapters. Doing it via Address Hash (also a valid configuration) does not necessarily guarantee evenness. You will see that my teaming mode is SwitchIndependent. This is because the members of my team connect to different switches for redundancy, and my switches are not dependent on each other. If I wanted to bond the adapters together, providing bigger throughput (for example, combining 2 x 1 Gbit interfaces into a 2 Gbit channel), I could use LACP. Your switch will also need to support this. LACP is the Link Aggregation Control Protocol, and you can read more about it from one of the links at the bottom of the article.
Select the applicable Networks which will use this uplink port profile. You absolutely do want to check "Enable Windows Network Virtualization" for this uplink profile. More about Windows Network Virtualization (WNV) in Technet, link at bottom.
Repeat this step, until you have two Native Port Profiles, one for each switch we intend to create. The second profile - in which we define the Management, Live Migration and Cluster Communications networks - we do NOT need to turn on Networking Virtualization, but it won't hurt you if you do.
NOTE: If you really want to be lazy, you can create one port profile and just add all of the networks to it. Just because a network is in a port profile doesn't mean you actually have to use it - all it means is that you *can* use it.
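In the VMM PowerShell console, a rough equivalent of this step might look like the following. The profile name is hypothetical and the parameter set is a sketch from memory - verify it against your own VMM console before relying on it:

```powershell
# Hypothetical sketch: an uplink port profile with HyperVPort load balancing
# and switch-independent teaming, scoped to previously created network definitions.
$defs = Get-SCLogicalNetworkDefinition
New-SCNativeUplinkPortProfile -Name "Guest Uplink" -LogicalNetworkDefinition $defs `
    -LBFOLoadBalancingAlgorithm "HyperVPorts" -LBFOTeamMode "SwitchIndependent" `
    -EnableNetworkVirtualization $true
```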
3. Create Logical Switches
Under "Logical Switches", you should create all the Logical Switches you intend to activate on your hosts. If you look back at my configuration, you will see I intend to have two logical switches - one for Guests only, and the other for my management, live migration and cluster communication. When creating switches, we assign to them the port profiles we created in the previous step.
Give it a name - a descriptive name would be good. When you actually go to add the switch on the host later, this name will appear in the team, with a GUID next to it.
Please do not remove the default "Microsoft Windows Filtering Platform" extension.
In the Uplink section, set your uplink mode to Team (assuming you are teaming like me). You might be tempted to choose "No Uplink Team" in a single-adapter configuration but, to be perfectly honest, setting it as Team here will save you frustration in future - you CAN have a single-adapter team, and changing the uplink mode from No Uplink Team to Team later will just cause you a headache.
Finally, you tell the Logical Switch what Port Classifications will be applied to it. We've not talked much about Port Classifications so far. Technet tells us that:
A port classification provides a global name for identifying different types of virtual network adapter port profiles. As a result, a classification can be used across multiple logical switches while the settings for the classification remain specific to each logical switch. For example, you might create one port classification named FAST to identify ports that are configured to have more bandwidth, and one port classification named SLOW to identify ports that are configured to have less bandwidth. You can use the port classifications that are provided in VMM, or you can create your own port classifications.
Based on this, and on the fact that port classifications are already defined for us, we can deduce that this is where we define the quality of service we associate with each network. Since my first Virtual Switch is ONLY going to be for guest machines, I only have to add the "Medium bandwidth" Port Classification.
You will see where this comes in a bit later.
Repeat this step and create two Logical Switches. For my second switch, I've added the following classifications: Host Cluster Workload (for my Cluster Logical Network), Host management (for my Management Logical Network) and Live migration workload (for my Live Migration Logical Network).
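Again, a rough PowerShell equivalent of this step, with the parameter set reproduced from memory - treat it as a sketch, not gospel, and all names as hypothetical:

```powershell
# Hypothetical sketch: create a logical switch and attach an uplink port profile set.
$uplink = Get-SCNativeUplinkPortProfile -Name "Guest Uplink"
$ls = New-SCLogicalSwitch -Name "Guest Logical Switch" -EnableSriov $false `
          -SwitchUplinkMode "Team"
New-SCUplinkPortProfileSet -Name "Guest Uplink Set" -LogicalSwitch $ls `
    -NativeUplinkPortProfile $uplink
```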
4. Add your VM Networks
Our configuration of the Fabric is now finished. What we have to do now is define these networks under the VM Networks section of the VMs and Services pane. Click on "Create VM Network", give it a name and (if desired) a description, and then select the Logical Network to which this VM Network should be bound. When asked about isolation for the VM Network, we're going to say "No isolation". Of course, if you need an isolated VM network, this is where you'd configure it! On the Summary screen, just click Finish. It couldn't be any simpler. Repeat these steps for the 4 logical networks we defined earlier.
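The same step in the VMM PowerShell console might look like this (the names are hypothetical):

```powershell
# Hypothetical sketch: a VM network with no isolation, bound to a logical network.
$ln = Get-SCLogicalNetwork -Name "Guests"
New-SCVMNetwork -Name "Guests" -LogicalNetwork $ln -IsolationType "NoIsolation"
```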
NOTE: You may discover that you cannot select certain logical networks more than once. The reason for this is that when we created our Logical Networks, only one network had "Allow new VM networks created on this logical network to use network virtualization" enabled. Having this checked is what allows multiple VM networks to be created from the same Logical Network definition.
Believe it or not, after all that, we've still not actually done anything with any hosts. What we've done here is define our configuration. Fortunately, if you've planned it right, you should never need to define or change your configuration again - unless of course you're just adding VLANs. How exactly does this help us? Well, quite simply - SCVMM can now apply the same configuration, same VLANs, same port classifications, to any and all of the hosts it's going to manage. The opportunity for human error is reduced significantly, and you can be almost assured that your configuration is identical on all your cluster nodes. Adding a VLAN in the future is as easy as going into Logical Networks, editing the Guests Logical Network, and adding the VLAN. The configuration will flow down to all the hosts, allowing this VLAN number to be chosen on any virtual machine which has an adapter bound to the Logical Network.
Let's move on! NOW it gets interesting!
Add your Hosts to SCVMM
Our ultimate goal here is to add a Hyper-V Cluster for Windows 2012. Remember - the cluster is going to be created via SCVMM, saving us the trouble of having to configure the cluster manually. This is why we went to the trouble of defining all that networking malarkey.
Therefore, our hosts should be fresh and pretty much configuration-free. I have two brand spanking new, beautiful IBM System x3650 M4 servers with 384 GB RAM and two quad-core CPUs. Their intended purpose is to host SQL 2012 servers inside a Hyper-V 2012 cluster - I've done this to save on licensing costs, thanks to Microsoft's new SQL 2012 licensing model. The servers have been built with Windows 2012 Datacenter and joined to the domain using their correct computer names. The only additional things I've done are:
Installed the IBM System x3650 M4, System x3500 M4, System x3550 M4 UpdateXpress System Pack for Windows 2008, Windows 2008 x64, Windows 2012 x64, Windows 2012
Added the MPIO feature in Windows 2012
Installed the DSM for my shared storage (an IBM StorWize v7000)
Installed Windows Updates
Installed Second Round of Windows Updates
Connected my servers to my shared storage
Provisioned shared storage for the servers (for my quorum). NOTE: When creating a quorum disk, be sure you format the volume - the disk has to contain a formatted partition before it can be assigned as the quorum.
Renamed my network adapters to have names so that I can easily identify them. I've put this in bold because you just won't believe how helpful this will be later down the line!
You should do the same (of course, replacing components where applicable for your hardware).
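That last point about renaming adapters is worth a snippet of its own, since it is so cheap to do. The names here are hypothetical, matching my switch/port convention:

```powershell
# Hypothetical sketch: rename adapters so that the physical switch and port
# each one is cabled to is obvious at a glance.
Rename-NetAdapter -Name "Ethernet"   -NewName "SW01 P11"
Rename-NetAdapter -Name "Ethernet 2" -NewName "SW02 P11"

# Sanity-check the result.
Get-NetAdapter | Format-Table Name, InterfaceDescription, Status -AutoSize
```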
You should NOT:
Team any of your adapters (using Microsoft's teaming capability, or third party software)
Create a new cluster
Set up any Virtual Switches inside Hyper-V (assuming you've enabled the Hyper-V role)
Now, let's go ahead and add these hosts to be managed by SCVMM.
Right-click the node where you will be placing your hosts, and then select "Add Hyper-V Hosts and Clusters".
This step can be a bit confusing - my current domain is the same domain as the hosts are in (mgmt.local), yet, I've chosen "Trusted" domain. This is normal - if it's the same domain then it's trusted :)
At the credentials screen, you can enter your own credentials, or use Browse to select (or create) a Run As account. Run As accounts are useful because any administrator can select them for future use, without having to be told the passwords. NOTE: Whatever account you use must be in the local Administrators group (either directly or via nested groups) of the target servers.
Just specify which servers you wish to search for, and then click Next
Check the servers you wish to manage, and then click Next. You will note that it's found one as Hyper-V already - that's because I had inadvertently added the Hyper-V role. SCVMM will automatically add the Hyper-V Role for me on the other server.
Nothing should need to be changed on this screen.
Click Finish - and then it should perform the necessary tasks and reboots to get your hosts manageable. You can view the progress in the Jobs pane.
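The same wizard steps have an equivalent in the VMM PowerShell console; as a hedged sketch, with the host names, host group and Run As account all hypothetical:

```powershell
# Hypothetical sketch: add two Hyper-V hosts to SCVMM management.
$hg    = Get-SCVMHostGroup -Name "All Hosts"
$runAs = Get-SCRunAsAccount -Name "Host Admin"

Add-SCVMHost "host1.mgmt.local" -VMHostGroup $hg -Credential $runAs
Add-SCVMHost "host2.mgmt.local" -VMHostGroup $hg -Credential $runAs
```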
And here we go, the hosts are now being managed by SCVMM and as you can see they are both happy.
If you look at the completed jobs however, you will notice that it's probably giving you a warning similar to this:
The easiest way to reboot them is to right-click the servers and select "Refresh". It will detect that a reboot is pending and offer you this warning.
Of course, you aren't running any virtual machines yet, so go ahead and click Yes. Do this to all your hosts.
Configuring the Hosts
Prior to adding them to a cluster, we're going to configure the hosts and tell them about the network architecture we've implemented. This section is amazingly simple, but if done wrong it can cause a lot of headaches! Right-click one of the hosts in SCVMM and select "Properties". You will see a screen similar to the following:
What we need to do in this section is configure the hardware (specifically, the network adapters).
NOTE: The following section will show you how I need to do it to achieve my goals. Your goals may be different to mine, but please go through these screenshots as they will explain how I am achieving my goal. Adapt your strategy according to your needs.
You will notice that I have quite a few network adapters - 7 in fact: 4 x Gigabit Ethernet, 2 x 10 Gigabit Ethernet, and 1 logical USB network adapter for the internal management module. We're only interested in 4 of these interfaces right now, so on those 4 I'm going to keep the "Available for placement" check-box checked, and on the remaining 3 I'm going to clear it. This tells SCVMM which adapters it should make available for selection when creating Virtual Switches (a bit further down).
"Used by management" is a tricky one - this is the equivalent of telling Hyper-V to share the adapter with the operating system. In my scenario, I am going to have two teams, and one of those teams is going to carry the management logical network. Therefore, on the two adapters which will form this team I will keep "Used by management" checked, and on all other adapters I will un-check it.
For safety's sake, I am going to do this as a two-step process. I'm going to configure my management logical switch prior to removing "Used by management" on the other interfaces - this way I know I can't lose access to the server. Right now I have 4 interfaces which are "Used by management", and my server does indeed have 4 IP addresses. If I leave two adapters as "Used by management" and then add the Virtual Switch (i.e. the team), there may be a brief moment where I lose communication with the server while it switches from the physical adapters to the logical team. By keeping the other two adapters as "Used by management" for now, I'm still guaranteed to be able to talk to the server.
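These check-boxes can also be flipped from the VMM shell. A hedged sketch - the host and adapter names are hypothetical, and the property used for filtering may differ in your environment, so inspect the adapter objects first:

```powershell
# Hypothetical sketch: toggle placement/management flags on a host adapter.
$vmHost  = Get-SCVMHost -ComputerName "host1.mgmt.local"
$adapter = Get-SCVMHostNetworkAdapter -VMHost $vmHost |
    Where-Object { $_.ConnectionName -eq "SW01 P13" }  # property name may vary

Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $adapter `
    -AvailableForPlacement $false -UsedForManagement $false
```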
Directly below each Network Adapter (actually, it's a child node of each network adapter), you will see that there is an option to choose the Logical network connectivity. This is what's going to tell the network adapters what networks are available for those adapters. Since my first team is going to be my Management, Live Migration and Cluster team - I check the parent box for each. The moment I click one of them, I will get this warning:
This is just basically telling you to be cautious about this section because you don't want to lose access to the server while making these changes. The large bold paragraph above told you what to watch out for.
I check the logical networks which will be used by this adapter (soon to be a team member), and I do the exact same configuration to the adapter below it. How did I know to do these two adapters? Simple - the names I've given the adapters show up below the physical adapter name so it's easy for me to know which adapter is designated for which task. SW01 P11 is Switch 01, Port 11, and that is a team member of SW02 P11 (Switch 02, Port 11).
Clicking OK will make the changes to the host adapters. Essentially at this point, you won't see any difference, because all we've actually done is told the adapters what logical networks they will support. We've not pulled the rug from under their feet by un-checking "Used by management" yet, because if we had done that - we would definitely have seen a change on the server (it would lose one or more IP addresses). Right now my server still has 4 IPv4 addresses and what looks like a hundred IPv6 addresses, thanks to my IPv6 RA on my routers.
Let's go ahead and team our first two adapters, and add the Virtual Network Adapters. Still under Properties of the Host, select the Virtual Switches Tab.
Eventually I'll end up with 2 virtual switches - one for each of my two teams. Naturally, you may end up with more or less teams, depending on your requirements. Click on "New Virtual Switch" and then select "New Logical Switch".
Use the drop down box to select the appropriate Logical Switch that you had previously defined. If you're wondering where "Hyper-V Logical Switch" comes from - ignore that - it's one of my other logical switches which I'm not using in this particular cluster (it's for my other cluster).
Find the first team member using the drop down. Unfortunately, this lists the adapter Descriptions and not the names - so you may want to confirm the descriptions against the names by doing an IPCONFIG /ALL on the host.
Click on Add to add another adapter, and again, use the drop down to select the appropriate adapter.
Don't click OK just yet!
A Logical Switch by itself does nothing for us - it's just the gateway between the physical adapters and the Virtual Network Adapters. In order to make it useful, we have to add at least one Virtual network Adapter. A Virtual Network Adapter will link to one of your VM Networks, which in turn links to one of your Logical Networks. Complex enough for you? Well fortunately, this is as complex as it gets! Once you've been through it a few more times, you'll get a lot more comfortable with it.
Click on "New Virtual Network Adapter". You need to give it a name, select the appropriate VM Network, and then the Classification. You will note that you can only select the appropriate classifications and VM Networks if your previous configuration is correct. This is what my Management Virtual Network Adapter looks like:
I've left "This virtual network adapter inherits settings from the physical management adapter" checked, so that my switch configuration takes effect on this virtual adapter. I mentioned earlier that my switch configuration is a trunk with a native access VLAN of 11 - leaving this box checked is how my management adapter gets its IP address. If, for example, you had to specify a VLAN, you would then check "Enable VLAN" and specify the appropriate VLAN here. Of course, if you are not using a trunk at all, then you only need to leave the check-box checked. I have external DHCP take care of my management adapter, so I'm happy with this as is.
I still have to add two more Virtual Network Adapters - and here they are: (NOTE: You will have to select the Logical Switch again so that "New Virtual Network Adapter" becomes available again)
My Cluster network adapter should NOT inherit - inheriting would just give it more IPs than it should have, which would become confusing for both you and the cluster. Since we know it's a trunk port, we need to specify the appropriate VLAN and, thanks to SCVMM's ability to manage some of our IP pools for us, I've chosen to tell it to use the pool I defined earlier in my Logical Network configuration. Of course, use the appropriate classification.
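For comparison, outside SCVMM the same host vNIC could be sketched with the plain Hyper-V cmdlets. The adapter name, switch name and VLAN ID are hypothetical:

```powershell
# Hypothetical sketch: add a host-OS virtual NIC on a switch and tag its VLAN.
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "HostSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" `
    -Access -VlanId 12
```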
Finally, I add my last Virtual Network Adapter
When you believe you have your adapters correct, click OK. This will commit the changes to the host: create a new team, assign the physical adapters you selected to it, and then add the virtual adapters you specified. The physical adapters will no longer operate in individual mode, so if you've made a mistake with your configuration, you may end up losing access to your host. If you are uncertain about what you are doing, then only add ONE adapter here, so that the second adapter is not modified and you can verify that the new team is correct before adding the second adapter.
Despite the fact that I've done everything correctly, SCVMM still gave me this warning after it completed making the changes:
I'll just attribute that warning to the fact that SCVMM hates me, because when I observe my configuration on the server, it all looks correct:
The highlighted elements are what's new on my server. In addition, you will notice that the two adapters SW01 P11 and SW02 P11 no longer have any domain associated with them - that's because they no longer have any bound communications protocols, because they are members of the team.
If I have a look at my 3 new virtual Ethernet adapters, they look as follows:
The Cluster and Live Migration networks both have an IP address assigned - note that it's statically assigned, and this was assigned by SCVMM based on pool availability. What does static assignment mean? It means that when the host comes up, it is not dependent on SCVMM for its configuration. You are not at the mercy of SCVMM, so if SCVMM dies a horrible death, you do not lose your cluster, nor your cluster configuration. It will still operate perfectly with or without SCVMM running.
My Management Virtual Adapter is showing up as "Management Logical Switch". Hang on, shouldn't that just say "Management"? No - because we told this virtual adapter to inherit the properties of the physical adapter, which is why the name of it is what it is - and why it's getting an IP address from DHCP. A nice IP too I might add - I like binary.
If you are in a single-switch environment, you can use this opportunity to add the second interface to your team, since you have verified that the management virtual adapter is working fine.
What's left for me to do is to add the second Virtual Switch, and these are the steps I'm taking:
I can happily deselect this now that I've confirmed that my management interface is working fine (note: this is for the second non-management team only - don't deselect this on your management team adapters)
Choose which logical networks these adapters will support.
Create a Guests only Logical switch, and select/add the two adapters which will form this team
The only Virtual Network Adapter this team will support is the Guest VM Network. We don't specify a VLAN here, because each guest will have its own VLAN configuration. Remember to choose the correct Port profile.
When you click OK, adapters 3 and 4 WILL lose their host IP addresses, and if SCVMM is currently communicating with the host on those IPs, you might notice a lag while it discovers the other addresses and fails over to the newly created one. This is perfectly normal.
This is the final picture - the highlighted elements are what SCVMM created for me.
Repeat these steps on the other nodes of the new cluster, so that the configuration is identical on all hosts, before we go ahead and create our cluster.
Creating our Cluster
SCVMM will not create a single-node cluster.
You'd of course never actually run a Hyper-V cluster with a single node - that would just be madness - however SCVMM will not let you create a single-node cluster, despite the fact that it can be done. If you NEED to establish your cluster using a single node, you will have to create it using Failover Cluster Manager. Refer to the article I linked at the top to do that. If you do create a single-node cluster this way, SCVMM will detect it and update its topology accordingly.
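If you do find yourself in that position, the single-node cluster can also be created from PowerShell rather than the Failover Cluster Manager GUI. The cluster name, node name, and IP address below are placeholders:

```powershell
Import-Module FailoverClusters

# Create a one-node cluster; add further nodes later with Add-ClusterNode
New-Cluster -Name "HVCLUSTER01" -Node "MGMT48" -StaticAddress "10.0.0.50"
```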
Since we do have two hosts, we're going to use SCVMM to create the cluster for us.
On the "Fabric" tab, you will see a Create icon at the top left. Use this to create a Hyper-V cluster.
Give the cluster a suitable name. Remember, this name needs to be unique in your domain. Select a Run As account or enter a user name and password. NOTE: This account needs Domain Administrator rights, because creating a cluster requires permission to write to the domain. This is why I use my own credentials here rather than any built-in ones.
Use the drop-down in the host group to select the host group where the nodes reside, then click Add to add the nodes to be clustered. If you wish to skip the cluster validation tests, you can do so here, but only do this if it is necessary. It's always good to ensure your cluster passes all validation tests (warnings are OK, errors are bad).
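If you'd rather run the validation tests yourself before handing the job to SCVMM, the FailoverClusters module can do it from PowerShell. The node names here are examples:

```powershell
Import-Module FailoverClusters

# Runs the full validation suite and writes an HTML report
# (the report path is shown in the cmdlet's output)
Test-Cluster -Node "MGMT48", "MGMT49"
```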
How cool is this? It's detected that I have two volumes in addition to the Quorum disk I had prepared earlier, and it's even going to use that as my witness disk, as per the notes below. Here I can choose to format the other volumes which I had not prepared, and also to make them CSVs (Cluster Shared Volumes).
We leave this blank because all the logical networks already exist on each node, and as a result it gives us no options.
Click Finish to do it.
We have seen some quirky issues where adding nodes to a cluster occasionally fails for a completely bizarre reason (usually telling you that the network is not available). Right-clicking the failed job in the Jobs list and selecting "Retry" re-runs the job, which generally succeeds the second time. This is a known issue to the SCVMM team and will be fixed in a future update.
Watching and Waiting...
And we've hit a snag! It seems that our cluster validation tests did not succeed. Something to know about cluster validation is that the report is written to every node participating in the test. You can open the report in IE and view the results. My validation test failed because of this error:
Found duplicate IP address 169.254.95.120 on node mgmt48.mgmt.local adapter Local Area Connection and node mgmt49.mgmt.local adapter Local Area Connection.
The adapter it is referring to is the USB-to-Management virtual interface. Well, that's mighty annoying! Fortunately, since I have three other IBMs in my environment, I know how to resolve this - just disable the interface. The interface is only used when doing things like firmware updates to the IMM (Integrated Management Module). Just remember to re-enable it if you need to run future UpdateXpress packs on the server. Although this appears to be a random APIPA address, it's always the same on every server - and of course, the cluster validation detects this as an IP conflict, even though these machines never talk to each other on that network.
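Disabling the interface on each node is a one-liner if you prefer PowerShell to the Network Connections GUI. Check the adapter list first, since the name ("Local Area Connection" in my validation error) may differ on your hardware:

```powershell
# Find the USB-to-Management adapter by name/description
Get-NetAdapter | Format-Table Name, InterfaceDescription, Status

# Disable it (re-enable later with Enable-NetAdapter for firmware updates)
Disable-NetAdapter -Name "Local Area Connection" -Confirm:$false
```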
Now that the interfaces are disabled, I go ahead and restart the task. In your Jobs list, find the failed job, right-click, and then select Restart.
The warning will tell you that it's going to try from the last successful step. Clearly all steps prior to doing the cluster test passed, so it's only going to start from the cluster test again. Because it is doing the cluster validation test and cluster creation, it is going to re-prompt me for my credentials. Why is it doing this? Well, because any other administrator can retry the job, and I don't want them running jobs on my credentials!
Sometimes the SCVMM console will appear to "lock up" after submitting or restarting jobs. This seems to be a bug which I can't quite reproduce consistently enough to report to MS. I find that just logging off the machine and logging back on allows you to run the console again. This has just happened to me. I've logged back on and my job has now finished.
Warnings are inevitable, especially in the IP address segment, but they can be safely ignored.
You will notice that a new icon has appeared in SCVMM - this is the cluster object, and the hosts will be moved under it.
Here is Failover Cluster Manager viewing my brand new cluster - and you can see that it has prepared my CSV disks for me and that they're currently online. I usually rename my cluster disks with human-readable descriptions rather than just "Cluster Disk 1" and so on - it's safe to right-click the disk, select Properties, and change the name. SCVMM will catch the change and update the host information accordingly.
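The same rename can be done from PowerShell on one of the cluster nodes - the resource's Name property is writable. "Cluster Disk 2" and the new name here are examples:

```powershell
Import-Module FailoverClusters

# Give the CSV resource a human-readable name; SCVMM picks up the change
(Get-ClusterResource -Name "Cluster Disk 2").Name = "CSV - VM Storage 01"
```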
Adding more nodes to the cluster
Expanding your cluster is a simple matter of configuring the new node(s) just like you did this one - i.e.
Build your servers, patch, and install software where applicable
Add the servers as Hyper-V Standalone
Configure the network hardware (the physical adapters)
Add the Logical Switches and adapters
To make the new host join the cluster, simply pick up the host in SCVMM and drag and drop it onto the cluster object. This tells SCVMM to join it to the cluster.
One important thing to remember when doing this is that it will run another cluster validation test. It's OK to run these tests while systems are live - you won't interfere with normal operations. Keep in mind, though, that it will also validate the guests' integration services version against the host version, and you'll typically find that the virtual machines are out of sync with the host, depending on how often you apply Windows updates to the host system. Always keep your virtual machines running the latest integration services, and the easiest way to check is to run a manual validation against your cluster using either SCVMM (right-click the cluster and select Validate Cluster) or Failover Cluster Manager.
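A quick way to spot guests with out-of-date integration services, without running a full validation, is to query each host with the Hyper-V PowerShell module - IntegrationServicesVersion is a standard property of Get-VM in Windows Server 2012:

```powershell
# Run on each Hyper-V host: oldest integration services listed first
Get-VM |
    Sort-Object IntegrationServicesVersion |
    Format-Table Name, State, IntegrationServicesVersion
```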
I hope this article has been useful for you - happy clustering!
Technet: System Center 2012 Virtual Machine Manager
Technet: Configuring Ports and Switches for VM Networks in System Center 2012 SP1
Technet: Hyper-V Network Virtualization Overview
Technet: Windows Server® 2012 Hyper-V Network Virtualization Survival Guide
Wikipedia: Link Aggregation
All links valid at time of writing.