Using SCVMM only for templates

I'm reading the book SCVMM 2012 Cookbook.  It says that running VMM in HA, as well as SQL, is not only highly recommended but almost a prerequisite.

Private cloud or hybrid cloud is not something we're working with; we don't need it.  But SCVMM 2012 SP1 seems to be built for large organizations: all the virtualization networking is managed from SCVMM.  This is how Microsoft is advertising SCVMM.

SCVMM is useful for creating and managing templates from the Library.  But I'm not interested in creating a private cloud or having a gateway.

I figured out something to use SCVMM but I'm not sure whether it's safe or not.  

1 - Hyper-V 2012 is clustered.  So far so good.  
2 - 2x 10GbE NIC team, switch dependent, address hash, 6x VLANs (cluster, CSV, live migration, management, LAN, DAG)
3 - 2x 10GbE NIC team for iSCSI
4 - I use VLAN ID to select the VLAN to be assigned to the VMs
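For reference, step 4 above corresponds to the built-in Hyper-V cmdlets.  A minimal sketch, assuming a VM named "VM01" and VLAN 200 (both placeholders, not from my actual setup):

```powershell
# Assign an access VLAN to a VM's network adapter (VM name and VLAN ID are placeholders)
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 200

# Verify the assignment
Get-VMNetworkAdapterVlan -VMName "VM01"
```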

But as soon as the cluster is added to SCVMM 2012 SP1 I have to:

1 - add the VLAN network in the logical network site
2 - check the VLANs in vswitch hardware properties of every host.  

It works.  I can select the VLAN to use in the network adapter of the VMs.

But I didn't configure all the other features in the Fabric.  In the event that something goes wrong, like losing the VMM server, am I in trouble?  SCVMM seems to take over the Hyper-V servers.

LesterClayton Commented:
You should even let SCVMM configure your team, because that way SCVMM knows about the configuration and will assign the appropriate policies to it (policies which you should have defined first, I might add).

There is ONE tiny problem in doing all of this though: SCVMM will not create a cluster if it has fewer than two nodes.  If, for example, you have two servers and you wish to "sort one out" first - make the cluster using SCVMM, then join the other to it - it won't work.  You'll need to create the cluster using Failover Cluster Manager.  SCVMM will recognise this and make the appropriate changes in itself.

So in a Nutshell, you should do the following in this order:

1. Create all your logical networks

Under "Logical Networks", create all the logical networks that your hosts/clusters will support.  You will note that I also have IP Pools - this is not mandatory, but recommended for networks where you might not have any DHCP.  SCVMM does not use DHCP for these adapters, but allocates IPs on a first-assigned, first-served basis.  It helps you keep track of certain networks.  The reason for these logical networks is shown a bit later.
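If you prefer scripting, this step can be sketched with the VMM PowerShell module.  The names, subnet and VLAN ID below are placeholders, and the parameter names are from memory of the VMM 2012 module - verify them with Get-Help in your build:

```powershell
# Sketch: logical network + network site (with a VLAN) + optional IP pool
# All names, subnets and VLAN IDs are examples, not from this thread
$lnet = New-SCLogicalNetwork -Name "Guests"
$hg   = Get-SCVMHostGroup -Name "All Hosts"
$vlan = New-SCSubnetVLan -Subnet "10.0.100.0/24" -VLanID 100
$site = New-SCLogicalNetworkDefinition -Name "Guests_Site" -LogicalNetwork $lnet `
          -VMHostGroup $hg -SubnetVLan $vlan

# Optional: a static IP pool, so SCVMM hands out addresses first-assigned, first-served
New-SCStaticIPAddressPool -Name "Guests_Pool" -LogicalNetworkDefinition $site `
    -Subnet "10.0.100.0/24" -IPAddressRangeStart "10.0.100.10" -IPAddressRangeEnd "10.0.100.200"
```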

Logical Networks and IP Pools / Properties / Network Sites
This is where you would add all your VLANs for your guests.  I only have two here because I only need two for this cluster (my other cluster needs 60+).  You can of course modify this later, even though it's assigned to many hosts.

2. Create Uplink Profile

In "Native Port Profiles", you should create a new Hyper-V uplink profile.  NOTE: when creating it, you choose Create -> Native Port Profile, and then use the radio button "Uplink Port Profile" to choose the right kind of profile.  Under network configuration, choose the logical networks you created above.
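A rough scripted equivalent (cmdlet and parameter names from the VMM 2012 module; the team mode and load-balancing values are examples, so double-check with Get-Help):

```powershell
# Sketch: uplink port profile covering the network site(s) defined in step 1
# Profile name, site name and teaming settings are placeholders
$site = Get-SCLogicalNetworkDefinition -Name "Guests_Site"
New-SCNativeUplinkPortProfile -Name "Cluster_Uplink" -LogicalNetworkDefinition $site `
    -LBFOTeamMode "SwitchIndependent" -LBFOLoadBalancingAlgorithm "HostDefault"
```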

General / Network Configuration

3. Create Logical Switch

In SCVMM, in the Fabric section under Networking and Logical Switches, create your logical Switch.  These are the properties of mine:


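A scripted sketch of the same step (all names are placeholders; in the VMM module the uplink profile from step 2 is attached to the switch through an uplink port profile set - verify the exact signatures with Get-Help):

```powershell
# Sketch: logical switch + uplink port profile set referencing the step 2 profile
$uplink  = Get-SCNativeUplinkPortProfile -Name "Cluster_Uplink"
$lswitch = New-SCLogicalSwitch -Name "HV-LogicalSwitch"
New-SCUplinkPortProfileSet -Name "Cluster_Uplink_Set" -LogicalSwitch $lswitch `
    -NativeUplinkPortProfile $uplink
```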
4. Configure your Adapters, to let Hyper-V know which one is going to be doing what

Now, you need to configure the hosts themselves.  Unfortunately, this has to be done per host (either via GUI or script), because naturally, not all hardware is the same.  One ethernet adapter in one server can be completely different from another.  So, viewing the properties of the host, go to the Hardware tab, and do the following with every network adapter.  Adapters that will be teamed should have the same config on each.

Network Adapter Details
Here, we tell SCVMM that this adapter is available to be used for placement, i.e., it's going to be used in the cluster.  You will notice that "Used by management" is also checked - this is equivalent to the Hyper-V setting "Allow management operating system to share this network adapter".

Adapter Details
We also tell it what logical networks this adapter will service.

Logical Network Connectivity

5. Add your virtual switch and Virtual Network Adapters

The last piece of the puzzle is to actually add your virtual switch (where we will also use teaming) and your Virtual Network Adapters.  Under "Virtual Switches", create a new logical switch (using the New Virtual Switch button), and use the logical switch you defined in step 3 above.  Here, add the adapters which will form part of this team.

New Hyper-V Logical Switch
Once the Hyper-V Logical Switch exists, you can add new Virtual Network Adapters and give each one the appropriate profiles for its function.  Here is an example of the virtual adapter which I'll be using for my Live Migration.  You will see here where I've used an IP Pool to allow SCVMM to manage the IP addresses inside my Live Migration network.

Live Migration Virtual Adapter
NOTE: It is not necessary to use Logical Switches with Virtual Network Adapters.  You could create new Standard Switches instead, but this gives you much less functionality, and you cannot team the adapters.  You would choose this if you had previously teamed your NICs, because then you can use the drop-down to select your teamed adapters.  By doing this you lose the additional functionality of Port Profiles, which act as a Quality of Service for the traffic going over the virtual adapters.

I hope that this helps :)

When I create my next cluster (soon, I've ordered some more hardware), I might create an article on how to do this, because it's not quite straightforward with the various books and literature out there.
If you lose your VMM server, your cluster will still continue to operate just fine :)  VMM is just a convenient place to configure your networking components, and even though many of them don't actually appear to make a difference to the Hyper-V itself (like adding VLANs to your fabric), this is used later when you do things like add Virtual Machines, as it knows what networks and VLANs to offer you based on its placement.

Microsoft recommends that you use VMM to configure your Hyper-V and Cluster environment so that you have conformity across all your Cluster hosts - because it will configure the Cluster for you so that "human error" is removed from the equation.  It is possible for example for a person to configure teaming in the wrong modes on both servers.  Of course, it's possible to do this with VMM 2012 too, but with VMM2012, it won't allow you to join two nodes to the cluster if the configuration is different, whereas Failover Cluster Manager would not spot this difference, and *would* allow you to join the node to the cluster.

But your concern is greatly appreciated, since I too have Windows 2012 clusters - and I can happily say that you can still use Failover Cluster Manager to do everything, and you're not dependent on VMM being there or even running.  My VMM 2012 runs as a virtual machine on the cluster it manages.  If I lost that virtual machine, I would have no issues bringing the cluster up using the standard Failover Cluster Manager tool :)
quadrumaneAuthor Commented:
Thanks Lester

It might not be as easy as I thought it would be.  I'm unable to select the VLAN when I'm creating a new VM.  As a result it can't be placed on a host.  

It looks like I'll have to go through the whole fabric (logical switch, ports, uplink..)  This is exactly what I wanted to avoid.

What have you done in SCVMM?  Perhaps you don't have more than one VLAN.  

Thanks again
LesterClayton Commented:

I use SCVMM to create virtual machines using templates :D  There are no template features inside Hyper-V or Failover Cluster Manager, and it's nice and easy to choose a template which has my memory configuration and CPU configuration already there.

We have 65 VLANs (and increasing), and unfortunately you CANNOT choose the VLAN when creating the virtual machine - you have to edit the virtual machine afterwards to set the VLAN ID (a failing in the product if you ask me, and hopefully something that will be fixed).
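Until that's fixed in the product, the after-the-fact VLAN change can be scripted rather than done through the GUI.  A sketch using the VMM module (VM name and VLAN ID are placeholders; verify the parameters with Get-Help Set-SCVirtualNetworkAdapter):

```powershell
# Sketch: set the VLAN on an existing VM's virtual NIC via VMM
# "NewVM" and VLAN 100 are examples only
$vm  = Get-SCVirtualMachine -Name "NewVM"
$nic = Get-SCVirtualNetworkAdapter -VM $vm
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $nic -VLanEnabled $true -VLanID 100
```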

I don't have the issue of not being able to place it though - it allows me to place it on a host.  You probably want to check a few things:

On each host, ensure that you have checked the networks which should be available for the switches, similar to the screenshot below.  If you have teamed adapters, then you need to do this for each adapter in the team.

Logical Network Connectivity
The second thing to check of course is that you have defined your Virtual Switch and Virtual Adapters inside VMM for each host, similar to the following screenshots.

Virtual Switch
Virtual Adapter
The VM networks are of course defined in the Fabric, under Networking and Logical Networks.  The Logical Switches are defined under Logical Switches, and this is where you would set your Uplink Port Profiles and Virtual Ports.

Ideally, you should configure all of your networking in VMM prior to creating your cluster, because adding the configuration afterwards and matching it up to an existing cluster is near impossible.
quadrumaneAuthor Commented:
My mistake - I can place the VM.  An unsupported LSI SCSI controller was preventing it.

Thank you for the snapshots.  This is very useful.

But I keep thinking VMM could be as useful as it is resource-consuming.  I'm reading two books (MS Private Cloud Computing and the SCVMM 2012 Cookbook).

Understanding how Virtual Machine Manager has become a critical part of the private cloud infrastructure is very important. This chapter will walk you through the recipes to implement a highly available (HA) VMM server, especially useful in enterprise and datacenter environments. VMM plays a critical role in managing the private cloud and datacenter infrastructure, which means that keeping the VMM infrastructure 100 percent available is crucial to preserving the services' continuity, and to provisioning and monitoring VMs to respond to fluctuations in usage.

I did not configure all the fabric as you did.  I want to make sure it is not going to keep me from managing everything from the Hyper-V host.  Of course it works for you.  But as soon as the cluster has been added to SCVMM I no longer see the network adapter in Hyper-V for the VM I've created from SCVMM.  

I guess I need to add more settings in the VMM networking.  But if I configure the networking before the cluster, I will have to use the VMM networking to add the vEthernet adapters for the cluster on the hosts.  All the networking (iSCSI aside) comes from the 10GbE NIC team interface, including the management.  I wanted to have fewer NICs.

The purpose of the fabric is to simplify the configuration by centralizing everything.  It looks so easy when you tell me how the VM networks are defined in the Fabric.  But with so many features and options I'm a little confused.  The books put the focus on the private cloud.  But the private cloud requires another layer - the gateways - unless you only need to access the private cloud and can get rid of all the other PCs and physical servers on the network.  This is too much for us.

Here are snapshots from my configuration

thank you very much
quadrumaneAuthor Commented:
Another mistake.  I do see the networking in Hyper-V.  I think I should get some sleep ;)
quadrumaneAuthor Commented:
You said I should configure all of the networking in VMM prior to creating the cluster.

But what happens when you add the Hyper-V hosts one by one in SCVMM and all the networks and switches are checked?  Do you see the new switch you've created in Hyper-V networking too?

I can see the cluster networking in your configuration.  I already configured the cluster networking from the host.  Do I have to get rid of this configuration too?  Thanks again

Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "vSwitch1"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "vSwitch1"
Add-VMNetworkAdapter -ManagementOS -Name "Live Migration" -SwitchName "vSwitch1"
Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "vSwitch1"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 192
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Live Migration" -Access -VlanId 102
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 100
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV" -Access -VlanId 101
quadrumaneAuthor Commented:
This is by far the best documentation I've seen on SCVMM 2012.  If I could, I would give you 5,000,000 points.  You used this word "ordered".  Besides, you have established the link between Hyper-V and SCVMM.  That was the missing link.  On TechNet they don't mention it.  They probably think we should already know that the hosts must be configured from SCVMM, from start to end, including the cluster and the team.

One thing I'm not sure about: the networking (the NIC team) has already been created on my Hyper-V hosts, as well as the VLANs for the cluster, as you can see in my previous post.

Should I destroy the 10GbE NIC team?  As all my networking comes from this team, including the management, I will not be able to add the Hyper-V hosts (standalone, before creating the cluster within SCVMM) to SCVMM.

I could probably enable a 1GbE dedicated NIC for the management.  

Some of the VLANs in Hyper-V (cluster networking, live migration and CSV) have been added in the Windows networking (vEthernet); the other VLANs are selected through the VLAN ID during the VM configuration.

In your configuration there are only two logical networks (Cluster and Live Migration), but you obviously have more networking, including guest and management.

Why no logical network is created for the other VLANs / subnets is a little confusing.  Actually, I'm having a hard time understanding it - trivial, perhaps, but I'd like to understand.

I created 6 VLANs in the NIC team (switch dependent, address hash, LACP): management, cluster, live migration, CSV, VM guest and management (same subnet), DAG and NLB.

As far as I understand, I have to add all those VLANs (I'll need a dedicated NIC for management, otherwise I can't add the Hyper-V host to SCVMM) in the logical networks as you did during phase 1.  Later the team will be created from SCVMM?

I really think it's a detail missing in my case because all you've done here is massively impressive.  I think it will help many people out there.  

Many many thanks !  :)
I would indeed destroy the team and start again, so that you use the new methodology.  I have 4 NICs in my server, 2 are dedicated for iSCSI Failover, and the other two are dedicated for the Cluster Team.  All my guests, Live Migration and Cluster communications go over that team.

For my main cluster, I have 6 interfaces - 4 gigabit and 2 10-gigabit.  My gigabit NICs form one team, and only the live migration and host communication is there (host communication is also the interface that cluster replication runs over).  2 other interfaces are another team for dedicated cluster communications, and the last 2 are for guests.
quadrumaneAuthor Commented:
Last question (OK, maybe not - you are the best resource I ever had for SCVMM 2012, but don't worry, I'm not going to ask you hundreds of questions).

I don't understand how you can set the VLAN in SCVMM before the NIC team exists.

You have more than 2 VLANs; in your snapshot "Logical Network Connectivity" you have 5 VLANs including management.  But you have defined only 2 logical networks - why is that?

If I destroy the TEAM, I'll have to:

1 - enable and assign one 1GbE NIC for the management, because the management was carried by the NIC team

2 - Create the NIC team before configuring the logical network connectivity at the host level.  In your snapshot you're doing the configuration on the SW2 P05 NIC, but this NIC is not teamed up.  Again, I'm confused here.

Overall I understand the concept.  This step by step guide you've made is greatly simplifying the virtualization network deployment.  I hope my question will be useful for your future article.  Maybe you should write a book, I'll buy it :)

thanks again !
LesterClayton Commented:
I have 4 logical networks - two of them are using IP Pools, so maybe they stand out more, but there are definitely 4 logical networks defined

There's Cluster, with an IP pool, Guests (selected), Live Migration with another IP pool, and Management :)

Think about what you're doing though - if you destroy the team, all that's going to happen is that the interfaces which were team members will both get IPs from your DHCP servers on the default VLANs.  I presume they're connected to trunk ports, and you've configured the trunk ports to have a default VLAN?  If you think back, before you could team them, you had to be able to talk to the server before there was a team, right?

What does look confusing though, and I will agree here, is that the configuration being done on SW2 P05 (switch 2, port 5) is the same configuration I'm putting on SW1 P05 (switch 1, port 5) - yet somehow it's applying to the team.  This is because these two ports are team members.

If you think THIS looks confusing, wait till you actually see what happens on my host itself.  Wait no further, because here it is:

Look Ma, A Team without the regular Team!
What you see here is as follows:

Before configuration:

SW01 P10 - my iSCSI VLAN A
SW02 P10 - my iSCSI VLAN B
SW01 P05 - Trunk with native VLAN of 11 (my special Cluster Management VLAN)
SW02 P05 - Trunk with native VLAN of 11

After Configuration:

All the adapters stay as they are.  When you create the Hyper-V Logical Switch under "Virtual Switches", the Hyper-V Logical Switch with the funky icon gets created.  This, as you will notice, is not the same as a team icon; however, it is in actual fact a team.  The thing is, you don't bind any IP addresses to this team - nothing does.  With a conventional team, it of course becomes the place where you configure TCP/IP.
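You can confirm this from the host itself: with a logical-switch team, TCP/IP should show as unbound on the team interface, while the vEthernet adapters keep it.  A quick check using the standard NetAdapter cmdlets:

```powershell
# Show which adapters have IPv4 bound: the team multiplexor itself should be
# disabled, while the vEthernet (virtual) adapters carry the IP configuration
Get-NetAdapterBinding -ComponentID "ms_tcpip" | Select-Object Name, Enabled
```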

Now, you see 4 vEthernet interfaces.  These 4 vEthernet interfaces are for my 4 Virtual Network Adapters.  I am able to communicate with my host thanks to the MGMT vEthernet interface, which I have configured thusly:

Management vEthernet Interface
You will notice that I've not set a VLAN here - I should have set VLAN 11, except that I didn't need to, because as you can see there is a checkbox "This virtual network adapter inherits settings from the physical management adapter" - and if you recall, the two adapters that I've marked as my "team" adapters were also set to "Used by management", which we already know is the same as "Share this adapter with the OS".  My switch configuration is as follows:

interface GigabitEthernet1/0/5
 description mgmt31 ft/lb
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 11
 switchport mode trunk
 spanning-tree portfast trunk
 spanning-tree bpdufilter enable


This automatically means that where I don't specify a VLAN, the default VLAN of 11 takes effect.

So, before the team, my host is available on VLAN 11 because of this configuration.  And after I've established the team - through a whole series of inheritances through adapters, virtual adapters and a team - I get the same configuration down to my MGMT vEthernet adapter :)

TIP: When creating the actual team, you can do one adapter at a time (yes, you can!).  This means that you won't lose host communication between not having the team and having the team.  You might have some IP changes due to the fact that your MAC addresses may be slightly different as they converge, but I did this all via SCVMM and never lost communication with my hosts for more than 30 seconds.

Some more screenshots:

I have a team!
NIC Teaming
As you can see, my operating system still thinks I have a team, and if SCVMM ever dies, my cluster will still live on and I can manage it fully like this.  In fact, I still do many things inside Failover Cluster Manager.  It's much faster at moving VMs from one node to another - simple right-click, live migrate, move to best node.  It's a bit more cumbersome in SCVMM, to say the least.

Hopefully this will fill in the gaps I may have left previously :)
quadrumaneAuthor Commented:
I'm back after a long weekend.  This is an astonishing article.  You have provided so much perspective on things !

It would be a shame to keep asking you any more questions.  But I can't let you go - you're too good ;)  It's late here; I'll finish reading tomorrow.

I just couldn't jump into the bed before telling you how valuable your work here will be not only for me but I'm sure for many other people out there.

Thanks one billion time.
quadrumaneAuthor Commented:
It took time but I figured most of the stuff out except for one thing: why are you using isolation?

You checked "Allow new VM networks created on this logical network to use network virtualization".  I'm a little confused on why the guest network is in isolation mode.  

I think this time it will be the last question.

Thanks again for your amazing support
The checkbox "Allow new VM networks created on this logical network to use network virtualization" is required on at least one of the Logical Networks in order to allow virtual machines on different hosts to be able to communicate with each other.  If you take SCVMM out of the equation, it would be the equivalent of adding the ms_netwnv component to the team, using a command similar to the following:

Enable-NetAdapterBinding -InterfaceDescription "Hyper-V Logical Switch92c5a205-405b-40bc-8d97-f4f0ab4ea7b8" -ComponentID "ms_netwnv"


Since our guests communicate on the Guest logical network, it seems like the most logical place to put it :)  I'm not sure where you got the idea that this was putting it into isolation mode - if anything, this is doing the opposite of isolation.

More information about Network Virtualization in Hyper-V can be found in the following blog.  Please note that this blog does not cover SCVMM, but it does tell us why we enable Network Virtualization.  Don't follow any of the instructions though; we've done it all via SCVMM already.
quadrumaneAuthor Commented:
Again, thanks !

As my switch doesn't support native VLAN (PowerConnect 8132), I had to add another NIC for management.  A logical network with VLAN 0 (no VLAN) has been created for this NIC, as well as the port profile and a VM network.  I'm not really sure what to do with that, as the management was supposed to be part of the same NIC team.

logical networks
My stacked switch is set in trunk mode, so I didn't use your configuration (switch independent).

interface Te1/0/11
channel-group 6 mode active
description 'S1-HV-006 NIC01'
spanning-tree portfast
mtu 9216
switchport mode trunk
no lldp tlv-select dcbxp ets-config
no lldp tlv-select dcbxp ets-recommend
no lldp tlv-select dcbxp pfc
no lldp tlv-select dcbxp application-priority

I had to create a VM network for each subnet/VLAN added in the logical networks.  I've selected "no isolation" for all VM networks.

The native port profile
port profile
I ended up with this error.  

Error (25234)
The selected host adapter (Broadcom BCM57711 NetXtreme II 10 GigE (NDIS VBD Client) #43$$$Microsoft:{91BFC59A-0542-466D-9E31-95396640EB9E}) has an uplinkportprofileset configured with network sites. Hence, logical networks or subnetvlans cannot be directly modified on the host network adapter

Recommended Action
Please make the desired modification to the uplink port profile set or choose a different network adapter and retry the operation

Warning (25259)
Error while applying physical adapter network settings to teamed adapter. Error code details 2147942484

Recommended Action
Please update the network settings on the host if virtual NIC is connected to the host.
quadrumaneAuthor Commented:
I forgot this snapshot.   Quite frankly I don't know what I've done wrong.  

I don't really know what to do with the Management adapter.  But I'm sure it's not the cause.  

I'm still confident I'll figure this out... with your help hopefully :)  

uplink profile
quadrumaneAuthor Commented:
At the switch level it's all good.  I keep troubleshooting.  

interface port-channel 6
description 'Team Hyper-V S1-HV-006'
hashing-mode 6
port-channel local-preference
switchport mode trunk
switchport trunk allowed vlan 30-31,100-102,192,254
mtu 9216
quadrumaneAuthor Commented:
Fantastic !  Very useful.
LesterClayton Commented:
As promised, I have written an article for creating Hyper-V clusters on Windows Server 2012 using SCVMM 2012.

If you find this helpful, please click the relevant link on the bottom of the article :)
quadrumaneAuthor Commented:
Thanks Lester

What amazing work you've done!  Each time I was about to think "well, this is good, but I would have liked him to provide me with more details", all of a sudden the comment below answered the exact question I was thinking about.

This article is better than all the books combined.  You really explained what creates the team, why each checkbox is checked, why everything...

Great  !  Maybe you should write books :)

Thank you very much  !
Question has a verified solution.
