Recreating a Microsoft Hyper-V cluster

Olevo (Australia) asked:

Hi all.

What we have here (please see the topology picture provided):

Server 1, Server 2 and Server 3 are identical hardware boxes. Server 1 and Server 2 are nodes in a Hyper-V cluster, and both are running the full (GUI) edition of Windows Server 2008 R2. The cluster currently hosts around 30 virtual servers.

Goal to achieve:

Build a three-node Hyper-V cluster on Windows Server 2008 R2 Core edition, with the least possible disruption to end users.

Plan of action:
1. Server 3 will be used as a temporary Hyper-V server. It has enough RAM capacity to run all 30 VMs.
2. A new temporary LUN-2 (similar in size to LUN-1) will be created on the SAN.
3. Server 3 will connect to LUN-2 over iSCSI. LUN-2 will be used as temporary storage for Hyper-V on Server 3.
4. With the help of SCVMM 2008 R2, all VMs will be migrated from the Hyper-V cluster to Server 3 after working hours (probably on a weekend). The VMs will be shut down during migration.
5. Confirm that all VMs have been successfully migrated and are working OK on Server 3 (see the inventory sketch after this list).
6. The Hyper-V cluster will be deleted and both servers (Server 1 and Server 2) will be rebuilt with Windows Server 2008 R2 Core edition.
7. A new Hyper-V cluster will be created with Server 1 and Server 2. The existing LUN-1 will be used for CSV.
8. Repeat step 4, but this time all VMs will be migrated back to the new Hyper-V cluster.
9. Server 3 will be rebuilt with Windows Server 2008 R2 Core edition and LUN-2 will be deleted from the SAN.
10. Server 3 will be added as a third node to the existing Hyper-V cluster.
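A note on verifying step 5: one low-tech way is to pull the VM inventory from each host over WMI and diff the lists. The sketch below is illustration only; it assumes a management box with Python, WMI remoting enabled, and the 2008 R2 (v1) root\virtualization namespace, and the host names simply match the plan above.

```python
import subprocess

def list_vms(host):
    """List VM names registered on a 2008 R2 Hyper-V host via wmic."""
    out = subprocess.check_output([
        "wmic", "/node:" + host,
        "/namespace:\\\\root\\virtualization",   # v1 namespace on 2008 R2
        "path", "Msvm_ComputerSystem",
        "where", "Caption='Virtual Machine'",    # skip the host's own record
        "get", "ElementName",
    ], text=True)
    # First line is the column header; the rest are VM names.
    return sorted(line.strip() for line in out.splitlines()[1:] if line.strip())

# Hypothetical host names matching the plan.
before = list_vms("SERVER1") + list_vms("SERVER2")
after = list_vms("SERVER3")
missing = set(before) - set(after)
print("Missing after migration:", ", ".join(missing) if missing else "none")
```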

From what I know (but I’m hoping that I’m wrong), you cannot mix and match Full and Core editions of Windows Server 2008 R2 in a Hyper-V cluster. That is why we need to do all of the upgrade steps above.
 
Could someone with relevant experience and knowledge please go through the action list with me and advise whether everything looks OK? Or perhaps suggest a better way to do this upgrade?

Any advice will be appreciated.
Hyper-V-Upgrade.jpg
kevinhsieh:

Your plan is correct. The only thing I would add is that you might as well upgrade the hosts to SP1. You may also consider using Hyper-V Server 2008 R2 SP1 instead of a Windows Server 2008 R2 Enterprise Core install. It is basically the same thing, but it comes more preconfigured and you can't add other roles.
Olevo (Asker):
Thanks kevinhsieh. The main reason we want to reconfigure the Hyper-V cluster from Full to Core is security and performance. We are planning to use the Datacenter edition because it gives us unlimited VM licences on the Hyper-V server.
kevinhsieh:

I personally load Hyper-V Server 2008 R2 on the host and then assign two Datacenter CPU licenses per host; you can do it either way. I can't tell whether Hyper-V Server can scale to 1 TB of RAM on the host (less than 50K from Dell) the way Datacenter can. You also get more flexibility, because you can add other roles if you need to. DHCP, anyone? ;-)

You should be in good shape. Good luck!
Olevo (Asker):
Each server will have 4 NICs available just for the SAN connection. For high availability, two switches will be set up between the servers and the SAN. Now, I have two options to consider. The first is to set up two network teams (2 NICs per team), so that one team goes through one switch and the other through the second switch. The second option is not to use network teaming at all, but instead to look at an MPIO implementation: 4 independent MPIO paths, two through one switch and two through the other. Which of the two options is the better choice (or maybe a third one exists)?
ASKER CERTIFIED SOLUTION from msmamji (solution text available to Experts Exchange members only).
I don't mess with NIC teaming. Microsoft has backed away from its "unsupported" statement, but MPIO works great, and I have seen people have problems trying to get NIC teaming to work, because for some vendors it requires software that isn't available for Core.
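If you do go the MPIO route on Core, the moving parts are the built-in Multipath I/O feature plus mpclaim.exe. A rough sketch of the sequence follows, not a tested procedure; run it on each node, and note the device string is the documented identifier for iSCSI-attached storage.

```python
import subprocess

# Sketch only: enable MPIO for iSCSI LUNs on a 2008 R2 Core node.

# 1) Add the Multipath I/O feature (Server Core component name: MultipathIo).
subprocess.run("start /w ocsetup MultipathIo", shell=True, check=True)

# 2) Claim iSCSI-attached devices for the Microsoft DSM and reboot.
#    -r = reboot when done, -i = install/claim, -d = device hardware ID;
#    the string below is the documented ID for iSCSI bus storage.
subprocess.run(["mpclaim", "-r", "-i", "-d", "MSFT2005iSCSIBusType_0x9"],
               check=True)

# 3) After the reboot, check the paths per LUN with:  mpclaim -s -d
```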
Olevo (Asker):
As I mentioned before, all three servers are identical (ProLiant BL460c G6). Currently Server 1 and Server 2 are running the full version of Windows 2008 R2. Managing and creating teams, setting additional options, etc. for the network cards is relatively easy with the HP Network Configuration Utility. Obviously you cannot run the full GUI of the HP utility inside the Server Core console. However, the core components of the HP utility are installed during the HP DVD installation of Windows 2008 R2. All you have to do is update the HP network config file with your desired settings (teaming, etc.) inside the Core edition. Or you can simply export the network settings from the full version of the HP network utility (on Server 1 or Server 2) and import them onto the Windows 2008 Core edition.
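For anyone scripting that round trip, the sketch below assumes HP's NCU command-line component is cqniccmd.exe with /s (save config to XML) and /c (configure from XML) switches; that matches the NCU scripting documentation I have seen, but verify the exact names against your ProLiant Support Pack release.

```python
import subprocess

# Assumption-labelled sketch: replay an HP NCU teaming config on a Core node.
# cqniccmd.exe and its /s and /c switches are taken from HP NCU scripting
# docs; confirm them for your PSP version before relying on this.

# On the full-GUI node (Server 1 or Server 2): export the team config to XML.
subprocess.run(["cqniccmd", "/s", r"C:\nic-team.xml"], check=True)

# Copy nic-team.xml to the Core node, then apply it there:
subprocess.run(["cqniccmd", "/c", r"C:\nic-team.xml"], check=True)
```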
 
Microsoft officially does not support teaming with iSCSI. Still, for more than a year we have been using HP teaming with iSCSI connections to the SAN on other servers without any problems! Why are we doing this? Well, with two teamed connections from the HP server to the SAN (through two separate switches) we get high availability, plus each connection is in fact a 2 Gb link instead of 1 Gb.

I can see the big potential of MPIO, but I also have to strongly consider what we currently have and have been using.

What about Hyper-V cluster heartbeat traffic? Is it preferable to dedicate one NIC in each server just for that traffic? Previously we didn’t do it because we didn’t have enough NICs per server; now all the HP servers have 8 NICs in them! If yes, I’m guessing I need to connect all three servers’ heartbeat NICs to a separate switch, right? For reference, below is a sketch of how I understand the network role could be checked and set once the cluster is rebuilt.
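(A sketch only, untested; the network name is a placeholder for whatever the cluster wizard assigns.)

```python
import subprocess

# Sketch: inspect and set failover-cluster network roles with cluster.exe.
# Role=1 restricts a network to internal cluster traffic (heartbeat/CSV),
# Role=3 allows cluster plus client traffic, Role=0 disables cluster use.
subprocess.run(["cluster", "network", "/prop"], check=True)  # list all roles
subprocess.run(["cluster", "network", "Cluster Network 2",   # placeholder name
                "/prop", "Role=1"], check=True)
```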

Thanks.
SOLUTION (text available to Experts Exchange members only).
Olevo (Asker):
To kevinhsieh:
“It's rare for any of my NIC ports to even average 5% utilization.”

Are you talking about iSCSI traffic to the SAN? How many VMs on CSV do you have on your SAN?
kevinhsieh:

I have 77 VMs on my SAN and 49 online SAN volumes in total. For my 1 Gb interfaces over the last 24 hours, the maximum utilization was 8.4% for a SAN interface; the average is 1.7%. There are 6 active NICs on the SAN, and I have many more hosts. A lot of the SAN traffic is actually overnight replication and backups.
Olevo (Asker):
Hmm, it seems I went overboard with my Hyper-V network design…
HP network team #1 is for VM traffic, and teams #2 and #3 are for SAN connection fault tolerance. And I probably don’t need to waste one NIC just for heartbeat traffic.
Note: to simplify the view, only one of the cluster nodes is shown in the picture.

H-V-Pic.jpg
SOLUTION (text available to Experts Exchange members only).
I would recommend taking a look at this as well.
Olevo (Asker):
Here is something I forgot to ask. The two-node Hyper-V cluster was automatically set up (through the cluster wizard) with a Node and Disk Majority quorum; it has one CSV and one witness disk. Now I need to add a third node to the existing Hyper-V cluster. Will it keep the same quorum mode, or do I have to change it?

Somewhere on the internet I read that Node and Disk Majority quorum is well suited to a 2-, 4-, 6-, 8-, or 16-node cluster, and that for odd numbers of cluster nodes Microsoft recommends Node Majority quorum.
If you change the number of nodes, the quorum method will not automatically change. You will need to initiate the change.
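For what it's worth, the change can be made in Failover Cluster Manager (Configure Cluster Quorum Settings) or scripted through the FailoverClusters PowerShell module that ships with the clustering feature. A hedged sketch, with HVCLUSTER as a placeholder cluster name:

```python
import subprocess

# Sketch: show the current quorum model, then switch a 2008 R2 cluster to
# Node Majority after the third node is added. "HVCLUSTER" is a placeholder.
ps = ("Import-Module FailoverClusters; "
      "Get-ClusterQuorum -Cluster HVCLUSTER; "
      "Set-ClusterQuorum -Cluster HVCLUSTER -NodeMajority")
subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)
```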
Olevo (Asker):
Here is my current setup for one of the Hyper-V cluster nodes to the iSCSI SAN. Right now I can unplug up to three of the network cables (cables 1, 2, 3 and 4) and still have all my VMs up and running. In addition to the HP network teaming, I can also implement MPIO if it’s needed. My question here: should I go ahead and set up MPIO? I guess that MPIO would give me additional fault tolerance in case one of the SAN controllers dies or three of the four connections (between the switches and the SAN controllers) fail. Am I right?
Oops, the IP address for HP team #3 is 192.168.0.2.
MPIO-and-HP-NetTeam.jpg
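If MPIO does go in on top, a quick way to confirm the redundancy before pulling any cables is to count iSCSI sessions and MPIO paths per LUN. A sketch using the built-in tools (illustrative only):

```python
import subprocess

# Sketch: sanity-check redundant iSCSI paths on a node.
# iscsicli ships with the Microsoft iSCSI initiator; SessionList shows
# every active session (one per path when MPIO is in play).
subprocess.run(["iscsicli", "SessionList"], check=True)

# With devices claimed by MPIO, list MPIO disks and their path counts:
subprocess.run(["mpclaim", "-s", "-d"], check=True)
```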
SOLUTION (text available to Experts Exchange members only).
Olevo (Asker):
Sorry, I was very busy and didn’t have time to go through the points assignment. Will do it shortly.