RatherBeinTahoe asked:

How should I set up my Hyper-V networking stack?

Hi team,

If I have two Hyper-V servers, each with (4) Ethernet ports, running Server 2012 R2 (GUI), and (2) 24-port stackable Netgear switches, what would you recommend I do to make everything fast and stable? Should I set things up with some concern for one of the switches going offline?

(1) Firewall Router
(2) Netgear 24 Port Switches
(2) Hyper-V Hosts Server 2012 R2 with (4) NICs each.

How should I set up my virtual switches with regard to NIC teaming?

Thanks for your help!
Philip Elder replied:

I have an EE article here: Some Hyper-V Hardware and Software Best Practices. A lot is explained there.

In a setting where there are four ports we'd team two for production and two for exclusive use of the vSwitch (no host OS sharing).
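In PowerShell terms, that pattern looks roughly like the sketch below; the adapter names (NIC3/NIC4), the team and switch names, and the SwitchIndependent/Dynamic settings are placeholders to adjust for your own hardware:

# Team two of the ports for exclusive vSwitch use (the other two get teamed
# the same way for production/management traffic).
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind the vSwitch to that team and do not share it with the host OS.
New-VMSwitch -Name "vSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false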

There is no need for the two switches given there are eight ports combined between the two servers.

That's about it.

EDIT: Broadcom Gigabit ports require VMQ to be turned off for each physical port!
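If you would rather script that than click through each adapter's Advanced properties, something along these lines should do it (the "*Broadcom*" match is just an example; check the output of Get-NetAdapterVmq first):

# Show the current VMQ state of every adapter.
Get-NetAdapterVmq

# Turn VMQ off on each Broadcom gigabit port.
Get-NetAdapter | Where-Object { $_.InterfaceDescription -like "*Broadcom*" } |
    ForEach-Object { Disable-NetAdapterVmq -Name $_.Name }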
RatherBeinTahoe (Asker):

Thanks, Phil,

When we first virtualized our old servers some years ago, the Broadcom VMQ issue caused all sorts of problems until I found references to it. I'm assuming the known issues are still out there, and we do have Broadcom NICs on the new hosts.

We're setting up our first off-site co-location and want to follow best practices, since we have some leeway in acquiring new hardware at this point. For ease of use and comfort level, I'm continuing our practice of having (2) hosts with local storage, each capable of running all of the servers (each normally carrying less than half of total production). We'll back up locally to a NAS and then experiment with replication (host to host locally). I'm still not sure what our best option is for an off-site storage location (somewhere other than the co-location site) or what technique to use - whether to leverage Veeam or some other tool and copy the backups off-site, or to run separate jobs directly from the hosts.

Are you implying that there is no benefit to running half the NICs to one switch and half to the other? I know we're not using a standard high-availability model in this case, and our restore window is a few hours, so we can handle some downtime. I've seen diagrams that split the Ethernet patch cables across different sets of on-host jacks and distribute them to different switches, and I know many of those arrangements have to do with storage HA scenarios, but I didn't know if we could take advantage of some of that even though we're using local storage.

Do you have any input on the above? Thanks for the article too - good read.
Veeam is one option for sure and it works really well.

In 2012 R2 Hyper-V Replica can be set up with three tiers.

Host 1 --> Host 2 --> Host 3 (tertiary).

Host 3 would be in your DC. No VPN would be required between 2 and 3.
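A rough PowerShell sketch of that chain follows; the host names, VM name, and storage path are placeholders, and across the WAN link to Host 3 you would more likely use certificate-based authentication over HTTPS (port 443) rather than Kerberos:

# On Host 2 (and Host 3): allow the host to receive replication traffic.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Hyper-V\Replica"
# Don't forget to enable the built-in Hyper-V Replica listener firewall rule on the replica hosts.

# On Host 1: replicate a VM to Host 2 and kick off the initial copy.
Enable-VMReplication -VMName "SRV-APP1" -ReplicaServerName "HOST2" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "SRV-APP1"

# On Host 2: extend replication of the replica copy out to Host 3 (tertiary).
Enable-VMReplication -VMName "SRV-APP1" -ReplicaServerName "HOST3" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos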

As far as the switch goes, where are the endpoints? If they are all connected to one switch the point is moot. If they are distributed between the two switches then the endpoints connected to the switch that fails are down too.
I don't fully follow the "Endpoint" terminology. We have (1) firewall with dual WANs (primary, fail-over), and from there we move down the stack to the switches and then to the hosts, with a minimum of (2) hosts in that stack. The hosts will run servers that share content with remote locations through a third interface on our firewall, using virtual Ethernet connections to the remote sites.

I was planning on using (1 of 4) Ethernet jacks per host for the management OS. In the past - being a bit green - I would create one vSwitch per physical NIC and assign them to VMs based on expected load/importance. It seems that is not best practice, and that I should instead team multiple Ethernet connections and assign the team to a vSwitch.
One tenet of virtualization is to eliminate as many single points of failure (SPOFs) as possible.

One should never bind a vSwitch to a single NIC port unless one plans on teaming _within_ the VM.
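For completeness, teaming within the VM would look something like this on the host side (the VM, adapter, and switch names are made up); the vNICs are then teamed inside the guest OS:

# Two vSwitches, each bound to a single physical port.
New-VMSwitch -Name "vSwitch-A" -NetAdapterName "NIC3" -AllowManagementOS $false
New-VMSwitch -Name "vSwitch-B" -NetAdapterName "NIC4" -AllowManagementOS $false

# Give the VM one vNIC on each vSwitch and allow guest teaming.
Add-VMNetworkAdapter -VMName "SRV-APP1" -SwitchName "vSwitch-A" -Name "Team-A"
Add-VMNetworkAdapter -VMName "SRV-APP1" -SwitchName "vSwitch-B" -Name "Team-B"
Set-VMNetworkAdapter -VMName "SRV-APP1" -AllowTeaming On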

Endpoint = printer, PC, other servers, and other network-connected devices.
If I'm interpreting your information correctly, would this be a sound configuration?

- Firewall Configured with two LAN interfaces bridged (X0, X2)
- Switch A patched to X0 (local LAN)
- Switch B patched to X2 (local LAN)

- Host Server (C) with NIC1 and NIC2 Teamed - for Management
- Host Server (C) with NIC3 and NIC4 Teamed - vSwitch Assigned
- Host Server (D) with NIC1 and NIC2 Teamed - for Management
- Host Server (D) with NIC3 and NIC4 Teamed - vSwitch Assigned

- Host Server (C) NIC1 patched to Switch A, NIC2 patched to Switch B
- Host Server (C) NIC3 patched to Switch A, NIC4 patched to Switch B
- Host Server (D) NIC1 patched to Switch A, NIC2 patched to Switch B
- Host Server (D) NIC3 patched to Switch A, NIC4 patched to Switch B

The switches would be connected via a stacking connection with spanning tree enabled. Each VM would attach to the single (and only) virtual switch on its host.
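For concreteness, here's roughly what I picture the host side of that looking like in PowerShell (team, switch, and address values are placeholders I've made up):

# Management team spans both switches: NIC1 -> Switch A, NIC2 -> Switch B.
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# The host's management IP lives on the team interface, not on the vSwitch.
New-NetIPAddress -InterfaceAlias "MgmtTeam" -IPAddress 10.0.0.11 -PrefixLength 24 `
    -DefaultGateway 10.0.0.1

# NIC3/NIC4 would be teamed the same way ("VMTeam"), and the single vSwitch
# binds to that team with -AllowManagementOS $false, as in the earlier snippet.

My understanding is that switch-independent teaming works even when each team's members land on two different physical switches; if the Netgear stack supports cross-stack LACP, LACP mode would be the alternative.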

Does this seem sound or is this over-engineering without any real benefit?
ASKER CERTIFIED SOLUTION from Philip Elder (the accepted solution is available to Experts Exchange members only).
Thanks Phil, I think I'm ready to jump in. A lot to think about but on the right track. Thanks again.
I'm definitely going to change up how I've been treating my vSwitch set-ups and look into the best kind of teaming to do at the hardware level. More questions but definitely on the right track. Thanks.