kt3z (Canada) asked:
Is it possible to use only 2 subnets including iSCSI to run Windows 2016 cluster?

Hi, we're about to create a Windows 2016 cluster.  It's a small cluster with 2 members.  We usually use separate subnets for live migration, cluster, management, and iSCSI.

But live migration will not be used very often.  Each server has 4x 10G Intel NICs.  Of course, 2 NICs will be used for iSCSI, so I'll keep a dedicated iSCSI network for the cluster.  But I'd prefer to keep only one subnet (one VLAN) for everything else.

All ports are connected to 2x Force10 switches.  NIC teaming will be used.  LACP is already configured in access mode on both switches (the Force10s are configured with VLT, or Virtual Link Trunking).  The iSCSI network has its own VLAN, as does the LAN.
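For reference, the Windows-side team that matches a switch-side LACP configuration can be created like this; a minimal sketch, where the team and adapter names are placeholders to be replaced with the output of `Get-NetAdapter`:

```powershell
# Placeholder names -- substitute your actual adapter names.
# TeamingMode Lacp must match the switch-side LACP port-channel;
# Dynamic load balancing is the usual choice on Server 2016.
New-NetLbfoTeam -Name "LAN-Team" `
    -TeamMembers "NIC1","NIC2" `
    -TeamingMode Lacp `
    -LoadBalancingAlgorithm Dynamic
```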

I want to keep things simple.  

I guess this is not the recommended setup, but I'd like to have your advice.

Dan McFadden (United States) replied:

You are building a Hyper-Converged Failover Cluster...

General knowledge with Failover Clustering: you need 2 dedicated networks:

1.  Client Access (needs AD connected networking)
2.  Cluster communications (can be a direct crossover link, configured with either IPv4 (IP address and subnet mask only; no gateway, no DNS, no NetBIOS, no client protocols) or IPv6)

If you do not have 2 separate networks, the cluster validation test will finish with warnings that the cluster does not meet the recommended best practices.  So the answer is: you could build the cluster with only 1 network, but it is not a recommended best practice.
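To illustrate, here is a minimal sketch of configuring a cluster-only interface and running validation; interface names, node names, and addresses are placeholders for your environment:

```powershell
# Cluster-only interface: static IP and subnet mask, nothing else
# (no gateway, no DNS registration).
New-NetIPAddress -InterfaceAlias "ClusterComm" -IPAddress 192.168.100.1 -PrefixLength 24
Set-DnsClient -InterfaceAlias "ClusterComm" -RegisterThisConnection $false

# Run validation before creating the cluster; a single network
# produces a warning, not a failure.
Test-Cluster -Node NODE1,NODE2

# After the cluster exists, restrict each network's role:
# 1 = cluster communication only, 3 = cluster + client access.
(Get-ClusterNetwork "Cluster Network 1").Role = 1
```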

Also, Microsoft teaming, when created through Server Manager, does not support RDMA, so you will lose some functionality that may affect performance.
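On Server 2016 the way around this limitation is Switch Embedded Teaming (SET), which preserves RDMA, unlike an LBFO team created through Server Manager.  A sketch, with placeholder adapter names:

```powershell
# Check whether the adapters are RDMA-capable at all.
Get-NetAdapterRdma

# SET teams the adapters inside the vSwitch itself and keeps RDMA.
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC3","NIC4" -EnableEmbeddedTeaming $true
```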

Reference link:

I am building a similar setup, 2 node hyper-converged Failover Cluster with Storage Spaces Direct (no shared storage) on Server 2016.

I have 4 networks.

1.  Client Access (can see AD & the rest of the infrastructure) [teamed 10G ports, IPv4 fully configured, IPv6 enabled local-link only]
2.  Cluster Comm [direct 1G crossover, IPv4 address and subnet mask only]
3.  Storage 01 [10G port, connected to a VMSwitch with RDMA enabled for teaming, IPv6 local-link no IPv4]
4.  Storage 02 [10G port, connected to a VMSwitch with RDMA enabled for teaming, IPv6 local-link no IPv4]

The storage networks will support live migration and the storage bus (SMB bus for Storage Spaces Direct).
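Steering live migration onto the storage network can be sketched as follows; the subnet shown is a placeholder for whatever the storage network actually uses:

```powershell
# Allow live migration and restrict it to the storage subnet
# (placeholder subnet -- substitute your own).
Enable-VMMigration
Add-VMMigrationNetwork 10.0.1.0/24

# Use SMB as the transport so RDMA-capable NICs are used if present.
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```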

Reference links:
1. Microsoft S2D:
2. 2 Node example:

kt3z replied:

Sorry, I should have told you, but hyperconverging is not an option for us.  It's less expensive to have a Compellent SC4020 (10x 1.9TB SSD + 14x 1.8TB SATA) with tiering.  Windows 2016 is way too expensive.  Besides, it takes at least 3, and more likely 4, servers to hyperconverge, and Windows 2016 Datacenter is required, which is priced per core.  We only need 2 servers.  It's less expensive to buy an SC4020 with 2 servers.

I'll use 2 networks: the LAN (which will also be the cluster network) and iSCSI.  That's it.  With 20G we don't really need to create a VLAN for the cluster, another for live migration, and so on.
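For completeness, connecting the nodes to an iSCSI array with multipathing can be sketched like this; the portal address is a placeholder for the array's iSCSI port, and the MPIO feature must already be installed:

```powershell
# Placeholder portal address -- substitute the array's iSCSI IP.
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.10
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true

# Let the Microsoft DSM automatically claim iSCSI devices for MPIO.
Enable-MSDSMAutomaticClaim -BusType iSCSI
```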

I think S2D will gain in popularity someday if the pricing goes down, but it's too expensive and too early for us.

Thanks again