Hyper-V Core 2016: Building A Workgroup Cluster – Part 2, Hardware

Windows Server 2016 features workgroup cluster support. In Part 2, I’ll give an overview of the lab on which I built my cluster.

Hyper-V Core 2016: Workgroup Cluster Series

Hyper-V Core 2016: Building A Workgroup Cluster – Part 1

Hyper-V Core 2016: Building A Workgroup Cluster – Part 2, Hardware

Hyper-V Core 2016: Building A Workgroup Cluster – Part 3, Security Setup

Hyper-V Core 2016: Building A Workgroup Cluster – Part 4, Cluster Setup


The Lab

I’m building a fully isolated lab environment in which I can run Active Directory services, Windows Deployment Services, DHCP and DNS, and any number of other things that might cause problems if they escaped into my general home network.

To that end, I wanted the ability to segment the lab’s network traffic between multiple nodes and a storage appliance that needs to serve both the lab and my general home network.


Networking

I’ll state up-front that I have little clue what I’m doing here – I’m not a network engineer. I have, however, worked in a highly segmented environment designed by some great network engineers, so I at least have friends to bounce ideas off, and a high-level idea of what I’m trying to do.

  • Cisco Layer 3 managed Gigabit switch
    • One uplink to the rest of my network
    • Two bonded uplinks to a dual-NIC NAS device, which sits in both a 192.168.1.x network and a 10.0.1.x network
    • Some VLANs:
      • VLAN 1, the native VLAN – connects up to the full home network
      • VLANs 2-5, serving VM traffic, iSCSI, cluster communication, and live migration

Storage

Nothing special here. I have a NAS with dual NICs, sitting across VLAN 1 (which pumps it back to my home network) and the iSCSI VLAN, via 802.1Q trunking and LACP link aggregation.

Two LUNs have been created, one for the cluster quorum disk and one for VM storage, each mapped via iSCSI to both of my physical hosts. Later, the VM storage LUN becomes a Cluster Shared Volume inside Windows.
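
Since the same wiring has to happen on both hosts, here’s a rough PowerShell sketch of the host-side iSCSI connection. The portal address 10.0.1.50 is a made-up stand-in for the NAS’s IP on the iSCSI VLAN:

    # Ensure the iSCSI initiator service runs, and starts automatically
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # Point the initiator at the NAS and connect persistently,
    # so both LUNs come back after a reboot
    New-IscsiTargetPortal -TargetPortalAddress '10.0.1.50'
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

    # Confirm the quorum and VM storage LUNs arrived as disks
    Get-Disk | Where-Object BusType -Eq 'iSCSI'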


Compute

Power, heat, and noise are among the most important considerations for my home lab, due to the placement of equipment within my home, and I put a lot of effort into keeping all three down. To that end, I’m using off-the-shelf commodity desktop computers as hosts. They’re small-form-factor machines, yet still support 32GB of RAM, and therefore make pretty good virtualization hosts (overlooking their lack of redundancy, which is fine for home use and is why we’re clustering anyway).

I did make one interesting modification – each host has received a PCI-Express x1 Intel I340 quad-port NIC, which gives me a whole mess of NICs to play with for teaming.
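
I’ll do the real network configuration later in the series, but as a taste, teaming those four ports on Server 2016 might look like the sketch below. The team and adapter names are placeholders – check what yours enumerate as with Get-NetAdapter:

    # Bundle the four I340 ports into one LBFO team; 'Ethernet 2' through
    # 'Ethernet 5' are assumed names -- verify with Get-NetAdapter first
    New-NetLbfoTeam -Name 'LabTeam' `
        -TeamMembers 'Ethernet 2','Ethernet 3','Ethernet 4','Ethernet 5' `
        -TeamingMode SwitchIndependent `
        -LoadBalancingAlgorithm Dynamic

If the switch ports were set up as an LACP channel instead, -TeamingMode Lacp would be the matching choice.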

Separately, I’m running one management PC – a Windows 10 desktop with the Hyper-V management tools and the Remote Server Administration Tools (RSAT) installed.
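
For what it’s worth, those tools can be added from an elevated PowerShell prompt. The Hyper-V pieces ship in-box with Windows 10; Failover Cluster Manager comes from RSAT, which is a separate download on older Windows 10 builds and an on-demand capability on newer ones:

    # Hyper-V Manager plus the Hyper-V PowerShell module (in-box features)
    Enable-WindowsOptionalFeature -Online -FeatureName 'Microsoft-Hyper-V-Tools-All'

    # Failover Cluster Manager via RSAT; this capability form only exists on
    # newer Windows 10 builds -- older ones need the RSAT installer instead
    Add-WindowsCapability -Online -Name 'Rsat.FailoverCluster.Management.Tools~~~~0.0.1.0'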


Hooking Up

The hosts’ integrated NICs are treated as management ports. They go to my switch and sit in VLAN 1, which pipes them back to the home network, where my management PC sits.

The remaining four ports on each host’s NIC get an 802.1Q trunk across the remaining VLANs, with their default VLAN (for any untagged traffic that may come out) set to a garbage VLAN 99, so it gets discarded (I’m not sure whether this is sound network practice, but it seems to be working).
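
On the Windows side of that trunk, one common Server 2016 pattern is converged networking: bind a Hyper-V switch to the team, then carve tagged host vNICs out of it for the host-owned traffic classes. Here’s a sketch, with VLAN IDs assumed to match the list above – the real configuration comes in later parts:

    # Hyper-V switch on the team; management stays on the integrated NIC,
    # so the switch doesn't need a management adapter of its own
    New-VMSwitch -Name 'LabSwitch' -NetAdapterName 'LabTeam' -AllowManagementOS $false

    # Host-owned vNICs for storage, cluster, and live migration traffic
    Add-VMNetworkAdapter -ManagementOS -Name 'iSCSI' -SwitchName 'LabSwitch'
    Add-VMNetworkAdapter -ManagementOS -Name 'Cluster' -SwitchName 'LabSwitch'
    Add-VMNetworkAdapter -ManagementOS -Name 'LiveMigration' -SwitchName 'LabSwitch'

    # Tag each vNIC with its VLAN -- these IDs are assumptions, not gospel
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'iSCSI' -Access -VlanId 3
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'Cluster' -Access -VlanId 4
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'LiveMigration' -Access -VlanId 5

VM traffic on its own VLAN then gets tagged per-VM with the same Set-VMNetworkAdapterVlan cmdlet, minus the -ManagementOS switch.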


What’s Next

In Part 3, I’ll lay out the provisioning of the Hyper-V servers and the pre-requisite security adjustments needed to get them cluster-able.



 
