This blog series consists of four parts:
- NIC Teaming, Hyper-V switch, QoS and actual performance | part 1 – Theory
- NIC Teaming, Hyper-V switch, QoS and actual performance | part 2 – Preparing the lab
- NIC Teaming, Hyper-V switch, QoS and actual performance | part 3 – Performance
- NIC Teaming, Hyper-V switch, QoS and actual performance | part 4 – Traffic classes
With the insights gained from the test results, it is possible to look at multiple scenarios for the live migration and virtual machine traffic classes.
Live migration moves virtual machines from one host to another without noticeable downtime. This can be live migration within a cluster or moving virtual machines with “shared nothing” live migration. Live migration uses one TCP stream for control messages (low throughput) and one TCP stream for the transfer of virtual machine memory and state (high throughput). When live migration includes migrating the VHD, SMB is used for that part. SMB itself uses one or multiple TCP streams, depending on your SMB multichannel settings.
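To see whether SMB multichannel will come into play on your hosts, the built-in SMB cmdlets are a quick check. A sketch (the output depends on your NICs and their RSS/RDMA capabilities):

```powershell
# Is SMB multichannel enabled on this client?
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# Which interfaces SMB considers usable, including link speed and RSS/RDMA capability
Get-SmbClientNetworkInterface

# Active multichannel connections while a transfer (e.g. a shared nothing live migration) is running
Get-SmbMultichannelConnection
```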
Scenario 1 : Server with two quad port 1Gb NICs
If you invested in new 1Gb hardware before Windows Server 2012 was available, upgrading your NICs to 10Gb hardware is not a requirement. The NIC Teaming functionality allows for teaming up to 32 physical NICs, so it is possible to reuse the dedicated 1Gb NICs from your Windows Server 2008 R2 or your (obsolete!!) VMware environment and create a single team.
The disadvantage of VMQ with LBFO based on Address Hash is that all settings for the individual physical NICs in the team must be identical, whereas NIC Teaming based on HyperVPorts allows for overlapping processor settings.
I have tested with additional live migration networks with the same metric in Switch Independent / HyperVPorts mode. Each live migration network gets its own port on the Hyper-V switch, allowing the individual live migration networks to be distributed amongst the team members on a round-robin basis.
I created a single NIC team with eight 1Gb team members in Switch Independent / HyperVPorts mode. After configuring a Hyper-V switch on top of this NIC team, I created six live migration networks with the same metric.
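Recreating that setup with PowerShell could look like the sketch below. The NIC, team, and switch names are made up for the example; check Get-NetAdapter for the real member names on your host:

```powershell
# Single team of eight 1Gb members in Switch Independent / Hyper-V Port mode
New-NetLbfoTeam -Name "Team1" `
    -TeamMembers "NIC1","NIC2","NIC3","NIC4","NIC5","NIC6","NIC7","NIC8" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Hyper-V switch on top of the team
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" -AllowManagementOS $false

# Six management OS vNICs, one per live migration network
1..6 | ForEach-Object {
    Add-VMNetworkAdapter -ManagementOS -Name "LM$_" -SwitchName "ConvergedSwitch"
}
```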
I also adjusted the maximum number of simultaneous live migrations to ten on each cluster node. Live migrating ten virtual machines (ten high throughput TCP streams) resulted in only one team member being utilized.
Live migration will use only one of the available networks for moving virtual machine memory and state, even if other live migration networks are configured with the same metric.
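The simultaneous live migration limit used in this test is a per-host Hyper-V setting; as a sketch, on each cluster node:

```powershell
# Raise the limit from the default of 2 to 10 simultaneous live migrations
Set-VMHost -MaximumVirtualMachineMigrations 10
```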
With two quad port NICs it is possible to create a different configuration that provides more live migration bandwidth without losing all VMQ overlapping. Create two NIC teams: one team with four 1Gb team members in Switch Independent / HyperVPorts, and one team with four 1Gb team members in LACP / Address Hash (you might even put two team members from each quad port NIC in each team for added redundancy).
The Switch Independent / HyperVPorts NIC team is configured with a Hyper-V switch for the converged fabric. The LACP / Address Hash NIC team is dedicated to live migration. Since there is no Hyper-V switch on top of this NIC team, RSS is used for load balancing the individual streams.
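A sketch of this two-team layout, again with made-up adapter names (note that the LACP team also requires a matching port channel on the physical switch):

```powershell
# Team for the converged fabric: Switch Independent / Hyper-V Port
New-NetLbfoTeam -Name "TeamFabric" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "TeamFabric" -AllowManagementOS $false

# Dedicated live migration team: LACP / Address Hash (TransportPorts), no Hyper-V switch
New-NetLbfoTeam -Name "TeamLM" -TeamMembers "NIC5","NIC6","NIC7","NIC8" `
    -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts
```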