Configuring networking in Hyper-V is not as smooth as we would like. All the options you need are there, but you have to know where to configure each one and how. In this blog I will describe some tips and tricks and show you how to configure the various network components. This blog is based on an HP Hyper-V blade server which is a member of a Hyper-V cluster. The eight NICs are presented through HP Virtual Connect. The total bandwidth I received was 20 Gb/s, which I needed to share among the eight NICs.
Using VLANs in Hyper-V virtual machines has always been a bit messy with HP's teaming software, aka the HP Network Configuration Utility (NCU) for Windows Server 2008 R2. Until now, quite a few steps had to be taken:
- Prepare a trunk/channel between the Hyper-V host and the core network switch(es) and add all possible VLANs
- Create a NIC Team with HP NCU
- Create VLANs on the NIC Team
- In Network Connections, additional NICs appear, each representing one VLAN.
- Create a Hyper-V virtual network connected to the external network adapter representing a specific VLAN and clear the tick in front of Allow management operating system to share this network adapter, because we don't want to see even more NICs under Network Connections.
- Add the VLAN ID to the virtual network adapter in the virtual machine, start the machine and assign the virtual network adapter an IP address that belongs to that VLAN (a sketch of this last step follows below).
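To illustrate that last step, here is a minimal sketch of assigning a static IP address inside the VM once its virtual network adapter carries the VLAN ID. The connection name and all addresses are placeholders for illustration, not values from this setup:

    rem Run inside the virtual machine; adjust name and addresses to your VLAN.
    netsh interface ipv4 set address name="Local Area Connection" ^
        source=static address=10.0.10.21 mask=255.255.255.0 gateway=10.0.10.1

    rem Optionally point the same interface at a DNS server in that VLAN.
    netsh interface ipv4 set dnsservers name="Local Area Connection" ^
        source=static address=10.0.10.10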
A NIC Team with VLANs is recognized by a V in front of the team name.
Recently HP upgraded NCU to version 10.10.0.0 (9 Sep 2010). Apart from additional support for a new Converged Network Adapter (CNA), one of the notable improvements is that it now formally supports VLANs created in Hyper-V Virtual Machines.
The help file that comes with the new NCU version states that a new VLAN Promiscuous property allows a team to pass VLAN-tagged packets between virtual machines and external networks, but only when no VLAN has been created on that team in the host operating system. If a team is assigned to a virtual machine, the NCU disables the VLAN button to prevent VLANs from being created on the team in the host operating system. This property is available only on Windows Server 2008 x64/R2 and only when the Hyper-V role is installed.
VLAN Promiscuous is disabled by default.
The VLAN Promiscuous property and the VLAN button on the NCU GUI are mutually exclusive: if one is selected or configured, the other is hidden or disabled.
If Hyper-V is installed and VLANs are created on the team in the host operating system, the NCU either hides the VLAN Promiscuous property or disables it.
If we interpret this correctly, a NIC Team is now transparent to VLAN tags. That allows us to use more than 64 VLANs when Virtual Connect is switched into tunneling mode. The only things left to do are to create a team from multiple network ports in a ProLiant server, use the teamed NIC as the external adapter for a Hyper-V virtual network, and add a VLAN tag to the virtual network adapter in a virtual machine.
As soon as I have been able to test this setup, I will add an update to this blog. After all, the proof of the pudding is in the eating!
My colleague Norbert Westland asked me if I could explain a noticeable difference in network throughput: a file copy between two Windows Server 2008 virtual machines on the same physical server in a Hyper-V R2 cluster was much slower than an identical file copy between one of those VMs and a physical host, in either direction.
HP c7000 enclosure with a few BL460c G6 blades
HP Virtual Connect Flex-10
HP Brocade Blade SAN Switches
Five 500 GB Cluster Shared Volumes, spread across 10 Fibre Channel disks configured with VRAID5 (the EVA virtual disks are spread across both array controllers)
Thanks to Flex-10 we could split the dual 10Gb ports on the HP blade server into four FlexNICs each, with appropriate network speeds:
Management Team: 2 Gb/s
Live Migration Team: 2 Gb/s
VM Network Team: 14 Gb/s
VM Network DMZ Team: 2 Gb/s
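As a sanity check on the numbers: the four team speeds add up to 2 + 2 + 14 + 2 = 20 Gb/s, exactly the combined bandwidth of the dual 10Gb ports. Assuming each team pairs one FlexNIC from each port, every 10Gb port carries 1 + 1 + 7 + 1 = 10 Gb/s.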
VM to VM tests
When copying a number of files between the two VMs we saw a speed of 30 MB/s. In this case the data copy only traversed the virtual network adapter of VM1, across the VMBus, directly to the virtual network adapter of VM2. No physical network was touched at all.
A similar test was performed with VM1 and VM2 on different nodes in the cluster. In this case the path was extended from the VMBus of VM1 to the physical VM Network Team of the first cluster node, across to the physical VM Network Team of the second cluster node, and back through the VMBus to the virtual network adapter of VM2. The result was almost identical. So far so good.
VM to Physical (and vice versa)
When copying the same amount of data from either VM1 or VM2 to the physical host, the speed consistently increased to 100 MB/s. This sounded like bad news. What could explain this enormous difference in throughput?
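For perspective: 30 MB/s is roughly 0.24 Gb/s and 100 MB/s roughly 0.8 Gb/s, so even the faster copy used only a small fraction of the 14 Gb/s VM Network Team. The bottleneck had to be somewhere other than raw link capacity.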
Did the VM have an emulated or a synthetic network adapter installed? In our case the Hyper-V Integration Services were installed and the much faster, hypervisor-aware synthetic network adapter was configured.
Were there any hidden network adapters still left behind from an earlier configuration? Using set devmgr_show_nonpresent_devices=1 did not reveal any hidden network adapters in Device Manager (see the sketch after this checklist).
Were network optimizations turned on for the virtual network adapter? Optimizations were turned on.
Were supported network drivers and teaming software installed? All network software was up-to-date.
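For reference, this is the documented way to surface hidden (non-present) devices in Device Manager; the environment variable must be set in the same session that launches the console:

    rem Show non-present (hidden) devices in Device Manager.
    set devmgr_show_nonpresent_devices=1
    start devmgmt.msc
    rem In Device Manager, enable View > Show hidden devices and inspect
    rem the Network adapters branch for grayed-out leftover adapters.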
After some searching we found that several other people had experienced disappointing network speeds. On many occasions there were references to disabling TCP Offload. The HP blade server contains an embedded NC532i Flex-10 10GbE Multifunction network adapter which supports TCP Offload and Large Send Offload.
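For reference, on Windows Server 2008 R2 the global TCP Chimney Offload state can be inspected and changed with netsh, while per-adapter offloads such as Large Send Offload live in the NIC's Advanced properties or in the teaming software. A minimal sketch of the generic OS-side knob (not necessarily the exact switch changed through the NCU):

    rem Inspect the current global TCP settings, including Chimney Offload.
    netsh int tcp show global

    rem Disable TCP Chimney Offload globally (revert with chimney=enabled).
    netsh int tcp set global chimney=disabled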
Results with TCP Offload disabled
The VM to Physical and Physical to VM copy tests remained at 100 MB/s, so that switch did not change much for this scenario.
However, the VM to VM test jumped to 100 MB/s and higher after the change, whether the two VMs were on one host or on separate hosts in the cluster; the speed remained the same.
In the end the difference in network throughput disappeared and physical and virtual were fully on par again.
Because I was aware that virtualization MVP Aidan Finn (@joe_elway on Twitter) was also running almost the same kind of hardware, I asked Aidan if he could post his results as well.
The next day I saw this great post:
Microsoft has published a whitepaper on network optimization in Windows Server 2008 R2 which discusses all new networking features:
Whitepaper in HTML: