Posts tagged Network throughput

NIC Teaming, Hyper-V switch, QoS and actual performance | part 4 – Traffic classes

January 14, 2013 8 Comments Written by Marc van Eijk

This blog series consists of four parts

  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 1 – Theory
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 2 – Preparing the lab
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 3 – Performance
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 4 – Traffic classes

With the insights from the test results, it is possible to look at multiple scenarios for the live migration and virtual machine traffic classes.

Live migration

Live migration moves virtual machines from one host to another without noticeable downtime. This can be a live migration within a cluster or a “shared nothing” live migration. Live migration uses one TCP stream for control messages (low throughput) and one TCP stream for the transfer of virtual machine memory and state (high throughput). When a live migration also includes the VHD, SMB is used for that part, and SMB itself will use one or more TCP streams depending on your SMB multichannel settings.
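A quick way to verify those SMB multichannel settings is the SMB PowerShell module in Windows Server 2012. A minimal sketch (run on the host that initiates the copy):

# Is SMB Multichannel enabled on this host?
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# Which TCP connections is SMB currently using? (run this during a transfer)
Get-SmbMultichannelConnection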

Scenario 1: Server with two quad port 1Gb NICs

If you have invested in new 1Gb hardware before Windows Server 2012 was available, upgrading your NICs to 10Gb hardware is not a requirement. The NIC Teaming functionality allows for teaming up to 32 physical NICs. It is possible to reuse the dedicated 1Gb NICs you used for your Windows Server 2008 R2 or your (obsolete!!) VMware environment and create a single team.

The disadvantage with VMQ and LBFO based on Address Hash is that all the settings for the individual physical NICs in the team must be identical, whereas NIC Teaming based on HyperVPorts allows for overlapping processor settings.

I have tested with additional live migration networks with the same metric in Switch Independent / HyperVPorts mode. Each live migration network gets its own port on the Hyper-V switch, allowing the individual live migration networks to be distributed amongst the team members on a round-robin basis.

I created a single NIC team with eight 1Gb team members in Switch Independent / HyperVPorts mode. After configuring a Hyper-V switch on top of this NIC team, I created six live migration networks with the same metric.

[Image: Get-VMMigrationNetwork output]

I also adjusted the maximum number of simultaneous live migrations to ten on each cluster node. Running a live migration of ten virtual machines (ten high throughput TCP streams) resulted in only one team member being utilized.

[Image: MultipleLiveMigrationsSingleStream]

Live migration will use only one available network for moving virtual machine memory and state, even if other live migration networks are configured with the same metric.
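For reference, a minimal sketch of how such a configuration can be built with the Hyper-V PowerShell module (the subnets are placeholders; add one entry per live migration network, and on a cluster you may prefer to manage this through the cluster’s live migration settings instead):

# Register the live migration networks on this host (repeat for all six subnets)
Add-VMMigrationNetwork -Subnet 172.16.1.0/24
Add-VMMigrationNetwork -Subnet 172.16.2.0/24
Get-VMMigrationNetwork

# Allow up to ten simultaneous live migrations on this host
Set-VMHost -MaximumVirtualMachineMigrations 10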

With two quad port NICs it is possible to create a different configuration that provides more live migration bandwidth without losing all VMQ overlapping. Create two NIC teams: one team with four 1Gb team members in Switch Independent / HyperVPorts and one team with four 1Gb team members in LACP / Address Hash (you might even configure two team members per quad port NIC in each team for added redundancy).

[Image: LiveMigrationSeparateNICteam]

The Switch Independent / HyperVPorts NIC team is configured with a Hyper-V switch for the converged fabric. The LACP / Address Hash NIC team is dedicated to live migration. Since there is no Hyper-V switch on top of this NIC team, RSS is used for load balancing the individual streams.
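A rough sketch of this two-team layout in PowerShell (NIC names are placeholders; in the Windows Server 2012 cmdlets the default Address Hash mode corresponds to the TransportPorts load balancing algorithm):

# Converged fabric team: Switch Independent / HyperVPorts, carries the Hyper-V switch
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Dedicated live migration team: LACP / Address Hash, no Hyper-V switch so RSS stays available
New-NetLbfoTeam -Name "LiveMigrationTeam" -TeamMembers "NIC5","NIC6","NIC7","NIC8" -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts

# Hyper-V switch on top of the converged team only
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $false -MinimumBandwidthMode Weight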


NIC Teaming, Hyper-V switch, QoS and actual performance | part 3 – Performance

January 11, 2013 7 Comments Written by Marc van Eijk

This blog series consists of four parts

  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 1 – Theory
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 2 – Preparing the lab
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 3 – Performance
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 4 – Traffic classes

Performance

Part 1 of this blog series explained the theory of NIC Teaming, the Hyper-V switch and QoS. Theory is essential, but we don’t run Hyper-V clusters in theory; we run them in production. Windows Server 2012 NIC Teaming and converged fabric allow for more bandwidth. Live migration and virtual machines are two traffic classes where more bandwidth can be useful. The following tests look at the possible configurations to get the most bandwidth out of your Hyper-V environment for these traffic classes.

NIC Teaming to NIC Teaming

Now that we have the tools configured we can run our first test. It is a good idea to do this one step at a time, so the differences in configuration show exactly how they influence the results.

The first step is to create a NIC team on each server and connect the servers directly to each other. I have used the quad port 1Gb NIC on each server to create the NIC team.

[Image: NIC Teaming to NIC Teaming]

Each NIC team is configured in LACP / Address Hash. Running IPerf with a single stream results in a bandwidth of 113 MBytes per second.

[Image: 4x1GB stream]

As stated before, the NIC team will force a single TCP stream over a single team member, so this is the expected result. Opening Performance Monitor during the test verifies this.

[Image: 4x1GB stream Perfmon]
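If you prefer the console over the Performance Monitor GUI, the same per-NIC counters can be sampled with Get-Counter. A small sketch (the instance wildcard picks up all adapters):

# Sample per-adapter send throughput every two seconds while the test runs
Get-Counter -Counter "\Network Interface(*)\Bytes Sent/sec" -SampleInterval 2 -MaxSamples 15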

Adding more streams will balance the sessions over the team members. After adding one TCP stream per test, all four team members were active at ten parallel TCP streams.
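For completeness, the IPerf command line equivalent of this test looks roughly like this (the IP address is a placeholder; -P sets the number of parallel TCP streams):

# Receiver (listens on TCP 5001 by default)
.\iperf.exe -s

# Sender: ten parallel TCP streams for 60 seconds, reported in MBytes/sec
.\iperf.exe -c 192.168.100.2 -P 10 -t 60 -f M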


NIC Teaming, Hyper-V switch, QoS and actual performance | part 2 – Preparing the lab

January 9, 2013 3 Comments Written by Marc van Eijk

This blog series consists of four parts

  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 1 – Theory
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 2 – Preparing the lab
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 3 – Performance
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 4 – Traffic classes

Preparing the lab

The test lab consists of two servers (HP ProLiant DL360 G5, nothing fancy, but it gives a good picture of the processor demand). Each server contains a dual port 10Gb NIC and a quad port 1Gb NIC. The NICs have RSS and VMQ support. The quad port 1Gb NICs in the servers are directly connected to each other. This gives the best picture, since a switch configuration might interfere with the results.

Performance is influenced by a lot of factors. For example, copying a large file between the servers is not very representative. Windows Server 2012 supports SMB multichannel, whereby multiple TCP streams are used for a single file copy. This requires physical NICs with RSS support. SMB multichannel will work with NIC teaming, since RSS is exposed through the team on the default interface. The Hyper-V switch, however, does not support RSS and does not expose it to upper level protocols, so SMB multichannel will not function for the vNICs. A file copy initiated from a vNIC is a single TCP stream, and NIC Teaming is designed to assign a single TCP stream to a single team member. When the file is written to the destination, disk I/O can also impact the performance.
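A quick way to see which interfaces SMB considers usable, and whether they are reported as RSS capable, is a sketch like the following (assuming the Windows Server 2012 SMB cmdlets):

# RSS/RDMA capability and speed per interface, as seen by the SMB client and server
Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface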

Luckily there are some good tools available for measuring bandwidth. During the tests JPerf will display detailed information on the bandwidth, Performance Monitor shows the load distribution and Task Manager gives insight into the processor load and distribution.

NTttcp, IPerf and JPerf

In my initial tests I used NTttcp, which was rewritten by Microsoft in 2008. Microsoft uses an updated version of NTttcp that enables additional parameters, but this updated version is not publicly available. Therefore I resorted to IPerf. IPerf is a commonly used network testing tool that can create multiple TCP streams and measure the bandwidth of a network connection. IPerf can run as a server or as a client. The server listens on port 5001 and one or more clients can send a single TCP stream or multiple TCP streams to the server. IPerf was originally created for Linux, but there are compiled versions for Windows publicly available. I have used a graphical front end for IPerf called JPerf. JPerf gives some nice graphs but requires Java, so I wouldn’t recommend installing it on your production servers. If you want to run the same test in your production environment you can use the compiled version of IPerf (which will leave no footprint on the server) or create two virtual machines and install JPerf inside them.

Installation

If you want to use the command line version of IPerf (no footprint), copy the content of the compiled IPerf version to your server. For JPerf you will need to install Java first. JPerf does not require a separate IPerf file; you can just copy the content of JPerf to your server. Before you can run JPerf you will need to add the path to javaw.exe to the Path variable.

In the System Properties of your server, open the Advanced tab and select Environment Variables. Search for the Path variable and (if you installed Java in the default folder) add ;C:\Program Files (x86)\Java\jre7\bin to the end of the Path variable.
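If you prefer to script this instead of clicking through System Properties, the same change can be made from an elevated PowerShell prompt. A sketch, assuming the default Java install path (new console sessions pick up the change):

# Append the Java runtime folder to the machine-wide Path variable
$javaPath = 'C:\Program Files (x86)\Java\jre7\bin'
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
[Environment]::SetEnvironmentVariable('Path', "$machinePath;$javaPath", 'Machine')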

[Images: Path1 and Path2 (Environment Variables dialog)]

Now you can open JPerf by running jperf.bat located in the root of the folder you copied.

To configure JPerf as receiver, select Server as IPerf Mode and click Run IPerf. IPerf listens on port 5001 by default; this port should be allowed in the firewall. With a Num Connections value of 0, IPerf will keep listening on port 5001 after a successful run.

[Image: JPerf server]
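Opening the firewall port can be scripted as well. A sketch using the Windows Server 2012 NetSecurity cmdlets (the rule name is arbitrary):

# Allow inbound TCP 5001 for the IPerf/JPerf listener
New-NetFirewallRule -DisplayName "IPerf TCP 5001" -Direction Inbound -Protocol TCP -LocalPort 5001 -Action Allow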

To configure JPerf as sender, select Client as IPerf Mode and enter the IP address of the server where JPerf is in listening mode in the Server address field. During the tests I concluded that JPerf will only function on interfaces with a default gateway configured.


NIC Teaming, Hyper-V switch, QoS and actual performance | part 1 – Theory

January 8, 2013 27 Comments Written by Marc van Eijk

This blog series consists of four parts

  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 1 – Theory
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 2 – Preparing the lab
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 3 – Performance
  • NIC Teaming, Hyper-V switch, QoS and actual performance | part 4 – Traffic classes

One of the basics of every Hyper-V configuration is networking. Setting aside the missing flexibility, the choices for a Hyper-V cluster design in Windows Server 2008 R2 were clear: a dedicated network interface for each type of traffic (management, cluster, live migration). With this configuration in production, NICs were underutilized most of the time, and when you needed the bandwidth it was capped at the maximum of a single interface. In the (rare) case of a NIC dying on you there was no failover, because Windows Server 2008 R2 had no NIC Teaming support of its own. For load balancing and failover the only option was resorting to the NIC Teaming software provided by the hardware vendor.

From experience I can say that a lot of customers had trouble designing the networking of their Windows Server 2008 R2 cluster correctly. Problems with 3rd party NIC Teaming, live migration over VM networks, not enough physical adapters, you name it; we’ve seen the most “creative” configurations.

Most customers are stuck in the Windows 2008 R2 thinking pattern. This is understandable as Microsoft strongly recommended that each network in a Windows 2008 R2 Hyper-V cluster had its own dedicated physical NIC in each host.

In Windows Server 2012, NIC Teaming is delivered by Microsoft out of the box. The official term is NIC Teaming, but it is also referred to as Load Balancing and Failover (LBFO). NIC Teaming is an integral part of the Windows Server 2012 operating system. With NIC Teaming you can team multiple NICs into a single interface. You can mix NICs from different vendors, as long as they are physical Ethernet adapters and meet the Windows Logo requirements. The NICs must operate at the same speed; teaming NICs operating at different speeds is not supported. But this flexibility comes with complexity and many choices.

[Image: Complete configuration]

With Hyper-V in Windows Server 2012 it is even possible to create a Hyper-V switch on top of a NIC team. The Hyper-V switch is a full-fledged software-based layer 2 switch with features like QoS, port ACLs, 3rd party extensions, resource metering and so on. You can create virtual network adapters and attach them to the Hyper-V switch. These developments provide us with the proper tools to create converged fabrics.
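As an illustration of what such a converged fabric looks like in PowerShell, a minimal sketch (the team, switch and vNIC names as well as the bandwidth weights are placeholders):

# Hyper-V switch on top of an existing NIC team, with weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" -AllowManagementOS $false -MinimumBandwidthMode Weight

# Virtual adapters for the host, attached to the switch
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

# Relative bandwidth guarantees per traffic class
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40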

What to expect

Usually the first thing tested after the initial configuration is copying a large file between two hosts. With a Hyper-V switch configured on a NIC team composed of two 10Gb adapters, you might expect the file to copy at (2 x 10 Gbit / 8 =) 2.5 GBytes per second. When you copy the file you find that the actual throughput is a lot lower (about 400 MB/s to 800 MB/s).

The first reaction: it doesn’t work!!

Let me clarify. It’s a little more complicated than just combining two 10Gb NICs and expecting a 2.5 GB/s file copy. It is possible to get these bandwidth results, but you need to understand that there are a lot of factors that influence the actual throughput.

Before we dive into testing, we first have to look at the choices provided by Windows Server 2012 and how the inner workings of these choices influence the actual bandwidth.

Theory

Transmission Control Protocol

TCP is one of the main protocols in the TCP/IP suite.

Transmission Control Protocol (TCP) is a transport protocol (layer 4) that provides reliable, ordered delivery of a stream of octets, including the mechanisms to recover from missing or out-of-order packets. Reordering packets has a great impact on the throughput of the connection. Microsoft’s NIC Teaming (or any other serious NIC Teaming solution) will therefore try to keep all packets associated with a single TCP stream on the same NIC to minimize out-of-order packets.

Hardware

There are some NIC hardware functionalities you should be aware of.

Receive side scaling

Receive side scaling (RSS) enables the efficient distribution of network receive processing across multiple processors.

It is possible to specify which processors are used for handling RSS requests. You can check whether your current NIC hardware has RSS support by running the following PowerShell cmdlet:

Get-SmbServerNetworkInterface
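Get-SmbServerNetworkInterface only shows whether SMB sees an interface as RSS capable; the per-adapter RSS details and processor assignment live in the NetAdapter cmdlets. A sketch (the adapter name and processor numbers are placeholders):

# RSS details for one physical NIC
Get-NetAdapterRss -Name "NIC1"

# Pin RSS for this NIC to a specific processor range
Set-NetAdapterRss -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 4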

Virtual machine queue

Virtual machine queue (VMQ) creates a dedicated queue on the physical network adapter for each virtual network adapter that requests a queue. Packets that match a queue are placed in that queue. Other packets, along with all multicast and broadcast packets, are placed in the default queue for additional processing in the Hyper-V switch. You should enable VMQ for every virtual machine (and it is enabled by default). The new WS2012 feature, D-VMQ, will automatically assign the queues to the right VMs as needed based on their current activity.

Note: Hyper-threaded CPUs on the same core processor share the same execution engine. RSS and VMQ will ignore hyper-threading.
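VMQ support and queue allocation can be checked, and the processor range per adapter adjusted, with the matching NetAdapter cmdlets. A sketch (names and values are placeholders):

# VMQ capability and current queue usage
Get-NetAdapterVmq
Get-NetAdapterVmqQueue

# Restrict the processors this NIC may use for VMQ
Set-NetAdapterVmq -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 4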

Receive Side Coalescing

Receive Side Coalescing (RSC) improves the scalability of the servers by reducing the overhead for processing a large amount of network I/O traffic by offloading some of the work to network adapters.
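RSC can be inspected and toggled per adapter in the same way. A sketch (the adapter name is a placeholder):

# Check RSC state and enable it for one NIC
Get-NetAdapterRsc
Enable-NetAdapterRsc -Name "NIC1"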

For advanced configuration of these NIC hardware features Microsoft has released a great document on performance tuning guidelines for Windows Server 2012.


Improving network performance for Hyper-V R2 virtual machines on HP blade servers

December 22, 2010 11 Comments Written by Hans Vredevoort

How can we dramatically improve the network communication between two Hyper-V R2 virtual machines in HP blade servers?

Many of our customers have started using HP BladeSystem c7000 enclosures with ProLiant G6 or G7 blade servers. For the network interconnect they use HP Virtual Connect Flex-10 or FlexFabric blade modules which are ideally connected to the outside world via 10Gb Ethernet network switches. In a less ideal world multiple 1Gb connections can be combined to form a fatter trunk to redundant 1Gb Ethernet core switches.

So much for the cabling! As soon as we dive into the blade enclosure, all network communication stays within the confines of the blade enclosure with its multi-terabit signal backplane.

So how on earth can two virtual machines on two different physical Hyper-V R2 blade servers within this same enclosure communicate at a speed of only 20 to 30MB per second? And conversely, how can we get them back to a much more acceptable speed? If you want to find out, I invite you to read on.

Let me first explain how the different components work together.

In the following diagram we see an HP c7000 Blade Enclosure with several blade servers, two Virtual Connect Flex-10 or FlexFabric blade modules which are each connected to a core Ethernet switch.

A Hyper-V R2 server cannot have enough network adapters. With the latest generation of blade servers we don’t need to fill up the enclosure with a switch pair for each network module or mezzanine. The dual-port 10Gb Ethernet onboard modules can be split into 2 x 4 FlexNICs. Speeds can be dynamically assigned in 100Mb increments from 100Mb to 10Gb.

[Diagram: HP c7000 Blade Enclosure with several blade servers and two Virtual Connect Flex-10/FlexFabric modules, each connected to a core Ethernet switch]

So the Parent Partition on the Hyper-V R2 server sees 8 network adapters at the speed set in the Virtual Connect server profile. We set up at least three NIC teams for management, virtual machines and cluster communication (heartbeat, live migration, cluster shared volumes). The onboard network adapters in the blade server are from Broadcom and the teaming software used is HP Network Configuration Utility (NCU).

Until last week we used the NCU 10.10.0.x teaming software which allowed us to use native VLANs (see previous blogs). What the older versions have in common is the ultra low speed of VM to VM communication with this particular combination of hardware.

Because we wanted to find out, we set up a test environment with the configuration described above. The Hyper-V servers ran Hyper-V Server 2008 R2 with SP1 (release candidate).

The network performance tests were conducted with NTttcp. This is a multi-threaded, asynchronous application that sends and receives data between two or more endpoints and reports the network performance for the duration of the transfer.

We set up two Windows Server 2008 R2 virtual machines with 1 vCPU, 1GB of memory, 1 Virtual Machine Bus Network Adapter (with IC 6.1.7601.17105) and two VHDs: one dynamic VHD for the OS and one fixed-size VHD for the test. No changes were made to the physical or virtual network adapters in terms of TCP and other hardware offloads; we simply kept the defaults.

Test 1: VM to VM on same Hyper-V R2 host in same blade enclosure

Broadcom driver on host: 5.2.22.0
Teaming software on host: NCU 10.10.0.x
Teaming type: Network Fault Tolerance with Preference Order
Network speed: 2 x 4Gb (effectively 4Gb)
Total MB copied: 1342
Throughput in Mbps: 3436
Result: Excellent
Method: Average of 10 tests

Test 2: VM to VM on two different Hyper-V R2 hosts in same blade enclosure

Broadcom driver on host: 5.2.22.0
Teaming software on host: NCU 10.10.0.x
Teaming type: Network Fault Tolerance with Preference Order
Network speed: 2 x 4Gb (effectively 4Gb)
Total MB copied: 1342
Throughput in Mbps: 319
Result: Awful
Method: Average of 10 tests

Via my HP contacts we learnt that HP had been working to improve network performance specifically for HP BladeSystem, Virtual Connect and the FlexNIC network adapters. It turned out that the slow speeds occurred only on LAN on Motherboard (LOM) B, C and D, so LOM 1:a and LOM 2:a appeared to perform well. If you don’t split your networks into multiple FlexNICs you wouldn’t have noticed any degradation. However, in a Hyper-V cluster environment you need many more networks.

In this iSCSI connected server we use three teams:

  1. LOM 1:a + 2:a Management_Team (Domain access; managing Hyper-V hosts)
  2. LOM 1:b + 2:b VM_Team (for Virtual Machine communication)
  3. LOM 1:c + 2:c Cluster_Team (Live Migration, Cluster Shared Volumes)

The remaining two ports are used to create an MPIO bundle for the connection to the iSCSI network.


Because the VM_Team is on LOM B, we suffered very low performance when two VMs lived on different Hyper-V hosts.

To see if the newly built drivers and teaming software (released on December 19th) improved the situation, we updated the Broadcom drivers and the NIC teaming software. The same tests were executed to see the difference.

Test 1: VM to VM on same Hyper-V R2 host in same blade enclosure

Broadcom driver on host: 6.0.60.0
Teaming software on host: NCU 10.20.0.x
Teaming type: Network Fault Tolerance with Preference Order
Network speed: 2 x 4Gb (effectively 4Gb)
Total MB copied: 1342
Throughput in Mbps: 4139 (+21.7%)
Result: Excellent
Method: Average of 10 tests

Test 2: VM to VM on two different Hyper-V R2 hosts in same blade enclosure

Broadcom driver on host: 6.0.60.0
Teaming software on host: NCU 10.20.0.x
Teaming type: Network Fault Tolerance with Preference Order
Network speed: 2 x 4Gb (effectively 4Gb)
Total MB copied: 1342
Throughput in Mbps: 1363 (+426%)
Result: Good
Method: Average of 10 tests

Although we haven’t tested and compared all LOMs, we feel quite confident that network bandwidth is now distributed more evenly across the different FlexNICs.


Improving network throughput between Hyper-V R2 virtual machines

November 26, 2009 4 Comments Written by Hans Vredevoort

My colleague Norbert Westland asked me if I could explain the noticeable difference in network throughput between two Windows Server 2008 virtual machines on the same physical server in a Hyper-V R2 cluster compared to an identical file copy between the same VM and a physical host or from the physical host to the VM.


Our configuration:

  • HP c7000 enclosure with a few BL460c G6 blades
  • HP Virtual Connect Flex-10
  • HP Brocade Blade SAN Switches
  • HP EVA4400

Storage configuration
Five 500GB Cluster Shared Volumes, spread across 10 Fibre Channel disks configured with VRAID5 (the EVA virtual disks are spread across both array controllers)

Network configuration
Thanks to Flex-10 we could split up the dual 10Gb ports on the HP blade server into four flexNICs with appropriate network speeds.

Management Team: 2 Gb/s
Live Migration Team: 2 Gb/s
VM Network Team: 14 Gb/s
VM Network DMZ Team: 2 Gb/s


VM to VM tests
When copying a number of files between the two VMs we saw a speed of 30MB/sec. In this case the data copy only traversed the virtual network adapter of VM1, across the VMBus, directly to the virtual network adapter of VM2. No physical network was touched at all.

A similar test was performed between VM1 and VM2 on different nodes in the cluster. In this case the path was extended from the VMBus in VM1 to the physical VM Network Team of the first cluster node, to the physical VM Network Team of the second cluster node, and back to the VMBus and the virtual network adapter of VM2. The result was almost identical. So far so good.

VM to Physical (and vice versa)
When copying the same amount of data from either VM1 or VM2 to the physical host, the speed consistently increased to 100MB/sec. This sounded like bad news. What could explain this enormous difference in throughput?

Things examined:

  • Did the VM have an emulated or a synthetic network adapter installed? In our case the Hyper-V integration components were installed and the much faster hypervisor-aware network adapter was configured.
  • Were there any hidden network adapters still left behind from an earlier configuration? Using set devmgr_show_nonpresent_devices=1 did not reveal any hidden network adapters in Device Manager (see the sketch after this list).
  • Were network optimizations turned on for the virtual network adapter? Optimizations were turned on.
  • Were supported network drivers and teaming software installed? All network software was up to date.
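For reference, here is roughly how the hidden-devices check can be done from an elevated PowerShell prompt (the environment variable is inherited by Device Manager when it is started from the same session):

# Show non-present devices in Device Manager
$env:devmgr_show_nonpresent_devices = 1
devmgmt.msc
# then enable View > Show hidden devices and look for greyed-out network adapters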

After some searching we found that several other people had experienced disappointing network speeds. On many occasions there were references to disabling TCP Offload. The HP blade server contains an embedded NC532i Flex-10 10GbE Multifunction Network Adapter, which supports TCP Offload and Large Send Offload.
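The post does not show the exact commands used; the per-adapter Large Send Offload and TCP Offload settings live in the network adapter’s advanced properties, but a related knob on Windows Server 2008 R2 is the global TCP Chimney Offload setting, which can be checked and disabled from an elevated prompt. A sketch (whether it helps depends on the adapter and driver):

netsh int tcp show global
netsh int tcp set global chimney=disabled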

Results with TCP Offload disabled

The VM to Physical and Physical to VM copy test remained at 100MB/sec, so that switch did not do much for this test.

However, the VM to VM test jumped to 100MB/sec and higher after the change. The two VMs could be on one host or on separate hosts in the cluster; the speed remained the same.

In the end the difference in network throughput disappeared and physical and virtual were fully on par again.

Because I was aware that virtualization MVP Aidan Finn (@joe_elway on Twitter) was also running almost the same kind of hardware, I asked Aidan if he could post his results as well.

The next day I saw this great post:
http://www.aidanfinn.com/index.php/2009/11/w2008-r2-hyper-v-network-speed-comparisons/

Microsoft has published a whitepaper on network optimization in Windows Server 2008 R2 which discusses all new networking features:

Whitepaper (HSN_Deployment_Guide.doc): http://download.microsoft.com/download/8/E/D/8EDE21BC-0E3B-4E14-AAEA-9E2B03917A09/HSN_Deployment_Guide.doc

 

