Upgrading Hyper-V Server 2012 to Hyper-V Server 2012 R2

System Requirements:

  • Windows Hyper-V Server 2012
  • Windows Hyper-V Server 2012 R2

The Problem:

I recently had to make the decision to implement a Windows Hyper-V Server 2012 (R1) cluster for a client in the knowledge that Hyper-V Server 2012 R2 was only a few weeks away. We therefore agreed that in order to extend the longevity of the cluster (read: how long it can sit collecting dust in a corner before someone has to pay a consultant to upgrade it to Hyper-V Server 2022), we would migrate the VM content off of 2012 R1 and onto 2012 R2 once it hit RTM.

… I say “we agreed”… I agreed.

The online documentation on doing this isn't particularly mature at the time of writing; the best guides that I could find covered 2008 R2 to 2012 R1.

More Info

This document is not designed to be exhaustive, but to offer a few tips that I’ve encountered in undertaking this project.

How-to

The first delight of the task was to upgrade the test cluster, proving the migration process before hitting the production cluster.

The cluster in question used shared iSCSI storage on a 2012 R1 Datacenter install. Each cluster member had 2 dual-port gigabit NICs over two buses, serving the following networks:

  • Public Client Network for the “main LAN” virtual switch
  • Heartbeat / Cluster Private Management LAN
  • Live Migration
  • iSCSI Shared Storage pathway

The cluster hosted 3 identical members; all BIOS and firmware were updated to the latest and greatest before attempting to migrate from R1 to R2.

Importantly, the entire process was done over Remote Desktop / Windows Remote Management over a weekend. There was no physical contact with the systems.

Should you be doing this?

This guide is specifically about upgrading a cluster node running 2012 R1 to 2012 R2. Having gone through the process, there is ultimately not a great time saving in upgrading the cluster vs. performing a clean install – and a clean install is obviously tidier.

As I did not have physical machine access at the time, upgrading was the only option for me. As it turned out, given that:

  • The Hyper-V 2012 R2 node refused to co-exist in the existing Hyper-V 2012 cluster so additional shared storage had to be provisioned for the migration
  • The network was fully deconfigured
  • The upgrade process may break other components or configurations, requiring a fair amount of time to troubleshoot and fix

…you may want to do a clean install instead.

Whatever you decide to do, I strongly recommend that you get a Windows 8.1 machine with the Remote Server Administration Tools (RSAT) 8.1 installed, to ensure that you are using the R2 versions of the tools and not the original Windows 8.0 versions.

Preparations

Before you start, ensure that you have deleted any and all snapshots/checkpoints made against the VMs involved, on both the cluster and the Hyper-V nodes, and ensure that the disks have re-synchronised before you start this process. If you miss one, you will likely find that the VM will not boot and you will have to force it back onto the out-dated VHD/VHDX source, losing the contents of the differencing disk.

Remember: Checkpoints are not supported by Microsoft in production environments.
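
If in doubt, you can sweep for stragglers from PowerShell; a minimal sketch, assuming the Hyper-V module is loaded on each node (names are placeholders):

# List any remaining checkpoints across all VMs on this host
Get-VM | Get-VMSnapshot | Format-Table VMName, Name, CreationTime

# Delete a checkpoint; Hyper-V then merges the differencing disk back into its parent
Get-VM -Name "<vmName>" | Remove-VMSnapshot -Name "<checkpointName>"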

Can a 2012 R2 node exist in a 2012 (R1) cluster?

Just to re-iterate the point:

No.

I attempted to join both the first and second Hyper-V Server 2012 R2 hosts that I converted to the original Hyper-V 2012 R1 cluster using Windows 8.1 RSAT. In both cases the cluster refused to accept the nodes.

Node Eviction

You have to start somewhere, so from the available nodes pick the target host that you will upgrade first. Don't necessarily make this the slowest machine in the pool (if there is such a thing), because it will be hit quite considerably on the other side of the migration, especially if you are in a hurry to get everything on-line again.

De-configure the node

  • Drain stop any services (if you don't have live migration capabilities)
  • Live Migrate any VMs from the cluster node to be upgraded onto peer nodes as appropriate, if you do have live migration support

Get-Cluster "<clusterName>" | Get-ClusterNode "<sourceHost>" | Get-ClusterGroup | Move-ClusterVirtualMachineRole -Node "<destinationHost>"

  • Ensure that the node is NOT the current cluster owner – use Move Core Cluster Resources in Failover Cluster Manager (FCM) or check / move the owner using:

Get-ClusterGroup
Move-ClusterGroup -Name "Cluster Group" -Node "<newOwner>"

Check Hyper-V manager as well as FCM before continuing, just in case there is a non-clustered, local VM attached to the server.
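
A quick way to spot such machines from PowerShell (a sketch assuming the Hyper-V module; the host name is a placeholder):

# List any VMs on the host that are not clustered resources
Get-VM -ComputerName "<hostName>" | Where-Object { -not $_.IsClustered } | Format-Table Name, State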

Evict the node that is to be upgraded from the current cluster using the Nodes section in FCM, or:

Remove-ClusterNode -Name "<hostName>" -Cluster "<clusterName>" -Force

On the evicted host, perform a clean-up of the cluster configuration database

Clear-ClusterNode -Name "<hostName>" -Cluster "<clusterName>" -Force
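
To confirm that the eviction took, you can list the remaining members of the old cluster from the management workstation:

Get-ClusterNode -Cluster "<clusterName>"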

Before continuing, make a note of the LAN adapter configurations that are in use on the host. They will be deconfigured during the upgrade process!
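
One way to do this is to dump the configuration somewhere off-box before running setup; a minimal sketch, assuming a reachable file share (paths are placeholders):

# Record the current IP and DNS client configuration for later reference
Get-NetIPConfiguration | Out-File "\\<server>\<share>\<hostName>-network.txt"
Get-DnsClientServerAddress | Out-File -Append "\\<server>\<share>\<hostName>-network.txt"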

Mount the Hyper-V Server 2012 R2 ISO

If you have physical access, you can skip this part. If you want to do it from the ISO, download the ISO to an SMB share and copy it to the local disk

net use \\<server>\<share>
copy \\<server>\<share>\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO c:\

Mount the ISO using PowerShell

Mount-DiskImage -ImagePath "c:\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO"
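
If you want to confirm which drive letter the image received before leaving PowerShell, you can pipe it into Get-Volume:

Get-DiskImage -ImagePath "c:\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO" | Get-Volume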

Exit PowerShell, find the mount point for the ISO (D: in this example) and run setup

Exit
d:
setup.exe

Follow the instructions on the screen and once it kicks off with the file copy, take a walk for 30 minutes.

Repair & Reconfiguration

Once the installation has completed, run Windows Update – and if necessary remember to configure your WSUS servers to download updates for 2012 R2 – before you get to the point that you have to reboot.
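
Remember that Hyper-V Server has no Windows Update control panel; on the Server Core console the usual route is the sconfig menu:

sconfig
:: Choose option 6 (Download and Install Updates), then (A)ll updates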

During the OS upgrade process you will have maintained most of your settings; however, all virtual switch configurations will have been dropped. The host will be running against its physical network cards in whatever IP state they were left in when they were converted over to the virtualisation stack.

In my case, the first server's NICs contained the correct IPv4 configuration, however the IPv6 configuration was way off on the main LAN connection. Unfortunately, FCM was attempting to communicate with the host using IPv6 and so could not raise the node at all – stating that it was off-line.

To fix that, repair the NICs' IPv4/IPv6 addresses as required (add static addresses and remove DHCP ones), then clear out and re-register the server's (polluted) Windows DNS records. After waiting a few minutes for propagation between domain controllers, flush the ARP and Neighbour Discovery caches on the management workstation.

On the server:
# Reconfigure the physical adapter with addressing data (this will be copied to the virtual Hyper-V adapter)
New-NetIPAddress -InterfaceAlias "<adapterName>" -AddressFamily IPv6 -IPAddress <address> -PrefixLength 64
Set-NetIPInterface -InterfaceAlias "<adapterName>" -AddressFamily IPv6 -Dhcp Disabled

Remove-NetIPAddress -IPAddress <dhcpAllocatedAddressToRemove>

New-NetIPAddress -InterfaceAlias "<adapterName>" -AddressFamily IPv4 -IPAddress <address> -PrefixLength 24

ipconfig /registerdns

On the management workstation:

ipconfig /flushdns
arp -d
netsh interface ipv6 delete neighbors

Once converted, the second host had no static IP address configuration against the physical adapters, and so fell back to the DHCP server, where it was found lurking in a client address pool – so don't panic: your server should be there somewhere, as long as DHCP wasn't completely disabled on its adapters. Perhaps there is a lesson here about remembering to add sticky address reservations against your servers' MAC addresses.

Interestingly, on the first host that I converted (but none of the others), Hyper-V itself was also misbehaving. FCM could not connect to it, so it could not run any tests to validate the node. Hyper-V Manager similarly refused to connect to the remote instance, stating that the service was unavailable. I went through troubleshooting of the firewall, services and other more likely areas to no avail. When all was said and done, removing and re-adding the Hyper-V feature fixed the problem.

Remove-WindowsFeature -Name "Hyper-V" -Restart
Install-WindowsFeature -Name "Hyper-V" -Restart

Unfortunately this is fairly time-consuming, and the system will reboot several times during the removal: it has to disable Virtualisation Mode on the processor, disable the operating system's hypervisor layer and deconfigure itself, and then fully reverse the process after the Install command has been issued.

If you get stuck at this point and you still cannot connect to the host from the management workstation, try going to a completely different machine that is not connected to the management network, pinging the host, then doing an nslookup against it. Now repeat the same on the management workstation and compare. I wasted a large amount of time because Windows 8.1 refused to use the node's static IPv6 address held by the IPv4 DNS servers, and instead constantly attempted to connect via an unregistered IPv6 address.
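
The comparison itself is just plain console commands, run from each machine in turn (the host name is a placeholder):

ping -4 <hostName>
ping -6 <hostName>
nslookup <hostName>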

Restore the virtual network switches

If you have properly configured the physical adapters, Windows will copy their settings into the new virtual adapters. You can do this from Hyper-V Manager or from PowerShell:

Get-NetAdapter # Get the names of the network adapters from here

Get-VMSwitch # Check for existing v-switches

New-VMSwitch -Name "<name>" -NetAdapter "<adapterName>" -AllowManagementOS <$true | $false> -Notes "<upToYou>"

Get-VMSwitch # View Results

At this point, your Hyper-V 2012 R2 server should be pingable against the virtual network switches defined above, and through them you should be able to manage the host from Hyper-V Manager and FCM.

Create a 2012 R2 Failover Cluster

Now that one working host is available, use FCM to configure the node to be the first host in a single node failover cluster. This has to be a new Cluster and depending on your objectives it may require its own shared storage allocation in your SAN.

Ensure that sufficient networks, shared storage, quorum configuration and resources (hostnames, IP addresses etc.) are available to the new cluster to allow it to start.
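
A minimal sketch of the same from PowerShell, assuming the FailoverClusters module on the management workstation (all names and addresses are placeholders):

# Validate the candidate node, then create the new single-node cluster
Test-Cluster -Node "<firstNodeHostname>"
New-Cluster -Name "<newClusterName>" -Node "<firstNodeHostname>" -StaticAddress "<clusterIPAddress>"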

From here until the end of the process, the strategy is to drain the old cluster into the new one, optionally moving the contents of the storage LUN as well. I needed to do this in order to increase the size of the LUN.

You can only attempt to retain the same storage pool if you are using Cluster Shared Volumes; in that case you can simply remove the VM from the old cluster (homing it on a particular node), then move the VM without its storage over to the new cluster.
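
A hedged sketch of that lighter path, assuming the VM's files sit on a CSV path that is visible and identical on both sides (all names are placeholders):

# Remove the VM from the old cluster, leaving it running on its current node
Get-ClusterGroup -Name "<vmName>" -Cluster "<oldClusterName>" | Remove-ClusterGroup -RemoveResources -Force

# Move the VM (state only, storage left in place), then cluster it on the new side
Move-VM -Name "<vmName>" -DestinationHost "<entryNodeHostname>"
Add-ClusterVirtualMachineRole -VirtualMachine "<vmName>" -Cluster "<newClusterName>"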

If you are using a separate LUN for storage in the new cluster, you will need to move the VM with its VHD.

# 1. Migrate the VM to your EXIT node on the old cluster
Move-ClusterVirtualMachineRole -Name "<vmName>" -Cluster "<oldClusterName>" -Node "<exitNodeHostname>" -MigrationType Live

# 2. Remove the VM from the cluster, homing it on the exit node
Get-ClusterGroup -Name "<vmName>" -Cluster "<oldClusterName>" | Remove-ClusterGroup -RemoveResources -Force

# 3. Migrate the VM from the old cluster to the new cluster
#    Note: You will have to do this from the exit server unless you have Constrained Delegation set up
#    on your servers and management workstation
Move-VM -Name "<vmName>" -DestinationHost "<entryNodeHostname>" -IncludeStorage -DestinationStoragePath "C:\ClusterStorage\Volume1\<vmFolderName>"

# 4. Introduce the VM into the new cluster
Add-ClusterVirtualMachineRole -VirtualMachine "<vmName>" -Cluster "<newClusterName>"

# 5. If necessary, boot it
Start-VM -Name "<vmName>"

Repeat

Once you have migrated the first VM to the first host in the new cluster, the rest of the migration becomes repetitive.

  1. Drain a host in the old cluster onto a node in the new cluster
  2. Eject the node from the old cluster
  3. Cleanup and upgrade to 2012 R2
  4. Reconfigure the network
  5. Reconfigure the virtual switches
  6. Introduce the node into the new cluster

It is up to you whether you think performing an in-place upgrade is worth it. Having migrated the entire cluster used to create this short guide, I can safely say that had it not been for my need to do it over a weekend across Remote Desktop (a practice I wouldn't recommend, in case something goes wrong), it would have been just as fast to have performed a clean install of each box – in fact the OS install time would have been faster.

Cleaning up

The upgrade process itself is messy, and there is some disk space to reclaim if you want it.

:: Delete the DVD ISO
del /q "c:\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO"

:: Delete the uninstall cache (Windows.old) for Hyper-V Server 2012 R1
:: See: http://gallery.technet.microsoft.com/scriptcenter/How-to-Delete-the-912d772b

:: Delete the backed up Windows Update log from the R1 install
del /q "c:\windows\WindowsUpdate (1).log"

:: Defrag the hypervisor
defrag.exe /E /O

Finally, in the new cluster, do not forget to reset host affinity, ownership and failback, as well as any other configuration settings that you require.
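
A hedged example of reviewing and resetting these from PowerShell (names are placeholders; AutoFailbackType 1 allows failback):

# Review current ownership and failback settings for each role
Get-ClusterGroup -Cluster "<newClusterName>" | Format-Table Name, OwnerNode, AutoFailbackType

# Set preferred owners and allow failback for a VM role
Set-ClusterOwnerNode -Group "<vmName>" -Cluster "<newClusterName>" -Owners "<hostA>","<hostB>"
(Get-ClusterGroup -Name "<vmName>" -Cluster "<newClusterName>").AutoFailbackType = 1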

See Also

View: How to Delete the “Windows.old” Folder in Windows 8 (PowerShell)