Compacting a Linux Hyper-V, Virtual PC 2004 or 2007 Dynamic VHD/VHDX file

System Requirements:

  • Virtual PC 2004, SP1
  • Virtual PC 2007, SP1
  • Hyper-V

The Problem:

The Windows Integration Components ISO contains a tool for performing a VHD pre-compact. Once this has completed, you can shut down the VM and compact the VHD to reclaim disk space that was previously used by the VM but is now marked as free space in the VHD's FAT/MFT.

More Info

As Linux operating systems do not formally include integration components for Virtual PC, there is no standard Microsoft way to pre-compact the VHD. The fundamental process behind pre-compaction is, however, simple: write 0’s to all sectors of the hard drive that the FAT/MFT claims to be free (available) space.

Once you have done this, the Virtual PC/Hyper-V management UI will flush the contents of the VHD into a new VHD, skipping any 0’d sectors during the migration, thus reducing the size of the VHD as seen by the hypervisor’s host partition.
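On a Hyper-V host running Server 2012 or later, the host-side compact can also be scripted with the Hyper-V PowerShell module rather than driven through the management UI. A minimal sketch, assuming the module is installed, the VM is shut down and the path shown is only a placeholder:

# Compact a dynamic VHD/VHDX from the host once the guest has zeroed its free space
Mount-VHD -Path "D:\VMs\debian.vhdx" -ReadOnly
Optimize-VHD -Path "D:\VMs\debian.vhdx" -Mode Full
Dismount-VHD -Path "D:\VMs\debian.vhdx"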

The Fix

I’ve used two different methods to achieve Linux pre-compaction.

Note: This was written against the steps that need to be performed on Debian. Other distributions may require additional steps to install software.

Open Terminal and execute the following:

su
cd /
cat /dev/zero > zero.dat ; sync ; sleep 1 ; sync ; rm -f zero.dat

Although simpler and not requiring any software installation, the compaction rate achieved using this method was a little hit and miss, often resulting in a large VHD file after the compaction process.

A higher success rate was achieved by installing a small package called secure-delete. To install and use it, issue the following commands from a shell.

su
apt-get install secure-delete
sfill -f -z /

While I have realised better results from this, the process takes far longer because the secure-delete package’s sfill is in fact performing an “insecure” erase of the hard drive, not simply zeroing sectors. This means that it passes over the disk more than once, with the concluding pass being the zero pass; the cumulative effect of the multiple passes is the much longer run time. The last time that I ran this method, a 16 GB Debian 6 VHD was reduced by 1.2 GB. Small change given the size of today’s hard drives, but a significant percentage of the 16 GB disk nonetheless.

Please also note that there are other elements to the secure-delete package, including tools to wipe the SWAP partition (which may further reduce the size of the VHD if used) as well as tools to perform a full, secure disk erase (not just empty sector erasing). So do ensure that you use this package carefully.

Upgrading Windows Server 2012 to 2012 R2: Things to be mindful of

System Requirements:

  • Windows Server 2012
  • Windows Server 2012 R2

The Problem:

This article outlines a few tips to be mindful of when performing an in-place upgrade from Windows Server 2012 to Windows Server 2012 R2.

More Info

Without wishing to be verbose on this one, most of the process is straightforward, but there appears to be a bug / limitation / “feature” of the iSCSI Target component of Server 2012 during the upgrade that will cause you some issues; it isn’t a client-side problem. The notes below cover this along with a few other things to watch for.

Operating System Features

The following core features will not be available after upgrading from 2012 to 2012 R2:

  • Servermanagercmd.exe
  • Slmgr.vbs
  • System Image Backup
  • Windows System Resource Manager

Network

The network profiles for non-domain adapters will drop back to Public after the upgrade, altering the active firewall configuration.

iSCSI / SAN

After the upgrade install, the Windows Firewall will inherit most firewall configuration settings from the previous configuration; however, the port rules for the iSCSI Target service will be in a disabled state, preventing your iSCSI Initiators from connecting to the service.
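A quick way to check and re-enable the affected rules on the storage server is from PowerShell. The wildcard filter below is an assumption, so review the matched rules before enabling anything:

# Review the state of any iSCSI-related firewall rules after the upgrade
Get-NetFirewallRule -DisplayName "*iSCSI*" | Format-Table DisplayName, Enabled, Direction
# Re-enable them once you are happy with the list
Get-NetFirewallRule -DisplayName "*iSCSI*" | Enable-NetFirewallRule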

The upgrade process will fully de-install your NICs. Although in general most of the main configuration settings are retained and re-applied after the upgrade (IP address, netmask etc.), the advanced adapter settings are not. In particular, any Jumbo Frame settings designed to support extended MTUs on your SAN NICs will have been reset to the standard 1500 bytes. This will have a performance hit on SAN access and Hyper-V Live Migration. You should manually re-enable the Jumbo Frame settings (9014 or 9000 bytes), but be aware that the NIC will drop and re-initialise when you hit apply.
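The same change can be made from PowerShell. In the sketch below the adapter name and the 9014 value are examples; valid values vary by driver, so list them first:

# Check the current Jumbo Frame setting on the SAN-facing adapter
Get-NetAdapterAdvancedProperty -Name "SAN NIC 1" -RegistryKeyword "*JumboPacket"
# Re-apply the extended MTU (the adapter will drop and re-initialise)
Set-NetAdapterAdvancedProperty -Name "SAN NIC 1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014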

Update 24/02/2015: It is worth noting that the 2012 (R1) iSCSI target file format uses the legacy VHD format, while new iSCSI targets created under 2012 R2 default to VHDX, which supports larger volumes and better error protection technologies. Perhaps most important is the fact that VHDX is required to support the use of 512e/4K hard drives, i.e. non-‘legacy’ 512n hard drives. Here in 2015, if you buy a drive larger than 1TB, it will most likely be a 512e/4K drive. If you migrate from 2012 R1 to 2012 R2 (or to a future Windows version) onto 4K disks, you will likely see a slump in performance unless you take remedial action during the migration. My recommendation is that while you are taking down your iSCSI services to perform the upgrade, perform a VHDX migration of the iSCSI Target LUN using an offline server and swap the VHD for the VHDX before you go back into production. If you don’t do it now, you will either have to do it sooner or later, or forget completely and suffer data integrity and system performance issues at a later point in time.
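One way to perform that offline VHD to VHDX swap is with the Hyper-V module’s Convert-VHD cmdlet. A sketch, with example paths, run while the iSCSI Target is out of service:

# Convert the iSCSI Target LUN from VHD to VHDX
Convert-VHD -Path "E:\iSCSIVirtualDisks\LUN1.vhd" -DestinationPath "E:\iSCSIVirtualDisks\LUN1.vhdx"
# Optionally present the new VHDX as a native 4K disk
Set-VHD -Path "E:\iSCSIVirtualDisks\LUN1.vhdx" -PhysicalSectorSizeBytes 4096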

SysInternals

A number of the SysInternals apps, for example BGInfo, that work fine under 2012 have small issues under 2012 R2. BGInfo has a recent update that makes it aware of 2012 R2 and IE 11 rather than reporting that the OS is Windows 6.2 running MSIE 9.0.11.

WSUS 3.0 SP2

Ensure that any legacy WSUS 3.0 servers are patched to SP2 with KB2828185 installed. After re-synchronising, changing the product configuration and synchronising a second time, your servers will be able to update from your existing WSUS infrastructure. Be prepared for any 2012 R2 Datacenter servers to report in WSUS as Windows 2000 Datacenter, however!

Removing the Windows.old uninstall cache without installing Desktop Experience

Unfortunately you can no longer copy/paste the two cleanmgr.exe files out of WinSxS as you used to be able to do with 2008 (the store is now compressed). I found that a few loops of the following will eventually remove the Windows.old upgrade cache from the root of the OS drive.

:: This is very slow as it is disk intensive: run out of hours!!
takeown /F C:\windows.old /R /D Y
takeown /F c:\Windows.old\* /R /A /D Y
takeown /F C:\windows.old /R /D Y
takeown /F c:\Windows.old\* /R /A /D Y
cacls C:\windows.old /T /G Administrators:F
rd /s /q C:\windows.old

P.S. Run each line manually; the above is not set up as a batch script and will ask for yes/no input. Several runs may be required.
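If you would rather let those passes loop unattended, a rough PowerShell variant of the same sequence (a sketch only, using takeown/icacls rather than cacls and capped at three attempts) is:

# Run elevated and out of hours; each pass re-takes ownership and retries the delete
for ($i = 0; $i -lt 3 -and (Test-Path 'C:\Windows.old'); $i++) {
    takeown /F C:\Windows.old /R /A /D Y | Out-Null
    icacls C:\Windows.old /grant Administrators:F /T /C | Out-Null
    cmd /c 'rd /s /q C:\Windows.old'
}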

General

Your desktop wallpaper will be reset to the default grey Windows Server logo; re-create it as necessary.

Don’t forget to activate against your KMS or enter your key.

Dell iDRAC 7 is now completely inaccessible from any of the 2012 R2 servers because IE11 is not yet supported. To use it, disable Protected Mode and add the IP address of the iDRAC to the Compatibility View list.

0xefff0003 New-IscsiTargetPortal : Connection Failed on iSCSI Initiator client when connecting to a newly created iSCSI Target hosted on a Windows Server 2012 file server after changing server NIC configuration

System Requirements:

  • Windows Hyper-V Server 2012
  • Windows Hyper-V Server 2012 R2
  • Windows Server 2012

The Problem:

About 3 weeks ago, I completed the physical installation of redundant NICs in a Hyper-V cluster that was backed onto a Windows Server 2012 iSCSI SAN. The additional physical NICs were installed on the clients and communication between nodes worked as expected. The ports on the new NIC were placed into a new private address range of 192.168.100.0/24. Some ports were also removed from an existing multi-port NIC in the 192.168.254.0/24 range.

A couple of weeks later it came time to change the iSCSI SAN targets on the clients to use the new adapters, moving from the 254/24 range to the 100/24 range.

New-IscsiTargetPortal -TargetPortalAddress 192.168.100.1

With the correct firewall and CHAP settings in place, it should have connected. Instead it returned:

New-IscsiTargetPortal : Connection Failed.
At line:1 char:1
+ New-IscsiTargetPortal -TargetPortalAddress 192.168.100.1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (MSFT_iSCSITargetPortal:ROOT/Microsoft/...CSITargetPortal) [New-IscsiTargetPortal], CimException
+ FullyQualifiedErrorId : HRESULT 0xefff0003,New-IscsiTargetPortal

The firewalls were OK, ping was OK. The DNS connection suffix and DNS server (or lack of one) were OK, and NetBIOS over TCP/IP was disabled.

I could remove and reconnect the server using the original address without any problems.

More Info

Without wishing to be verbose on this one, the simple answer is that it appears to be a bug / limitation / “feature” of the iSCSI Target component of Server 2012. It was not a client issue.

The problem was that Windows had not been rebooted since standing up the new multi-port NIC (some 3 weeks prior). Yes, it was rebooted to put the hardware in, but once the NIC heads on the adapter had been configured it had not been rebooted subsequently.

It would appear that Storage Manager in Server 2012 does not force the iSCSI Target driver subsystem to re-parse the available adapter list.

Going into Server Manager > File and Storage Services > (right-click the storage server offering the iSCSI LUN) > iSCSI Target Settings, the list contained a number of network addresses that were REMOVED 3 weeks ago, but none of the NEW IPv4 or IPv6 addresses assigned to the new NIC were available.

Closing and re-opening Storage Manager made no difference.

The Fix

I sighed and, having been forced into unexpected maintenance on the cluster storage back end, shut down the cluster, updated drivers and firmware, cleared out Windows Update and rebooted.

After the reboot, all of the new addresses were available in Storage Manager and the redundant ones had disappeared.

So quite simply, reboot (or fully restart all iSCSI services).
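If a further full reboot is impractical, restarting the iSCSI services from PowerShell may be enough to force the re-parse. The service names below are the Microsoft iSCSI Target Server (WinTarget) and the iSCSI Initiator (MSiSCSI); verify them with Get-Service on your own systems:

# On the Windows Server 2012 machine hosting the iSCSI Target
Restart-Service -Name WinTarget
# On the initiator (client) side, if required
Restart-Service -Name MSiSCSI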

Upgrading Hyper-V Server 2012 to Hyper-V Server 2012 R2

System Requirements:

  • Windows Hyper-V Server 2012
  • Windows Hyper-V Server 2012 R2

The Problem:

I recently had to make the decision to implement a Windows Hyper-V Server 2012 (R1) cluster for a client in the knowledge that Hyper-V Server 2012 R2 was only a few weeks away. We therefore agreed that, in order to extend the longevity of the cluster (read: how long it can sit collecting dust in a corner before having to pay a consultant to upgrade it to Hyper-V Server 2022), we would migrate the VM content off 2012 R1 and onto 2012 R2 once it hit RTM.

… I say “we agreed”… I agreed.

The online documentation on doing this isn’t particularly advanced at the current time; the best guides that I could find were for 2008 R2 to 2012 R1.

More Info

This document is not designed to be exhaustive, but to offer a few tips that I’ve encountered in undertaking this project.

How-to

The first delight of the task was to upgrade the testing cluster to test the migration process before hitting the production cluster.

The cluster in question used shared iSCSI storage on a 2012 R1 Datacenter install. Each cluster member had two dual-port gigabit NICs across two buses, providing:

  • Public Client Network for the “main LAN” virtual switch
  • Heartbeat / Cluster Private Management LAN
  • Live Migration
  • iSCSI Shared Storage pathway

The cluster hosted 3 identical cluster members; all BIOS and firmware were updated to the latest and greatest before attempting to migrate from R1 to R2.

Importantly, the entire process was done over Remote Desktop / Windows Remote Management over a weekend. There was no physical contact with the systems.

Should you be doing this?

This guide is specifically about upgrading a cluster node running 2012 R1 to 2012 R2. Having gone through the process, ultimately there is not a great time saving in upgrading the cluster vs. performing a clean install – and a clean install is obviously tidier.

As I did not have physical machine access at the time, upgrading was the only option. As it turned out, however, given that:

  • The Hyper-V 2012 R2 node refused to co-exist in the existing Hyper-V 2012 cluster so additional shared storage had to be provisioned for the migration
  • The network was fully deconfigured
  • The upgrade process may break other components or configurations, requiring a fair amount of time to troubleshoot and fix

you may well want to do a clean install instead.

Whatever you decide to do, I strongly recommend that you get a Windows 8.1 machine with the Remote Server Administration Tools (RSAT) 8.1 installed to ensure that you are using the R2 versions of the tools and not Windows 8.0’s original version.

Preparations

Before you start, ensure that you have deleted any and all snapshots/checkpoints made against the VMs involved, on both the cluster and the Hyper-V nodes, and ensure that the disks have re-synchronised before you begin. If you miss one, you will likely find that the VM will not boot and you will have to force it back onto the out-dated VHD/VHDX source, losing the contents of the differencing disk.

Remember: Checkpoints are not supported by Microsoft in production environments.

Can a 2012 R2 node exist in a 2012 (R1) cluster?

Just to re-iterate this point again here.

No.

I attempted to connect both the first and second Hyper-V Server 2012 R2 hosts that I converted into the original Hyper-V 2012 R1 cluster using Windows 8.1 RSAT. In both cases the cluster refused to accept the nodes.

Node Eviction

You have to start somewhere, so from the available nodes pick the target host that you will upgrade first. Don’t necessarily make this the slowest machine in the pool (if there is such a thing), because it will be hit quite considerably on the other side of the migration, especially if you are in a hurry to get everything on-line again.

De-configure the node

  • Drain stop any services (if you don’t have live migration capabilities)
  • Live Migrate any VMs from the cluster node that is to be upgraded onto peer nodes, if you do have live migration support:
Get-Cluster "<clusterName>" | Get-ClusterNode "<sourceHost>" | Get-ClusterGroup | Move-ClusterVirtualMachineRole -Node "<destinationHost>"
  • Ensure that the node is NOT the current cluster owner – use Move Core Cluster Resources in Failover Cluster Manager (FCM) or check / move the owner using:
Get-ClusterGroup
Move-ClusterGroup -Name "Cluster Group" -Node <newOwner>

Check Hyper-V manager as well as FCM before continuing, just in case there is a non-clustered, local VM attached to the server.

Evict the node that is to be upgraded from the current cluster using the Nodes section in FCM or

Remove-ClusterNode -Name "<hostName>" -Cluster "<clusterName>" -Force

On the evicted host, perform a clean-up of the cluster configuration database

Clear-ClusterNode -Name "<hostName>" -Cluster "<clusterName>" -Force

Before continuing, make a note of the LAN adapter configurations that are in use on the host. They will be deconfigured during the upgrade process!
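A quick way to capture that information is to dump it to a file that you can copy off the host; a sketch, where the output path is just an example:

# Record the adapter, advanced property and virtual switch configuration before the upgrade
Get-NetIPConfiguration | Out-File C:\pre-upgrade-network.txt
Get-NetAdapterAdvancedProperty | Out-File C:\pre-upgrade-network.txt -Append
Get-VMSwitch | Format-List * | Out-File C:\pre-upgrade-network.txt -Append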

Mount the Hyper-V Server 2012 R2 ISO

If you have physical access, you can skip this part. If you want to do it from the ISO, download the ISO to an SMB share and copy it to the local disk:

net use \\<server>\<share>
copy \\<server>\<share>\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO c:\

Mount the ISO using PowerShell

Mount-DiskImage -ImagePath "c:\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO"
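If you want to confirm which drive letter the ISO was assigned without leaving PowerShell, the mounted image can be queried; a minimal sketch:

# Report the drive letter of the mounted ISO
(Get-DiskImage -ImagePath "c:\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO" | Get-Volume).DriveLetter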

Exit PowerShell, find the mount point for the ISO and run setup

Exit
d:
setup.exe

Follow the instructions on the screen and once it kicks off with the file copy, take a walk for 30 minutes.

Repair & Reconfiguration

Once the installation has completed run Windows Update – and if necessary remember to configure your WSUS servers to download updates for 2012 R2 – before you get to the point that you have to reboot.

During the OS upgrade process you will have maintained most of your settings, however all virtual switch configurations will have been dropped. The host will be running against its physical network cards in whatever IP state they were left in when they were converted over to the virtualisation stack.

In my case, the first server’s NICs contained the correct IPv4 configuration, however the IPv6 configuration was way off on the main LAN connection. Unfortunately, FCM was attempting to communicate with the host using IPv6 and so it could not raise the node at all – stating that it was off-line.

To fix that, repair the NIC’s IPv4/IPv6 addresses as required (add static addresses and remove DHCP ones), then clear out and re-register the server’s (polluted) Windows DNS records. After waiting a few minutes for propagation between domain controllers, flush the ARP and Neighbour Discovery caches on the management workstation.

On the server:
# Reconfigure the physical adapter with addressing data (this will be copied to the virtual hyper-v adapter)
New-NetIPAddress -InterfaceAlias "<adapterName>" -AddressFamily IPv6 -IPAddress <address> -PrefixLength 64
Set-NetIPInterface -InterfaceAlias "<adapterName>" -AddressFamily IPv6 -Dhcp Disabled

Remove-NetIPAddress -IPAddress <dhcpAllocatedAddressToRemove>

New-NetIPAddress -InterfaceAlias "<adapterName>" -AddressFamily IPv4 -IPAddress <address> -PrefixLength 24

ipconfig /registerdns

On the management workstation:

ipconfig /flushdns
arp -d
netsh interface ipv6 delete neighbors

Once converted, the second host had no static IP address configuration against the physical adapters and so fell back to DHCP, where it was found to be lurking in a client address pool – so don’t panic: your server should be there somewhere, as long as DHCP wasn’t completely disabled on its adapters. Perhaps there is a lesson here about remembering to add sticky address reservations against your servers’ MAC addresses?

Interestingly, on the first host that I converted (but none of the others), Hyper-V was also misbehaving. The FCM could not connect to it, so it could not run any tests to validate the node. Hyper-V manager similarly refused to connect to the remote instance stating that the service was unavailable. I went through troubleshooting of the firewall, services and other more likely areas to no avail. When all was said and done, removing and re-adding the Hyper-V Feature fixed the problem.

Remove-WindowsFeature -Name "Hyper-V" -Restart
Install-WindowsFeature -Name "Hyper-V" -Restart

Unfortunately this is fairly time consuming and the system will reboot several times during the removal activity as it has to disable the Virtualisation Mode on the processor and then disable the operating system hypervisor layer, deconfigure itself and then fully reverse the process after the Install command has been issued.

If you get stuck at this point and you still cannot connect to the host from the management workstation, try going to a completely different machine that is not connected to the management network, ping the host and then do an nslookup. Now repeat the same on the management workstation and compare. I wasted a large amount of time because Windows 8.1 refused to use the node’s IPv4 DNS records or its static IPv6 address, and instead constantly attempted to connect using an unregistered IPv6 address.
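Resolve-DnsName makes that comparison quick from both machines; a sketch, where the hostname is a placeholder:

# Compare what each machine resolves for the node over IPv4 and IPv6
Resolve-DnsName -Name "<hostName>" -Type A
Resolve-DnsName -Name "<hostName>" -Type AAAA
# Then confirm basic reachability over each protocol
ping -4 <hostName>
ping -6 <hostName>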

Restore the virtual network switches

If you have properly configured the physical adapters, Windows will copy the settings into the new virtual adapter. You can do this from Hyper-V Manager or from PowerShell:

Get-NetAdapter # Get the names of the network adapters from here

Get-VMSwitch # Check for existing v-switches

New-VMSwitch -Name "<name>" -NetAdapterName "<adapterName>" -AllowManagementOS <$true | $false> -Notes "<upToYou>"

Get-VMSwitch # View Results

At this point, your Hyper-V 2012 R2 server should be pingable on the virtual network switches defined above and, through them, you should be able to manage the host from Hyper-V Manager and FCM.

Create a 2012 R2 Failover Cluster

Now that one working host is available, use FCM to configure the node to be the first host in a single node failover cluster. This has to be a new Cluster and depending on your objectives it may require its own shared storage allocation in your SAN.

Ensure that sufficient networks, shared storage, quorum configuration and resources (hostnames, IP addresses etc.) are available to the new cluster to support its starting.
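If you prefer PowerShell to FCM for this step, the single-node cluster can be validated and stood up along these lines (the names and IP address are placeholders):

# Validate the upgraded host and create the new single-node cluster
Test-Cluster -Node "<firstNodeHostname>"
New-Cluster -Name "<newClusterName>" -Node "<firstNodeHostname>" -StaticAddress <newClusterIpAddress>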

From here until the end of the process, the strategy is to drain the old cluster into the new one, optionally moving the contents of the storage LUN as well. I needed to do this in order to increase the size of the LUN.

You can only attempt to retain the same storage pool if you are using Cluster Shared Volumes; in that case you can simply remove the VM from the old cluster (homing it on a particular node), then move the VM without its storage over to the new cluster.

If you are using a separate LUN for storage in the new cluster, you will need to move the VM with its VHD.

# 1. Migrate the VM to your EXIT node on the old cluster
Move-ClusterVirtualMachineRole -Name "<vmName>" -Cluster "<oldClusterName>" -Node "<exitNodeHostname>" -MigrationType Live

# 2. Remove the VM from the cluster, homing it on the Exit Node
Get-ClusterGroup -Name "<vmName>" -Cluster "<oldClusterName>" | Remove-ClusterGroup -RemoveResources -Force

# 3. Migrate the VM from the Old Cluster to the New Cluster
#    Note: You will have to do this from the Exit Server unless you have Constrained Delegation set up
#    on your servers and management workstation
Move-VM -Name "<vmName>" -DestinationHost "<entryNodeHostname>" -IncludeStorage -DestinationStoragePath "C:\ClusterStorage\Volume1\<vmFolderName>"

# 4. Introduce the VM into the new cluster
Add-ClusterVirtualMachineRole -VirtualMachine "<vmName>" -Cluster "<newClusterName>"

# 5. If necessary, boot it
Start-VM -Name "<vmName>"

Repeat

Once you have migrated the first VM to the first host in the new cluster, the rest of the migration becomes repetitive.

  1. Drain a host in the old cluster onto a node in the new cluster
  2. Eject the node from the old cluster
  3. Cleanup and upgrade to 2012 R2
  4. Reconfigure the network
  5. Reconfigure the virtual switches
  6. Introduce the node into the new cluster

It is up to you whether you think performing an in-place upgrade is worth it. Having migrated the entire cluster used to create this short guide, I can safely say that had it not been for my need to do it at a weekend across Remote Desktop (a practice I wouldn’t recommend in case something goes wrong), it would have been just as fast to have performed a clean install of each box – in fact the OS install time would have been faster.

Cleaning up

The upgrade process itself is messy, and if you want it back there is some disk space to reclaim.

:: Delete the DVD ISO
del /q "c:\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO"
:: Delete the uninstall cache for Hyper-V Server 2012 R1
:: See: http://gallery.technet.microsoft.com/scriptcenter/How-to-Delete-the-912d772b
:: Delete the backed up Windows Update Log from the R1 install
del /q "c:\windows\WindowsUpdate (1).log"
:: Defrag the Hypervisor
defrag.exe /E /O

Finally, in the new cluster do not forget to reset host affinity, ownership and failback as well as any other configuration settings that you require.
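For reference, these can all be set from PowerShell as well; a sketch, where the names are placeholders and the values are examples rather than recommendations:

# Preferred owners for a clustered VM role
Set-ClusterOwnerNode -Group "<vmName>" -Owners "<node1>","<node2>"
# Allow the role to fail back to its preferred owner automatically
(Get-ClusterGroup -Name "<vmName>").AutoFailbackType = 1
# Anti-affinity (keep paired VMs on different hosts) takes a StringCollection
$antiAffinity = New-Object System.Collections.Specialized.StringCollection
$antiAffinity.Add("<affinityGroupName>") | Out-Null
(Get-ClusterGroup -Name "<vmName>").AntiAffinityClassNames = $antiAffinity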

See Also

View: How to Delete the “Windows.old” Folder in Windows 8 (PowerShell)