Create a Slipstreamed Hyper-V Server 2019 installation image with working Remote Desktop for Administration

If you have been following the saga of the non-working Hyper-V Server 2019 release from November, you may be aware that the most prominent issue – that of Remote Desktop Services for Administration not working – has now been resolved in the February 2019 patch release cycle.

This article outlines how to create updated media for Hyper-V Server 2019 using the original installation medium and patch it into a working state.

Note from the author

Please note that if you intend to use Hyper-V Server in a production environment, you should wait for Microsoft to re-issue the official ISO. Once it is released, it will be made available in the Microsoft Server Evaluation Centre.

View: Microsoft Evaluation Centre

Pre-requisites

You will need access to a Windows 10, Windows Server 2016 or Windows Server 2019 system in order to update the installer.

Obtain and install the Windows ADK 1809 (or later) selecting the Deployment Tools option (providing you with an updated version of DISM)
Download: Windows Assessment & Deployment Kit (ADK)

Retrieve the original Hyper-V Server 2019 ISO
Download: Hyper-V Server 2019 (1809)

Download the following updates from the Microsoft Update Catalogue
Note: This is correct as of early March 2019. It is suggested that you apply newer cumulative and servicing updates as they are released in the future.

  1. KB4470788
  2. KB4482887
  3. KB4483452

View: Microsoft Update Catalogue

[Optional] If you wish to apply any language regionalisation (e.g. EN-GB), source the CAB file(s) for the language features that you require. For example:
Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~en-GB~10.0.17763.1.cab

Updating the Installation Image

To update the installation image:

  1. Create a folder on C:\ called ‘Mount’
  2. Add a second folder on C:\ called ‘hvs’
  3. In the hvs folder, create a subfolder called ‘Updates’
  4. Extract the entire contents of the Hyper-V Server 2019 ISO into C:\hvs
  5. Place the three MSU files from the Microsoft Update Catalogue into the C:\hvs\Updates folder
  6. [Optionally] Place the CAB file for the language pack into the C:\hvs folder and for convenience rename it ‘lp.cab’
  7. Open an elevated Command Prompt
  8. Issue:
    cd /d "C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64"
    To navigate into the working folder for the updated version of DISM.exe
  9. Issue:
    dism.exe /mount-image /ImageFile:"C:\hvs\Sources\install.wim" /Index:1 /MountDir:"C:\Mount"
    To mount the installation image into the C:\Mount folder
    Note: Do not navigate into this folder with CMD, PowerShell or Windows Explorer. If you leave a handle open against this folder when you try to re-pack the install.wim, it will fail.
  10. Once the mounting is complete, patch the installation by issuing:
    dism.exe /Image:"C:\Mount" /Add-Package /PackagePath:"C:\hvs\Updates"
  11. [Optional] Apply the language pack by issuing (change en-GB to your language as applicable):
    dism.exe /Image:"C:\Mount" /ScratchDir:"C:\Windows\Temp" /Add-Package /PackagePath:"C:\hvs\lp.cab"
    dism.exe /Image:"C:\Mount" /Set-SKUIntlDefaults:en-GB
    If you intend to use ImageX, DISM or WDS to deploy this image, you can skip the following command. If you intend to create a new bootable ISO or UFD, issue:
    dism.exe /image:"C:\Mount" /gen-langini /distribution:"C:\hvs"
    This will create a new Lang.ini file which must be included in the ISO/UFD media (but is not required for other deployment methods)
  12. Dismount and re-package the install.wim file by issuing:
    dism.exe /unmount-image /MountDir:"C:\Mount" /Commit
  13. Once DISM has processed the installation image, the new Install.wim file can be found at:
    C:\hvs\Sources\install.wim
  14. At this point you will have a working installation image which you can use to create a new ISO, UFD or install via WDS. You should delete the Updates folder and the optional lp.cab from C:\hvs before creating a new ISO or bootable UFD (one way to rebuild the ISO is sketched below).

If it goes wrong at any point, issue the following command to abort the process and go back and try again:
dism.exe /unmount-image /MountDir:"C:\Mount" /Discard
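
To build a new bootable ISO from the contents of C:\hvs, the following is a minimal sketch using oscdimg.exe from the same ADK Deployment Tools installation; the volume label and output path are illustrative, not from the original article:

cd /d "C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64\Oscdimg"
oscdimg.exe -m -o -u2 -udfver102 -lHVS2019 -bootdata:2#p0,e,bC:\hvs\boot\etfsboot.com#pEF,e,bC:\hvs\efi\microsoft\boot\efisys.bin C:\hvs C:\HyperVServer2019_Updated.iso

The -bootdata argument includes both the BIOS (etfsboot.com) and UEFI (efisys.bin) boot images so the resulting ISO boots on either firmware type.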

Delete the C:\Mount and C:\hvs folders once you have finished creating your new deployment media.

Final Word

If you follow the above, you will have not only a fixed RDP experience, but also a currently patched version of Hyper-V Server, eliminating a little of the time spent waiting for Windows Update to run.

If you are going to enable RDP for Administration, as ever, do not forget to enable the firewall rule in PowerShell. SConfig.cmd does not do this for you!

Enable-NetFirewallRule -DisplayGroup "Remote Desktop"

Windows NT 4.0 on Hyper-V 2016

System Requirements:

  • Windows Server 2016, Hyper-V Server 2016
  • Windows 10
  • Windows NT 4.0 Advanced Server, Server, Terminal Server Edition, Workstation

The Problem:

For reasons that defy any sane logic, I decided that I needed to install NT 4.0.

It’s 2018 and over the last few years I have been slowly clearing out all of my old IT hardware, to the point now that I no longer have any legacy motherboards or systems in the house or office. So when I recently needed to fire up Windows NT 4.0 once again – for reasons that defy logic – you would assume that Virtualisation was the easy win.

Sadly – and especially with Hyper-V – this is not the case. Microsoft’s virtualisation solution is (and always has been) designed around its currently supported operating systems, with a little Linux added in to the mix in more recent times. Down-level operating systems are not supported and by default, are not going to work. This is especially true of what in effect is Windows 1996, the workhorse wonder that was Windows NT 4.0.

I am sure that the non-masochists among you will just use something like VMware or VirtualBox to do their bidding and carry on with their day… but I digress….

Note: This process will be very similar for Windows NT 3.5 and NT 3.51, as it will be for Windows 2000 – however Windows 2000 does not have the 8GB disk/2GB partition initial size limitation.

The Fix

The following procedure will get you up and running with a working NT 4.0 install under Hyper-V 2016. I am assuming that you know your way around Hyper-V and this article is intended as a results based guide, not a step-by-step ‘click here, go here’ guide.

Create the VM

Use the following configuration when creating your VM:

  1. Create a generation 1 Virtual Machine. In our case this will be “NT 4.0 Server”
  2. Set the RAM to 512 MB (or lower)
  3. You can set it to 1 or 2 CPU cores as required
  4. Do not connect to a network. Remove the default network adapter completely. Add a new Legacy Network Adapter
  5. Create a new virtual hard drive. The drive can be fixed or dynamically expanding, however set the maximum disk size to 2GB or lower. Ensure that both the VHDX and the virtual DVD drive are connected to the IDE bus, not the SCSI bus
  6. Attach your NT 4.0 install CD/ISO to the virtual DVD drive
  7. [If applicable] attach the NT 4.0 virtual floppy boot disk to the virtual floppy drive
  8. Set the required boot order (Floppy or CD ahead of HDD)
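
If you prefer to script the above, the following is a rough PowerShell sketch of the same configuration, run on the hypervisor; the VM name, VHDX path and ISO path are illustrative:

# Generation 1 VM, 512MB RAM, 2GB dynamically expanding VHDX on the IDE bus
New-VM -Name "NT 4.0 Server" -Generation 1 -MemoryStartupBytes 512MB -NewVHDPath "D:\VMs\NT4\NT4.vhdx" -NewVHDSizeBytes 2GB
Set-VMProcessor "NT 4.0 Server" -Count 1
# Remove the default (synthetic) adapter and add a disconnected Legacy Network Adapter
Remove-VMNetworkAdapter -VMName "NT 4.0 Server"
Add-VMNetworkAdapter -VMName "NT 4.0 Server" -IsLegacy $true -Name "Legacy NIC"
# Attach the NT 4.0 installation ISO and set the boot order (CD ahead of HDD)
Add-VMDvdDrive -VMName "NT 4.0 Server" -Path "D:\ISO\NT4_Server.iso"
Set-VMBios -VMName "NT 4.0 Server" -StartupOrder @("CD", "IDE", "Floppy", "LegacyNetworkAdapter")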

Pre-configure Hyper-V

By default, Hyper-V will attempt to run the VM under its default modern architectures mode, compatible with Windows Vista+ systems. The 1996 Windows NT 4.0 code-base is not compatible with modern platforms or CPU instruction sets and if you attempt to boot to the NT text mode installer without addressing this issue, NT 4 will blue screen while attempting to bootstrap the installer.

To fix this, you need to enable the legacy CPU compatibility. This used to be a GUI option in Hyper-V 1.0 under Windows Server 2008, but the option was removed in later releases. Despite being removed from the GUI, the option does still exist in the Hyper-V core and can be re-enabled for the VM using PowerShell.

To enable compatibility mode, open an elevated PowerShell session on the hypervisor and enter the following commands:

Set-VMProcessor "NT 4.0 Server" -CompatibilityForOlderOperatingSystemsEnabled $true
Get-VMProcessor "NT 4.0 Server"

Text Mode Setup

Boot your Virtual Machine from the floppy/CD, enter text mode setup and follow through the setup process.

  1. You do NOT need to add any additional mass storage device drivers (this includes the NT 4.0 SP4 ATAPI update; if you attempt to add the updated driver, the installer will ignore it).
  2. When prompted to choose the keyboard layout, language and confirm the computer type, change the computer type to “Standard PC” for a single core VM or “MPS Multiprocessor PC” if you require access to two cores. Enter your preferred keyboard settings as required.
  3. In the drive partitioning section, create an NTFS partition of less than 2048MB. I would suggest 1024MB for simplicity. Do not attempt to create a larger partition. The reason for this is that NT 4.0 will initially format the VHDX as FAT16, which has a maximum partition size of 2GB. During the later installer process and before entering GUI mode setup, NTFS conversion will be run over the FAT partition, converting it into an NTFS 1.2 file system. You will patch it to NTFS version 3.0 after installing NT 4.0 SP4 or later.

If you receive an installation failure because setup cannot write to the Windows folder, or a setup error stating that permissions could not be created, this is most likely because the initial VHDX was created larger than 8GB.

GUI Mode Setup

There are no special requirements or steps to perform during GUI Mode Setup.

Auto detection of the Network card will work with the Hyper-V Legacy Network Adapter. Ensure that you properly configure TCP/IP and remove IPX/SPX from the protocol list (unless you specifically need it).

Post Install

  1. Install SP6a (SP4 at a minimum).
  2. Turn off the machine.
  3. Increase the RAM from 512MB if required.
  4. In Hyper-V Manager/PowerShell edit the virtual disk and set the maximum size to your required size (e.g. the default 127GB).
  5. In Windows Server 2016, locate the VHDX and mount the disk. Using PowerShell or Disk Manager, expand the partition to fill the entire size of the disk.
  6. [1/2] If you want Windows NT 4.0 to turn off automatically when you click the shutdown button (instead of telling you it is now safe to turn off your computer):
    1. Use 7-zip (or similar) and extract the hal.dll.softex file from the SP6a installer, rename it HAL.dll and copy it into C:\WinNT\System32\
      Note: If you are using a multi-processor VM, rename the halmps.dll.softex to HAL.dll and do the same.
  7. Unmount the VHDX.
  8. Reboot the Virtual Machine.
  9. [2/2] If you want Windows NT 4.0 to turn off automatically when you click the shutdown button (instead of telling you it is now safe to turn off your computer):
    1. Add the following to the registry:
      REGEDIT4

      [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
      "PowerdownAfterShutdown"="1"
  10. Install Internet Explorer 6.0 SP1, patch and update the Windows install, then install applications and configure to your needs.

Things the inexperienced user may not know

If you are playing with NT 4.0 for the first time, then there may be some things that you are not aware of. Here is a list of a few points that are worth noting should you be the Windows equivalent of a millennial (pun intended).

  1. The total install size (less the page file), after patching and cleaning up uninstall data was ~320MB. They don’t make them like that any more!
  2. NT 4.0 does not support plug and play. If you want to add hardware, you have to do it manually via a plethora of different places in the control panel – there is no device manager!
  3. NT 4.0 is extremely insecure by default. Know that it has no built-in firewall and that the base system policy and security configuration is insecure by default (even file system permissions are a free for all). You should keep this in mind when attempting to do anything at all with NT. If it really needs to be on the network you should at a minimum harden system policy and add a firewall (ZoneAlarm Free was the go to back in the day).
  4. There are no display drivers for Hyper-V. This means that there is no mouse integration and as such you will be unable to install NT 4.0 over a Remote Desktop session. It also means that you will be stuck at a max resolution of 800×600 in 16 colour using official means.
    1. Unofficially, you can make use of the great work of the VBEMP NT project to increase the resolution and get NT 4.0 running at modern resolutions and up to ‘True Colour’ (24-bit). This does not offer any cursor integration between the VM and the Hyper-V Manager, preventing mouse use over a Remote Desktop connection and requiring Ctrl + Alt + Left arrow to escape the Hyper-V Connection window.
      View: VBEMP NT (tested with NT 4.0 stable version 3.0)
  5. There are no sound drivers for NT 4.0 in Hyper-V (unlike there used to be in Virtual PC) as Hyper-V does not emulate any sound adapters.
  6. The disk performance is fairly poor, until you have patched up to SP6a + the SP6a URP (Q299444). You can further improve performance by enabling DMA Mode on the IDE adapter and write caching on the VHDX.
  7. NT 4.0 by default does not use SMB signing and uses LAN Manager authentication instead of NTLM. It can use NTLM v1/2 once it has been fully patched. However, be aware that this means that it will be unable to communicate with Windows XP SP2+ or Windows Server 2003 in their default configurations. You will have to perform some security hardening on NT 4.0 or security weakening on XP+ to get SMB working. Hint: It’s the same process in the registry on both, so security harden NT 4.0 after installing SP6a.
  8. NT 4.0 ONLY supports SMB 1.0 / CIFS (“SMB 1.5”). Microsoft have been removing support for SMB 1.0 with each successive Windows release. Under Windows Server 2016 and Windows 10, SMB 1.0 support is an optional component/feature that you may need to install manually (a sketch for re-adding it follows this list).
    Note: You should not be using SMB 1.0 at all in 2018 as it is a 100% exploitable security risk.
  9. After you have performed the install, you may be looking for the easiest way to copy SP6a, Internet Explorer, patches and app installers to the VM. To do this as fast as possible without having to pre-harden the OS, either burn the updates into an ISO or do it via Guest-authenticated SMB by:
    1. Enable the guest account
    2. Create a SMB share on the root of the C Drive and set Guest access to read/write (modify) under NTFS and Share permissions.
      Note: before it is patched, you will struggle to SMB into NT 4.0 using a username and password combination unless you weaken the security policy on the calling client. Using the guest account bypasses the problem.
    3. Use an intermediate level VM/system as a bridge between newer and older SMB versions. For example, I used a Windows Server 2008 VM to pull data from a third server with an SMB 2.x file share of updates and drop them onto the NT 4.0 SMB 1.0 share found at c:\shared.
    4. Once NT 4.0 is patched, you should disable the guest account again, remove its permissions to the file share and authenticate into NT 4.0 using a normal user account found in the SAM database. Do note my warning above about SMB signing however, which will scupper you unless you have made mitigations via hardening.
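
As a hedged sketch of re-adding SMB 1.0 support on the modern side of that bridge (only for the duration of the transfer, given the warning in point 8 above):

# Windows Server 2016 (elevated PowerShell)
Install-WindowsFeature FS-SMB1
# Windows 10 (elevated PowerShell)
Enable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
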
Once you have done all of the above and have a fully patched system, you will have something resembling the below running in Hyper-V 2016.
NT 4.0 in Hyper-V 2016
NT 4.0 on Windows 10 via Windows Server 2016 Hyper-V install

Error 0x80070005 when attempting to Perform a Shared Nothing migration between Hyper-V hosts or move a Hyper-V VM between CSV’s in the same or separate Clusters

System Requirements:

  • Windows Server 2012 R2
  • Windows Server 2016

The Problem:

Hyper-V 2012 R2 has a lot of new features that are worthy of note, and one of the most appealing for Virtualisation Administrators is shared nothing migration between hosts via SMB. If you are in an environment that doesn’t have shared storage it’s useful enough in itself, because for VM purposes it may have just validated your decision not to get shared storage in the first place. Yet less well documented is the feature’s value for setups where you do have shared storage, as you can use shared nothing migration as a mechanism to live migrate VMs between clusters that are backed onto shared storage – or more specifically between “Cluster Shared Volumes” (CSV).

The picture on the back of the box of the smiling, happy systems administrator performing a shared nothing migration makes it look so easy, right? This is however an all too common occurrence:

0x80070005 Error

'General access denied error' ('0x80070005')

There was an error during move operation.

Virtual machine migration operation failed at migration source.

Failed to create folder.

Virtual machine migration operation for ‘<VM Name>’ failed at migration source ‘<Source Hypervisor name>’. (Virtual machine ID <VM-SID>)

Migration did not succeed. Failed to create folder ‘<RPC path>…\Virtual Hard Disks’: 'General access denied error' ('0x80070005').

If you look at the specified destination path (e.g. c:\ClusterStorage\Volume1\test) after receiving this error, you will find that it has created the test folder, and it will have created a ‘Planned Virtual Machines’ folder beneath it which will in turn contain a folder named with the VM’s VM-SID (the Virtual Machine’s unique security ID) and a .xml file named with the same VM-SID.

The migration will however not progress any further.

If you attempt to perform the same operation in PowerShell you will receive the PowerShell version of the same error:

VERBOSE: Move-VM will move the virtual machine "<VM Name>" to host "<Destination Server>"
Move-VM : Virtual machine migration operation for '<VM Name>' failed at migration source '<Source Server>'. (Virtual machine ID <VM-SID>)
Migration did not succeed. Failed to create folder
'\\<Destination Server>\<Source Server>.762091686$\{e166ba26-8a4a-4029-ac34-c2466451e439}\<VM Name>\Virtual Hard Disks': 'General access denied error'('0x80070005').
You do not have permission to perform the operation. Contact your administrator if you believe you should have permission to perform this operation.
At line:1 char:69
+ $vm = Get-VM -Name 'test' -ComputerName "<Source Server>" | Move-VM -Des ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : PermissionDenied: (Microsoft.Hyper...VMMigrationTask:VMMigrationTask) [Move-VM], VirtualizationOperationFailedException + FullyQualifiedErrorId : AccessDenied,Microsoft.HyperV.PowerShell.Commands.MoveVMCommand

Please Note: This document does not specifically address 0x80070005 for Hyper-V Replication Troubleshooting, which is a slightly different (yet related) issue.

More Info

Understanding the topology involved in my setup also reveals my reason for needing to get this working – this is important, as your setup and reasons may differ slightly. What I was attempting to do was migrate between two multi-node Windows Hyper-V Server 2012 R2 clusters while being able to initiate the migration from a third device, a Windows 8.1 management console.

Much of the discussion surrounding 0x80070005 suggests that you simply need to deal with the fact that you need to log onto the source workstation and initiate a push of the VM from the source server to the destination server using CredSSP. This is fine if you have a general purpose commodity server that happens to have Hyper-V on it. In the real world if you have a Hyper-V Cluster, you should not be running it in GUI mode, you should be using Server Core – and if you are using Windows Hyper-V Server to begin with, you don’t even have the option of a GUI.

So we can eliminate the use of the GUI tools or the simplicity of “just RDP into the server” immediately from this discussion. People answering as such are running in very simple Hyper-V setups and in environments with simple, very liberal security policies.

You can of course use PowerShell to perform a CredSSP migration on a Server Core installation, and as a matter of good practice the ability to transfer VMs using CredSSP should be confirmed as working before you start out with Kerberos. To do that, log onto the Source Server and execute the following command in a PowerShell session:

Get-VM -Name '<VM Name To Move>' | Move-VM -DestinationHost "<Destination Server>" -DestinationStoragePath "C:\ClusterStorage\Volume1\<VM Name to Move>" -Verbose

If that doesn’t work, I recommend that you troubleshoot this issue before you look to go any further on the 0x80070005 issue.

Additionally, make sure that you have performed the basic troubleshooting steps and also ensure that you are simplifying the problem as much as possible before starting. The following provides an overview of such steps, in no particular order:

  • Log in as a Domain Admin to perform this test (if possible). After you have that working, move down to delegated users and troubleshoot any issues that they are experiencing
  • Only try to ‘shared nothing’ migrate a VM that is turned off (create a new VM, attach a default sized dynamically expanding disk, don’t add any networks and leave it off; this means that you will only have 4MB of data to test move). Once you can migrate a VM that is off, attempt to migrate a running VM with a Live Migration.
  • Only test migrate between the Source Cluster storage (CSV) owner node and the Destination Cluster storage owner node
  • If possible, make the owner of the source and destination cluster core resources the same node that owns the CSV
  • Remember that you must use Hyper-V Manager after you have de-clustered the VM from within Failover Cluster Manager before you can perform a shared nothing migration – the fact that your VM has anything to do with a cluster is an aside for Hyper-V. Treat this process as a Hypervisor to Hypervisor move that happens to be on a CSV and forget about the cluster.
  • On the ‘Choose a new location for virtual machine’ page of the migration wizard, remember that you must enter a file system path (e.g. C:\ClusterStorage\volume 1\test) and not a UNC path (e.g. \\server\c$\ClusterStorage\volume 1\test). The migration is going to take place using RPC and not SMB. Thus do not use a UNC path.
    'Choose a new location for virtual machine' wizard page
  • Ensure that you can migrate the VM using CredSSP as discussed at the beginning of this section
  • Ensure that your Domain Controllers are running Windows Server 2008 or higher (or at least your logon server), Windows Server 2003 Domain Controllers are known to have issues here (possibly due to lack of AES support). Your domain / forest functional levels can reportedly be Windows Server 2003 if required. I have only tested with Windows Server 2008 domain functional and Windows Server 2008 forest functional levels
  • If you are attempting to move between servers in a domain trust, you must ensure that the domain trust supports AES
  • Keep your initial testing paths simple and avoid overly complicated NTFS structures. For example, target the destination to be a local sub-folder of C:\ and not a junction (such as ClusterStorage\Volume #) or a non-drive-letter NTFS Mount Point (i.e. an iSCSI share or drive mount point exposed as a sub-folder of a higher file system). See the links below for more on this.
    View: Snapshot – General access denied error (0x80070005)
    View: Migrating a Virtual Machine problem
    Note: The iCACLS command listed in the second link does not use the principle of least privilege. The command to enact the principle of least privilege would be as follows:

    icacls F:\hvtest /grant “NT VIRTUAL MACHINE\Virtual Machines”:(OI)(CI)(R,RD,RA,REA,WD,AD) /T

    Finally, keep in mind that for delegation purposes, permissions must be valid for the user account that you are using to perform the move as well as the SYSTEM account.

  • Initially, forget about testing the migration into the cluster CSV itself. Instead, create a new folder on the root of the C Drive of the destination server and migrate into this. There are a few suggestions online that you need to put a couple of folder depths between the root of the drive and the VM itself so try something like:
  • C:\VM Store\Test\
  • If you are following my advice, you will be testing with a 4MB VM called ‘test’ so there won’t be any issue with storage space and the use of the C Drive for testing
  • Use PowerShell for testing, otherwise you will go insane from having to repeatedly re-enter information in the Move VM wizard. The general gist of the command is:
    Get-VM -Name '<VM Name To Move>' -ComputerName "<Source Server>" | Move-VM -DestinationHost "<Destination Server>" -DestinationStoragePath "C:\ClusterStorage\Volume1\<VM Name to Move>" -Verbose

    With the 0x80070005 error, you should find that it will get to 2% and then error after a few seconds.

  • Ensure that you have enabled Kerberos authenticated Live Migrations in the properties for the Hypervisor in Hyper-V Manager
    Hypervisor Properties
    Note: You can perform this action in PowerShell using:

    Enable-VMMigration -ComputerName <Server Hostname>
    Set-VMHost -ComputerName <Server Hostname> -VirtualMachineMigrationAuthenticationType Kerberos
  • Ensure that your Hypervisor’s and the Windows 8.1 management VM are up to date (at the same patch level) and are joined to the same domain
  • Ensure that all parties in the process have properly registered DNS records in AD DNS
  • Check your Windows Firewall rules – for testing purposes just turn them off if you can (remember to turn them back on afterwards!)
  • Check your ASA/Hardware Firewall rules for the same
  • Keep an eye on the Hyper-V event logs for any additional information. The log of consequence is found in Event Viewer under:
    Applications and Services Logs > Microsoft > Windows > Hyper-V-VMMS > Admin
    If you are experiencing the same problem that I was, you will see three events in the Source Server’s log (20414, 20770 and 21024). The 20770 error is the one being reflected by PowerShell or the Hyper-V Management console. Shortly thereafter, the Destination Server will log a 13003 event informing you that the virtual machine from the Source Server (with the same VM-SID) was deleted, indicating that the Destination Server performed a clean-up of the initial migration process.
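
On Server Core, a quick way to read that log without a remote Event Viewer session is Get-WinEvent; a minimal sketch (adjust -MaxEvents as needed):

Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-VMMS-Admin" -MaxEvents 20 |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize -Wrap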

Permissions

There is a lot of discussion about permissions and 0x80070005 errors. Let us look at the salient points:

VERBOSE: Move-VM will move the virtual machine "<VM Name>" to host "<Destination Server>"
Move-VM : Virtual machine migration operation for '<VM Name>' failed at migration source '<Source Server>'. (Virtual machine ID <VM-SID>)
Migration did not succeed. Failed to create folder
'\\<Destination Server>\<Source Server>.762091686$\{e166ba26-8a4a-4029-ac34-c2466451e439}\<VM Name>\Virtual Hard Disks': 'General access denied error'('0x80070005').
You do not have permission to perform the operation. Contact your administrator if you believe you should have permission to perform this operation.
At line:1 char:69
+ $vm = Get-VM -Name 'test' -ComputerName "<Source Server>" | Move-VM -Des ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : PermissionDenied: (Microsoft.Hyper...VMMigrationTask:VMMigrationTask) [Move-VM], VirtualizationOperationFailedException + FullyQualifiedErrorId : AccessDenied,Microsoft.HyperV.PowerShell.Commands.MoveVMCommand
  1. The Migration failed at the Source Server
  2. The Source Server failed the migration because it could not ‘create a folder’
  3. We know that the folder in question is the ‘<VM Name>\Virtual Hard Disks’ folder that the Source Server was unable to create
  4. We know that the Source Server was able to create a ‘<VM Name>\Planned Virtual Machines’ folder because we can see it in the file system if we use the GUI Wizard to perform the migration.
    Note: The PowerShell version cleans up after itself!
  5. You have told the Hypervisor to use Kerberos to perform the migration

What does this tell us? It tells us that YOU, the administrator, are being told that you cannot create the folder. You are using Kerberos to perform the migration, not CredSSP, so the entire process is being run end-to-end using YOUR credentials. The Management Workstation is logging onto the Source Server as YOU. The Management Workstation is telling the Source Server to initiate the move, and in turn the Source Server is delegating your authentication session to the Destination Server and telling it to receive instructions from the Source Server using your credentials. At this point it has nothing to do with ‘NT Virtual Machine’ or VM-SID permissions; that comes after the migration of the core parts of the VM and during initialisation of the VM on the Destination Server. We are not there yet.

So the first thing to check is that your account is authorised to perform the move. If you are a Domain Admin, you should be OK, however you should ensure that the Domain Admins security group is a member of the local Administrators group on all participating machines – source server, destination server and management workstation.

If you do not want the user account to have full local admin rights you can add them to the “Hyper-V Administrators” group on each server. To add an account to a local group on Server Core or Windows Hyper-V Server:

net localgroup "Hyper-V Administrators" /add domain\user
net localgroup "Administrators" /add domain\user

Constrained Delegation

When viewing the Delegation tab on the computer account in Active Directory Users & Computers (ADUC) ensure that:

  1. You are using “Trust this computer for delegation to specified services only” (it doesn’t appear to work if you use the “any service” option)
  2. You have selected “Use Kerberos only”
  3. You tick the ‘Expanded’ checkbox to view the full list of entries
  4. That (once Expanded) there are two entries for each type (types being CIFS and Microsoft Virtual System Migration Service), one entry will have the NetBIOS Name and the other will have the FQDN i.e. there are 4 entries for each delegated host, two with NetBIOS Names and two with FQDN entries.
  5. When you create the Kerberos Constrained Delegation, you need to ensure that the “Service Name” field column is blank. If there is something listed in the Service Name column, your delegation is not going to work properly.
  6. You need to have the same number of “CIFS” entries for each host as you do for “Microsoft Virtual System Migration Service”
  7. It is not necessary to add the Management Workstation to the Constrained Delegation
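
To review the resulting delegation entries without clicking through ADUC, you can read the msDS-AllowedToDelegateTo attribute directly; a sketch using the ActiveDirectory module (server name illustrative):

Get-ADComputer "Server1" -Properties "msDS-AllowedToDelegateTo" |
    Select-Object -ExpandProperty "msDS-AllowedToDelegateTo"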

When you issue the Move-VM command in PowerShell, try substituting the -ComputerName and -DestinationHost values with the four combinations of NetBIOS Name and FQDN.

Get-VM -Name '<VM Name To Move>' -ComputerName "<Source Server>" | Move-VM -DestinationHost "<Destination Server>" -DestinationStoragePath "C:\ClusterStorage\Volume1\<VM Name to Move>" -Verbose

For example, if you have Server1 and Server2 and your domain is domain.local, the combinations to test are:

Source                  Destination
Server1                 Server2
Server1.domain.local    Server2
Server1                 Server2.domain.local
Server1.domain.local    Server2.domain.local

If you find that one of these works while the others do not, you have an error in the constrained delegation setup for DNS or NetBIOS aliasing. Carefully recreate the delegation.

After you have setup the delegation, go into a LDAP browser, ADSI Edit or the Attribute Editor in ADUC. For each delegated server, find the servicePrincipalName property and look at the value list. You should have two of each of the following entries (one with the NetBIOS Name and the other with the FQDN).

  • Hyper-V Replica Service/
  • Microsoft Virtual System Migration Service/
  • RestrictedKrbHost/

If you do not see these, you have a Delegation Error and/or an issue in creating SPN records. Either delete and try to recreate them by recreating the delegation or carefully add them by hand.
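
You can also list the SPNs registered against a computer account from the command line with setspn, for example:

setspn -L Server1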

DNS

Bindings. I know that you checked them, but check them again. Trust me. On Server Core where you have very little contact with the actual server console this is very easy to overlook.

Constrained delegation may work with both NetBIOS and DNS, however Kerberos does not care for NetBIOS. If your DNS doesn’t work, you aren’t going to get a successful ticket session creation that you will need in order to pass credentials forward as part of the Constrained Delegation setup.

Check the following using both shorthand and FQDN lookups, i.e. nslookup server1.domain.local and just nslookup server1. Are they both going where you expect? Crucially, which server NIC is the DNS query going out of, and once the reply comes back, which NIC is being used to attempt to contact the host?

  1. The management console can query all domain controllers in DNS
  2. The management console can query all Hypervisors in DNS
  3. The hypervisors can all query the management console in DNS
  4. The hypervisors can all query all domain controllers in DNS
  5. The hypervisors can all query each other in DNS

This also requires you to check your default gateway settings.
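
A PowerShell sketch of those same lookups which also shows the local interface chosen for the connection (hostnames are illustrative):

Resolve-DnsName server1                  # short name lookup
Resolve-DnsName server1.domain.local     # FQDN lookup
# SourceAddress and InterfaceAlias in the output show which NIC is used to reach the host
Test-NetConnection server1.domain.local -Port 445 | Format-List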

This is important in the following scenario. Most of you will not encounter this because of the scale of your operations, however the fact is that at Enterprise level I did encounter this problem, hence why I am able to write about it.

  1. Let’s assume that you follow best practice and have separate public, management, cluster, iSCSI and heartbeat networks.
  2. Your management network is data centre local, on a private network with minimal routing, and is designated for management of servers, IPC traffic, un-routed VMs etc. in a secure fashion
  3. Local DNS is available on the management network but does not expose Internet resolution
  4. Your public VM address ranges come from the public network and are not exposed via NAT/PAT i.e. routing and firewalls
  5. Your domain controllers exist on a public routed network subnet that is separate from the public VM address ranges used for VMs
  6. You followed best practice and set your management network to be the first adapter in the binding order on the hypervisors
  7. You will now receive 0x80070005 when you attempt to replicate, live migrate or offline migrate a VM between cluster nodes using Kerberos Constrained Delegation

The problem is the adapter binding order caused by the use of local DNS on a network that offers no connectivity to the domain controllers. When the KDC attempts to generate a Kerberos ticket for the constrained delegation, the lookups for the domain controllers will be performed using the DNS servers on the management network and will mistakenly attempt to connect to the domain controllers via the management network. This is simply going to time out – causing the wait during migration. Once it times out, Windows DNS doesn’t defer to the next set of DNS servers or attempt to get to the DC’s on a different NIC. It simply gives up.

The resulting very helpful error code that Hyper-V offers back is Access Denied while seemingly attempting to create files in the file system – the Hypervisor will log that it was unable to create the ‘Virtual Hard Disks’ folder on the destination Hypervisor. What it should actually say here is that it could not properly initialise the end to end Kerberos Constrained Delegation ticket session due to a timeout. It of course doesn’t do that.

In this situation the fixes are one of:

  1. Add an interface on the domain controllers on the management LAN
  2. Add a network interface which can connect to the domain controllers in a higher adapter binding order position in the Hypervisor binding order
  3. Remove the DNS servers from the management network’s TCP/IP properties, thus forcing Windows Server to use the first available DNS server configuration on a lower ordinal adapter
  4. Allow routing from the management LAN to the domain controllers. Alias, stub or secondary zone the domain controllers in the management network’s DNS and hope you remember to keep them up to date when you make changes to Domain Controller DNS records

Assuming that your constrained delegations are correct, it will start working as soon as the DNS updates have propagated.

The Fix

Ultimately the problem that I had was in the setup of the Constrained Delegation and, in another case as discussed above, the DNS binding order. For the Constrained Delegation issue, I only had NetBIOS values for the ‘Microsoft Virtual System Migration Service’ entries and only FQDN values for the CIFS entries, which in turn meant that the associated SPN records were missing.

I was originally using a script by Robin CM for this purpose; it appears that it is this script which isn’t quite ticking all of the boxes.

View: Robin CM’s IT Blog – PowerShell: Kerberos Constrained Delegation for Hyper-V Live Migration

 

In my environment, the following represents a corrected version of the script.

The script assumes that you have placed all of your Hypervisor’s in a dedicated OU. The script will obtain a list of all servers in the OU and automatically create the constrained delegation complete with both pairs of the NetBIOS Name and FQDN records.

In addition, the script also now ensures that the system is not adding a constrained delegation back to itself into the AD database.

You must be a domain admin or have permissions to write to msDS-AllowedToDelegateTo objects in AD in order to run this script.

# Requires the ActiveDirectory and Hyper-V PowerShell modules
$OU = [ADSI]"LDAP://OU=Hypervisor's,OU=Servers,DC=ad,DC=domain,DC=co,DC=uk"
$DNSSuffix = "ad.domain.co.uk"
$Computers = @{} # Hash table

# Add each computer in the OU to the hash table
foreach ($child in $OU.PSBase.Children){
    if ($child.ObjectCategory -like '*computer*'){
        $Computers.Add($child.Name.Value, $child.distinguishedName.Value)
    }
}

# Process each AD computer object in the OU in turn
foreach ($ADObjectName in $Computers.Keys){
    Write-Host $ADObjectName
    Write-Host "Enable VM Live Migration"
    Enable-VMMigration -ComputerName $ADObjectName
    Write-Host "Set VM migration authentication to Kerberos"
    Set-VMHost -ComputerName $ADObjectName -VirtualMachineMigrationAuthenticationType Kerberos
    Write-Host "Processing KCD for AD object"
    # Add delegation to the current AD computer object for each other computer in the OU,
    # skipping any delegation back to the computer itself
    foreach ($ComputerName in $Computers.Keys){
        if ($ComputerName.toUpper() -ne $ADObjectName.toUpper()) {
            Write-Host (" Processing "+$ComputerName+", added ") -NoNewline
            $ServiceString = "cifs/"+$ComputerName+"."+$DNSSuffix
            Set-ADObject -Identity $Computers.$ADObjectName -Add @{"msDS-AllowedToDelegateTo" = $ServiceString}
            $ServiceString = "cifs/"+$ComputerName
            Set-ADObject -Identity $Computers.$ADObjectName -Add @{"msDS-AllowedToDelegateTo" = $ServiceString}
            Write-Host ("cifs") -NoNewline
            $ServiceString = "Microsoft Virtual System Migration Service/"+$ComputerName
            Set-ADObject -Identity $Computers.$ADObjectName -Add @{"msDS-AllowedToDelegateTo" = $ServiceString}
            $ServiceString = "Microsoft Virtual System Migration Service/"+$ComputerName+"."+$DNSSuffix
            Set-ADObject -Identity $Computers.$ADObjectName -Add @{"msDS-AllowedToDelegateTo" = $ServiceString}
            Write-Host (", Microsoft Virtual System Migration Service")
        }
    }
}

Once you have run it, give the system a few minutes so that AD can distribute the update to all DCs and for the Kerberos sessions on the respective nodes to refresh.
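
If you would rather force a refresh than wait, one option (my own addition, not part of the original script) is to purge the cached Kerberos tickets on each node:

klist purge
klist -li 0x3e7 purge

The second command clears the tickets held in the SYSTEM logon session (logon ID 0x3e7), which is the session the Hyper-V service uses.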

Update for Windows Server 2016

So I decided to reinstall a node to Hyper-V Server 2016 and have a play with it in amongst Hyper-V Server 2012 R2.

The experience did not go swimmingly well. Here is a quick overview of some issues I encountered (or created myself) to keep in mind when troubleshooting this:

  1. The Hyper-V server Win32 installer will perform an in-place upgrade as a clean install. Remember that this means that you will need to delete the AD computer account object and DNS records and then re-join the system to the domain in the correct OU.
  2. Once you have done this, you will need to re-create the Kerberos Constrained Delegation records for all Hyper-V nodes
  3. I was experiencing a problem where I could use Kerberos to Live Migrate or offline migrate to the Hyper-V 2016 host, however I could not migrate back unless I logged onto the 2016 node and used CredSSP to move it back again. Looking at the Windows Server 2008 R2 domain controller security logs, Kerberos authentication was failing. In the end the fix was to add a Delegation for the CIFS and ‘Microsoft Virtual System Migration Service’ delegation classes on the computer account object — TO ITSELF. Yes, if you have Computer Accounts HVNode01, HVNode02, HVNode03, the delegation tab for HVNode01 must include CIFS and MVSM entries in DNS and NetBIOS nomenclature for not only HVNode02 and HVNode03 but ALSO HVNode01 (itself). Once I did this, I could magically migrate the VMs back again.
  4. If you are using Jumbo Frames, remember to perform a test using the following command. If it doesn’t work, fix this before doing anything else
    ping <ipAddress> -l 8500 -f
  5. I made a silly mistake in late night PowerShell command entry when setting up the networking on the 2016 box, I entered
    add-vmnetworkadapter -managementos -Name Management

    when I actually meant to enter

    add-vmnetworkadapter -managementos -Name Management -SwitchName VS_Managmement

    This hooked up a new Virtual network adapter on the Hypervisor called ‘Management’ to each and every Virtual Switch on the Hypervisor. So I wound up with 3 NIC’s called Management all on different networks. They went off and got their own IP addresses from DHCP, registered themselves in DNS and created chaos in the adapter binding order. Naturally the one on the unrouted Management network wound up at the top of the binding order and things got a little upset!

  6. The very first randomly selected non-production critical VM that I attempted to migrate was the nodes local console VM. This VM was not designed to move from the node and didn’t have CPU compatibility mode enabled. This caused additional failure issues.
  7. The second randomly selected non-production critical VM that I attempted to migrate gave no hex error code or message whatsoever, either through the UI or the event log, just throwing Event ID 24024 and stating that the migration failed and the error message could not be found. To cut a long-winded story short, in the end I (correctly) assumed it was the VM itself at fault and decided to Export / Import it in order to lazily cycle the file system permissions. It turns out that when I attempted to re-import the VM (as a restore), the import wizard notified me that it was expecting to find a snapshot file but that the snapshot itself was unavailable (this VM had no snapshot in the UI and no snapshot file in the export snapshots folder). The wizard asked me if it could clear the snapshot remnant and imported the VM. Once it was imported again, it could now live migrate and offline migrate properly. It had nothing to do with the 2016 node.
    Note: Remember to check on the source Hypervisor for remnants of the original Exported VM which may be left in place on the file system.

With the above issues resolved, everything is working correctly between the Hyper-V Server 2012 R2 nodes and the test Hyper-V Server 2016 node.

Booting Windows Hyper-V Server from USB: Lessons From Practice

System Requirements

  • Windows Hyper-V Server 2008 R2
  • Windows Hyper-V Server 2012
  • Windows Hyper-V Server 2012 R2

The Problem

I recently wanted to explore the viability of pulling a Server RAID controller into a workstation. A few choice pieces of electrical tape to cover PCIe pins later and the card worked as intended… until it melted down a few minutes later.

The inevitable failure got me thinking. Like most enterprise hardware, the PERC 5/i and 6/i do not support processor power management. The onboard processor runs at 100% speed, 100% of the time. As a result the heat that it generated easily overwhelmed the modest airflow of a desktop. The thermals went well past 80 degrees C before it tripped out.

Most of the servers that we are running in one particular production stack were using the same controllers. Despite this, none of them were actually being used as RAID controllers. They were set as HBA/JBOD devices with a single drive attached – i.e. no disk redundancy. The reason why we have a production setup with such a bad design? These servers are clustered hypervisors. It doesn’t much matter if they burn out. There are 20 more to take their place and all actual client data is held within a fully redundant, complex storage network. An admin simply needs to replace the broken part, rebuild the OS and throw it back into the pool.

Cost Rationalisation

Was changing the design of these servers feasible? Each 10,000 RPM 70GB hard drive was at best using 20GB of data – and less than 15GB in most cases. Each of those drives is consuming 15-25w of power, making noise and never sleeping. At the same time each controller is consuming 6-18w of power and again, never sleeping. Both are adding to the heat being thrown down through the backplane and out into the hot aisle. All pretty much needlessly.

Based upon my domestic energy tariff, the potential per-server electricity cost saving stands to be between £3.29 and £4.38 per month, or £39.48 to £52.56 per year. This does not include any residual savings in air conditioning costs. While it doesn’t seem a lot, on a cluster of 20 servers that’s between £789.60 and £1051.20 per year. At that level the potential savings start to add up.

As an IT designer, it also gives me a budgetary value that I can rationalise any savings against. If we split the difference over 12 months between the upper and lower estimate we get a £46.02 average. If it costs more than that – particularly for old server hardware – it isn’t worth doing: so £46.02 became the ‘per-machine budget’ for my experiment.

Options to Consider

With that said, and on the understanding that there is no RAID redundancy involved in the setup I am (re)designing, there were four options to explore:

  1. Pull the RAID controller and attempt to utilise the DVD drive SATA connector with an SSD. This would solve the heat issue, solve the noise issue and reduce power consumption (to ~4w). It will also be faster than the 10,000 RPM rotational drive. The down side is that getting hold of affordable SSD’s (as of writing) isn’t yet an option. Not to mention that various adapters and extra cabling would be required to get the SSD mounted properly (at extra cost). Modifying new cable runs into 1u servers can often be a challenge (it’s bad enough in 3u). The Server BMC also complicates matters as under Dell, OpenManage will notice that you aren’t using a Dell approved drive and this will quickly hit your environmental reporting data. Approximate cost ~£70+ per server. Well over budget.
  2. Pull the RAID controller and mount a SSD/mSATA/m.2 into a PCI-e slot (even potentially the RAID controllers slot) on a PCIe adapter. This solves the cabling problem and has the added advantage of clearing both drive slots. It also means that I can control the bus specification, potentially getting a boost from a SATA III or NVMe controller. Of course this is more expensive although it is easier to get hold of smaller mSATA SSD’s than it is 2.5″ ones. Cost per-server ~£125+. Again, over budget.
  3. Look at SATA DOM or booting from Compact Flash/SD Card. SATA DOM isn’t an option for the PowerEdge 1950 and a NAND flash solution would require modification of the chassis. The headache of managing boot support would also be an issue, rendering this unrealistic.
  4. Pull the RAID controller, disk and boot the entire enclosure from USB. This solves pretty much all problems but does add one in that these servers do not have an internal USB port. The active OS drive would therefore need to be insecurely exposed and accessible within the rack. Think malicious intent through to “I need a memory stick… ah, no one will notice if I use that one”. The cost of an average consumer USB 3.0/ 16GB USB Flash Drive (UFD) is about £7 – and it just so happened that operations have boxes of new ones lying around for the pilfering fully authorised, fully funded project.

I decided to experiment with option 4 and started to investigate how to boot Hyper-V from USB.

How to

Running Hyper-V Server from a UFD is a supported mechanism (as long as you use supported hardware types and not a consumer off the shelf UFD like I am).

The main Microsoft article on this topic was written for Hyper-V Server 2008 R2, however a set of liner notes with hardware recommendations are also available for 2012/R2.

View: Run Hyper-V Server from a USB Flash Drive

View: Deploying Microsoft Hyper-V Server 2008 R2 on USB Flash Drive

 

So far, so good. The basic premise is that you use disk virtualisation and the Windows 7/8 boot loader to bootstrap the operating system. Hyper-V Server is installed into a VHD; the boot loader mounts the VHD and loads Windows as if it were any other Virtual Machine. The performance will suffer, but for Windows Server Core, this really doesn’t matter.

Microsoft states that USB 2.0 or higher must be used and that (for OEM redistributors) the UFD must not report itself as being ejectable.

Microsoft recommends the following drives:

  • Kingston DataTraveler Ultimate
  • Super Talent Express RC8
  • Western Digital My Passport Enterprise

The closest that I could find were 16GB Kingston DataTraveler G4’s. Based upon UserBenchmark data, these offer 45% lower performance vs. the DataTraveler Ultimate G3 and 131% lower write speeds. Similarly, USB3Speed reports that the G4 read/write is 102.86/31.48 MB/second on a USB 3.0 bus vs. 174.76/38.46 for the Ultimate G3. So there is a decisive bottleneck being introduced as a result of using a cheaper UFD model.

The Microsoft article recommends the use of 16GB UFDs rather than 8GB ones to allow for the installation of future updates, so I grabbed 4x 16GB DataTraveler G4 sticks and proceeded to prepare them to support the boot process.

View: UserBenchmark: DataTraveler Ultimate G3 vs. DataTraveler G4

View: USB3Speed

Check your Server for Suitability

The Microsoft article states that USB 2.0 is supported for Hyper-V Server USB Booting. I confirmed through empirical experimentation that the PowerEdge 1950 does support USB 2.0 and that its firmware supported booting from USB in a reliable, consistent way.

What I mean here is that you don’t want to have to go into an F12 boot menu every time you restart the server because the BIOS/UEFI will not automatically attempt to boot from the USB port. You should – as a matter of course – update your server firmware, including (but not limited to) the BIOS/UEFI, as a way to mitigate against any potentially solvable issues in this regard. Do remember however that in a clustered environment, you should normalise your hardware and firmware setup on all participating nodes before you set out to create the cluster.

In testing, the PowerEdge 1950 demonstrated that it could boot properly from the UFD without intervention. Thus with another tick in the box, the idea was looking increasingly more viable.

The USB Stick & VHD(X) Creation Process

I am not going to repeat the instructions for creating the bootable USB stick, they are clear enough on the Microsoft website. It is a shame that Microsoft closed the Technet Code Library meaning that you can no longer get access to the automated tool.

What I will add is that as I was installing Windows Hyper-V Server 2012 R2, I decided to attempt to convert the VHD to the newer VHDX format. The advantages here are nominal; better crash recovery and support for 4K drives are the main headlines. Regardless, I wanted to start with the latest rather than using the VHD as prescribed in the 2008 R2 creation guide.

It didn’t work. The boot loader seemed unable to read the VHDX file. Running the VHDX back through Hyper-V’s disk editor and into a VHD did however work. After some testing, I discovered that the issue was the VHD migration process. To use VHDX you must update the BCD using the Windows 8.1 version of BCDEdit and start with the Windows 8.1 boot loader. Repeating the process from scratch in a native VHDX did however result in a bootable OS.

I had initially started testing Hyper-V Server on an 8GB UFD. During the process, and having obtained a 16GB drive, I decided to expand the size of the VHD from 7 to 14GB. This was a mistake. The VHD expands fine, however Windows will not allow you to resize the VHD’s primary partition to fill the newly available space via the GUI or DiskPart. So unless you have access to partition management tools that can work with a mounted VHD(X), you will need to ensure that the size of the VHD is correct when you create it.

The file copy of the 14GB VHD file from the management computer onto the UFD (with write cache enabled) was excruciating. At around 12.8MB/s from a USB 2.0 port, it was achieving far less than the benchmarked speed of 31MB/s.

Windows file copy showing 12.8MB/s

Finally, I created a Tools folder on the root of each UFD and copied the Windows 8.1 x64 versions of:

  • ImageX.exe
  • bcdedit.exe
  • bcdboot.exe
  • bootsect.exe

I also copied the x86 version of a Microsoft utility called dskcache.exe into here. dskcache can be used to enable/disable write caching and buffer flushing on connected hard drives. You could directly inject these into the VHD if you wanted to, however if left on the UFD, they are serviceable.

Also note that this is your best opportunity to inject drivers into the VHD should you have any special hardware requirements.
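
As a hedged example (paths illustrative), with the VHD attached to the management machine and its Windows volume mounted as V:, drivers can be injected offline with DISM:

dism.exe /Image:V:\ /Add-Driver /Driver:"C:\Drivers\Dell" /Recurse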

The Results

USB 2.0

Despite the Microsoft article stating that USB 2.0 is supported, it became obvious within about 20 seconds of the boot process that something was not right. The time that it took to boot was agonising. Given the poor sustained file write speed shown above, this shouldn’t be overly surprising.

It took well over 60 seconds for the boot loader itself to start booting, let alone bootstrap the VHD and load the rest of the operating system. The initial boot time was about 25 minutes – although the system does have to go through OOBE and perform the driver and HAL customisation processes during the initial boot, so it isn’t very fair to be overly critical at this stage.

The next point of suffering was encountered at the lock screen. On pressing Ctrl + Alt + Del, a 15 second delay elapsed before the screen refreshed and offered the log-in text fields. After resetting the password and logging on, the blue Hyper-V Server configuration sconfig script took around 90 seconds to load. In short, the system was painfully unresponsive.

I had expected it to be sluggish – but I was not expecting it to be quite this bad.

Windows had loaded the UFD’s VHD file with the write cache enabled but buffer flushing (‘advanced features’) disabled. I thus used dskcache.exe to enable both settings.

dskcache +p +w

… and rebooted.

The boot time was around 4 minutes, the Ctrl + Alt + Del screen was still sluggish as was the login process – but it was certainly faster. Having completed the first Windows Update run, boot times to a password entry screen had reduced to a far more respectable 1 minute and 17 seconds. The sluggishness (while still there) had again reduced to 10-15 seconds from log-in to sconfig.

So what is the problem? There are certainly a lot of variables here:

  • The Datatraveler G4 does not offer the performance it is supposed to
  • The bus is USB 2.0
  • There is an artificial abstraction layer being imposed by the disk virtualisation process in and out of the VHD
  • Behind the scenes, Windows still likely thinks that this device is removable and is reacting accordingly
  • While the VHD upload process was a linear one that consisted of a single large file, the operating system will be making thousands of random seeks and random small writes. Random I/O and Linear I/O always offer different statistics – the latter being more synthetic than real world usage will otherwise offer.

8GB

As I mentioned previously, my original test with Hyper-V Server 2012 was on an 8GB UFD with a 7GB primary partition. After install, Hyper-V Server 2012 R2 consumes 2.98GB with no Page File. By the time Windows Update had scanned, downloaded and attempted to install updates – including the 870MB Windows Server 2012 R2 Update 1 (KB2919355) – there was only 154MB of free disk space available. It was unable to complete the installation as a result.

Having ascertained that I could not resize the partition post-creation, I recreated the VHDX once again from scratch onto one of the 16GB sticks.

16GB

Installing Hyper-V Server 2012 R2 into a 14GB VHD on a 16GB stick left plenty of available disk space. By the time that Windows Update had downloaded and subsequently attempted to install all available Windows Server 2012 R2 updates, there was 4.52 GB free.

At this point the hypervisor itself had still not been configured, nor had required support tools such as security software, Dell OpenManage or Dell EqualLogic Host Integration Tools been installed.

Therefore, as with the advice offered in the Microsoft article, do not attempt to run Windows Hyper-V Server 2012 R2 from anything smaller than a 16GB memory stick. If you do, you are going to encounter longevity and maintenance problems with your deployments. In practice you should not consider using anything smaller than 32GB. I can see a time within the next couple of years when the installation will have grown too large for a 16GB stick (as it already has for 8GB) to continue to self-update.

This is significant and should be something that you factor in during design: if Failover Cluster Manager spots a mismatched DSM driver version (i.e. an out-of-sync Windows Update state between cluster nodes in the case of the Microsoft driver), the validation will fail and Microsoft will not offer support for your setup. Being unable to install updates is therefore not a situation that you want for your clustered Hyper-V Server environments.

Windows Update

As a side note, it is worth pointing out that I ran Windows Update on the live UFD in the server while it was booted. One of the advantages of the UFD approach is that it is easy to keep a box of pre-configured UFDs in a drawer that can be grabbed as a fast way to stand up a new server, or to recover a server whose existing UFD has failed. Windows Update maintenance of these UFDs is made far easier if you use DISM to off-line service the VHDs and apply Windows Updates to the image before you even start to use the memory stick.

You can periodically update the box of UFDs to the latest patch revision, meaning that should you ever need to use one, you will have a far more up-to-date fresh install of the OS to hand – something that is significantly faster than performing on-line servicing.
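As a rough illustration (the drive letter and file name here are hypothetical), off-line servicing one of these spare VHDs follows the same DISM pattern used earlier to build the installation media:

dism.exe /Mount-Image /ImageFile:"E:\HyperV.vhdx" /Index:1 /MountDir:"C:\Mount"
dism.exe /Image:"C:\Mount" /Add-Package /PackagePath:"C:\hvs\Updates"
dism.exe /Unmount-Image /MountDir:"C:\Mount" /Commit

This mounts the VHDX straight off the UFD, applies whatever MSU/CAB files are sitting in the Updates folder and then commits the changes back into the image.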

Improving Performance

There were three choices at this point: abandon the project, focus on the UFD and buy the higher spec drive (£30 vs £7), or focus on the controller. Looking at prices, the controller was the cheaper option to explore.

USB 2.0 is an old technology. Its maximum theoretical bit rate is 480Mbps (that’s Megabits per second, not MegaBytes), which equates to 60MB/s (MegaBytes per second). If we compare this with USB 3.0, whose maximum theoretical bit rate is 5Gbps (Gigabits per second), or roughly 625MB/s, we can see a very clear route to better performance. In practice, USB 3.0 isn’t going to get anywhere near 625MB/s; however a quick trip to eBay revealed controller pricing of between £6 and £35, making it something that was easier to swallow inside my £46.02 budget.

After researching chipset options, I narrowed it down to three chips: the Etron EJ198, which seems to have the fastest benchmark figures; the Renesas (formerly NEC) D720202, the newer version of the D720201, which came in a close second; and finally the cheap and cheerful VIA Labs (VLI) 805-06 1501.

After further research, I found a lot of reports of compatibility issues with the Etron which, coupled with its higher price, meant I abandoned it. So I picked up a £6.45 VLI dual port card and a dual port Renesas card for £12.66, simply as a means of having two different chips to test with.

Total spend on project: £25.91. Still well within budget.

Before starting, I had a working theory that introducing the USB PCIe controller was going to break the BIOS’s ability to boot from the USB port. Despite extensive research, I was unable to find any controller cards online that stated the presence of an Option ROM to explicitly offer boot support. So ultimately I may have spent £25.91 for nothing, particularly as USB 3.0 may not be able to add anything to the already I/O constrained cheaper 16GB UFD; but at this point there was still £20.11 left in the budget which was available to use if chasing USB 3.0 turned out to be a red herring. Consequently I was able to pick up a Kingston DataTraveler Ultimate G3 for £16.99 from eBay to allow for a thorough exploration of both avenues.

Total spend on project: £42.90. Still £3.12 left in the budget for a cup of tea!

Kingston DataTraveler Ultimate G3

The first thing to note was that it is a far larger memory stick and as such is a lot more obvious and significantly more intrusive sitting on the rear I/O plane of the server. You would definitely want to internally mount this larger UFD simply to protect it from damage caused during routine maintenance and cable management activities.

I elected not to re-create the experiment from scratch with a full 30/31GB VHDX, so instead I copied the existing 14GB VHDX from the existing UFD. Over the USB 2.0 bus, a UFD to UFD copy resulted in an 18.1MB/s transfer speed – an immediate 5.3MB/s improvement. Repeating the file transfer from the hard drive onto the UFD increased this further to 24.6MB/s – an improvement of 11.8MB/s and nearly a doubling of the write speed onto the UFD.

Windows file copy showing 24.6MB/s

Testing the connection on the server’s USB 2.0 bus, the performance difference was immediate. While still occasionally lagging, and significantly slower than even a 7k rotational hard drive, its responsiveness was now at a point where I concluded that performance was acceptable – even on the USB 2.0 bus.

Rolling the VHDX back to an older, un-patched version of the image and having the server self-update was a better experience, with the update process lasting a few hours rather than all day.

I did however start to experience some operational problems with the higher specification drive. For example, while I had no problems with the cheaper drive (eventually) completing tasks, the DataTraveler Ultimate G3 could not complete some DISM servicing activities, citing “Error: 1726 The remote procedure call failed”. This could be illustrative of the start of a drive failure or some form of corruption in the VHD.

USB 3.0

The cheaper £6.45 USB 3.0 controller arrived first and I threw it into a PCIe 1x slot on the test system. I then retested the file copy on both the Ultimate G3 and the DataTraveler G4 to see if there was any improvement in performance.

Windows file copy showing 16.3MB/s

The DataTraveler G4 copied up at around the 16.3 MB/s mark. This is a 3.5 MB/s improvement over the 12.8 MB/s off the USB 2.0 controller, but nothing compared to the 24.6 MB/s of the DataTraveler Ultimate G3 on the USB 2.0 controller.

So what about the performance of the DataTraveler Ultimate G3 on the USB 3.0 bus? The result was quite phenomenal in comparison.

Windows file copy showing 88.2MB/s

88.2 MB/s, some 63.6 MB/s faster than the same drive on the USB 2.0 bus – some 705.6 Megabits per second. Not bad for a £6.45 VIA Labs chip from eBay!

As anticipated however, the lack of an Option ROM was the downfall of the experiment. The BIOS was unable to ‘see’ the USB 3.0 controller as an add-in device during POST and thus was unable to boot from it.

I attempted to create a dual USB boot solution where the VHDX file lived on a memory stick attached to the USB 3.0 bus and a second memory stick containing the boot loader sat in the motherboard’s USB 2.0 port. Sadly, no amount of tinkering could get the system to link one to the other: ‘No bootable device -- insert boot disk and press any key’.
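For completeness, the BCD configuration I was experimenting with looked roughly like the following – a minimal sketch only, where {guid} stands in for the identifier returned by the copy command and D: is the hypothetical drive letter of the USB 3.0 stick (and, to be clear, it never produced a bootable system on this hardware):

bcdedit /copy {default} /d "Hyper-V Server (VHDX on USB 3.0)"
bcdedit /set {guid} device vhd=[D:]\HyperV.vhdx
bcdedit /set {guid} osdevice vhd=[D:]\HyperV.vhdx
bcdedit /set {guid} detecthal on

The first command clones an existing boot entry; the remaining commands point the new entry’s boot and OS devices at the VHDX and leave HAL detection enabled.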

The second, more expensive Renesas USB 3.0 controller arrived around a week later. Just as with the cheaper VIA Labs controller, there was no possibility of getting it to boot directly either.

Writing onto the cheaper DataTraveler G4 via the Renesas controller actually managed a throughput of 19.3 MB/s. Repeating the test with the DataTraveler Ultimate G3 yielded a write speed of 93.0 MB/s, again showing an improvement over the VIA – albeit not a particularly significant one given that the card was double the price.

In summary the write speeds for performing the large file transfer of the VHD onto the memory stick are shown below.

Write Speed: MegaBytes per second (MB/s) – Higher is better

Memory Stick               USB 2.0    USB 3.0 (VIA)    USB 3.0 (Renesas)
DataTraveler G4            12.8       16.3             19.3
DataTraveler Ultimate G3   24.6       88.2             93.0

From a subjective point of view, the use of the DataTraveler Ultimate G3 on the USB 2.0 bus was “acceptable” – acceptable given what the system needed to do. The random read/write bottleneck can thus be concluded to be in the memory stick and not the controller itself.

Update 08/04/2019: The VIA controller only lasted around 6 months before it started causing system instability (blue screens). Shortly thereafter it died. The Renesas controller is still going strong!

 

Conclusion

So, having spent £42.90 on the experiment, what conclusions can be drawn?

Many of you have probably been shouting the obvious here: that the best way to reduce costs would be to obtain more efficient servers and consolidate the old ones into fewer appliances. This is true because newer servers:

  • Have more efficient, less power hungry, higher capacity components
  • Emit less heat
  • Have more efficient power supplies
  • Have newer, better fans
  • Can consolidate more virtual servers

In the real world most of us don’t work for Google or Microsoft, and we cannot get management to agree to write blank cheques. Neither can most start-ups, home lab builders, ‘hand-me-down’ dev-test environments or backup environments. The short of it is that if you want to save some money, reduce heat and in turn reduce noise (always useful in a home environment), a £40 – £50 saving a year can go a long way. So spending £42 wasn’t unreasonable.

USB 2.0 is ‘good enough’, especially for testing environments. There are clear performance advantages with USB 3.0, however you are going to need USB 3.0 boot support to make practical use of this technology. Even if you have that, you should consider other solutions such as a small SSD or SATA DOM before settling on USB 3.0. If you are in a position to add bootable USB 3.0 to your system, however, it is a very viable option.

The biggest headline from this process has been that not all UFDs are created equal. The wide and varied margin between different models from the same company was surprising – especially with both devices claiming USB 3.0 feature sets. The benchmark statistics are stark enough to prove that there is virtually no point in having USB 3.0 if you are going to use a low-end UFD.

For Hyper-V Server, with the correct investment in your UFD, you can make USB 2.0 suffice for your needs, as long as you realise that it will not be as fast as a rotational drive. Despite this, if you do not reboot your environment very often, it might just be good enough for your requirements.

For me personally, I will be migrating the testing cluster over to VHDX/UFD booting hypervisors. There is a cost-saving rationale that helps me to keep the testing devices running. On a more personal level, for home, I have created UFD devices for a couple of desktop machines in my lab, and these have been set up as off-line nodes in my cluster. The value here is that they can become hypervisors for a short time without interfering with the OS or drives; even more importantly, I do not have to worry about multi-booting. With these UFDs I plan on simplifying the maintenance process of the main environment so that I no longer need to have down time on my setup.

So why would you want to consider creating a UFD boot setup for your hypervisors? There are some advantages, just as there are clearly some disadvantages.

Advantages

  • There is potentially a financial saving to be made as a result of power consumption reduction. This is especially true for large clusters and whole racks of servers using shared storage
  • It is a very easy way to make a low-cost, reportable environmental sustainability push. This is particularly true if you are not yet able to dispose of your legacy hardware
  • It works well with Microsoft’s push towards the use of SMB 3.0 for low-cost Hyper-V shared storage setups for SMB’s
  • If you accept RAID as being unnecessary in a clustered environment, then in the event of a UFD failure you can easily keep a box of pre-configured UFDs in a drawer, allowing you to get the hypervisor up and running again and back into the cluster very quickly. Offline servicing can also be used to very easily keep the off-line UFD devices patched
  • Heat reduction was my main driver. By removing the hot RAID/SAS JBOD controllers there is a thermal saving. There is also potentially an area of additional cost saving in environmental cooling
  • It is extremely cheap to implement. Not specifying your new Hyper-V Server purchase with hard drives will more than pay for the cost and time of setting up the environment. Most new servers will have an internal USB port within the chassis and you can use this to your advantage for security. The UFD approach is cheaper than similar SSD/mSATA alternatives
  • Removing hard drives cuts down on power use, heat and noise. This is less important for the Enterprise, but for a small business or an average home/home lab user this might be a very important driver
  • The convenience of a UFD makes this a very good option to keep in mind for emergency planning/disaster recovery. You can throw a pre-configured UFD into any server or even a desktop and have it running a serviceable hypervisor within minutes, all without impacting the original server’s drives. Simply remove the UFD and reboot, and it goes back to doing whatever it was doing previously. This is potentially very useful for an SMB with limited resources that needs to service a running hypervisor without downtime: if you can temporarily promote a separate machine to be a hypervisor by plugging in a UFD and rebooting, you can creatively increase your organisational uptime
  • It is becoming difficult to purchase small (32/64GB) SSD drives while it remains easy to obtain smaller UFD’s. This saves money as you will not need to buy a 128GB SSD to support a 20GB requirement
  • You can use either the VHD or the newer VHDX format. VHDX offers better failure safeguards, 4k sector support and is the only real choice for UEFI setups

Disadvantages

  • There is no support for disk redundancy in the setup described in this article. If you require an OS underpinned by a mirror, then this is not something to consider and you should look at SSDs
  • Most Enterprise scale deployments will make use of scripted, rapidly provisioned PXE deployments of Hyper-V Server. The use of VHDX means that you will be unable to use these technologies
  • The UFD to VHD(x) abstraction process introduced by disk virtualisation adds a performance penalty
  • As has been demonstrated, UFD’s are slower than rotational drives and are considerably slower than SSD’s
  • The longevity of the UFD being used for this purpose is unknown. In the absence of reliable MTBF figures, most Enterprise users probably wouldn’t (and shouldn’t) consider it
  • Integration with server management tools such as OpenManage may be a problem for your OEM. This in turn may have an impact on support and warranty options.

 

In summary: for the average Enterprise user on primary production kit this may not be something that you want to consider. In some use cases, such as backup, testing or disaster recovery environments, there are clear advantages – especially if you are prepared to be creative!