Hyper-V Error Message Cheat-sheet

The aim of this article is to create a slowly growing cheat-sheet of Hyper-V Errors and known fixes.

Errors without Error Codes

These errors are UI errors that do not provide a Hex Error Code / Win32 Error Code.

There was an error during move operation.

Migration operation on ‘<VM Name>’ failed.

See 0x80041024
There was an error during move operation.

Virtual machine migration operation failed at migration source.

Failed to create folder.

See 0x80070005

 

Errors with Error Codes

The following errors have error codes either stated on GUI error messages, as part of PowerShell error messages or in the Hyper-V event logs in Event Viewer.

0x80041024

Event ID: 16000

The Hyper-V Virtual Machine Management service encountered an unexpected error: Provider is not capable of the attempted operation (0x80041024).

  1. One or both of the Hyper-V servers has an invalid or inaccessible IP address/network address on the “Incoming live migrations” list on the “Live Migrations” tab in Hyper-V settings
  2. The live migration network is down, the/a cable is unplugged or faulty
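
Both of these causes can be checked quickly from PowerShell on each host. A minimal sketch, assuming the Hyper-V PowerShell module:

# List the networks accepted for incoming live migrations on this host
Get-VMMigrationNetwork

# Confirm that migration is enabled and which authentication type is in use
Get-VMHost | Select-Object VirtualMachineMigrationEnabled, VirtualMachineMigrationAuthenticationType
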
0x80070002

Could not create backup checkpoint for virtual machine ‘<VM Name>’: The system cannot find the file specified. (0x80070002). (Virtual machine ID <VM GUID>).

For more on this, see my in-depth article.

  1. Check that you have enough disk space on the volume to perform the VSS snapshot. If your volume has less than 15% free space, try using Hyper-V Manager to change the snapshot directory to another volume – plug in an external NTFS-formatted hard drive if you have to.
  2. Check the permissions of the VHD stated in the error.
    icacls "C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\<VHD File>" /grant "NT VIRTUAL MACHINE\Virtual Machines":F
  3. Check that you can manually checkpoint/snapshot the VM while it is running (see the example after this list).
    In Hyper-V Manager or in PowerShell, force a checkpoint on the VM and then delete it and wait for it to merge back. If this works, you are not having a physical VSS issue. If it fails, you need to troubleshoot this and not the WSB error.
  4. Live Migrate the VM off of the current server and onto a different hypervisor, attempt the backup here, then bring it back to the original server and try again. This process will reset the permissions on the VM file set. If you cannot live or offline migrate the VM, then you need to troubleshoot this and not the WSB error.
  5. Ensure that all VHDX files associated with the VM are on the same storage volume and not spread across multiple Volumes/LUNs (be it individual disks, logical RAID or iSCSI disks). If they are, move them to a common location and retry the operation.
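
For step 3, the manual checkpoint test can be run from PowerShell with the Hyper-V module. A minimal sketch – the VM and checkpoint names are placeholders:

# Force a checkpoint on the running VM...
Checkpoint-VM -Name "MyVM" -SnapshotName "VSS-Test"

# ...then delete it and let Hyper-V merge it back in the background
Get-VMSnapshot -VMName "MyVM" -Name "VSS-Test" | Remove-VMSnapshot
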
0x80070005

‘General access denied error’ (‘0x80070005’)

For more on this, see my in-depth article.

  1. If you are using a remote management console (not the local Hyper-V console) to perform the migration, you must set up Kerberos Constrained Delegation for all of your Hyper-V hosts’ machine accounts in Active Directory (a scripted example follows this list).
  2. Ensure that both NetBIOS and fully qualified DNS entries exist in the constrained delegation list (click the “Expanded” check box to verify this), e.g. myserver1 and myserver1.mydomain.local, for both “CIFS” and “Microsoft Virtual System Migration Service”
  3. For Hyper-V 2008, 2008 R2, 2012 and 2012 R2 “Trust this computer for delegation to specified services only” should be used along with “Use Kerberos only”
  4. For Hyper-V 2016 and 2019 or a mixed environment “Trust this computer for delegation to specified services only” must be used along with “Use any authentication protocol”
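
Steps 1 and 2 can also be scripted against Active Directory. The following is a minimal sketch, assuming the RSAT ActiveDirectory module; the host and domain names are placeholders, and the process must be repeated for each Hyper-V host’s machine account:

# The Hyper-V host machine account being granted delegation rights
$computer = Get-ADComputer "MYSERVER1"

# NetBIOS and FQDN entries for the peer host, for both required services
$services = @(
    "cifs/MYSERVER2",
    "cifs/myserver2.mydomain.local",
    "Microsoft Virtual System Migration Service/MYSERVER2",
    "Microsoft Virtual System Migration Service/myserver2.mydomain.local"
)

# "Trust this computer for delegation to specified services only"
Set-ADObject -Identity $computer.DistinguishedName -Add @{'msDS-AllowedToDelegateTo' = $services}

# For 2016/2019 or mixed environments: "Use any authentication protocol"
# (omit this line for 2008-2012 R2 "Use Kerberos only" environments)
Set-ADAccountControl -Identity $computer -TrustedToAuthForDelegation $true
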
0x8007274C

Event ID: 20306

The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host ‘<Hypervisor FQDN>’: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (0x8007274C).

  1. The Live Migration firewall rules have not been enabled in Windows Firewall (or your third party firewall); see the PowerShell example after this list. This is especially easy to forget on Server Core and Hyper-V Server installs.
  2. No DNS server(s) or invalid DNS server(s) have been specified on the network configuration for the hypervisor
  3. Specified DNS server(s) are unreachable
  4. An intermediate firewall or IDS blocked the request
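
Points 1 to 3 can be checked from PowerShell on each host. A sketch – the peer FQDN is a placeholder, and 6600 is the default live migration port:

# Enable the built-in Hyper-V rule group, which includes the live migration rules (e.g. "Hyper-V (MIG-TCP-In)")
Enable-NetFirewallRule -DisplayGroup "Hyper-V"

# Check name resolution and reachability of the peer host
Resolve-DnsName "myserver2.mydomain.local"
Test-NetConnection "myserver2.mydomain.local" -Port 6600
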

How to compact VHDX files in the most efficient way

Have you ever run the Hyper-V “Edit Disk…” tool’s ‘compact’ action, only to discover that your VHD or VHDX disk file didn’t shrink? This article discusses how to optimise the chances of success when compacting your virtual hard drives on Windows Server.

 

Why Compact VHDX files?

Compacting virtual disk files is an activity that most Hyper-V administrators seldom perform. In most cases, the space savings are nominal and outweighed by the effort involved. A well-designed Hyper-V deployment will account for future storage optimisation and loading during the capacity planning phase, with scale-out design considerations being made at this stage too.

Compaction is not a fix for poor design.

If you are unfamiliar with what compaction does at the technical level, Altaro has a good introduction guide.

View: Altaro “Why You Should Be Compacting Your Hyper-V Virtual Disks”

In practice, compaction can help in two scenarios:

  1. Improving the speed and disk space use for non-VSS block-based backups
  2. Reducing the amount of data transferred during live migration

 

Why doesn’t Edit Disk… work?

It is common for the Hyper-V Manager Compact tool to achieve no reduction in the size of the VHDX file, putting many administrators off spending time running it.

Two problems exist with the GUI compaction wizard:

  1. The VHDX file is not in a pre-optimised state prior to compaction, reducing the success rate
  2. For performance reasons, execution through the wizard does not use the full range of compaction tools available to the system. These can currently only be accessed via PowerShell.

So how do you properly shrink a virtual disk?

 

Optimise your VM

Inevitably there will be some data inside the VM which is not necessary to retain. Removing this prior to compaction will automatically increase the amount of space that you save.

Optimising Windows

Under Windows, running the Disk Clean-up tool is a good place to start. Additional places to clear out include (a scripted example follows the list):

  1. C:\Windows\Temp
  2. C:\Windows\SoftwareDistribution\Download
    The SoftwareDistribution\Download folder will usually save over 1GB of space as it contains the installation sources for Windows Updates. Microsoft Update will re-download the files again if it needs them on the next update scan.
  3. C:\Users\<user>\AppData\Local\Temp
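
These locations can be cleared with a short PowerShell snippet. A sketch – run it inside the VM as an administrator; the Windows Update service is stopped so that the Download folder can be emptied cleanly:

# Clear the system temp folder
Remove-Item "$env:SystemRoot\Temp\*" -Recurse -Force -ErrorAction SilentlyContinue

# Stop Windows Update, purge its download cache, then restart it
Stop-Service wuauserv
Remove-Item "$env:SystemRoot\SoftwareDistribution\Download\*" -Recurse -Force -ErrorAction SilentlyContinue
Start-Service wuauserv

# Clear per-user temp folders
Remove-Item "C:\Users\*\AppData\Local\Temp\*" -Recurse -Force -ErrorAction SilentlyContinue
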

You can run defrag from within the VM; however, if you are in a position where you can compact the virtual disk offline, it is more time-efficient to defrag the VHDX while it is offline.

Optimising Linux

Optimisation should also be performed on Linux systems. Unlike Windows, Linux clears its own temp space on restart. You can also free space from the update process using your package manager. The example below frees space using apt:

sudo apt-get update       # refresh the package index
sudo apt-get autoremove   # remove packages that are no longer required
sudo apt-get autoclean    # remove superseded packages from the local cache
sudo apt-get clean        # empty the local package cache entirely

Under native Linux file systems, there is no concept of manually running a defragmentation; ext4 performs this automatically on behalf of the system. This optimisation does not, however, release free space in a “physical” fashion on the hard drive, and unless this “physical” space is released (zeroed out), Hyper-V will be unable to compact the disk.

Fortunately, it is possible to force Linux to zero-out unused disk space in the VM using the following commands:

su
cd /
cat /dev/zero > zero.dat ; sync ; sleep 1 ; sync ; rm -f zero.dat

This command creates a file on the drive mounted in the “cd /” line. It then writes zeros into this file until the system literally runs out of disk space, after which the zero file is deleted. If you are compacting a VHDX that is not the root partition, you must change the “cd /” line to the correct mount point, e.g. “cd /mnt/myDisk“.

Note: You will completely fill the volume for a few seconds. This will impact other write activities occurring on the disk and so can be considered to be a dangerous process.

 

Shrinking an Online VHDX

It is possible to perform an online compaction of a virtual disk; Hyper-V itself performs light-touch background optimisation automatically. If you cannot shut down a VM and can only optimise an online VHDX, then you must:

  1. Delete temp files
  2. Internally defragment the disk from within the VM itself
  3. Use Disk Management or diskpart to shrink the size of the mounted partition currently in use on the VHDX (a PowerShell alternative to the diskpart steps follows this list)
    diskpart
    list vol
    sel vol #
    shrink
  4. Perform the compact using Hyper-V Manager or PowerShell
    Optimize-VHD <path to VHDX> -Mode Full
  5. Reverse the reduction in the size of the partition using Disk Management or diskpart
    diskpart
    list vol
    sel vol #
    extend
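
If you prefer PowerShell to diskpart, the Storage module provides an equivalent for steps 3 and 5. A sketch, assuming the volume being shrunk is drive C inside the VM:

# Step 3: shrink the volume to its minimum supported size
$min = (Get-PartitionSupportedSize -DriveLetter C).SizeMin
Resize-Partition -DriveLetter C -Size $min

# Step 5 (after the compact): grow the volume back to its maximum size
$max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $max
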

 

Shrinking an Offline VHDX

The most efficient method to free space from a VHDX file is to perform the compact while it is offline. The disadvantage of an offline compaction is that the VM will need to be shut down prior to the operation. The amount of time that the VM will be down for is directly proportional to the amount of work required to complete the optimisation process. You can improve this through online defragmentation prior to starting, however this takes significant additional administrative effort.

The following steps outline the process that I use to greatest effect. I use X:\ as the example drive letter in this scenario and the use of PowerShell is expected.

  1. Get the initial VHDX size
    $sizeBefore = (Get-Item <path to VHDX file>).length
  2. {optionally} Defrag the VM while online
  3. Shutdown the VM
  4. Mount the VHDX in read/write mode (not read only mode)
    Mount-VHD <path to VHDX file>
  5. Purge Temp files and the contents of X:\Windows\SoftwareDistribution\Download
  6. Defragment the VHDX while mounted to the management system
    1. If the VHDX is stored on SSD drives
      defrag x: /x
      defrag x: /k /l
      defrag x: /x
      defrag x: /k
    2. If the VHDX is stored on Hard Drives (/x /k /l are retained for Trim capable SANs)
      defrag x: /d
      defrag x: /x
      defrag x: /k /l
      defrag x: /x
      defrag x: /k /l

      Note: Defrag /d can be particularly time-consuming
  7. Dismount the VHDX
    Dismount-VHD <path to VHDX file>
  8. Perform a full optimisation of the VHD file
    Optimize-VHD <path to VHDX file> -Mode Full
  9. Get the new size of the VHDX
    $sizeAfter = (Get-Item <path to VHDX file>).length
  10. Start the VM
  11. Find out how much disk space you saved (if any)
    Write-Host "Total disk space Saved: $(($sizeBefore - $sizeAfter) /1MB) MB" -ForegroundColor Yellow
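
The above lends itself to automation. The following is a minimal sketch of the sequence – the VM name and path are placeholders, and the temp file purge and defrag passes from steps 5 and 6 are elided:

$vmName  = "MyVM"
$vhdPath = "D:\Hyper-V\MyVM\MyVM.vhdx"

Stop-VM -Name $vmName
$sizeBefore = (Get-Item $vhdPath).Length

Mount-VHD -Path $vhdPath
# ...purge temp files and run the defrag passes against the mounted volume here...
Dismount-VHD -Path $vhdPath

Optimize-VHD -Path $vhdPath -Mode Full
Start-VM -Name $vmName

$sizeAfter = (Get-Item $vhdPath).Length
Write-Host "Total disk space saved: $(($sizeBefore - $sizeAfter) / 1MB) MB" -ForegroundColor Yellow
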

 

You will note that I repeat the steps of running defrag /x and defrag /k /l. In experimentation, the repetition appears to allow a small amount of additional space to be freed in some situations, as shown in the table below.

 

Results

Operation                      Size (bytes)      Efficiency   Purgable slabs
Size at start                  73,991,716,864    100%         11
/k Slab consolidation          69,965,185,024    100%         11
/l Retrim                      69,965,185,024    100%         11
/x Free space consolidation    69,965,185,024    100%         9
/x /l /k (repeat)              69,898,076,160    100%         9

The table shows the free space consolidation (/x) pass reducing the number of purgable slabs from 11 to 9, after which the repeated pass frees a further 67,108,864 bytes (64MB) – the two 32MB slabs.

 

Why not run Defrag /d on an SSD?

Defrag /d, aka “traditional defrag”, physically moves the file data to the end of its home partition, reconstructs each file as a contiguous series of blocks and moves the file back to the start of the disk. This process is unnecessary on an SSD: there is virtually no likelihood that the data is stored in a contiguous fashion on the SSD NAND flash, and there is no performance benefit to the file being stored contiguously. While you can perform defrag /d on an SSD, in reality you are needlessly shortening its cell-write life, and the step should be skipped.

 

Conclusion

It is unfortunate that the process of compacting a VHDX file is not a seamless one. To realise the highest returns, it is necessary to shut down the VM, which may not be practical in many scenarios. Equally, the amount of time required to perform the offline compact scales with the utilisation of the VHDX, the number of files and the number of tasks performed as part of the maintenance.

Done right, and with the help of script automation, it can be a valuable task – especially before planned VM moves. I regularly save over 130GB in total when draining a hypervisor for maintenance in my home lab – around 25-30 minutes less file copy time over 1Gbps Ethernet. A worthwhile saving as it only takes 20 seconds to execute the automation script that does the work for me.

Creating a Virtual TV Streaming Server

In 2019, streaming your TV entertainment has become so popular that it is almost the norm. Systems such as Plex and Kodi create easy to understand, consistent and familiar cross-platform environments in which the whole family can consume media.
IPTV is an extension of such systems, adding live broadcast playback and Personal Video Recorder (PVR) functionality. PVR adds the ability to watch, pause and record live TV; be it from aerial, satellite, cable or online sources. Many of these setups will use a local TV tuner plugged into a stand-alone media centre device. But what if you want to provide TV to multiple media centre appliances simultaneously? And what if you want that system to be a virtual TV streaming server instead of a dedicated streaming PC?

This article discusses how to create such a setup.

 

Why Virtualise?

I have spoken to a number of hardware and software providers in the course of this experiment. One thing that has been consistent has been their response: first laughter, followed by a dismissive “why would you want to do that?”.

Virtualisation is the process of taking what would be considered the work of a physical computer and lifting it up and placing it – along with other workloads – onto another computer. This usually means that a single physical computer (a server) runs multiple, often different, operating systems at the same time.

If you want to provide an always-on TV experience to multiple devices, then by definition this requires the TV server itself to always be on. In a non-virtual design, especially in a residential setting, the TV server may sit idle most of the day until prime time.

Traditionally – and the way that the industry sees it – you would introduce a dedicated TV server device. In an environment where you already have an existing always-on “24/7” device – be it an existing server or NAS – virtualisation allows you to make use of that hardware, preventing you from having to introduce any new equipment. In essence, while your virtual TV server waits for prime time, the physical computer is doing other, more resource-efficient things.

Virtualisation can therefore save you physical space (be it on the floor or in a rack). It can reduce equipment noise, reduce heat and, most importantly, save power. It does so by encouraging you to spec correctly, leading to higher financial returns on equipment that you already own. So how do you create a virtual TV server?

 

Virtualisation Platform

If you want to create a virtual TV server, the platform that you choose will likely be the one you already have. It is easy to critique a solution and say that “you should be using something else”. Just as DVBLogic, Hauppauge and TBS have said I should use a physical device, I’ll get 50 emails telling me I should have used Proxmox, Unraid or Debian+KVM. I didn’t want to use those. I wanted to use Hyper-V.

Creating a virtual TV server is a lot easier in VMWare ESXi or KVM; your hardware options are substantially broader due to feature maturity. For Hyper-V users, Discrete Device Assignment (DDA) – Hyper-V PCIe pass-through – was only introduced in 2016, and the robustness of its PCI Express pass-through is not yet mature and is cripplingly limiting.

Hyper-V’s issues stem from Microsoft’s design decisions. DDA follows a very robust, standards-compliant implementation of the VT-d and PCI Express 3.0 specifications. In 2019, most non-data centre, consumer-level hardware is not manufactured to support these standards. Complicating it further, WHQL driver validation is not yet strict enough to ensure that drivers are fully compliant, and this is where most DDA-related issues occur.

Hyper-V was designed to run Windows as efficiently as possible. This contrasts with its competitors, whose broader interest was to make the most efficient hypervisor platform on the market. DDA is a microcosm of Hyper-V’s core design limitations: Microsoft’s stated intention was to allow pass-through of select graphics cards, GPU accelerators and NVMe controllers, not to create a robust, general PCIe pass-through solution. This in turn limits TV tuner hardware options.

 

Choosing TV Tuners

Once you understand your platform, it is important to choose your hardware accordingly.

The first discriminator will be to choose what broadcast standard you require: be it DVB-T, DVB-T2, DVB-S, DVB-S2, DVB-C, DVB-C2 or legacy Analogue.

Equally important will be matching the capabilities of your platform to the hardware device.

VMWare & KVM

VMWare and KVM derivatives offer a broader set of compatible hardware than Hyper-V. KVM is far more forgiving compared to its competitors – especially when running on non-server hardware. The chances of success are also greater if you intend to run a Linux distribution within your Virtual Machine, rather than Windows.

I have had no luck with Hauppauge products in this regard; however, there are some reports of success online with TBS. Comparatively, TBS offers a wider range of products, along with open source drivers. While out of reach of most users, this does offer the possibility of the community adding better support for virtualisation as platforms mature.

Reported examples of working hardware include the DVB-S2 TBS 6902 (see the comments and reviews section in the Amazon link). Despite few examples of success, getting a PCIe tuner to work reliably will remain difficult until the tuner manufacturers migrate onto the PCIe 3.0 specification and are compelled (largely by Microsoft) to write compatible drivers.

If you wish to have a higher chance of success with lower risk, however, please follow my suggestions for Hyper-V.

Hyper-V

I was unable to get any PCIe tuner, from any manufacturer, to work under Hyper-V Discrete Device Assignment (DDA). Windows VMs would blue screen as soon as the kernel attempted to load the driver, while Linux VMs – although stable – could not initialise the hardware device. In one set of tests, I managed to render the hypervisor’s parent partition unusable for further testing, as Hyper-V locked the hardware device and refused to release it.

After a full re-install, the situation was resolved; however, my testing reveals that Windows Server 2019 has not provided any improvement in using DDA with this type of legacy-bus hardware.

The solution to the problem was ultimately USB 3.0.

It is likely that your server motherboard has USB 3.0 ports on it. It is important to understand immediately that, in most cases, it is not possible for you to use these ports. The embedded USB controllers on motherboards cannot usually be released to a VM by your system’s IOMMU. Where they can be, it will be confusing as to which physical ports are in use, leading to difficulties in troubleshooting. Consequently, I suggest that you do not even try.

Using an inexpensive, off-brand PCIe USB 3.0 controller from eBay, I was able to achieve a stable PCIe device pass-through with both Linux and Windows VMs under Hyper-V Server 2019. With this in place, it became possible to build a working virtualised TV server solution.
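
For reference, dedicating an add-in controller to a VM follows Microsoft’s standard DDA sequence. A sketch – the device search string and VM name are placeholders, it assumes a single matching device, and the VM must be off with its Automatic Stop Action set to “Turn off”:

# Find the add-in USB 3.0 controller and its PCIe location path
$dev = Get-PnpDevice -FriendlyName "*USB 3.0*" -Class USB | Select-Object -First 1
$path = ($dev | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).Data[0]

# Disable it on the host, dismount it, and hand it to the VM
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $path -Force
Add-VMAssignableDevice -LocationPath $path -VMName "TVServer"
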

 

The Software Design

Running on Hyper-V Server 2019, I installed a trial of DVBLogic’s TV Mosaic 1.0 into a Windows 10 Pro 1809 virtual machine.

[Image: TV Mosaic showing available DVB-T2 (Freeview HD) channels]

DVBLogic’s trial activation system is not designed to expect virtual machines. Over the 3-4 months that I was experimenting, I expired trial activations for both TV Mosaic and its predecessor, DVBLink. No matter what server, VM or physical location I tried from, I was unable to activate the trial again. If you wish to activate a trial on a VM, you will need to contact DVBLogic until such time as they fix the issue.

 

The Hardware Design

Knowing that it would be necessary to replace my existing PCIe TV tuners with USB ones meant that I had to reconsider my design. In my original physical setup, the HVR-4400 provided access to DVB-S satellite channels, with the TBS-6205 providing DVB-T2 coverage.

At the time, the only USB device that I could find to substitute the satellite tuner was then brand new. The DVB-S2 device was well over £100 at the time (it has subsequently reduced considerably) and I was not willing to experiment on such a high-cost tuner.

As I intended to use DVBLogic’s TV Mosaic for the project, I chose their DVB-T2 TVButler tuner; asking DVBLogic to support any issues would be easier if the hardware was within their own range.

I did not want to run the TV signal down to the server rack, so chose to run the USB from the server to the signal amplifier in the attic. I purchased a good quality 5m USB 3.0 cable and a mid-cost 7 port powered hub. It was necessary to ensure that the hub used a USB type-B upstream connector to allow proper connectivity.

I already had the £12 USB 3.0 controller from a 2015 project. As will be discussed below, it is very important that the USB controller you pick has its own power connector on it. Do not rely solely on PCIe bus power.

The design was to run a single USB 3.0 5Gbps line into the attic to a powered USB 3.0 hub. The TVButler tuner would connect to the hub, and then take a short 2m coax run to the nearby signal amplifier. If the design worked, I would add additional tuners to the hub at a later time, possibly including restoring satellite connectivity.

[Image: StarTech ST93007U2C USB 3 Hub and 3x DVBLogic TVButler TV Tuners]

 

The Final Specification

  • SuperMicro X11SPL-F
  • Intel Xeon Silver 4108
  • Noctua NH-U12S DX-3647, 120mm cooler for Intel Xeon LGA3647
  • Kingston Technology KSM26RD8/16MEI 16 GB DDR4 2666 MHz ECC, CL19, 2RX8
  • SuperMicro AOM-TPM-9670V-S Vertical TPM 2.0 Module
  • STW USB 3.0 PCIe dual port USB 3.0 5Gbps controller
  • StarTech ST93007U2C 7 Port USB 3.0 Powered Hub
  • 4x DVBLogic TVButler USB TV Tuners
  • LINDY Anthra Line 36744 USB 3.0 Type A to B Cable

 

Troubleshooting

The following are the two main issues that I encountered when implementing the virtual TV server.

Single Tuner Dropouts: USB Bus Power

The VM was able to see the TVButler tuner and it had a strong signal, but it would drop out after a few minutes of playback. The VM had to be rebooted to restore functionality. I removed the hub and extension cable and temporarily ran the signal down to the server rack. The issue persisted.

In my haste to minimise the hypervisor’s downtime, I had neglected to fit the USB 3.0 controller’s power connector. Despite using a mains powered hub, the solution was unstable. After connecting the power supply, the issue went away completely and, in single tuner mode, it was stable.

 

Multiple Tuner Dropouts: All hubs are not created equally

After purchasing several additional TVButler tuners, I set up the hub in the attic. Every 36 hours or so, I would discover that one or more of the tuners was missing from TV Mosaic. Further investigation revealed that the tuner was missing from Windows Device Manager. 1 out of every 8 reboots would temporarily fix the problem.

The other 7 out of 8 restarts would usually result in the driver for the bottom TV tuner on the hub failing to load with “error 10”. Additional testing revealed that all of the tuners worked individually, as did the extension cable.

When it did work, HD channels would not play at all and SD channels would artefact as frequently as every 10 seconds.

The clue came from watching the hub while the VM rebooted. As the VM restarted, the ‘device present’ LEDs would flicker. When the reboot worked, the tuners would initialise in descending order and the LEDs remained lit. When it didn’t, the lights would enumerate randomly, flicker and, after a few seconds, the last device on the £24 RSHTECH 7 port powered hub would blink out.

Although mains powered, the flickering suggested that the hub didn’t have sufficient current to support the load. I swapped in a 2-port StarTech hub from my desk and, with 2 tuners present, had no issues. Returning the RSHTECH to Amazon, I ordered a StarTech ST93007U2C – at more than double the price.

The ST93007U2C worked perfectly. All of the tuners worked properly and there were no issues at reboot.

 

Conclusion

As I conclude this article, the system has been in place for nearly 2 months. I have licensed TV Mosaic onto the Windows 10 VM to get around the trial issues, and it has been performing as well as I had hoped.

The Windows VM’s current uptime is 31 days, 8 minutes and at no point during the last two months have I experienced any crashes from the VM or hypervisor. Picture quality is excellent and I have artificially stress tested it to well beyond even its worst case ‘general’ use several times – with all tuners playing back HD channels while TV Mosaic transcodes the streams.

To an Intel Xeon Silver 4108, this worst-case work load is virtually irrelevant.

At idle, the server sits at around 44w, with typical non-TV load pulling 52w. Turning the TV server VM on or off makes no difference to this figure. When TV is playing back, this figure may rise by 8-16w. Contrast this with the old physical server, which was drawing 60-80w at all times. As a Windows 10 machine, it couldn’t function as a true server; consequently, the Xeon Silver server would also be on anyway, taking the mean idle load up to around 115w. The 71w saving (115w-44w) equates to an energy cost saving of just under £100 per year.
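
For the curious, the arithmetic behind that figure, assuming a unit rate of around 15p/kWh (my assumption): 0.071 kW × 24 h × 365 days ≈ 622 kWh per year, and 622 kWh × £0.15 ≈ £93 – just under £100.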

I spent £210.25 in total on this project, meaning that it will have paid for itself in fractionally over 2 years. If I factor in income from selling the old tuners and physical PC, I will have already broken even. So to DVBLogic, TBS and Hauppauge, all of whom queried with me the sanity of wanting to virtualise a TV server: you have your answer.

You can virtualise a TV server, even on Hyper-V, and if you already have an always-on “24/7” virtualisation stack, there is a good reason to do it.

Create a Slipstreamed Hyper-V Server 2019 installation image with working Remote Desktop for Administration

If you have been following the saga of the non-working Hyper-V Server 2019 release from November, you may be aware that the most prominent issue – that of Remote Desktop Services for Administration not working – has now been resolved in the February 2019 patch release cycle.

This article outlines how to create updated media for Hyper-V Server 2019 using the original installation medium and patch it into a working state.

Note from the author

Please note that if you intend to use Hyper-V Server in a production environment, you should wait for Microsoft to re-issue the official ISO. Once it is released, it will be made available in the Microsoft Server Evaluation Centre.

View: Microsoft Evaluation Centre

Pre-requisites

You will need access to a Windows 10, Windows Server 2016 or Windows Server 2019 system in order to update the installer.

Obtain and install the Windows ADK 1809 (or later), selecting the Deployment Tools option (providing you with an updated version of DISM)
Download: Windows Assessment & Deployment Kit (ADK)

Retrieve the original Hyper-V Server 2019 ISO
Download: Hyper-V Server 2019 (1809)

Download the following updates from the Microsoft Update Catalogue
Note: This is correct as of early March 2019. It is suggested that you apply newer cumulative and servicing updates as they are released in the future.

  1. KB4470788
  2. KB4482887
  3. KB4483452

View: Microsoft Update Catalogue

[Optional] If you wish to apply any language regionalisation (e.g. EN-GB), source the CAB file(s) for the language features that you require. For example:
Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~en-GB~10.0.17763.1.cab

Updating the Installation Image

To update the installation image:

  1. Create a folder on C:\ called ‘Mount’
  2. Add a second folder on C:\ called ‘hvs’
  3. In the hvs folder, create a subfolder called ‘Updates’
  4. Extract the entire contents of the ISO from the Hyper-V Server 2019 ISO into C:\hvs
  5. Place the three MSU files from the Microsoft Update Catalogue into the C:\hvs\Updates folder
  6. [Optionally] Place the CAB file for the language pack into the C:\hvs folder and for convenience rename it ‘lp.cab’
  7. Open an elevated Command Prompt
  8. Issue:
    cd /d "C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64"
    To navigate into the working folder for the updated version of DISM.exe
  9. Issue:
    dism.exe /mount-image /ImageFile:"C:\hvs\Sources\install.wim" /Index:1 /MountDir:"C:\Mount"
    To unpack the installation image into the C:\Mount folder
    Note: Do not navigate into this folder with CMD, PowerShell or Windows Explorer. If you leave a handle open against this folder when you try to re-pack the install.wim, it will fail.
  10. Once the mounting is complete, patch the installation by issuing:
    dism.exe /Image:"C:\Mount" /Add-Package /PackagePath:"C:\hvs\Updates"
  11. [Optional] Apply the language pack by issuing (change en-GB to your language as applicable):
    dism.exe /Image:"C:\Mount" /ScratchDir:"C:\Windows\Temp" /Add-Package /PackagePath:"C:\hvs\lp.cab"
    dism.exe /Image:"C:\Mount" /Set-SKUIntlDefaults:en-GB
    If you intend to use ImageX, DISM or WDS to deploy this image, you can skip the following command. If you intend to create a new bootable ISO or UFD, issue:
    dism.exe /image:"C:\Mount" /gen-langini /distribution:"C:\hvs"
    This will create a new Lang.ini file which must be included in the ISO/UFD media (but is not required for other deployment methods)
  12. Dismount and re-package the install.wim file by issuing:
    dism.exe /unmount-image /MountDir:"C:\Mount" /Commit
  13. Once DISM has processed the installation image, the new Install.wim file can be found at:
    C:\hvs\Sources\install.wim
  14. At this point you will have a working installation image which you can use to create a new ISO, UFD or install via WDS. You should delete the Updates folder and [optional] lp.cab from C:\hvs before creating a new ISO or bootable UFD.
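
To create a bootable ISO from the C:\hvs folder, you can use oscdimg.exe, which the ADK installs alongside DISM in the same Deployment Tools folder. The following is a sketch – the output file name is a placeholder:

oscdimg.exe -m -o -u2 -udfver102 -bootdata:2#p0,e,bC:\hvs\boot\etfsboot.com#pEF,e,bC:\hvs\efi\microsoft\boot\efisys.bin C:\hvs C:\HyperVServer2019_Updated.iso

Here -m ignores the maximum image size, -o optimises the storage of duplicate files, and the -bootdata entry makes the image bootable on both BIOS (etfsboot.com) and UEFI (efisys.bin) systems.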

If it goes wrong at any point, issue the following command to abort the process and go back and try again:
dism.exe /unmount-image /MountDir:"C:\Mount" /Discard

Delete the C:\Mount and C:\hvs folders once you have finished creating your new deployment media.

Final Word

If you follow the above, you will have not only a fixed RDP experience, but also a currently patched version of Hyper-V Server, eliminating a little time spent waiting for Windows Update to run.

If you are going to enable RDP for administration, as ever, do not forget to enable the firewall rule in PowerShell. SConfig.cmd does not do this for you!

Enable-NetFirewallRule -DisplayGroup "Remote Desktop"