When designing a storage subsystem on modern hardware, you will often be asked to choose between formatting with 512 byte or 4K (4096 byte) sectors. This article discusses whether there is any statistically observable performance difference between the two.
NB: Do not get confused between the EXT4 inode count and the LUN sector size. The inode count places a mathematical cap on the number of files that a file system can store, and by consequence how large the volume can usefully be. The sector size relates to how the file system interacts with the underlying physical hardware.
A QNAP TS-1277XU-RP was populated with 8x WD Red Pro 7200 RPM WD6003FFBX-68MU3N0 drives (firmware 83.00A83) in bays 5 – 12
Storage shelf firmware was updated to QTS version 4.3.6.0923, providing the latest platform enhancements
A Storage Pool comprising all 8 disks in RAID 6 was configured, ensuring redundancy
A 4GB volume was added to allow QNAP app installation so that the system could finish installing
The disk shelf was rebooted after it had completed its own setup tasks
RAID sync was allowed to fully complete over the next 12 hours
Two identical 4096 GB iSCSI targets were created with identical configurations, apart from one having 512 byte and the other 4K sector sizes
SSD caching was disabled on the storage shelf
2x 10Gbps dedicated iSCSI Ethernet connections were made available through two Dell PowerConnect SAN switches, with each NIC on its own VLAN. 9K jumbo frames were enabled across the fabric
A Windows Server 2016 hypervisor was connected to the iSCSI target and mounted the storage volume, with iSCSI MPIO enabled in Round Robin mode, representing a typical hypervisor configuration
The two storage LUNs were formatted with 64K NTFS partitions (recommended for dedicated VHDX volumes); a PowerShell sketch of these host-side steps follows this list
A Windows 10 VM was migrated onto each of the targets and the test performed using Anvil’s Storage Utilities. The VM had no live network connections. The SuperFetch and Windows Update services were disabled to prevent undesirable disk I/O. The VM was not rebooted between tests, had no other running tasks and had been idling for 6 hours prior to the test
No other tasks, load or data were present on the storage array
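For anyone scripting the host side of this setup, a minimal PowerShell sketch of the MPIO, iSCSI and formatting steps above follows. This is illustrative rather than the exact commands used; the portal address and drive letter are assumptions:

# Claim iSCSI LUNs with MPIO and default new LUNs to Round Robin (requires the MPIO feature)
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Register the storage shelf's portal and log in to the target persistently, with multipath enabled
New-IscsiTargetPortal -TargetPortalAddress "10.0.10.10"    # portal IP is an assumption
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true

# Once the LUN is online and lettered, format it with a 64K NTFS allocation unit size
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536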
512 vs. 4K Performance Results
The results of the two tests are shown below.
[Results table: 512 byte vs. 4K scores, with a diff % +/- column]
Analysis and Recommendations
The results show that there is little difference between the two. Repeating the tests multiple times showed that the figures for both the 512 byte and 4K LUNs are within the margin of error of each other. A bias towards 512 byte was consistently present, but was not statistically significant.
The drives in the test disk array are 512e drives. 512e is an industry transition technology between pure 512 byte and pure 4K drives. 512e drives use physical 4K sectors on the platter, while the firmware presents 512 byte logical sectors; a firmware emulation layer converts between the two. This creates a performance penalty during write operations due to the computation and delay of the re-mapping operation. Neither LUN sector size will prevent this from occurring.
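If you want to confirm what your own stack is presenting, Windows 8/Server 2012 and later can report a volume’s sector geometry. The drive letter below is an assumption:

fsutil fsinfo sectorinfo E:

In the output, compare LogicalBytesPerSector with PhysicalBytesPerSectorForAtomicity: 512 over 4096 indicates that 512e emulation is in play.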
My recommendations are:
If all of your drives are legacy 512 byte drives, only use 512
If you intend to mount the LUN with an operating system that does not support 4K sectors, only use 512
In situations where you have 512e drives, you can use either. Unless you intend to clone the LUN onto 4K drives in the future, stick with 512 for maximum compatibility
Never create an array that mixes 512 and 4K disks. Ensure that you create storage pools and volumes accordingly
In 2019, streaming your TV entertainment has become so popular that it is almost the norm. Systems such as Plex and Kodi create easy to understand, consistent and familiar cross-platform environments in which the whole family can consume media.
IPTV is an extension of such systems, adding live broadcast playback and Personal Video Recorder (PVR) functionality. PVR adds the ability to watch, pause and record live TV; be it from aerial, satellite, cable or online sources. Many of these setups will use a local TV tuner plugged into a stand-alone media centre device. But what if you want to provide TV to multiple media centre appliances simultaneously? And what if you want that system to be a virtual TV streaming server instead of a dedicated streaming PC?
This article discusses how to create such a setup.
I have spoken to a number of hardware and software providers in the course of this experiment. One thing that has been consistent has been their response: first laughter, followed by a dismissive “why would you want to do that?”.
Virtualisation is the process of taking what would be considered the work of a physical computer and lifting it up and placing it – along with other workloads – onto another computer. This usually means that a single physical computer (a server) runs multiple, often different, operating systems at the same time.
If you want to provide an always-on TV experience to multiple devices, then by definition this requires the TV server to itself always be on. In a non-virtual design – especially in a residential setting – the TV server may sit idle most of the day, until prime time.
Traditionally – and the way that the industry sees it – you would introduce a dedicated TV server device. In an environment where you already have an existing always-on “24/7” device – be it a server or a NAS – virtualisation allows you to make use of that hardware, preventing you from having to introduce any new equipment. In essence, while your virtual TV server waits for prime time, the physical computer is doing other, more resource efficient things.
Virtualisation can therefore save you physical space (be it on the floor or in a rack). It can reduce equipment noise, reduce heat and most importantly, save power. It does so by encouraging you to spec correctly; leading to higher financial returns on equipment that you already own. So how do you create a virtual TV Server?
If you want to create a virtual TV server, the platform that you choose will likely be the one you already have. It is easy to critique a solution and say that “you should be using something else”. Just as DVBLogic, Hauppauge and TBS have said I should use a physical device, I’ll get 50 emails telling me I should have used Proxmox, Unraid or Debian+KVM. I didn’t want to use those. I wanted to use Hyper-V.
Creating a virtual TV server is a lot easier in VMWare ESXi or KVM, where your hardware options are substantially broader due to feature maturity. For Hyper-V users, Discrete Device Assignment (DDA) – Hyper-V PCIe pass-through – was only introduced in 2016; its PCI Express pass-through is not yet mature and is cripplingly limiting.
Hyper-V’s issues stem from Microsoft’s design decisions. DDA follows a very robust, standards compliant implementation of the VT-d and PCI Express 3.0 specifications. In 2019, most non-data centre, consumer level hardware is not manufactured to support these standards. Complicating matters further, WHQL driver validation is not yet strict enough to ensure that drivers are fully compliant; this is where most DDA related issues occur.
Hyper-V was designed to run Windows as efficiently as possible. This contrasts with its competitors, whose broader interest was to make the most efficient hypervisor platform on the market. DDA is a microcosm of Hyper-V’s core design limitations: Microsoft’s stated intention was to allow pass-through of select graphics cards, GPU accelerators and NVMe controllers, not to create a general purpose PCIe pass-through solution. This in turn limits TV tuner hardware options.
Choosing TV Tuners
Once you understand your platform, it is important to choose your hardware accordingly.
The first discriminator will be to choose what broadcast standard you require: be it DVB-T, DVB-T2, DVB-S, DVB-S2, DVB-C, DVB-C2 or legacy Analogue.
Equally important will be matching the capabilities of your platform to the hardware device.
VMWare & KVM
VMWare and KVM derivatives offer a broader set of compatible hardware than Hyper-V. KVM is far more forgiving compared to its competitors – especially when running on non-server hardware. The chances of success are also greater if you intend to run a Linux distribution within your Virtual Machine, rather than Windows.
I have had no luck with Hauppauge products in this regard; however, there are some reports of success on-line with TBS. Comparatively, TBS offers a wider range of products, along with open source drivers. While out of reach to most users, this does offer the possibility of the community adding better support for virtualisation as platforms mature.
Reported examples of working hardware include the DVB-S2 TBS 6902 (see the comments and reviews section in the Amazon link). Despite these few examples of success, getting a PCIe tuner to work reliably will remain difficult until the tuner manufacturers migrate onto the PCI Express 3.0 specification and are compelled (largely by Microsoft) to write compliant drivers.
If you wish to have a higher chance of success with lower risk, however, please follow my suggestions for Hyper-V.
I was unable to get any PCIe tuner, from any manufacturer, to work under Hyper-V Discrete Device Assignment (DDA). Windows VMs would blue screen as soon as the kernel attempted to load the driver, while Linux VMs – although stable – could not initialise the hardware device. In one set of tests, I rendered the hypervisor’s parent partition unusable for further testing as Hyper-V locked the hardware device and refused to release it.
After a full re-install, the situation was resolved; however, my testing reveals that Windows Server 2019 has not provided any improvement in using DDA with this type of legacy-bus hardware.
The solution to the problem was ultimately USB 3.0.
It is likely that your server motherboard has USB 3.0 ports on it. It is important to understand immediately that, in most cases, it is not possible for you to use these ports. The embedded USB controllers on motherboards cannot usually be released to a VM by your system’s IOMMU gateway, and where they can be, it will be confusing as to which physical ports are in use, leading to difficulties in troubleshooting. Consequently, I suggest that you do not even try.
Using an inexpensive, off-brand PCIe USB controller from eBay, I was able to achieve stable PCIe device pass-through with both Linux and Windows VMs under Hyper-V Server 2019. With this in place, it became possible to build a working virtualised TV server solution.
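For reference, this style of pass-through uses the Hyper-V and PnpDevice PowerShell modules. A minimal sketch, run on the host, assuming the VM is called ‘TVServer’ (both the device filter and the VM name are assumptions):

# Locate the add-in USB controller and read its PCIe location path
$dev = Get-PnpDevice -PresentOnly -Class USB | Where-Object { $_.FriendlyName -like "*eXtensible Host Controller*" } | Select-Object -First 1
$locationPath = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Disable the device on the host, dismount it from the parent partition and assign it to the VM
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName "TVServer"

To hand the controller back to the host, reverse the process with Remove-VMAssignableDevice and Mount-VMHostAssignableDevice.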
The Software Design
Running on Hyper-V Server 2019, I installed a trial of DVBLogic’s TV Mosaic 1.0 into a Windows 10 Pro 1809 virtual machine.
DVBLogic’s trial activation system is not designed to expect virtual machines. Over the 3-4 months that I was experimenting, I expired trial activations for both TV Mosaic and its predecessor DVBLink. No matter what server, VM or physical location I tried from, I was unable to activate the trial again. If you wish to activate a trial on a VM, you will need to contact DVBLogic until such a time as they fix the issue.
The Hardware Design
Knowing that it would be necessary to replace my existing PCIe TV tuners with USB ones meant that I had to re-consider my design. In my original physical setup, the HVR-4400 provided access to DVB-S satellite channels, with the TBS-6205 providing DVB-T2 coverage.
At the time, the only USB device that I could find to substitute the satellite tuner was a then newly released DVB-S2 tuner. It was well over £100 at the time (the price has subsequently dropped considerably) and I was not willing to experiment on such a high cost device.
As I intended to use DVBLogic’s TV Mosaic for the project, I chose the DVB-T2 TVButler: asking DVBLogic to support any issues would be easier if the tuner was within their own range.
I did not want to run the TV signal down to the server rack, so chose to run the USB from the server to the signal amplifier in the attic. I purchased a good quality 5m USB 3.0 cable and a mid-cost 7 port powered hub. It was necessary to ensure that the hub used a USB type-B upstream connector to allow proper connectivity.
I already had the £12 USB 3.0 controller from a 2015 project. As will be discussed below, it is very important that the USB controller you pick has its own power connector on it. Do not rely solely on PCIe bus power.
The design was to run a single USB 3.0 5Gbps line into the attic to a powered USB 3.0 hub. The TVButler tuner would connect to the hub, and then take a short 2m coax run to the nearby signal amplifier. If the design worked, I would add additional tuners to the hub at a later time; including possibly restoring satellite connectivity.
The Final Specification
Intel Xeon Silver 4108
Noctua NH-U12S DX-3647, 120mm cooler for Intel Xeon LGA3647
STW USB 3.0 PCIe dual port USB 3.0 5Gbps controller
StarTech ST93007U2C 7 Port USB 3.0 Powered Hub
4x DVBLogic TVButler USB TV Tuners
LINDY Anthra Line 36744 USB 3.0 Type A to B Cable
The following are the two main issues that I encountered when implementing the virtual TV server.
Single Tuner Dropouts: USB Bus Power
The VM was able to see the TVButler tuner, and it had a strong signal, but it would drop out after a few minutes of playback; the VM had to be rebooted to restore functionality. I removed the hub and extension cable and temporarily ran the signal down to the server rack. The issue persisted.
In my haste to minimise the hypervisor’s downtime, I had neglected to fit the USB 3.0 controller’s power connector. Despite using a mains powered hub, the solution was unstable. After connecting the power supply, the issue went away completely and, in single tuner mode, the system was stable.
Multiple Tuner Dropouts: All hubs are not created equally
After purchasing several additional TVButler tuners, I set up the hub in the attic. Every 36 hours or so, I would discover that one or more of the tuners was missing from TV Mosaic. Further investigation revealed that the tuner was missing from Windows Device Manager. 1 out of every 8 reboots would temporarily fix the problem.
The other 7 out of 8 restarts would usually result in the driver for the bottom TV tuner on the hub failing to load with “error 10”. Additional testing revealed that all of the tuners worked individually, as did the extension cable.
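If you want to catch this state without opening Device Manager, Plug and Play can be queried from PowerShell inside the VM; “error 10” surfaces as the device’s problem code. A quick sketch ($instanceId is a placeholder for the failing tuner’s instance ID):

# List present devices that are in an error state
Get-PnpDevice -PresentOnly -Status Error | Format-Table FriendlyName, InstanceId

# Read the problem code for a specific device (10 = device cannot start)
(Get-PnpDeviceProperty -InstanceId $instanceId -KeyName DEVPKEY_Device_ProblemCode).Data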
When it did work, HD channels would not play at all and SD channels would artefact as frequently as every 10 seconds.
The clue came from watching the hub while the VM rebooted. As the VM restarted, the ‘device present’ LEDs would flicker. When the reboot worked, the tuners would initialise in descending order and the LEDs remained lit. When it didn’t, the lights would enumerate randomly, flicker and, after a few seconds, the last device on the £24 RSHTECH 7 port powered hub would blink out.
Although mains powered, the flickering suggested that the hub didn’t have sufficient current to support the load. I swapped in a 2 port StarTech hub from my desk and, with 2 tuners present, had no issues. Returning the RSHTECH to Amazon, I ordered a StarTech ST93007U2C 7 port powered hub – at more than double the price.
The ST93007U2C worked perfectly. All of the tuners worked properly and there were no issues at reboot.
As I conclude this article, the system has been in place for nearly 2 months. I have licensed TV Mosaic onto the Windows 10 VM to get around the trial issues, and it has been performing as well as I had hoped.
The Windows VM’s current uptime is 31 days, 8 minutes and at no point during the last two months have I experienced any crashes from the VM or hypervisor. Picture quality is excellent and I have artificially stress tested it to well beyond even its worst case ‘general’ use several times – with all tuners playing back HD channels while TV Mosaic transcodes the streams.
To an Intel Xeon Silver 4108, this worst-case work load is virtually irrelevant.
At idle, the server sits at around 44w, with typical non-TV load pulling 52w. Turning the TV server VM on or off makes no difference to this figure. When TV is playing back, the figure may rise by 8-16w. Contrast this with the old physical server, which was drawing 60-80w at all times. As a Windows 10 machine, it couldn’t function as a true server, so the Xeon Silver server would also have been on anyway, taking mean idle load up to around 115w. The 71w saving (115w-44w) – roughly 622kWh over a year of continuous running – equates to an energy cost saving of just under £100 per year.
I spent £210.25 in total on this project, meaning that it will have paid for itself in fractionally over 2 years. If I factor in income from selling the old tuners and physical PC, I will have already broken even. So to DVBLogic, TBS and Hauppauge, all of whom queried the sanity of wanting to virtualise a TV server: you have your answer.
You can virtualise a TV server, even on Hyper-V and, if you already have an always-on “24/7” virtualisation stack, there is a good reason to do it.
If you have been following the saga of the non-working Hyper-V Server 2019 release from November, you may be aware that the most prominent issue – that of Remote Desktop Services for Administration not working – has now been resolved in the February 2019 patch release cycle.
This article outlines how to create updated media for Hyper-V Server 2019 using the original installation medium and patch it into a working state.
Note from the author
Please note that if you intend to use Hyper-V Server in a production environment, you should wait for Microsoft to re-issue the official ISO. Once it is released, it will be made available in the Microsoft Server Evaluation Centre.
Download the following updates from the Microsoft Update Catalogue. Note: this is correct as of early March 2019; it is suggested that you apply newer cumulative and servicing updates as they are released.
[Optional] If you wish to apply any language regionalisation (e.g. EN-GB), source the CAB file(s) for the language features that you require. For example: Microsoft-Windows-Server-LanguagePack-Package~31bf3856ad364e35~amd64~en-GB~10.0.17763.1.cab
Updating the Installation Image
To update the installation image:
Create a folder on C:\ called ‘Mount’
Add a second folder on C:\ called ‘hvs’
In the hvs folder, create a subfolder called ‘Updates’
Extract the entire contents of the Hyper-V Server 2019 ISO into C:\hvs
Place the three MSU files from the Microsoft Update Catalogue into the C:\hvs\Updates folder
[Optional] Place the CAB file for the language pack into the C:\hvs folder and, for convenience, rename it ‘lp.cab’ (a PowerShell sketch of these preparation steps appears after the procedure)
Open an elevated Command Prompt
Issue: cd /d "C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\Deployment Tools\amd64"
To navigate into the working folder for the updated version of DISM.exe
Issue: dism.exe /mount-image /ImageFile:"C:\hvs\Sources\install.wim" /Index:1 /MountDir:"C:\Mount"
To mount the installation image into the C:\Mount folder. Note: Do not navigate into this folder with CMD, PowerShell or Windows Explorer. If you leave a handle open against this folder when you try to re-pack the install.wim, it will fail.
Once the mounting is complete, patch the installation by issuing: dism.exe /Image:"C:\Mount" /Add-Package /PackagePath:"C:\hvs\Updates"
[Optional] Apply the language pack and set the international defaults by issuing (change en-GB to your language as applicable):
dism.exe /Image:"C:\Mount" /ScratchDir:"C:\Windows\Temp" /Add-Package /PackagePath:"C:\hvs\lp.cab"
dism.exe /Image:"C:\Mount" /Set-SKUIntlDefaults:en-GB
If you intend to use ImageX, DISM or WDS to deploy this image, you can skip the following command. If you intend to create a new bootable ISO or UFD, issue: dism.exe /image:"C:\Mount" /gen-langini /distribution:"C:\hvs"
This will create a new Lang.ini file which must be included in the ISO/UFD media (but is not required for other deployment methods)
Dismount and re-package the install.wim file by issuing: dism.exe /unmount-image /MountDir:"C:\Mount" /Commit
Once DISM has processed the installation image, the new install.wim file can be found at C:\hvs\Sources\install.wim
At this point you will have a working installation image which you can use to create a new ISO, UFD or install via WDS. You should delete the Updates folder and [optional] lp.cab from C:\hvs before creating a new ISO or bootable UFD.
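To build a bootable ISO from C:\hvs, the ADK’s Oscdimg tool (in the Oscdimg subfolder of the same Deployment Tools directory used earlier) can package the folder as dual BIOS/UEFI boot media. A sketch; the output filename is an assumption:

oscdimg.exe -m -o -u2 -udfver102 -bootdata:2#p0,e,bC:\hvs\boot\etfsboot.com#pEF,e,bC:\hvs\efi\microsoft\boot\efisys.bin C:\hvs C:\hvs2019-updated.iso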
If it goes wrong at any point, issue the following command to abort the process and go back and try again: dism.exe /unmount-image /MountDir:"C:\Mount" /Discard
Delete the C:\Mount and C:\hvs folders once you have finished creating your new deployment media.
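If you prefer to script the preparation steps from the start of this procedure, a PowerShell sketch follows. The ISO path and the MSU download folder are assumptions:

# Create the working folders
New-Item -ItemType Directory -Path C:\Mount, C:\hvs\Updates -Force | Out-Null

# Mount the Hyper-V Server 2019 ISO and copy its entire contents into C:\hvs
$drive = (Mount-DiskImage -ImagePath "C:\ISO\hyperv2019.iso" -PassThru | Get-Volume).DriveLetter
Copy-Item -Path "${drive}:\*" -Destination C:\hvs -Recurse
Dismount-DiskImage -ImagePath "C:\ISO\hyperv2019.iso"

# Files copied from an ISO carry the read-only flag; clear it so DISM can commit changes
Get-ChildItem C:\hvs -Recurse -File | ForEach-Object { $_.IsReadOnly = $false }

# Move the three MSU files into place
Move-Item -Path "C:\Downloads\*.msu" -Destination C:\hvs\Updates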
If you follow the above, you will have not only a fixed RDP experience, but also a currently patched version of Hyper-V Server, eliminating a little time spent waiting for Windows Update to run.
If you are going to enable RDP for Administration, as ever, do not forget to enable the firewall rule in PowerShell. SConfig.cmd does not do this for you!
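For reference, the rule in question:

Enable-NetFirewallRule -DisplayGroup "Remote Desktop"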
The release of the Windows Server 2019 family has not gone well for Microsoft. Hitting RTM in September 2018 (1809), the release became available through the different distribution channels (VL, MPN, Visual Studio) from late October 2018, before being promptly withdrawn from public distribution and re-packaged.
With Windows Server 2019, Microsoft chose to skip the RTM phase and sent the code directly to General Availability (GA). The mainstream 1809 branch was blighted – not necessarily fairly – by a widely publicised data-loss bug: under some circumstances, upgrade installs of Windows 10 1809 would erase the contents of redirected Known Folders. As a precautionary measure, Microsoft withdrew Windows Server 2019, which shares the same code base as Windows 10, reissuing both with fixes on 13th November 2018.
Despite this, there has been a spate of other reported issues in the Windows Server 2019 release. Examples include:
Third party OEMs having seemingly been caught by surprise by the release of a new Server OS, leaving immature or no driver infrastructure in place (e.g. Intel RSTe CPU utilisation issues using 2016 drivers)
Slow online live migrations between Windows Server 2019 hosts (offline migration occurs at full speed)
After installing Hyper-V Server from the ISO, the Windows Boot Loader will always display an operating system selection list on some UEFI systems
Microsoft were vocally chided by the user community for taking until 17th January 2019 to restore the evaluation ISO for the regular release of Windows Server 2019, as well as for failing to comment on the situation with Hyper-V Server 2019. This irked many IT professionals, who lost the Christmas testing period and with it opportunities for initial product evaluation ahead of the New Year. The delay will cause a deferral of adoption in many organisations; this is especially true for Exchange Server 2019, which requires Windows Server 2019 as a pre-requisite.
Unlike the Windows Server 2019 evaluation ISO, the Hyper-V Server 2019 ISO did not return, and to date (2nd March 2019) it remains unavailable.
Hyper-V Server 2019 1809
As with the full version of Windows Server 2019, Hyper-V Server 2019 was only available for a matter of days before its withdrawal.
The availability of the ISO was fleeting for several key reasons.
Hyper-V Server 2019’s shared codebase means that it is impacted by the redirected folders bug; however, as a rule, you should not be upgrade installing Hyper-V Server. See my article on upgrade installing Hyper-V Server for reasons why this is not advised.
Remote Desktop for Administration does not work at all in the release. This has been resolved via KB4482887 from the 2019-02 Patch Tuesday. NB: Don’t forget to enable the firewall rule: Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
sconfig.cmd no longer loads automatically after joining a domain (this is true on 2019 Server Core also).
When installed in Hyper-V, the Virtual Machine Connection display fails to draw correctly, leading to frequent display corruption.
Windows Server 2019 has had a rough start. Many commentators are expressing concerns that Microsoft does not yet possess the experience to successfully navigate the SaaS landscape with Windows. Observationally, QA has been in decline at Microsoft for some years, but up until now we have not seen such deficits impact the mainstream Server ecosystem.
With Microsoft posting a resolution to the Remote Desktop for Administration issue via KB4482887, it is logical that Microsoft will seek to re-release Hyper-V Server 2019 in the coming days and weeks. Microsoft, for their part, have made no comment on the issues with Hyper-V Server 2019, or any plan for its re-release, so this remains speculation; but the community may see it under a 1903 re-packaging, hopefully as early as next week.
In the interim, you should continue to avoid using the original release ISO in your test labs.