Performance impact of 512 byte vs 4K sector sizes

When designing your storage subsystem on modern hardware, you will often be asked to choose between formatting with 512 byte or 4K (4096 byte) sectors. This article discusses whether there is any statistically observable performance difference between the two.

NB: Do not confuse the ext4 inode size with the LUN sector size. The inode size places a mathematical cap on the number of files that a file system can store and, by consequence, how large the volume can be. The sector size relates to how the file system interacts with the underlying physical hardware.

QNAP Sector Size selection
Sector Size selection on QNAP QTS 4.3.6

Method

  • A QNAP TS-1277XU-RP was populated with 8x WD Red Pro 7200 RPM WD6003FFBX-68MU3N0 drives (firmware 83.00A83) in bays 5 – 12
  • The storage shelf firmware was updated to QTS version 4.3.6.0923
  • A Storage Pool comprising all 8 disks in RAID 6 was configured, ensuring redundancy
  • A 4GB volume was added to allow QNAP app installation so that the system could finish installing
  • The disk shelf was rebooted after it had completed its own setup tasks
  • RAID sync was allowed to fully complete over the next 12 hours
  • Two identical 4096 GB iSCSI targets were created with identical configurations apart from one having 512 byte and the other 4k sector sizes
  • SSD caching was disabled on the storage shelf
  • 2x 10Gbps dedicated iSCSI Ethernet connections were made available through two Dell PowerConnect SAN switches, with each NIC on its own VLAN. 9k jumbo frames were enabled across the fabric
  • A Windows Server 2016 hypervisor was connected to the iSCSI targets and mounted the storage volumes, with iSCSI MPIO enabled in Round Robin mode – representing a typical hypervisor configuration
  • The two storage LUNs were formatted with 64K NTFS partitions (recommended for dedicated VHDX volumes) – see the sketch after this list
  • A Windows 10 VM was migrated onto each of the targets and the test performed using Anvil’s Storage Utilities 1.1.0.20140101. The VM had no live network connections. The SuperFetch and Windows Update services were disabled, preventing undesirable disk I/O. The VM was not rebooted between tests, had no other running tasks and had been idling for 6 hours prior to the test
  • No other tasks, load or data were present on the storage array
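
For reproducibility, the 64K formatting step can be scripted with the Windows Storage cmdlets. A minimal sketch – the disk number and volume label are illustrative and will differ on your system:

# Initialise the mounted iSCSI disk and format NTFS with a 64K allocation unit size
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'LUN-512'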

 

512 vs. 4K Performance Results

The results of the two tests are shown below.

Anvil Storage Utilities Screenshot with 512 byte results

"Anvil

IOPS                 512 byte         4K    4K Diff   4K Diff % +/-
Read   Seq 4MB         417.45     403.30     -14.15           -3.51
       4K             3001.56    3164.56    +163.00           +5.15
       4K QD4         6021.45    6006.70     -14.75           -0.25
       4K QD16       24228.16   24062.61    -165.55           -0.69
       32K            2742.39    2807.47     +65.08           +2.32
       128K           2628.86    2620.80      -8.06           -0.31
Write  Seq 4MB         233.20     230.79      -2.41           -1.04
       4K             2090.79    2165.45     +74.66           +3.45
       4K QD4         5976.18    5983.65      +7.47           +0.12
       4K QD16        8254.84    7874.67    -380.17           -4.83

 

Analysis and Recommendations

The results show that there is little difference between the two. Repeating the tests multiple times showed that the figures for both the 512 byte and 4K LUNs are within the margin of error of each other. A bias towards 512 byte was consistently present, but was not statistically significant.

The drives in the test disk array are 512e drives. 512e is an industry transition technology between native 512 byte and native 4K drives. 512e drives use physical 4K sectors on the platter, but their firmware presents 512 byte logical sectors, with an emulation layer converting between the two. This creates a performance penalty for writes that are not 4K-aligned, as the firmware must perform a read-modify-write cycle to re-map them. Neither LUN sector size will prevent this from occurring.
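
If you are unsure what your own drives report, Windows can show the logical and physical sector size per disk. For example, from PowerShell (Windows 8/Server 2012 or later):

# 512/512 = native 512 byte, 512/4096 = 512e, 4096/4096 = 4K native
Get-Disk | Select-Object Number, FriendlyName, LogicalSectorSize, PhysicalSectorSize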

My recommendations are:

  • If all of your drives are legacy 512 byte drives, only use 512
  • Should you intend to mount the LUN with an operating system that does not support 4K sectors, only use 512
  • In situations where you have 512e drives, you can use either. Unless you intend to clone the LUN onto 4K drives in the future, stick with 512 for maximum compatibility
  • Never create an array that mixes 512 and 4K disks. Ensure that you create storage pools and volumes accordingly
  • Where all of your drives are 4K, only use 4K
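
Whichever option you choose, it is worth verifying what the initiator operating system actually sees once the LUN has been mounted and formatted. On Windows, for example (drive letter illustrative):

fsutil fsinfo sectorinfo D:

The reported logical and physical bytes-per-sector values should match the sector size you selected when creating the LUN.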

 

WorkFolders Folder shows Sync Error even though its contents are fully synchronised

WorkFolders allows you to perform policy-based file synchronisation over HTTPS between corporate servers and BYOD or teleworker devices. This article discusses a workaround for a problem where an anonymous sync error appears on a directory despite all of its contents synchronising successfully.

Outline of the Problem

Assume the following directory structure

C:\Users\CompanyUser\WorkFolders\Documents\WorkFolders Test

The following files/folders are present within WorkFolders Test:

WorkFolders Test\Problem Folder
WorkFolders Test\Problem Folder\File in Problem Folder.docx
WorkFolders Test\This file is OK.txt

Windows Explorer displays a green circle with a tick for each file/folder object that is synchronised and a red circle with a cross for a faulted file/folder. After synchronising, the sync results will display as follows:

WorkFolders Test\Problem Folder [CROSS]
WorkFolders Test\Problem Folder\File in Problem Folder.docx [TICK]
WorkFolders Test\This file is OK.txt [TICK]

No errors are displayed in the Control Panel WorkFolders applet. There are no related errors in the client’s WorkFolders Management/Operational Event Viewer logs. No relevant errors are present in the file server’s SyncShare Operational/Reporting logs.
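
For completeness, the client-side logs can be checked quickly from PowerShell; a sketch, assuming the standard WorkFolders log names:

# Look for recent warnings/errors in the client-side WorkFolders Operational log
Get-WinEvent -LogName 'Microsoft-Windows-WorkFolders/Operational' -MaxEvents 100 |
    Where-Object { $_.LevelDisplayName -in 'Error','Warning' } |
    Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize -Wrap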

WorkFolders Error Screenshot - outer folder
The parent folder shows that its sub-folder has a sync error
WorkFolders Error Screenshot - inner folder
The contents of the errored folder are, however, correctly synchronised.

Analysis

Although WorkFolders indicates that the issue is caused by the “Problem Folder” directory, it is actually being caused by “File in Problem Folder.docx”.

The following symptoms will be true:

  1. Renaming “Problem Folder” will not fix the issue
  2. Altering the filename of “File in Problem Folder” will not fix the issue
  3. Changing the “File in Problem Folder.docx” file extension (e.g. to .txt) will not fix the issue
  4. Opening and saving the “File in Problem Folder.docx” will not solve the issue
  5. Moving “File in Problem Folder.docx” out of “Problem Folder” will clear the sync error, but the error will immediately migrate to the new location
  6. Rebooting the client computer will not help
  7. Restarting the server will not help
  8. The file does not have any connected temp files or lock files associated with it in the client file system

 

There is nothing wrong with the file itself. It is not corrupt, carrying a virus payload or violating any policy. It is my (unproven) belief that the record for the file in the WorkFolders synchronisation database is corrupt. None of the above steps alters that record in the WorkFolders client database, which is why the problem cannot be cleared by them.

 

Fixing the problem

Once you have identified the problem file(s), you can use one of the methods below to correct the error.

Save the file as a completely new file

  1. Open the file in its associated editor (e.g. Microsoft Word for docx files)
  2. File > Save as…
  3. Save the file in its original location, but with a different file name (do not overwrite the original)
  4. Delete the original file
  5. Allow WorkFolders to re-sync
  6. Rename the new file as required

This approach is easy for an end-user to perform, but can be very time consuming if you are troubleshooting a large number of such issues. It requires you to know which file is causing the problem in the first place.

 

Compression

  1. Compress the file using Windows Compressed folders (Right click > Send to… > Compressed (zipped) folder)
  2. Delete the original file
  3. Wait for the folder to re-sync and clear the error
  4. Extract the original file from the zip back into the desired location

This method will create a new record in the WorkFolders synchronisation database and the error will not reappear. You can use the technique to fix an entire folder structure without having to first identify the problem file. It is also easy for an end-user to perform.
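
The same steps can be scripted for a single file; a rough sketch, with an illustrative path and an arbitrary wait for the re-sync:

# Compress, delete the original, allow a re-sync, then extract it back
$file = 'C:\Users\CompanyUser\WorkFolders\Documents\WorkFolders Test\Problem Folder\File in Problem Folder.docx'
$zip  = "$file.zip"

Compress-Archive -LiteralPath $file -DestinationPath $zip
Remove-Item -LiteralPath $file
Start-Sleep -Seconds 300   # give WorkFolders time to sync the deletion
Expand-Archive -LiteralPath $zip -DestinationPath (Split-Path $file -Parent)
Remove-Item -LiteralPath $zip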

 

Move the file

  1. Move the file outside of the WorkFolders monitored file system. For example, move the file to C:\ or into the Recycle Bin
  2. Allow the original folder to re-sync and clear the error
  3. Return the original file to its original location and allow it to re-sync

Again, this method will create a new synchronisation record. In a managed environment this may be harder for an end-user to perform due to permissions. It is however easier for an administrator to perform as you can cut/paste the entire file structure out and then back into the WorkFolders sync root.

If you use this method, remember to move the file to a location on the same drive letter. If you do, the move will preserve permissions and file dates, and will not physically copy the underlying data to the new location (it just updates the MFT).
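
A sketch of this method in PowerShell – the paths are illustrative, and the holding folder must sit outside the sync root but on the same drive:

# Move the file out of the sync root, let the error clear, then move it back
$file    = 'C:\Users\CompanyUser\WorkFolders\Documents\WorkFolders Test\Problem Folder\File in Problem Folder.docx'
$holding = 'C:\Temp\WorkFoldersHolding'

New-Item -ItemType Directory -Path $holding -Force | Out-Null
Move-Item -LiteralPath $file -Destination $holding
Start-Sleep -Seconds 300   # give WorkFolders time to sync the removal
Move-Item -LiteralPath (Join-Path $holding (Split-Path $file -Leaf)) -Destination (Split-Path $file -Parent)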

Creating a Virtual TV Streaming Server

In 2019, streaming your TV entertainment has become so popular that it is almost the norm. Systems such as Plex and Kodi create easy to understand, consistent and familiar cross-platform environments in which the whole family can consume media.
IPTV is an extension of such systems, adding live broadcast playback and Personal Video Recorder (PVR) functionality. PVR adds the ability to watch, pause and record live TV, be it from aerial, satellite, cable or online sources. Many of these setups use a local TV tuner plugged into a stand-alone media centre device. But what if you want to provide TV to multiple media centre appliances simultaneously? And what if you want that system to be a virtual TV streaming server instead of a dedicated streaming PC?

This article discusses how to create such a setup.

 

Why Virtualise?

I have spoken to a number of hardware and software providers in the course of this experiment. One thing that has been consistent has been their response: first laughter, followed by a dismissive “why would you want to do that?”.

Virtualisation is the process of taking what would traditionally be the work of a physical computer, lifting it up and placing it – along with other workloads – onto another computer. This usually means that a single physical computer (a server) runs multiple, often different, operating systems at the same time.

If you want to provide an always-on TV experience to multiple devices, then by definition the TV server must itself always be on. In a non-virtual design – especially in a residential setting – such a TV server may sit idle for most of the day, until prime time.

Traditionally – and this is the way the industry sees it – you would introduce a dedicated TV server device. In an environment that already has an always-on “24/7” device – be it an existing server or NAS – virtualisation allows you to make use of that existing hardware, preventing you from having to introduce any new equipment. In essence, while your virtual TV server waits for prime time, the physical computer is doing other, more resource-efficient things.

Virtualisation can therefore save you physical space (be it on the floor or in a rack). It can reduce equipment noise, reduce heat and, most importantly, save power. It does so by encouraging you to spec correctly, leading to higher financial returns on equipment that you already own. So how do you create a virtual TV server?

 

Virtualisation Platform

If you want to create a virtual TV server, the platform that you choose will likely be the one you already have. It is easy to critique a solution and say that “you should be using something else”. Just as DVBLogic, Hauppauge and TBS have said I should use a physical device, I’ll get 50 emails telling me I should have used Proxmox, Unraid or Debian+KVM. I didn’t want to use those. I wanted to use Hyper-V.

Creating a virtual TV server is a lot easier in VMWare ESXi or KVM, where your hardware options are substantially broader due to feature maturity. For Hyper-V users, Discrete Device Assignment (DDA) – Hyper-V PCIe pass-through – was only introduced in 2016, and its PCI Express pass-through is not yet mature and is cripplingly limiting.

Hyper-V’s issues stem from Microsoft’s design decisions. DDA follows a very strict, standards-compliant implementation of the VT-d and PCI Express 3.0 specifications. In 2019, most consumer-level (non-data-centre) hardware is not manufactured to meet these standards. Complicating matters further, WHQL driver validation is not yet strict enough to ensure that drivers are fully compliant; this is where most DDA-related issues occur.

Hyper-V was designed to run Windows as efficiently as possible. This contrasts with its competitors, whose broader interest was to make the most efficient hypervisor platform on the market. DDA is a microcosm of Hyper-V’s core design limitations: Microsoft’s stated intention was to allow pass-through of select graphics cards, GPU accelerators and NVMe controllers, not to create a general-purpose PCIe pass-through solution. This in turn limits your TV tuner hardware options.

 

Choosing TV Tuners

Once you understand your platform, it is important to choose your hardware accordingly.

The first discriminator will be to choose what broadcast standard you require: be it DVB-T, DVB-T2, DVB-S, DVB-S2, DVB-C, DVB-C2 or legacy Analogue.

Equally important will be matching the capabilities of your platform to the hardware device.

VMWare & KVM

VMWare and KVM derivatives offer a broader set of compatible hardware than Hyper-V. KVM is far more forgiving than its competitors – especially when running on non-server hardware. The chances of success are also greater if you intend to run a Linux distribution within your virtual machine, rather than Windows.

I have had no luck with Hauppauge products in this regard; however, there are some reports of success online with TBS. Comparatively, TBS offers a wider range of products, along with open-source drivers. While out of reach of most users, this does offer the possibility of the community adding better support for virtualisation as platforms mature.

Reported examples of working hardware include the DVB-S2 TBS 6902 (see the comments and reviews section in the Amazon link). Despite the few examples of success, getting a PCIe tuner to work reliably will remain difficult until the tuner manufacturers migrate onto the PCIe 3.0 specification and are compelled (largely by Microsoft) to write compliant drivers.

If you wish to have a higher chance of success with lower risk, however, please follow my suggestions for Hyper-V.

Hyper-V

I was unable to get any PCIe tuner, from any manufacturer, to work under Hyper-V Discrete Device Assignment (DDA). Windows VMs would blue screen as soon as the kernel attempted to load the driver, while Linux VMs – although stable – could not initialise the hardware device. In one set of tests, I rendered the hypervisor’s parent partition unusable for further testing, as Hyper-V locked the hardware device and refused to release it.

After a full re-install the situation was resolved; however, my testing revealed that Windows Server 2019 has not provided any improvement in using DDA with this type of legacy-bus hardware.

The solution to the problem was ultimately USB 3.0.

It is likely that your server motherboard has USB 3.0 ports on it. It is important to understand that, in most cases, it is not possible to use these ports. The embedded USB controllers on motherboards cannot usually be released to a VM by your system’s IOMMU. Where they can be, it will be confusing as to which physical ports are in use, leading to difficulties in troubleshooting. Consequently, I suggest that you do not even try.

Using an inexpensive, off-brand PCIe USB 3.0 controller from eBay, I was able to achieve stable PCIe device pass-through with both Linux and Windows VMs under Hyper-V Server 2019. With this in place, it became possible to build a working virtualised TV server solution.
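
For anyone repeating this, the broad DDA procedure for the USB controller looked like the sketch below. The device filter, VM name and location path are illustrative and will differ on your hardware; the cmdlets are the standard Hyper-V DDA ones available from Windows Server 2016 onwards:

# Find the PCIe USB controller and its location path
$dev = Get-PnpDevice -PresentOnly | Where-Object FriendlyName -like '*USB 3.0*Controller*'
$locationPath = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId -KeyName 'DEVPKEY_Device_LocationPaths').Data[0]

# DDA requires the VM's automatic stop action to be TurnOff
Set-VM -VMName 'TV-Server' -AutomaticStopAction TurnOff

# Disable the device on the host, dismount it from the parent partition and assign it
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName 'TV-Server'

To reverse the process, use Remove-VMAssignableDevice followed by Mount-VMHostAssignableDevice.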

 

The Software Design

Running on Hyper-V Server 2019, I installed a trial of DVBLogic’s TV Mosaic 1.0 into a Windows 10 Pro 1809 virtual machine.

TV Mosaic Screen Shot
TV Mosaic showing available DVB-T2 (Freeview HD) channels

DVBLogic’s trial activation system is not designed to expect virtual machines. Over the 3-4 months that I was experimenting, I expired the trial activations for both TV Mosaic and its predecessor, DVBLink. No matter what server, VM or physical location I tried from, I was unable to activate the trial again. If you wish to activate a trial on a VM, you will need to contact DVBLogic until such time as they fix the issue.

 

The Hardware Design

Knowing that it would be necessary to replace my existing PCIe TV tuners with USB ones meant that I had to reconsider my design. In my original physical setup, the HVR-4400 provided access to DVB-S satellite channels, with the TBS-6205 providing DVB-T2 coverage.

At the time, the only USB device that I could find to substitute the satellite tuner was the then new . The DVB-S2 device was well over £100 at the time (it has subsequently reduced considerably) and I was not willing to experiment on such a high cost tuner.

As I intended to use DVBLogic’s TV Mosaic for the project, I chose the DVB-T2 DVBLogic TVButler. Asking DVBLogic to support any issues would be easier if the tuner was within their own range.

I did not want to run the TV signal down to the server rack, so chose to run the USB from the server to the signal amplifier in the attic. I purchased a good quality 5m USB 3.0 cable and a mid-cost 7 port powered hub. It was necessary to ensure that the hub used a USB type-B upstream connector to allow proper connectivity.

I already had the £12 USB 3.0 controller from a 2015 project. As will be discussed below, it is very important that the USB controller you pick has its own power connector on it. Do not rely solely on PCIe bus power.

The design was to run a single USB 3.0 5Gbps line into the attic to a powered USB 3.0 hub. The TVButler tuner would connect to the hub, and then take a short 2m coax run to the nearby signal amplifier. If the design worked, I would add additional tuners to the hub at a later time, possibly including restoring satellite connectivity.

USB 3 Hub and TV Tuners
StarTech ST93007U2C USB 3 Hub and 3x DVBLogic TVButler TV Tuners

 

The Final Specification

  • SuperMicro X11SPL-F
  • Intel Xeon Silver 4108
  • Noctua NH-U12S DX-3647, 120mm cooler for Intel Xeon LGA3647
  • Kingston Technology KSM26RD8/16MEI 16 GB DDR4 2666 MHz ECC, CL19, 2RX8
  • SuperMicro AOM-TPM-9670V-S Vertical TPM 2.0 Module
  • STW USB 3.0 PCIe dual port USB 3.0 5Gbps controller
  • StarTech ST93007U2C 7 Port USB 3.0 Powered Hub
  • 4x DVBLogic TVButler USB TV Tuners
  • LINDY Anthra Line 36744 USB 3.0 Type A to B Cable

 

Troubleshooting

The following are the two main issues that I encountered when implementing the virtual TV server.

Single Tuner Dropouts: USB Bus Power

The VM could see the TVButler tuner and it had a strong signal, but it would drop out after a few minutes of playback. The VM had to be rebooted to restore functionality. I removed the hub and extension cable and temporarily ran the signal down to the server rack; the issue persisted.

In my haste to minimise the hypervisor’s downtime, I had neglected to fit the USB 3.0 controller’s power connector. Despite using a mains-powered hub, the solution was unstable. After connecting the power supply, the issue went away completely and, in single-tuner mode, it was stable.

 

Multiple Tuner Dropouts: All hubs are not created equally

After purchasing several additional TVButler tuners, I set up the hub in the attic. Every 36 hours or so, I would discover that one or more of the tuners was missing from TV Mosaic. Further investigation revealed that the tuner was also missing from Windows Device Manager. Only 1 out of every 8 reboots would temporarily fix the problem.

The other 7 out of 8 restarts would usually result in the driver for the bottom TV tuner on the hub failing to load with “error 10”. Additional testing revealed that all of the tuners worked individually, as did the extension cable.

Even when it did work, HD channels would not play at all and SD channels would artefact as frequently as every 10 seconds.

The clue came from watching the hub while the VM rebooted. As the VM restarted, the ‘device present’ LEDs would flicker. When the reboot worked, the tuners would initialise in descending order and the LEDs remained lit. When it didn’t, the lights would enumerate randomly, flicker and, after a few seconds, the last device on the £24 RSHTECH 7 port powered hub would blink out.

Although mains powered, the flickering suggested that the hub didn’t have sufficient current to support the load. I swapped in a 2-port StarTech hub from my desk and, with 2 tuners present, had no issues. Returning the RSHTECH to Amazon, I ordered a StarTech ST93007U2C – at more than double the price.

The ST93007U2C worked perfectly. All of the tuners worked properly and there were no issues at reboot.

 

Conclusion

As I conclude this article, the system has been in place for nearly 2 months. I have licensed TV Mosaic onto the Windows 10 VM to get around the trial issues, and it has been performing as well as I had hoped.

The Windows VM’s current uptime is 31 days, 8 minutes and at no point during the last two months have I experienced any crashes from the VM or hypervisor. Picture quality is excellent and I have artificially stress tested it to well beyond even its worst case ‘general’ use several times – with all tuners playing back HD channels while TV Mosaic transcodes the streams.

To an Intel Xeon Silver 4108, this worst-case work load is virtually irrelevant.

At idle, the server sits at around 44w, with typical non-TV load pulling 52w. Turning the TV server VM on or off makes no difference to this figure. When TV is playing back, the figure may rise by 8-16w. Contrast this with the old physical server, which was drawing 60-80w at all times. As a Windows 10 machine, it couldn’t function as a true server, so the Xeon Silver server would also be on anyway, taking the mean idle load up to around 115w. The 71w saving (115w-44w) equates to an energy cost saving of just under £100 per year.
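
(For context on the arithmetic, assuming a unit price of around 15p/kWh: 71w × 24 hours × 365 days ≈ 622 kWh per year, which comes to roughly £93 – hence just under £100.)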

I spent £210.25 in total on this project, meaning that it will have paid for itself in fractionally over 2 years. If I factor in income from selling the old tuners and physical PC, I will have already broken even. So to DVBLogic, TBS and Hauppauge, all of whom queried the sanity of wanting to virtualise a TV server: you have your answer.

You can virtualise a TV server, even on Hyper-V, and if you already have an always-on “24/7” virtualisation stack, there is a good reason to do it.

Unable to update NuGet or Packages in Powershell due to “WARNING: Unable to download the list of available providers. Check your internet connection.”

When attempting to install or update PowerShell modules, NuGet or NuGet packages in PowerShell 5, you receive one or more of the following errors:

WARNING: Unable to resolve package source 'https://www.powershellgallery.com/api/v2/'.

The underlying connection was closed: An unexpected error occurred on a receive.

WARNING: Unable to download the list of available providers. Check your internet connection.

Equally, you may receive the same error when attempting to run a wget or Invoke-WebRequest command, e.g.

wget https://www.google.com/

You are unable to install/update the software component or make an outbound internet connection.

This issue may be especially prevalent on IIS installations serving HTTPS websites.

The Fix

Conventional troubleshooting is fairly well documented online:

  1. Ensure that you are actually able to open a https webpage in a web browser
  2. Ensure that your DNS is working correctly.
  3. Check to see whether wget can connect to a non-https site e.g.
    wget http://www.google.com/
  4. Check to see whether or not you need to use a proxy server. If so, you must configure PowerShell to use your proxy server before you proceed. This may require you to configure PowerShell with your proxy server credentials.
    $webclient=New-Object System.Net.WebClient
    $webclient.Proxy.Credentials = [System.Net.CredentialCache]::DefaultNetworkCredentials

A less obvious issue to explore relates to the default operating system security configuration for using SSL.

More Info

By default, Windows Server and Windows client will allow SSL3, TLS 1.0, TLS 1.1 and TLS 1.2. The .NET Framework is also configured to allow these protocols and, by default, any outbound request for an SSL site will attempt to use SSL3/TLS 1.0 as its initial protocol.

In secure environments, where system administrators have applied recommended best practice and disabled the use of SSL 2, SSL 3 and TLS 1.0, PowerShell is not currently clever enough to internally compare its configuration to that of the operating system. Consequently, when attempting to make an outbound HTTPS request in such an environment, PowerShell will attempt to use one of the older protocols that has been disabled by the operating system’s networking stack. Instead of re-attempting the request using a higher protocol, PowerShell will fail the request with one of the error messages listed at the beginning of this article.

As NuGet and Update-Module both attempt to make connections to Microsoft servers using HTTPS, they too will fail.

Encountering this issue on an SSL-enabled IIS install is more common, as it is more likely that system administrators will have applied best practice and disabled legacy encryption protocols on these servers; their public-facing, high-visibility role demands such a response.

To fix the issue there are two options:

  1. Reconfigure and reboot the system to re-enable client use of TLS 1.0 (and possibly SSL3) via
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\<protocol>\Client

    DisabledByDefault = 0
    Enabled = ffffffff (hex)

  2. Alternatively, you can set up each PowerShell environment so that the script itself knows not to use the legacy protocol versions. This is achieved via the following code, which restricts PowerShell to using only TLS 1.1 and TLS 1.2:
    [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls11,Tls12'
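
You can confirm the active setting and retry the failing operations in the same session, for example:

    # Confirm which protocols this session will now use
    [System.Net.ServicePointManager]::SecurityProtocol

    # Retry the failing operations
    Install-PackageProvider -Name NuGet -Force
    Update-Module

Note that the setting only persists for the current session, so scripts must set it before their first web request (or you can add the line to your PowerShell profile).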