Change the Kodi TV Guide / Programme Listing Font Size under Kodi 17 / Krypton

System Requirements:

  • Kodi Krypton, 17.0, 17.1, 17.2, 17.3, 17.4

The Problem:

If you are a Kodi user, you will no doubt have noticed that with the change from version 16 to 17, the XBMC Foundation changed the default skin from Confluence to Estuary. While Estuary is a nice looking skin, with Kodi 17 the program has become more limited in the font customisations that are available through the skin/UI settings menu. Whereas under previous versions you had some pre-defined normal, large and extra large style choices, under version 17 all you can do is switch between the default font and the system default font, under the guise of the latter having higher legibility.

If your use case is such that you have a TV that is a long way away, or more importantly, if you need to support a user with accessibility needs who relies on Kodi's ability to resize text in order to navigate the system, then you are somewhat out of luck at the current time.

This article outlines how to change the font size on the Kodi TV Guide/Programme Listings page by modifying the UI skin configuration files directly. It is also intended as a very high level illustration of how to modify other UI elements.

The Fix

The process of changing the font size is fairly easy, once you know what you are doing, and breaks down into three steps:

  1. Find the correct configuration file
  2. Establish what your legal parameter values are
  3. Edit the configuration file in the correct place

Find the correct configuration file

The Estuary skin configuration and interface files are located at .\addons\skin.estuary\, relative to the install path. For example, on Windows, this is most likely:

C:\Program Files (x86)\Kodi\addons\skin.estuary\

The key repository that you need to review is the xml folder (C:\Program Files (x86)\Kodi\addons\skin.estuary\xml\).

In this folder, you will find the layout parameter files for Kodi. For the purposes of this guide, the file that we need to edit is MyPVRGuide.xml. You need to identify the correct file for the section that you want to edit in order to proceed.

If you are running Windows and User Account Control (UAC) is enabled, copy the file to your desktop and then make a backup copy of it. If you try to edit the file directly under Program Files (x86), you will get an access denied error when you attempt to save the changes.
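If you prefer to do this from the command line, a minimal PowerShell sketch (assuming the default install path and the MyPVRGuide.xml file used in this guide) would be:

# Copy the skin layout file to the desktop and keep a backup of the untouched original
$source = "C:\Program Files (x86)\Kodi\addons\skin.estuary\xml\MyPVRGuide.xml"
$working = "$env:USERPROFILE\Desktop\MyPVRGuide.xml"
Copy-Item -Path $source -Destination $working
Copy-Item -Path $working -Destination "$working.bak"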

Establish what your legal parameter values are

Note: If you just want to edit the font and don’t care about the ‘how’, you can skip this section. Just realise that you cannot put any value that you want in the font size field when editing the skin.

For the font size, Kodi does not use an integer or point value (12, 36, 48 etc.). Instead, it uses an XML enumerable type definition in its DTD representing the allowed values. This means that you need to know what the allowed values are, and that if you want to use a non-standard value, you have to perform a far more complex series of edits to allow Kodi to support a new font size. As this change is something that you will likely need to make every time Kodi receives an official update, I strongly advise against attempting to create your own DTD value, and consequently how to do it will remain beyond the scope of this article.

In order to ascertain what the allowed values are, I used a Command Prompt string search to look for all instances of “>font” under the XML folder.

The following command:

findstr /s /l ^>font "c:\Program Files (x86)\Kodi\addons\skin.estuary\xml\*.xml"

yielded the following block of definitions after de-duplication:

<name>font10</name>
<name>font12</name>
<name>font13</name>
<name>font14</name>
<name>font25_narrow</name>
<name>font27</name>
<name>font27_narrow</name>
<name>font37</name>
<name>font45</name>
<name>font60</name>
<name>font_clock</name>
<name>font_flag</name>
<name>font20_title</name>
<name>font25_title</name>
<name>font30_title</name>
<name>font32_title</name>
<name>font36_title</name>
<name>font45_title</name>
<name>font52_title</name>
<name>font_MainMenu</name>

These fontXX values represent the allowed enumerated values that the skin will accept. Anything else will be ignored and substituted with a default (font12).
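If you prefer PowerShell, a roughly equivalent search (again assuming the default install path) that also de-duplicates the results for you would be:

# List the distinct font name definitions used by the Estuary skin XML files
Select-String -Path "C:\Program Files (x86)\Kodi\addons\skin.estuary\xml\*.xml" -Pattern ">font" -SimpleMatch |
    ForEach-Object { $_.Line.Trim() } |
    Sort-Object -Unique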

Edit the configuration file in the correct place

Go back to the skin .xml file you copied to your desktop. Open the file in Notepad or your preferred text editor.

You now need to find the section that controls the UI element that you want to modify. Read through it looking for clues in the XML naming and, if all else fails, it is a case of trial and error to find it (unless you want to go and read the Kodi skin documentation).

If you want to edit the TV Guide programme listing entry under Kodi 17.4, then look for the following section:

<itemlayout height="62" width="60">
<control type="image" id="2">
<width>58</width>
<height>58</height>
<texture border="3" fallback="windows/pvr/epg-genres/0.png">$INFO[ListItem.Property(GenreType),windows/pvr/epg-genres/,.png]</texture>
</control>
<control type="label" id="1">
<left>6</left>
<top>0</top>
<width>50</width>
<height>36</height>
<aligny>center</aligny>
<font>font12</font>
<label>$INFO[ListItem.Label]</label>
</control>
<control type="image">
<left>6</left>
<top>35</top>
<width>20</width>
<height>20</height>
<texture>$VAR[PVRTimerIcon]</texture>
</control>
</itemlayout>

The <font>font12</font> line controls the text size. In the use case that I had, I found that changing it to <font>font30_title</font> was acceptable. Remember: You can only use one of the lookup values shown in the section above and as this is XML, it is case sensitive!

Change the value, save the file and copy it back to the XML folder. Now (re)start Kodi to view your changes.
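If UAC is getting in the way of the copy back, you can do it from an elevated PowerShell prompt. This is only a sketch: it assumes the default paths and that the Kodi process is simply named "kodi".

# Run from an elevated (Run as administrator) prompt
Copy-Item -Path "$env:USERPROFILE\Desktop\MyPVRGuide.xml" -Destination "C:\Program Files (x86)\Kodi\addons\skin.estuary\xml\" -Force
# Restart Kodi to pick up the change (process name and install path assumed)
Stop-Process -Name kodi -ErrorAction SilentlyContinue
Start-Process "C:\Program Files (x86)\Kodi\kodi.exe"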

If you edit this value, you will notice that when you highlight a programme in the TV guide, the highlight goes back to a smaller font size, while non-highlighted programmes display in the new, larger font size. This is because the highlight is controlled by a different section. To change the highlight, go back to the .xml file in Notepad and edit the following section accordingly:

<focusedchannellayout height="62" width="350">
<control type="label">
<left>2</left>
<top>-2</top>
<width>75</width>
<height>60</height>
<font>font12</font>
<label>$INFO[ListItem.ChannelNumberLabel]</label>
<textcolor>button_focus</textcolor>
<align>center</align>
<aligny>center</aligny>
</control>
<control type="label" id="1">
<left>68</left>
<top>-2</top>
<width>350</width>
<height>60</height>
<font>font12</font>
<label>$INFO[ListItem.ChannelName]</label>
<textcolor>button_focus</textcolor>
<aligny>center</aligny>
<textoffsetx>10</textoffsetx>
</control>
</focusedchannellayout>

Note: in version 17.4 this is immediately ABOVE the section you just edited!

Change font12 to match your new value, save, put the file back in the XML folder and (re)start Kodi. The highlight font size should now match the rest of the Programme Guide.

Once you know where to go, the process is fairly easy. Do keep in mind though that when you update Kodi to a new version, it will overwrite your changes and you will need to go back in and edit the font sizes once again. Hopefully the XBMC UI team will get around to restoring some degree of internal configurability for this soon, as not everyone in this world has 20:20 vision!

Blue Screen (BSOD): CONFIG INITIALIZATION FAILED after WIM Image Creation Process

System Requirements:

  • Windows 7, 8.0, 8.1, 10
  • Windows Server 2008 R2, 2012, 2012 R2, 2016
  • DISM, MDT, SCCM, WAIK, WADK

The Problem:

I am a DVBLink user. DVBLink does not play nicely with Windows Server and consequently it wants to run on a client OS. This means that I have lots of server hardware running server operating systems and one device with 4 TV tuners in it running Windows 10.

After modifying the registry of an offline WIM image and inflating the image onto the drive, the system blue screens (BSOD) at the first reboot with:

:(
Your PC ran into a problem and needs to restart. We'll restart for you.
For more information about this issue and possible fixes, visit https://www.windows.com/stopcode
If you call a support person, give them this info:
Stop code: CONFIG INITIALIZATION FAILED


The newly imaged system will now get stuck in a boot loop.

More Info

You have a corrupted registry.

The Fix

There are a number of possibilities to explore first:

Check that you haven’t deleted the contents of CurrentControlSet (reference machine prior to sysprep) or ControlSet001 (reference machine and WIM file) from the registry

Check that you haven’t deleted the SYSTEM file from C:\Windows\System32\Config (this is a hidden file and it has no file extension)

Finally, if you injected registry data into an offline WIM image, ensure that you did not create the key CurrentControlSet in C:\Windows\System32\Config\SYSTEM. CurrentControlSet is a virtualised key that is loaded and unloaded dynamically as part of the Windows boot process (it is actually a copy of ControlSet001). When the system goes through a shutdown or reboot, CurrentControlSet is cleared and ControlSet001 is copied in-place. If the key CurrentControlSet exists in the WIM file's registry, Windows will present the CONFIG INITIALIZATION FAILED blue screen of death as it is not expecting the CurrentControlSet key to exist at all.

To fix the problem, re-mount your image and, within the SYSTEM hive, move any data from CurrentControlSet into ControlSet001, then completely delete the CurrentControlSet key.
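As a rough sketch of the offline repair (the image path, index and mount directory are illustrative and will differ in your environment; DISM and reg.exe are run from an elevated prompt):

# Mount the WIM and load the offline SYSTEM hive under a temporary key
dism /Mount-Image /ImageFile:D:\Images\install.wim /Index:1 /MountDir:C:\Mount
reg load HKLM\OfflineSystem C:\Mount\Windows\System32\Config\SYSTEM

# Copy anything under CurrentControlSet into ControlSet001 (overwriting duplicates), then remove CurrentControlSet
reg copy HKLM\OfflineSystem\CurrentControlSet HKLM\OfflineSystem\ControlSet001 /s /f
reg delete HKLM\OfflineSystem\CurrentControlSet /f

# Unload the hive and commit the change back into the WIM
reg unload HKLM\OfflineSystem
dism /Unmount-Image /MountDir:C:\Mount /Commit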

iSCSI MPIO Recommendations & Best Practice on Windows Server

System Requirements:

  • Windows Server 2008 Storage Server
  • Windows Server 2008 R2 Storage Server
  • Windows Server 2012, 2012 R2, 2016

The Problem:

I needed to outline some of the general thinking relating to exactly how a practitioner should logically and physically understand MPIO, however most of the discourse on the subject skips a fair amount of the obvious questions that people starting out with the technology may be asking (or trying to answer). I therefore present some thinking on the subject of understanding MPIO optimisation and best practice for iSCSI.

The information presented in this document is intended for those who are new to the concept of iSCSI and MPIO and is not intended to be product specific.

More Info

Multi-Path Input/Output, or MPIO, is a server technology that usually sits on the storage side of load balancing, failover and aggregation technologies. If you are getting into SAS, iSCSI or enterprise RAID solutions, where it is most commonly used (encountered), then this may (or may not) help you to understand what MPIO is and why it (possibly) isn't what you think it is!

The document is written from the perspective of an iSCSI user where it can be conceptually a little harder for new users to understand the best way to approach MPIO.

Logically understanding what MPIO is all about

So you have 2x1Gbps ports in a MPIO team, that means you’ve got a 2Gbps link right? Wrong. That isn’t what is going on with MPIO.

MPIO (and in fact pretty much the majority of balancing and aggregation technologies) doesn’t double the speed, but it does roughly double the bandwidth available to the system. Confused? Think of it like this:

You own a car. The car has a top speed of 70mph and not one mph more. You get on a one-way, single track road in a country where there are no speed limits. You are now happily driving along at 70mph. Some bright spark at your local council decides that you should be able to drive at 140mph, so they cut down the trees on one side of the road and add a second one-way carriageway, going in the same direction as the first.

Can your car now drive at 140mph because of the new lane? No. The public official is wrong. Your engine can only offer you 70mph. The extra lane doesn’t help you, but it does help the guy in the car next to you also driving at 70mph arrive at the other end of the road at the same time. It also means that when you encounter a tractor ambling along in your lane, you have somewhere else to go without slowing down.

This is fundamentally what MPIO is doing. So why isn't it a 2Gbps link? Basically, because networking technology is a serial communications medium; by adding a second lane and calling it a faster way to get data to the end of it, you get into the different world of parallel communications. Under parallel communications you have to split (fragment) information into smaller pieces and push it down each one of the wires to the destination. This in turn implies the need for more complicated buffer/caching designs to store information as part of a strategy designed to cope with each section of the data arriving at a different time, arriving all at the same time, arriving in a different order than intended or, of course, not arriving at all; something known as clock skew.

To fix this, you need to introduce overhead: either synchronising delivery to make it reliable (thus slowing it down and reducing error tolerance) or adding overhead mechanisms designed to deconstruct, sequence, wait for or re-request missing or corrupt data sections and track timing. All of this is something that you really don't want in an iSCSI or SAS environment where response time (latency) is king. Consequently, there is a diminishing return on how much of this parallel working you can derive a benefit from in any system, including an MPIO system. iSCSI MPIO, if correctly configured, will offer something around the boundary between worthwhile and not bothering in the first place. Yet it is important to understand that it will not be a 100% increase in performance, nor will it likely be a 50% increase, but more realistically something around the 30-40% mark.

Performance is only one of the intended design considerations for MPIO, and in that it is not the primary consideration. The primary consideration is for fault tolerance and reliability.

In a correctly designed iSCSI system, independent NICs are connected to more than one switch, and usually to more than one controller on the storage side and more than one server on the host side. If one of these fails in a correctly implemented system, your production service probably won't even notice. You can even be so bold as to perform live switch re-wiring on iSCSI systems without impacting the client services involved, although it should be stressed that this is for bragging rights and in practice should not be attempted.

To summarise, MPIO allows you to get twice (or more) as much data down to the end of the link, but you cannot get it there any faster. In general, if you can avoid using fragmented streams, you will reap the maximum benefit. The obvious approach here is that each "lane" should be carrying unrelated data: instead of carving up a single video file and pushing little bits of it down each lane one bit at a time (MPIO can do this), one lane is used for the video and the second lane is used for literally anything and everything else. This is a simplification of what MPIO generally does, however in practice it offers a good way to get your head around it.

Techniques

So how does MPIO carve up the traffic?

There are, broadly speaking, four different paradigms for carving up MPIO traffic:

Failover/Redundant: In this mode, one link is active while the other is passive, i.e. up, but not doing anything. If the first link fails, the second path takes over and all existing traffic streams continue to receive the same bandwidth (% of the total available pie) on the same terms as before. This gives us a completely separate road that can only be used in emergencies. It may not be as fast or robust, or it may be identically spec'd and just as capable. A failover design may or may not return traffic to the first channel once it becomes available again.

Round Robin: This mode alternates traffic between channel 1 and channel 2, then goes back to channel 1, channel 2 and so on. Both links are active and both receive traffic with a slight skew as the data is de-queued at the sender. This offers the two lane analogy used above, with each 70mph car getting to the end at roughly the same time.

Least Queue Depth: This puts the traffic onto the channel that has the least amount on it (or, more accurately, about to go onto it). If one channel is busier than the other (e.g. the large video file), then it will put other traffic down the second channel, allowing the video to transfer without needing to slow down to allow new traffic to join, delaying its delivery. There are many different algorithms for how this is achieved, including varieties that use hashing to offer clients consistent paths based upon Layer 2 or Layer 3 addresses.

Path Weighting: Weighted path and least blocking methods assess the state/capabilities of the channels. This is more useful if there are lots of hops between source and destination, multiple routes to a destination, or different channels with different capabilities. For example, if you have iSCSI running through a routed network, then there could be multiple ways for it to get there. One route may go through 5 routers and another 18 routers. Generally, the 5 router path might be preferable, provided the lower hop route genuinely gets the data there faster. Equally, the weighting could be based upon the speed of the path through to the recipient or, finally, if channel 1 is 10Gbps and channel 2 is only 1Gbps, then you might prefer the 10Gbps path to be used with a higher preference. Usually, a lower weighting number means a higher preference. This would be the equivalent of a 70mph road with a backup road with a max speed of 50mph: you know that it will get you to the destination, but you can guarantee that if you have to use it, it will take longer.
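For reference, on Windows Server the in-box Microsoft DSM exposes most of these models through the MPIO PowerShell module. As a minimal sketch (pick whichever policy your storage vendor actually recommends):

# View and set the default load balance policy used by the Microsoft DSM
# FOO = fail over only, RR = round robin, LQD = least queue depth, LB = least blocks
# (path weighting is configured per-path rather than as a global default)
Get-MSDSMGlobalDefaultLoadBalancePolicy
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD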

So, more lanes equal more stuff then?

Sounds simple doesn’t it? Just keep throwing lanes into the road and then everyone gets to travel smoothly at 70mph.

In principle, it is a nice idea, but in practice it doesn’t actually work in most iSCSI implementations.

For starters, server grade network cards (which you should be using for MPIO, not client adapters) are expensive and server backplanes can only accept a finite number of them. Server NICs also consume power, and power costs money! Keep that in mind if you do decide to throw extra ports at an iSCSI solution.

The reality is that if you have an MPIO solution that will allow you to experiment with more than 2 NIC adapters in an MPIO group, you will likely see the performance gain rapidly tail off. Beyond that, it will actually wind up presenting you with steadily worsening performance, not the increase that you are expecting.

Attempting to MPIO iSCSI traffic across 4x 1Gbps NICs actually offers worse read and write speeds for a virtual machine than 2x 1Gbps under a Hyper-V environment (see tests E and F below). The system starts to waste so much time trying to break apart and put back together each lane's worth of traffic that it just doesn't help the hypervisor.

Where a 4 NIC configuration is beneficial is actually in providing you with a "RAID 6" MPIO solution. Here you can have 2 active and 2 passive adapters; remember, in an idealised scenario they could be 2x 10GbE and 2x 1GbE with a hard-coded preference for the 10GbE and a method of failing traffic back to the 10GbE. Just be aware that you can only use the 10GbE set OR the 1GbE set at the same time, not one port from each. The exception to this rule is for hashing based channel assignment, as this offers more paths to "permanently" assign data into, without the overhead of path swapping or de-fragmentation of traffic.

Some DSM’s (effectively a OEM specific MPIO driver under Windows, such as Dell Host Integration Tools [HIT] or NetApp Host Utilities) logically limit a MPIO to two active NIC’s if the storage controller is only exposing 2 usable NICs back to the HIT instance. Dell EqualLogic Host Integration Tools (the EqualLogic DSM) will grab the first two paths it finds and shutdown any others into a passive state, no matter how hard you try to start them up.

What should a MPIO network “look” like?

Ultimately this is down to what you want to get out of the MPIO solution and within the bounds of what your hardware vendor will support.

There are effectively three schools of thought here (I won’t comment on which is right because as you’ll see, it isn’t that simple)

MPIO is about Meshing

If you see MPIO as a mesh, then 2 NICs in a server connecting to 2 NICs in a storage appliance equals a mesh where each NIC has a path to the other. This is more aligned with how you probably already think about Ethernet networks.

MPIO is about Pathing

If you see MPIO in this model, it is simply about more than one line being drawn between two different end points, with no line crossing or adding any complexity, complication or confusion. This is more aligned with how you likely currently think about SAS, Fibre Channel and hard drive wiring.

MPIO is about Redundancy

This is the purest of the three views. It sees the complexity and overheads associated with MPIO as being a problem: there will always be some sort of increase in latency and a drop in some aspect of performance when trying to squeeze more bandwidth out of MPIO. This view attempts to keep the design simple, run everything at an unimpeded wire speed, but maintain the failover functionality afforded by MPIO.

The three schools of thought are outlined in the diagram below.

Why not Meshing?

When you start out with MPIO, you may be tempted towards implementing option 1. After all, your server NICs (circles) are likely connected to a switch, as is your storage array (squares). The switch allows you to design to this topology and, if you allow the MPIO system to have knowledge of all possible permutations of connectivity, the system will be highly redundant, making it very robust.

Yes and no! Yes, it is very robust, but at this point in your implementation, how do you know which path traffic is taking? How do you know that it is optimised? What is stopping Server NIC1 and Server NIC2 from both talking to Storage NIC1 at the same time? If they do that, then they have to share 1Gbps of bandwidth between them while Storage NIC2 is left idle. Suddenly all of your services will have intermittent bursts of speed and infuriating drops in performance. The more server NICs that you add, the faster the decrease in performance will be. With 4 server NICs, there is nothing to stop the MPIO load balancer from intermittently pushing the data from all 4 server NICs towards a single storage NIC.

In a Round Robin setup with a full mesh design (as shown in #1), the system will likely order the RR rotation in the order that you gave it access to the paths. Given the following IP addresses:

Server: 192.168.0.1, 192.168.0.2
Storage: 192.168.0.11, 192.168.0.12

The RR table could look like this:

  1. 192.168.0.1 -> 192.168.0.11
  2. 192.168.0.2 -> 192.168.0.11
  3. 192.168.0.1 -> 192.168.0.12
  4. 192.168.0.2 -> 192.168.0.12

Or it could look like this:

  1. 192.168.0.1 -> 192.168.0.11
  2. 192.168.0.1 -> 192.168.0.12
  3. 192.168.0.2 -> 192.168.0.11
  4. 192.168.0.2 -> 192.168.0.12

In both examples you either have two different sets of traffic being sent from the same Server NIC concurrently or received by the same Storage NIC concurrently. This is going to undermine performance, not improve it (this is outlined in Mbps terms in the tests shown later in this document).

In a failure situation, the performance issue is exacerbated:

  • If #3 fails, then nothing changes in performance or bandwidth.
  • If #2 fails then the total bandwidth available to the system halves and all services contend using the first link.
  • If #1 fails then as with #2, all services suffer with contended bandwidth, however the system also has the overhead of MPIO to further reduce performance.

What benefit is there to MPIO operating in scenario #1? In this failed state, should one of the Storage NICs also fail, the system will continue to operate. In #2, if the working Storage NIC fails, the entire system will fail despite the fact that the Storage NIC on the second path is actually working. It is up to you and your design as to whether you think that the performance hit you will experience is worth this extra safeguard. In a highly secure, mission critical or safety system it may be worth the extra overhead.

There are however some middleware layers that can manage this for you. Dell Host Integration Tools (HIT), for example, does attempt to undertake some management of these types of situations, optimising the mesh by putting the links that would cause overhead into a failover-only state, while maintaining the optimal number of active mesh links. In my experience though, the HIT solution is not able to manage this perfectly. It does not provide any consideration for redundant NIC controllers. For example, if you have 2 physical dual port NICs in your server with the intention of one port from each NIC making up the active "pair", Dell HIT is not able to detect, or be programmed to ensure, that the active paths are prioritised so that the correct controller is being used. In my experience, it will tend to bunch them together onto the same physical NIC controller, leaving the second controller idle.

Fixing this problem requires an additional layer of complex, expensive and usually proprietary middleware logic, further impacting performance and increasing cost. Therefore, industry best practice is to avoid thinking of iSCSI MPIO as being a Full or even a Partial Mesh, but instead think of it as offering independent channels akin to those shown in #2. It is for this reason that virtually all iSCSI MPIO vendors insist that each Server -> Storage NIC pair exist on its own logical IP subnet as this completely negates the possibility of interweaving the MPIO paths while also ensuring that any subnet-local issue (such as a broadcast or unicast storm) is only likely to take down one of the subnets, not both.
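On Windows Server this 1:1 pathing can be made explicit when connecting the initiator, by binding each session to a specific initiator/target portal pair rather than letting the initiator choose. A minimal sketch, using made-up addresses on two illustrative subnets (192.168.10.0/24 and 192.168.20.0/24):

# Register one target portal per subnet, pinned to the matching initiator NIC
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.50 -InitiatorPortalAddress 192.168.10.1
New-IscsiTargetPortal -TargetPortalAddress 192.168.20.50 -InitiatorPortalAddress 192.168.20.1

# Connect the same target once per path with multipath enabled (assumes a single target)
$target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $target.NodeAddress -TargetPortalAddress 192.168.10.50 -InitiatorPortalAddress 192.168.10.1 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress $target.NodeAddress -TargetPortalAddress 192.168.20.50 -InitiatorPortalAddress 192.168.20.1 -IsMultipathEnabled $true -IsPersistent $true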

iSCSI as part of a Virtual Network Adapter, Converged Fabric LBFO Team

Since the release of Windows Server 2012, Microsoft has hinted at the idea of using iSCSI through Converged Fabric* Load Balancing and Failover (LBFO) teams, as long as the iSCSI NICs are virtual and they connect through a Hyper-V VM switch which itself backs onto a Windows Server LBFO team. Even the venerable Aidan Finn has hinted at it. I have, however, never seen a discussion of it being attempted online, nor have I ever seen it benchmarked.

To be clear over what we are talking about when I say a Virtualised, Converged Fabric, LBFO Team:

  1. 4x 1Gbps Ethernet physical adapters
  2. Grouped into a Windows Server 2016 LBFO Team, appearing to Windows as a single logical network adapter called “ConvergedNIC”
  3. “ConvergedNIC” is connected to an External Virtual Switch called “ConvergedSwitch”
  4. A Virtual Machine Network Adapter is created on the Hypervisor’s Parent Partition (ManagementOS) and this is assigned to the correct VLAN, given an IP address and hooked up to the iSCSI Target
  5. 4 physical NICs, no MPIO, 1 logical NIC
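For reference, the above can be stood up with in-box PowerShell roughly as follows. This is a sketch, not the exact build script used here: the team and switch names come from the list above, while the team member names, QoS weight, VLAN ID and IP address are purely illustrative.

# 1 + 2: team the four physical adapters into one logical NIC
New-NetLbfoTeam -Name "ConvergedNIC" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# 3: bind an external virtual switch to the team, using weight-based QoS so iSCSI can be prioritised
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedNIC" -MinimumBandwidthMode Weight -AllowManagementOS $false

# 4: create a Management OS virtual NIC for iSCSI, tag its VLAN, reserve bandwidth and assign an IP
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "iSCSI" -Access -VlanId 100
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI" -MinimumBandwidthWeight 40
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI)" -IPAddress 192.168.10.21 -PrefixLength 24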

So, does it work?

Yes! It does work and it appears to be stable and even usable, but with some sacrifice in performance (keep reading for the benchmark numbers; this is "test A" below). I have had test VMs running under this design for nearly a year without any perceivable issues in either VM or hypervisor stability.

* If you are not familiar with the Concept of a Converged Fabric: A Converged Fabric is a data centre architecture model in which the concept of 1 NIC = 1 Network/Subnet/VLAN/Traffic Type is abandoned. Instead, NICs are usually pooled together into Teams with multiple traffic types, Networks, Subnets and VLANs being allowed to use any of the available bandwidth within the team. Quality of Service (QoS) algorithms are used to ensure that priority traffic types are defined (such as iSCSI in this example), ensuring that the iSCSI system is never starved for bandwidth by someone performing a large file transfer across the team. A Converged Fabric architecture is considered to be more efficient, lower cost and offer better failover reliability than traditional methods in which entire 1GbE or 10GbE NICs could be left idle, waiting for traffic that while high bandwidth, may be infrequent. A Converged Fabric architecture allows other users/systems to benefit from the available bandwidth when not needed by its primary application. It can also offer the primary application additional bandwidth in some situations.

If you have an 8 NIC hypervisor setup with 2 physical iSCSI NICs, 2 physical production network NICs, 1 physical heartbeat NIC, 1 physical live migration NIC, 1 management network NIC and 1 out-of-band management NIC, then you are paying to power NICs 4-8 but deriving little benefit from them due to how infrequently they are used. If this sounds familiar to you, then you should consider migrating to a Converged Fabric design.

Quantifying Best Practice

So far, this article has discussed MPIO, meshing, pathing and redundancy as well as a quick detour into using converged fabric LBFO for iSCSI connections. So let’s look at some numbers that underpin these approaches.

Tests were undertaken using the following hardware configuration:

  • Dell EqualLogic PS4110x running firmware 9.1.1 R436216, with 2 active 1GbE NICs on a single controller
  • Dell PowerEdge P630 with 8x1GbE adapters (4x Broadcom NetXtreme and 4x Intel I350 adapters) with 9K Jumbo Frames correctly enabled
  • Windows Server 2016
  • Switching on Cat6a cabling via 2x Cisco Catalyst 2960-48’s
  • The 64K block, GPT formatted, 3TB target LUN was setup as a CSV and the nodes were in a Cluster with a second identical node idling as a second cluster member (CSV-FS has a natural performance hit compared to NTFS)

Seven tests were performed, as outlined below:

Test A: 4 NICs in LBFO Team, no MPIO. Physical paths: 4 active / 0 passive. Active NICs: 0x Intel, 4x Broadcom. LBFO team: Yes. Dell HIT: No. MPIO mode: n/a.
Test B: 4 NICs, fully meshed, RR. Physical paths: 8 active / 0 passive. Active NICs: 2x Intel, 2x Broadcom. LBFO team: No. Dell HIT: No. MPIO mode: Round Robin.
Test C: 2 NICs, no mesh (point to point). Physical paths: 2 active / 2 passive. Active NICs: 2x Intel, 0x Broadcom. LBFO team: No. Dell HIT: No. MPIO mode: Round Robin.
Test D: 1 NIC only (control test). Physical paths: 1 active / 1 passive. Active NICs: 1x Intel, 0x Broadcom. LBFO team: No. Dell HIT: No. MPIO mode: n/a.
Test E: 4 NICs, fully meshed, LQD. Physical paths: 8 active / 0 passive. Active NICs: 2x Intel, 2x Broadcom. LBFO team: No. Dell HIT: No. MPIO mode: Least Queue Depth.
Test F: 4 NICs, partial mesh, RR. Physical paths: 4 active / 0 passive. Active NICs: 2x Intel, 2x Broadcom. LBFO team: No. Dell HIT: No. MPIO mode: Round Robin.
Test G: 2 NICs, no mesh (point to point) with EqualLogic Host Integration Tools. Physical paths: 1 active / 1 passive. Active NICs: 2x Intel, 0x Broadcom. LBFO team: No. Dell HIT: Yes. MPIO mode: Least Queue Depth.

If you are more visual, the following diagram summarises the above in a graphical format

The Results

The following results summarise the read/write performance of each test on Sequential 4MB reads and writes, as measured by "Anvil's Storage Utilities", version 1.1.0, build 1st January 2014. All tests were performed on the same Windows 10 Enterprise VM without rebooting in between each test and without performing any other activities on the VM disk.

The results below are ordered by test, from the test offering the best performance to the test offering the worst performance, using the Read MB/s column as the sort index.

Each entry lists response time (ms), MB read/written, IOPS, MB/s and control deviance (%) for the Sequential 4MB read run, followed by the Sequential 4MB write run.

Test C: Read 30.4791 ms, 1052 MB, 32.81 IOPS, 131.24 MB/s, +32.17%. Write 21.7266 ms, 1024 MB, 46.03 IOPS, 184.11 MB/s, +70.25%.
Test F: Read 39.801 ms, 804 MB, 25.13 IOPS, 100.50 MB/s, +1.21%. Write 468.9896 ms, 772 MB, 2.13 IOPS, 8.53 MB/s, -92.11%.
Test D: Read 40.2814 ms, 796 MB, 24.83 IOPS, 99.30 MB/s, 0%. Write 36.9883 ms, 1024 MB, 27.04 IOPS, 108.14 MB/s, 0%.
Test A: Read 51.3782 ms, 624 MB, 19.46 IOPS, 77.85 MB/s, -21.60%. Write 89.5977 ms, 1024 MB, 11.16 IOPS, 44.64 MB/s, -58.72%.
Test G: Read 60.7197 ms, 528 MB, 16.47 IOPS, 65.88 MB/s, -33.66%. Write 23.8047 ms, 1024 MB, 42.01 IOPS, 168.03 MB/s, +55.38%.
Test E: Read 273.9667 ms, 120 MB, 3.65 IOPS, 14.60 MB/s, -85.30%. Write 1010.7556 ms, 360 MB, 0.99 IOPS, 3.96 MB/s, -96.34%.
Test B: Read 404.65 ms, 80 MB, 2.47 IOPS, 9.89 MB/s, -90.04%. Write 964.766 ms, 376 MB, 1.04 IOPS, 4.15 MB/s, -96.16%.

Response (ms) = Lower is better
MB read/written = Higher is better
IOPS = Higher is better

Control Deviance (%) = the positive or negative impact in MB/s performance compared to the single NIC, no MPIO control test (test D).

Test A | Converged Fabric LBFO

The Microsoft dream of virtualising everything does hold up, at least in not being completely terrible. Sitting in the middle of the results, using a fully converged, virtualised fabric across 4 NICs resulted in a 22% reduction in read speed compared to a single NIC and a 59% reduction in write speed.

There may be some improvements to be made by creating multiple virtual iSCSI interfaces connected to the virtual switch, however these were not tried. Based upon the current view of the technology, while it works and offers a data centre design simplification, that simplification is not worth the performance sacrifice.

Test B | Round Robin, Full Mesh

This test proves that viewing an iSCSI setup as a full mesh and throwing NICs at the proverbial problem is going to do nothing to help you. Your iSCSI should be configured in a 1:1 "path" setup between initiator and target; any additional NICs should be put into "Round Robin with subset", i.e. made to be passive fail-over adapters. The full mesh resulted in a 90% and 96% reduction in read and write performance respectively!

Test C | Round Robin, 1:1 Paths

This test shows how you are supposed to use iSCSI. Two non-crossing paths allow for a full bandwidth connection down each path between the initiator and the target. This configuration provided an increase in performance over a single adapter and was the only test that provided improvements to both the read and write metrics.

Test D | Control

This was the baseline control test for this experiment. 1 NIC talking to 1 controller port. Nothing complicated here.

Test E | Least Queue Depth, Full Mesh

This test repeated Test B, but changed the MPIO model from RR to LQD to see if it made any difference. Read performance was slightly better than under RR, but was still 85% worse than the control test.

Test F | Partial Active Mesh

This test looked to see whether having a partial active mesh made any difference. There was a very small 1% increase in read performance from this, but a significant write penalty. In practice, you cannot push/pull 2Gbps to/from a 1Gbps source, so the design is not conducive towards improved speed under a synthetic load.

Test G | Least Queue Depth, 1:1 Paths

Test G was a genuine surprise. I was expecting to see Dell EqualLogic Host Integration Tools (HIT) version 4.9 offer an increase in performance, not a decrease. However, repeating the test yielded the same results. In my experience this has not usually been the case, with VMs feeling more responsive with HIT installed compared to without. Experience suggests to me that something else was at play here, perhaps the HIT version being poorly optimised for Windows Server 2016, or the Dell stack getting grumbly about the use of a retail Intel I350-T4 adapter instead of a Dell one. Dell HIT forces the use of 1:1 pathing no matter what you try, setting all other adapters into passive mode, and it used LQD as the MPIO algorithm. Evidently this resulted in an increase in writes but a reduction in read performance, albeit the write increase was not as high as without HIT being installed.

Although not shown in the results above, HIT did help improve performance in some of the Anvil Tests. The long queue depth tests resulted in higher IOPS figures for both read and write values by a small margin. None of the other tests yielded such an improvement.

Conclusion

As you can see from these results, there is only one way that you should be conceptually thinking about your iSCSI environment: 1:1, point to point paths. Anything over and above this should be set to passive/failover/offline in order not to impact performance.

General Subnet Recommendations

Subnet recommendations go hand in hand with this but, you should note, are generally made by the storage vendor, and you should follow their advice. I have encapsulated the general recommendations/requirements of a number of providers below. The subnet count is in essence a statement that for each NIC on the storage device, there should be a dedicated subnet (and ideally broadcast domain/VLAN) back to the iSCSI server.

  • Dell (Non-EqualLogic): 2 subnets (View)
  • Dell EMC: 2 subnets (View)
  • Dell EqualLogic: 1 subnet (View)
  • Microsoft: 2 subnets (View)
  • NetApp: unknown; I couldn't find any guidance from an official source. There is community evidence of both being used by end-users
  • NetGear: 2 subnets (View)
  • QNAP: 2 subnets (View)
  • Synology: 2 subnets (View)

As you can see, with the exception of Dell EqualLogic which provides a middleware solution known as the Host Integration Tools (HIT) to cope with this, most vendors are quite specific on the use of a “single path” logical topology for server/storage connectivity — aka one subnet per storage appliance NIC.

General Advice

I will end this piece with some general advice and tips for working with MPIO. It isn't exhaustive, but these are some quick observations from experience of using the technology for many years. Some of them are obvious; some of them might help you avoid a head scratcher.

  1. If you are using an enterprise iSCSI solution, follow the vendor’s advice, forget anything you read on the Internet. Everyone is a know-it-all on the internet and there are plenty of “I’m a Linux user so I know best” screaming matches about how EqualLogic are wrong about the recommendations for EqualLogic’s own hardware. I’m pretty sure that EqualLogic… uh, tested their stuff before writing their user manual.
  2. If you are using an enterprise solution and the vendor offers a DSM (MPIO driver), use it. Dell HIT is noticeably faster than the generic Microsoft DSM for Windows Server, but only works with Dell SAN hardware (naturally). Also ensure that you keep your DSMs up to date.
  3. Follow you vendor’s guidelines with respect to subnets. If in doubt, drop them an email. You’ll usually find them quite accommodating.
  4. Unless your vendor has expressly told you to, do not MPIO back from the storage system, i.e. don't team, MPIO or load balance on the storage side. Do it all on the server initiating the request.
  5. Stick to two port/1:1 path MPIO designs. If you need more, create multiple pairs and have each on different networks going to different storage systems so that the driver knows where to send traffic explicitly while maintaining isolation.
  6. If you want to think about your MPIO as a meshing design, it has to be meshed for redundancy, not active links (unless your system needs to keep living, breathing human beings alive and do so at all costs).
  7. With iSCSI and SAN MPIO, try and avoid network hops (routers).
  8. All ports in a group must be the same type, speed and duplex.
  9. Disable port negotiation and manually set the speed on the client and switch; this will make failover/failback processes faster for your redundant paths.
  10. Use VLAN’s as much as possible (try and avoid overlaying broadcast domains across a shared Layer 2 topology).
  11. Use Jumbo Frames as much as possible unless the iSCSI subnet involves client traffic.
    Hint: Your iSCSI subnet should not involve client traffic!
  12. Ensure that your NIC drivers and firmware are kept up-to-date
  13. Disable all Windows NIC service bindings apart from vanilla IPv4 on your iSCSI networks. For example, Client for Microsoft Networks, QoS Packet Scheduler, File and Printer Sharing for Microsoft Networks etc. If you aren’t using it, disable IPv6 too on the iSCSI interfaces to prevent IPv6 node-chatter.
  14. In the driver config for your server grade NIC (because you are using server grade NICs, right?), max out the send and receive buffer sizes on the iSCSI port. If the server NIC has iSCSI features that are relevant (such as iSCSI offloading), enable them.
  15. When you are building a Windows Server, script the MPIO install, enable MPIO during the script and set the default policy as part of the build process, then patch and REBOOT the system before you even start configuration (a sketch of this follows this list). If I had a £1 for every time I'd had to rescue someone from not doing that and then not REBOOTING…
  16. If you are using a SOHO/SME general purpose commodity NAS, if (and only if) you have a UPS, disable Journaling and/or Sync Writes on your iSCSI partitions/devices. There is a benefit, but remember if you are hosting SMB shares on a commodity appliance you actually do want Journaling running on those volumes.
  17. Keep your NAS/SAN firmware up to date.
  18. Keep your storage system and iSCSI block sizes, cluster and sector sizes optimised for the workload. Generally this means bigger is better for virtualisation storage and video. 256/64K, 128/64K or 64/64K depending on what your solution can offer.
  19. Keep volumes under 80% of capacity as much as possible.
  20. Use UPS’s: Remember, iSCSI and SAS are hard drive/storage protocols. They are designed to get data onto permanent storage medium just like RAID controllers. RAID controllers have backup batteries because you do not want to lose what is in process in the RAID controller cache when the power goes out. Similarly, you need to think of your iSCSI and External SAS sub-systems much the same as you would a RAID sub-system.
  21. If you have a robust UPS solution, enable write caching and write behind/write back cache features on your storage systems and iSCSI mounted services to gain extra performance benefits. Be mindful that there is risk in this if your power and shutdown solution isn’t bullet proof.
  22. Test it! Build a test VM and yank a cable out a few times. You'll be glad you sacrificed a Windows install or two to ensure it is right when you actually pull an iSCSI cable out of a running server… Believe me, I know what a relief that is.
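To illustrate points 13 and 15, here is a hedged sketch of the kind of thing that can go into a Windows Server build script; the adapter names are examples and the default policy should follow your vendor's guidance:

# Install and enable MPIO, claim iSCSI-attached storage and set a default policy
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# Strip unneeded bindings from the dedicated iSCSI adapters (adapter names are examples)
foreach ($nic in "iSCSI1","iSCSI2") {
    Disable-NetAdapterBinding -Name $nic -ComponentID ms_msclient   # Client for Microsoft Networks
    Disable-NetAdapterBinding -Name $nic -ComponentID ms_server     # File and Printer Sharing
    Disable-NetAdapterBinding -Name $nic -ComponentID ms_pacer      # QoS Packet Scheduler
    Disable-NetAdapterBinding -Name $nic -ComponentID ms_tcpip6     # IPv6, if not in use
}

# ...then patch and REBOOT before starting any iSCSI configuration
Restart-Computer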

Simple Forum YouTube API Video Embed Sample Script

System Requirements:

  • HTML

The Problem:

www.hpcfactor.com's forum software dates from 2004. At some point in the interim I rigged it to support a basic BBS markup tag which uses an <embed> tag and the legacy Flash object to inject a video from YouTube onto the page:

<object width="640" height="390">
 <param name="movie" value="{param}"></param>
 <param name="allowFullScreen" value="true"></param>
 <param name="allowScriptAccess" value="always"></param>
 <embed src="{param}" type="application/x-shockwave-flash" allowfullscreen="true" allowScriptAccess="always" width="640" height="390"></embed>
</object>

It is, however, currently 2016 and HTML5 is now the norm, so at the behest of a user I wanted to update it.

More Info

Using the formal YouTube API Reference for iframe Embeds sample, I needed to make something quick and easy that could be injected into the (extremely limited) BBS markup system on the 12 year old hpcfactor.com forum. The only default substitution available without re-writing the forum core is {param}, which takes the <URL> value from the BBS markup.

View: YouTube Player API Reference for iframe embeds

 

The code sample shown below was the quickly generated solution. The steps it goes through are:

  1. Drop a DIV in-line on the page for the YouTube API to replace with the iFrame
  2. Drop the JavaScript in-line as well (it has some unnecessary duplication, but the duplicates won't execute)
  3. The JavaScript checks to see if the YouTube API is already on the stack; if it is not, it loads the YouTube API, and if it is, it skips to the call to embed the iFrame
  4. If the YouTube API is not loaded, it loads the YouTube API and sets up an array structure to act as a callback and pointer reference source, tracking all video files requested to be embedded on the page
  5. It sets up the onYouTubePlayerAPIReady listener to listen for the 'ready' callback from the YouTube API, at which point JavaScript will be told to parse the sources array and create the iFrames
  6. A registration function is provided. This receives video registrations even before the YouTube API is ready. As soon as the YouTube API calls back to the onYouTubePlayerAPIReady listener, the video registrations are processed; until it does, they queue
  7. A function to test whether a video has already been embedded is provided, and each video is flagged once it has been embedded

<div id="videoPlayer_{param}"></div>
<script type="text/javascript">
// Load the IFrame Player API code asynchronously.
if (document.getElementById('youTubeExternal') === null) {
var tag = document.createElement('script');
tag.id = 'youTubeExternal';
tag.type = 'text/javascript';
tag.src = "https://www.youtube.com/player_api";
var headTag = document.getElementsByTagName('head')[0];
headTag.appendChild(tag);
var arrVideoStack = new Array();
arrVideoStack[0] = new Array(); // 0 = player reference, 1 = Display Elm ID, 2 = Video ID, 3 = IsLoaded
arrVideoStack[1] = new Array();
arrVideoStack[2] = new Array();
arrVideoStack[3] = new Array();
var iVideoCount = 0;
var bolYtReady = false;// Replace the 'ytplayer' element with an <iframe> and
// YouTube player after the API code downloads.
function onYouTubePlayerAPIReady() {
bolYtReady = true;
loadVideos();
}function registerNewVideo(strTargetElmId, strVideoId) {
strVideoId = strVideoId.replace('https://www.youtube.com/watch?v=', '');
strVideoId = strVideoId.replace('https://www.youtube.com/v/', '');
strVideoId = strVideoId.split('&')[0]; // In case there are any more parameters
arrVideoStack[0][iVideoCount] = null;
arrVideoStack[1][iVideoCount] = strTargetElmId;
arrVideoStack[2][iVideoCount] = strVideoId;
arrVideoStack[3][iVideoCount] = false; iVideoCount++;
if (bolYtReady) {
loadVideos();
}
}function loadVideos() {
if (bolYtReady) {
if (iVideoCount > 0) {
for (i = 0; i < iVideoCount; i++) {
// If it hasn't already loaded, load it
if (!arrVideoStack[3][i]) {
arrVideoStack[0][i] = new YT.Player(arrVideoStack[1][i], {
height: '640',
width: '390',
videoId: arrVideoStack[2][i]
});
arrVideoStack[3][i] = true;
}
}
}
}
}
}

registerNewVideo('videoPlayer_{param}', '{param}');
</script>

If you would like to test it without the BBS markup parts in-situ, you can copy and paste the following version into a .html file and double click on it:

<div id="videoPlayer_Test1"></div>

<div id="videoPlayer_Test2"></div>

<div id="videoPlayer_Test3"></div><script type="text/javascript">

// Load the IFrame Player API code asynchronously.

if (document.getElementById('youTubeExternal') === null) {

var tag = document.createElement('script');

tag.id = 'youTubeExternal';

tag.type = 'text/javascript';

tag.src = "https://www.youtube.com/player_api";

var headTag = document.getElementsByTagName('head')[0];

headTag.appendChild(tag);

var arrVideoStack = new Array();

arrVideoStack[0] = new Array(); // 0 = player reference, 1 = Display Elm ID, 2 = Video ID, 3 = IsLoaded

arrVideoStack[1] = new Array();

arrVideoStack[2] = new Array();

arrVideoStack[3] = new Array();

var iVideoCount = 0;

var bolYtReady = false;// Replace the 'ytplayer' element with an <iframe> and

// YouTube player after the API code downloads.

function onYouTubePlayerAPIReady() {

bolYtReady = true;

loadVideos();

}function registerNewVideo(strTargetElmId, strVideoId) {

strVideoId = strVideoId.replace('https://www.youtube.com/watch?v=', '');

strVideoId = strVideoId.replace('https://www.youtube.com/v/', '');

strVideoId = strVideoId.split('&')[0]; // In case there are any more parameters

arrVideoStack[0][iVideoCount] = null;

arrVideoStack[1][iVideoCount] = strTargetElmId;

arrVideoStack[2][iVideoCount] = strVideoId;

arrVideoStack[3][iVideoCount] = false; iVideoCount++;

if (bolYtReady) {

loadVideos();

}

}

function loadVideos() {
if (bolYtReady) {
if (iVideoCount > 0) {
for (i = 0; i < iVideoCount; i++) {
// If it hasn't already loaded, load it
if (!arrVideoStack[3][i]) {
arrVideoStack[0][i] = new YT.Player(arrVideoStack[1][i], {
height: '640',
width: '390',
videoId: arrVideoStack[2][i]
});
arrVideoStack[3][i] = true;
}
}
}
}
}
}

registerNewVideo('videoPlayer_Test1', 'dQw4w9WgXcQ');
registerNewVideo('videoPlayer_Test2', 'm2ATf01v4hw');
registerNewVideo('videoPlayer_Test3', 'YVhxcpItk_M');
</script>