Hyper-V Discrete Device Assignment (DDA) with a TV Tuner (Hauppauge HVR-4400)

System Requirements:

  • Windows Server 2016
  • Hauppauge HVR-4400 PCIe Tuner

The Problem:

I am a DVBLink user. DVBLink does not play nicely with running as a Windows Service and consequently wants to run on a client OS. This means that I have lots of server hardware running server operating systems, and one device with 4 TV Tuners in it running Windows 10.

With the release of Windows Server 2016 came the promise of VMware-like PCIe pass-through, allowing physical devices on the PCI bus to be attached to VMs. The plan was to attach the PCIe TV Tuner and attempt to get DVBLink working in a VM so that the physical unit could be decommissioned (saving on the power bill).

More Info

As part of the process, I was considering building a new server at the start of 2017 to consolidate onto. The Windows 10 DVBLink machine would be one of the devices consolidated onto more powerful, modern hardware. I would also need new TV Tuners, as only 2 of the 4 in the DVBLink TV server are PCIe; the rest are PCI. Again, there are opportunities to consolidate those into fewer PCIe devices too.

The driver for the new server was Hyper-V PCIe pass-through, or “Discrete Device Assignment” (DDA) as Microsoft are calling it. It is, however, quite difficult to find out whether a system’s BIOS firmware supports the proper I/O MMU (VT-d) implementation to permit it, making any purchase a risk. Equally, there is no guarantee that DDA will work with a TV Tuner.

Consequently, I decided to borrow a dual-CPU Dell PowerEdge R630 to perform the experiment, as there were several reports online that the R6xx and R7xx series have the proper VT-d and SR-IOV feature set for this type of activity. Well done Dell (why don’t you advertise this?!).

After updating the firmware, adding the TV Tuner and installing Windows Hyper-V Server 2016 on the machine, the first step was, as an experiment, to attempt to install the TV Tuner drivers on Windows Server 2016 (which failed with an error). After that it was time to run the DDA Survey Script from Microsoft.

Download: DDA Survey Script (GitHub)
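For reference, the survey script is just a PowerShell script run on the host from an elevated prompt. A minimal sketch of running it, assuming you have saved it locally as SurveyDDA.ps1 (the filename used in Microsoft’s Virtualization-Documentation repository):

# Allow the unsigned script to run in this session only
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass

# Enumerate the PCIe devices and report which are DDA candidates
.\SurveyDDA.ps1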

 

This was promising. The script found two devices that it stated were capable of being used with DDA:

PERC H730 Mini
Express Endpoint -- more secure.
And its interrupts are message-based, assignment can work.
PCIROOT(0)#PCI(0100)#PCI(0000)

and

Hauppauge WinTV HVR-4400 (Model 121xxx, Hybrid DVB-T/S2, IR)
Express Endpoint -- more secure.
And it has no interrupts at all -- assignment can work.
PCIROOT(0)#PCI(0200)#PCI(0000)

The next step was to dismount the device from the Hypervisor and make it available to Hyper-V

# Find the HVR-4400
$pnpdevs = Get-PnpDevice -PresentOnly | Where-Object {$_.Class -eq "Media"} | Where-Object {$_.Service -eq "HCW85BDA"}

# ... or if you know the hardware ID
$pnpdevs = Get-PnpDevice -PresentOnly | Where-Object {$_.InstanceId -eq "PCI\VEN_14F1&DEV_8880&SUBSYS_C1080070&REV_04\4&39CDA168&0&0010"}

foreach ($pnpdev in $pnpdevs) {
    # Disable the device on the host first
    Disable-PnpDevice -InstanceId $pnpdev.InstanceId -Confirm:$false
    Write-Host 'Device' $pnpdev.InstanceId 'disabled. NOTE: If this hangs, reboot and try again'
    $instanceId = $pnpdev.InstanceId
    # The location path is what DDA uses to identify the device
    $locationpath = ($pnpdev | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).Data[0]
    Write-Host 'Dismounting device at:' $locationpath '(' $instanceId ')'
    Dismount-VmHostAssignableDevice -LocationPath $locationpath
    Write-Host $locationpath
}

Initially, it hung PowerShell (and the system), so I had to hard reset the server. In this instance it was in fact necessary to reboot after issuing

Disable-PnpDevice

After trying again and rebooting, Dismount-VmHostAssignableDevice failed with

dismount-vmhostassignabledevice : The operation failed.
The manufacturer of this device has not supplied any directives for securing this device while exposing it to a
virtual machine. The device should only be exposed to trusted virtual machines.
This device is not supported when passed through to a virtual machine.
The operation failed.
The manufacturer of this device has not supplied any directives for securing this device while exposing it to a
virtual machine. The device should only be exposed to trusted virtual machines.
This device is not supported and has not been tested when passed through to a virtual machine. It may or may not
function. The system as a whole may become unstable. The device should only be exposed to trusted virtual machines.
At line:1 char:1

It would not proceed past this point. The trick was to change the line to

Dismount-VmHostAssignableDevice -locationpath $locationpath -Force

The next step was to ensure that the VM’s Automatic Stop Action was set to anything other than “Save”

Set-VM -Name "10-TEST" -AutomaticStopAction Shutdown
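If you want to confirm that the setting took, it can be read straight back; a quick check, assuming the same VM name:

# Confirm the Automatic Stop Action is no longer "Save"
Get-VM -Name "10-TEST" | Select-Object Name, AutomaticStopAction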

… and at this point it was simply a case of creating a VM and assigning the device

Add-VMAssignableDevice -LocationPath $locationpath -VMName "10-Test"

At which point the device immediately popped up in Device Manager under Windows 10 in the Generation 2 VM

DDA PCIe Passthrough in Device Manager

… before the VM blue screened a few seconds later.

Blue Screen of Death

I tried several versions of the HVR-4400 driver that I could find and it made no difference. The VM would crash whenever it attempted to talk to the card. The Hypervisor itself did not seem to be impacted by the Blue Screen event and did not itself crash.

I also tried fully removing the device from the Hypervisor using DEVCON and clearing out the driver using pnputil. If anything, this made it worse: the VM would no longer boot at all if it had a driver on file for the TV Tuner, whereas before it would at least boot.
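For anyone retracing these steps, the way back out (returning the tuner to the host) is the reverse of the assignment. A sketch, assuming the $locationpath and $instanceId variables from the earlier script are still populated:

# Detach the device from the (powered off) VM
Remove-VMAssignableDevice -LocationPath $locationpath -VMName "10-Test"

# Hand the device back to the management OS
Mount-VMHostAssignableDevice -LocationPath $locationpath

# Re-enable the device on the host
Enable-PnpDevice -InstanceId $instanceId -Confirm:$false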

So this project was a failure and I will not be investing in new server hardware just yet. I’ll wait to see if Microsoft improve the feature set, as allegedly this type of insanity (and yes, it is insane) is possible in VMware. I do not want to change away from Hyper-V at the current time though, so I will have to stick with a client machine as a service.

This does not mean, of course, that this cannot work in Hyper-V. The HVR-4400 is a card from 2011/2012, so it is not exactly new hardware. PCIe TV Tuners designed to modern electrical standards and for use on PCIe 3.0 bus architectures may provide better interoperability out of the box. I just don’t have any other cards to test with, and am in a bit of a chicken-and-egg situation: I don’t want to invest in new cards and servers unless I know they will work nicely together.

If you are interested in this too and would like me to have a go testing your hardware, please get in touch via HPC:Factor.

Intel Xeon & Core Processor Memory Type Support Matrix

System Requirements:

  • Intel Xeon Processor
  • Intel Core Processor
  • DDR3 / DDR4

The Problem:

I wanted to change a motherboard and CPU without changing the RAM, and I was unable to find a cross-tabulation of DDR memory support for each processor generation. So I created one for the Nehalem and newer Core desktop and Xeon processor lines.

More Info

The DDR generation matrix follows.


Please Note: Ensure that you check with Intel Ark before making any buying decisions. I will not accept responsibility for incorrect purchases as a result of any errors in the table below.

View: Intel Ark

RAM Support (memory speeds in MHz; “Chan” is the number of memory channels):

| CPU Series | Models | Max (GB) | Chan | ECC | DDR3 | DDR4 | Notes |
|---|---|---|---|---|---|---|---|
| E7-8000 v4 | E7-8894 v4, E7-8893 v4, E7-8891 v4, E7-8890 v4, E7-8880 v4, E7-8870 v4, E7-8867 v4, E7-8860 v4 | 3143.68 | 4 | Y | 1066/1333/1600 | 1333/1600/1866 | |
| E7-4000 v4 | E7-4850 v4, E7-4830 v4, E7-4820 v4, E7-4809 v4 | 3143.68 | 4 | Y | 1066/1333/1600 | 1333/1600/1866 | |
| E7-8000 v3 | E7-8893 v3, E7-8891 v3, E7-8890 v3, E7-8880L v3, E7-8880 v3, E7-8870 v3, E7-8867 v3, E7-8860 v3 | 1576.96 | 4 | Y | 1066/1333/1600 | 1333/1600/1866 | |
| E7-4000 v3 | E7-4850 v3, E7-4830 v3, E7-4820 v3, E7-4809 v3 | 1576.96 | 4 | Y | 1066/1333/1600 | 1333/1600/1866 | |
| E7-8000 v2 | E7-8850 v2, E7-8857 v2, E7-8870 v2, E7-8880 v2, E7-8880L v2, E7-8890 v2, E7-8891 v2, E7-8893 v2 | 1576.96 | 4 | Y | 1066/1333/1600 | – | |
| E7-4000 v2 | E7-4890 v2, E7-4880 v2, E7-4870 v2, E7-4860 v2, E7-4850 v2, E7-4830 v2, E7-4820 v2, E7-4809 v2 | 1576.96 | 4 | Y | 1066/1333/1600 | – | |
| E7-2000 v2 | E7-2890 v2, E7-2880 v2, E7-2870 v2, E7-2850 v2 | 1576.96 | 4 | Y | 1066/1333/1600 | – | |
| E7-8000 (v1) | E7-8880, E7-8867L, E7-8860, E7-8850, E7-8837, E7-8830 | 4198.40 | 4 | Y | 800/978/1066/1333 | – | Max speed 1066 MHz |
| E7-4000 (v1) | E7-4870, E7-4860, E7-4850, E7-4830, E7-4820, E7-4807 | 2099.20 | 4 | Y | 800/978/1066/1333 | – | Max speed 1066 MHz |
| E7-2000 (v1) | E7-2870, E7-2860, E7-2850, E7-2830, E7-2820, E7-2803 | 1044.48 | 4 | Y | 800/978/1066/1333 | – | Max speed 1066 MHz |
| E5-4000 v4 | E5-4669 v4, E5-4667 v4, E5-4660 v4, E5-4655 v4, E5-4650 v4, E5-4640 v4, E5-4628L v4, E5-4627 v4, E5-4620 v4, E5-4610 v4 | 1576.96 | 4 | Y | – | 1600/1866/2133/2400 | |
| E5-2000 v4 | E5-2699A v4, E5-2699R v4, E5-2699 v4, E5-2698 v4, E5-2697 v4, E5-2697A v4, E5-2695 v4, E5-2690 v4, E5-2687W v4, E5-2683 v4, E5-2680 v4, E5-2667 v4, E5-2660 v4, E5-2658 v4, E5-2650 v4, E5-2650L v4, E5-2648L v4, E5-2643 v4, E5-2640 v4, E5-2637 v4, E5-2630 v4, E5-2630L v4, E5-2628L v4, E5-2623 v4, E5-2620 v4, E5-2618L v4, E5-2609 v4, E5-2608L v4, E5-2603 v4 | 1576.96 | 4 | Y | – | 1600/1866/2133/2400 | |
| E5-1000 v4 | E5-1680 v4, E5-1660 v4, E5-1650 v4, E5-1630 v4, E5-1620 v4 | 1576.96 | 4 | Y | – | 1600/1866/2133/2400 | |
| E5-4000 v3 | E5-4669 v3, E5-4667 v3, E5-4660 v3, E5-4655 v3, E5-4650 v3, E5-4648 v3, E5-4640 v3, E5-4627 v3, E5-4620 v3, E5-4610 v3 | 768.00 | 4 | Y | – | Y | |
| E5-2000 v3 | E5-2699 v3, E5-2698 v3, E5-2697 v3, E5-2695 v3, E5-2690 v3, E5-2687 v3, E5-2683 v3, E5-2680 v3, E5-2670 v3, E5-2667 v3, E5-2660 v3, E5-2658A v3, E5-2658 v3, E5-2650L v3, E5-2650 v3, E5-2648L v3, E5-2643 v3, E5-2640 v3, E5-2637 v3, E5-2630L v3, E5-2630 v3, E5-2628L v3, E5-2623 v3, E5-2620 v3, E5-2618L v3, E5-2609 v3, E5-2608L v3, E5-2603 v3, E5-2438 v3, E5-2428L v3, E5-2418 v3, E5-2408L v3 | 768.00 | 4 | Y | – | 1600/1866/2133 | |
| E5-1000 v3 | E5-1680 v3, E5-1660 v3, E5-1650 v3, E5-1630 v3, E5-1620 v3, E5-1428L | 768.00 | 4 | Y | – | 1333/1600/1866/2133 | |
| E5-4000 v2 | E5-4603 v2, E5-4607 v2, E5-4610 v2, E5-4620 v2, E5-4624L v2, E5-4627 v2, E5-4640 v2, E5-4650 v2, E5-4657L v2 | 768.00 | 4 | Y | 800/1066/1333/1600 | – | |
| E5-2000 v2 | E5-2603 v2, E5-2609 v2, E5-2618L v2, E5-2630 v2, E5-2630L v2, E5-2637 v2, E5-2640 v2, E5-2643 v2, E5-2648L v2, E5-2650 v2, E5-2650L v2, E5-2658 v2, E5-2660 v2, E5-2667 v2, E5-2670 v2, E5-2680 v2, E5-2687W v2, E5-2690 v2, E5-2695 v2, E5-2697 v2, E5-2403 v2, E5-2407 v2, E5-2418L v2, E5-2420 v2, E5-2428L v2, E5-2430 v2, E5-2430L v2, E5-2440 v2, E5-2448L v2, E5-2450 v2, E5-2450L v2, E5-2470 v2 | 768.00 | 4 | Y | 800/1066/1333/1600/1866 | – | |
| E5-1000 v2 | E5-1660 v2, E5-1650 v2, E5-1620 v2, E5-1428L v2 | 256.00 | 4 | Y | 800/1066/1333/1600/1866 | – | |
| E5-4000 (v1) | E5-4603, E5-4607, E5-4610, E5-4617, E5-4620, E5-4640, E5-4650, E5-4650L | 384.00 | 4 | Y | 800/1066/1333/1600 | – | |
| E5-2000 (v1) | E5-2603, E5-2609, E5-2620, E5-2630, E5-2630L, E5-2637, E5-2640, E5-2643, E5-2648L, E5-2650, E5-2650L, E5-2658, E5-2660, E5-2665, E5-2667, E5-2670, E5-2680, E5-2687W, E5-2690, E5-2403, E5-2407, E5-2418L, E5-2420, E5-2428L, E5-2430, E5-2430L, E5-2440, E5-2448L, E5-2450, E5-2450L, E5-2470 | 384.00 | 4 | Y | 800/1066/1333 | – | |
| E5-1600 (v1) | E5-1620, E5-1650, E5-1660 | 256.00 | 4 | Y | 800/1066/1333/1600 | – | |
| E5-1400 (v1) | E5-1428L | – | 3 | Y | 800/1066/1333 | – | |
| E3-1500 v6 | E3-1535M v6, E3-1505M v6, E3-1505L v6, E3-1501L v6, E3-1501M v6 | 64.00 | 2 | Y | DDR3L-1600, LPDDR3-2133 | 2400 | |
| E3-1200 v6 | E3-1285 v6, E3-1280 v6, E3-1275 v6, E3-1270 v6, E3-1245 v6, E3-1240 v6, E3-1230 v6, E3-1225 v6, E3-1220 v6 | 64.00 | 2 | Y | DDR3L-1866 | 2400 | |
| E3-1500 v5 | E3-1585 v5, E3-1585L v5, E3-1578L v5, E3-1575M v5, E3-1565L v5, E3-1558L v5, E3-1545M v5, E3-1535M v5, E3-1515M v5, E3-1505M v5, E3-1505L v5 | 64.00 | 2 | Y | DDR3L/LPDDR3-1600 | 2133 | DDR4 at 1.2V |
| E3-1200 v5 | E3-1280 v5, E3-1275 v5, E3-1270 v5, E3-1268L v5, E3-1260L v5, E3-1245 v5, E3-1240L v5, E3-1240 v5, E3-1235L v5, E3-1230 v5, E3-1225 v5, E3-1220 v5 | 64.00 | 2 | Y | DDR3L-1333/1600 | 1866/2133 | DDR3L @ 1.35V |
| E3-1200 v4 | E3-1285L v4, E3-1285 v4, E3-1278L v4, E3-1265L v4, E3-1258L v4 | 32.00 | 2 | Y | 1333/1600/1866 | – | DDR3 and DDR3L at 1.5V |
| E3-1200 v3 | E3-1286L v3, E3-1286 v3, E3-1285L v3, E3-1285 v3, E3-1281 v3, E3-1280 v3, E3-1276 v3, E3-1275L v3, E3-1271 v3, E3-1270 v3, E3-1268L v3, E3-1275 v3, E3-1265L v3, E3-1246 v3, E3-1245 v3, E3-1241 v3, E3-1240L v3, E3-1240 v3, E3-1231 v3, E3-1230L v3, E3-1230 v3, E3-1226 v3, E3-1225 v3, E3-1220L v3, E3-1220 v3 | 32.00 | 2 | Y | 1333/1600 | – | DDR3 and DDR3L at 1.5V |
| E3-1200 v2 | E3-1290 v2, E3-1280 v2, E3-1275 v2, E3-1270 v2, E3-1265L v2, E3-1245 v2, E3-1240 v2, E3-1230 v2, E3-1225 v2, E3-1220 v2, E3-1220L v2 | 32.23 | 2 | Y | 1333/1600 | – | |
| E3-1100 v2 | E3-1125C v2, E3-1105C v2 | 32.00 | 2 | Y | 1066/1333/1600 | – | DDR3/DDR3L |
| E3-1200 (v1) | E3-1290, E3-1280, E3-1275, E3-1270, E3-1260L, E3-1245, E3-1240, E3-1235, E3-1230, E3-1225, E3-1220L, E3-1220 | 32.00 | 2 | Y | 1066/1333 | – | |
| E3-1100 (v1) | E3-1125C, E3-1105C | 32.00 | 2 | Y | 1066/1333 | – | |
| D-1500 | – | 128.00 | 2 | Y | Y | Y | Max memory depends upon type |
| Core i7 (8th Gen) | i7-8809G, i7-8709G, i7-8706G, i7-8705G, i7-8700K, i7-8700 | 64.00 | 2 | – | – | 2666 | |
| Core i5 (8th Gen) | i5-8600K, i5-8400, i5-8350U, i5-8305G, i5-8250U | 64.00 | 2 | – | – | 2666 | |
| Core i3 (8th Gen) | i3-8350K, i3-8100 | 64.00 | 2 | – | – | 2400 | |
| Core i7 (7th Gen) | – | 64.00 | 2 | – | DDR3L-1333/1600 | 2133/2400 | DDR3L @ 1.35V |
| Core i5 (7th Gen) | – | 64.00 | 2 | – | DDR3L-1333/1600 | 2133/2400 | DDR3L @ 1.35V |
| Core i3 (7th Gen) | – | 64.00 | 2 | – | DDR3L-1333/1600 | 2133/2400 | DDR3L @ 1.35V |
| Core i7-6900 (6th Gen) | – | 128.00 | 4 | – | – | 2133/2400 | |
| Core i7-6800 (6th Gen) | – | 128.00 | 4 | – | – | 2133/2400 | |
| Core i7 (6th Gen) | – | 64.00 | 2 | – | DDR3L-1333/1600 | 1866/2133 | DDR3L @ 1.35V |
| Core i5 (6th Gen) | – | 64.00 | 2 | – | DDR3L-1333/1600 | 1866/2133 | DDR3L @ 1.35V |
| Core i3 (6th Gen) | – | 64.00 | 2 | – | DDR3L-1333/1600 | 1866/2133 | DDR3L @ 1.35V |
| Core i7-5900 (5th Gen) | – | 64.00 | 4 | – | – | 1333/1600/2133 | |
| Core i7-5800 (5th Gen) | – | 64.00 | 4 | – | – | 1333/1600/2133 | |
| Core i7 (5th Gen) | – | 32.00 | 2 | – | DDR3L-1333/1600/1866 | – | @ 1.5V |
| Core i5 (5th Gen) | – | 32.00 | 2 | – | DDR3L-1333/1600 | – | @ 1.5V |
| Core i7-4900 (4th Gen) | – | 64.00 | 4 | – | 1333/1600/1866 | – | |
| Core i7-4800 (4th Gen) | – | 64.00 | 4 | – | 1333/1600/1866 | – | |
| Core i7 (4th Gen) | – | 32.00 | 2 | – | 1333/1600 | – | DDR3/DDR3L @ 1.5V |
| Core i5 (4th Gen) | – | 32.00 | 2 | – | 1333/1600 | – | DDR3/DDR3L @ 1.5V |
| Core i3 (4th Gen) | – | 32.00 | 2 | – | 1333/1600 | – | DDR3/DDR3L @ 1.5V |
| Core i7-3900 (3rd Gen) | – | 64.45 | 4 | – | 1066/1333/1600 | – | |
| Core i7-3800 (3rd Gen) | – | 64.23 | 4 | – | 1066/1333/1600 | – | |
| Core i7 (3rd Gen) | – | 32.00 | 2 | – | 1333/1600 | – | |
| Core i5 (3rd Gen) | – | 32.00 | 2 | – | 1333/1600 | – | |
| Core i3 (3rd Gen) | – | 32.00 | 2 | – | 1333/1600 | – | |
| Core i7 (2nd Gen) | – | 32.00 | 2 | – | 1066/1333 | – | |
| Core i5 (2nd Gen) | – | 32.00 | 2 | – | 1066/1333 | – | |
| Core i3 (2nd Gen) | – | 32.00 | 2 | – | 1066/1333 | – | |
| Core i7-900 (1st Gen) | – | 24.00 | 3 | – | 800/1066 | – | |
| Core i7-800 (1st Gen) | – | 16.00 | 2 | – | 1066/1333 | – | These will actually take 32GB |
| Core i5 (1st Gen) | – | 16.60 | 2 | – | 1066/1333 | – | |
| Core i3 (1st Gen) | – | 16.38 | 2 | – | 1066/1333 | – | |

All information was sourced from Intel Ark and is deemed to be correct at the time of writing.

If you found this useful, please consider donating toward the running costs of this website.

Netgear ReadyNAS Duo V2 and Jumbo Frames

System Requirements:

  • Netgear ReadyNAS Duo V2
  • Firmware 5.3.12
  • 9K compatible NICs and intermediate Layer 2/Layer 3 hardware

The Problem:

The ReadyNAS Duo V2 is a now-legacy, ARM-based, dual 3.5″ drive SOHO NAS appliance with a single NIC and 256MB of RAM. The device was never intended for performance use or even low-end enterprise tasks.

The ReadyNAS Duo V2 is not designed for Jumbo Frames and there are no user interface entry points to enable it. It is not clear online whether anyone has had any actual success with enabling it. This document explores the issue.

More Info

Wanting to use the device simply as an online backup appliance, I wanted to squeeze as much performance out of it as I could. One of the obvious things to try is enabling Jumbo Frames, which allows more data to be transmitted in a single Ethernet frame before the data has to be re-wrapped in a new header and trailer for Layer 2 transmission over an Ethernet network. The logic is that the fewer CPU cycles spent processing headers and generating and checking CRCs, the faster data can be moved into memory and the smoother its transfer into the disk sub-system. To put numbers on it: the 3.4GB test file used below works out at roughly 2.3 million frames at the standard 1500-byte MTU, versus around 400,000 frames at 9000.

In order to enable Jumbo Frames, the Maximum Transmission Unit (MTU) has to be adjusted on ALL devices in the transmission path: sender NIC, receiver NIC and any and all intermediary switch ports, bridge ports and router ports. If any one device does not have Jumbo Frames configured to the same (or a higher) value, it will fault. If you have one device with a higher value and other devices with a lower value, you will almost certainly see a performance reduction when transmitting from that device. Therefore: set all of your devices to the same, common MTU value.

Depending on the manufacturer, device and driver these are usually:

| Name | Rounded | Offset | Notes |
|---|---|---|---|
| Normal | 1500 | 1514 | |
| 3K | 3000 | 3014 | |
| 4K | 4000 | 4088 | |
| 5K | 5000 | 5014 | |
| 7K | 7000 | 7014 | As a general rule of thumb, the largest MTU you can hope to achieve on a PCI NIC; PCI Express NICs can go up to 9K |
| 9K | 9000 | 9014 | Specialist enterprise-grade hardware is required for MTUs larger than 9K; such sizes are not usually available on 1Gbps NICs (10 or 40Gbps hardware or higher) |
| 16K | 16000 | 16128 | |
| 24K | 24000 | | |
| 32K | 32000 | | |
| 64K | 64000 | | This is the maximum transfer size of a TCP Segment |
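Remember that the PC at the other end of the link needs configuring too. On Windows 8/Server 2012 and later this can be done from PowerShell; a sketch, assuming an adapter named "Ethernet" whose driver exposes the standard "Jumbo Packet" advanced property (display values vary by vendor; Intel drivers use the form "9014 Bytes"):

# Show the current jumbo frame setting on the adapter
Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet"

# Enable 9014-byte frames
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"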

Step 1

This was only tested on Firmware Version 5.3.12. Ensure that your firmware is up to date.

Step 2

You will need to enable Root SSH Access on the device via the official Netgear plugin. Install the plugin via the web UI and reboot the NAS before attempting to proceed.

Download: Enable Root SSH Access Plugin

Step 3

Set a static IP address on the device (or at the very least set and then unset it) to ensure that the config files have been written out correctly. It appears that a clean OS install does not make use of the interfaces file as expected until after a static IP address has been set.

Step 4

SSH into the device (using ssh from Bash, PuTTY on Windows or your preferred client). Usually this is root@<ipaddress>

Step 5

Perform a test to see whether Jumbo Frames currently works. From your PC, send a ping with an oversized payload and the “do not fragment” flag set (without -f, Windows will simply fragment the packet at 1500 bytes and the test will succeed regardless):

ping <ipaddress> -f -l 8000

If you receive a successful “reply from…” then it is already working between your ReadyNAS and your PC. The expected result, however, is for this to fail, indicating that Jumbo Frames is not enabled

Step 6

Perform a volatile (non-persistent) test by enabling Jumbo Frames for the session. If you lose contact with your single-NIC ReadyNAS, simply reboot it to restore functionality. In the SSH session, issue the following command; if you need a lower Jumbo Frame value (for example 7K), change the 9000 value as appropriate.

ip link set dev eth0 mtu 9000
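You can confirm that the interface accepted the new value before testing from the PC:

# Look for "mtu 9000" in the output
ip link show dev eth0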

Step 7

Repeat the ping test from your PC

ping <ipaddress> -f -l 8000

This should now be successful

Step 8

If you wish to make the change permanent so that the setting persists after a restart of the ReadyNAS, you must edit the interfaces file. Return to the SSH session

vi /etc/network/interfaces

In VI press i to commence insert mode

Find the entry for the eth0 interface and at the bottom of the section enter mtu 9000 (or the frame setting that you require) e.g.

iface eth0 inet static
address 192.168.0.100
netmask 255.255.255.0
gateway 192.168.0.254
mtu 9000

Note: That is LOWER CASE “mtu”

To save and exit VI press the Escape key and then type :wq (colon, w, q) and press return

Finally type reboot to restart the ReadyNAS

Step 9

Repeat the ping test and you should find Jumbo Frames working

Everything went wrong and now I cannot access my ReadyNAS

Don’t panic. Just access the boot menu and put it into OS Reinstall mode

  1. Turn the ReadyNAS off
  2. Use a paper clip to hold in the reset button on the back
  3. Keep the clip held in place and turn the ReadyNAS on
  4. Hold the paper clip in for 10 seconds
  5. Release the paperclip
  6. Push the backup button on the front of the ReadyNAS until the Disk 2 LED is the only one illuminated (be very careful that it is Disk 2 and not Disk 1. Disk 2 reinstalls the OS; Disk 1 factory resets the device and deletes all of your data)
  7. Use the paperclip one more time and single press the reset button to execute the boot menu mode
  8. Come back in 20 minutes and use RAIDar to find your ReadyNAS again (you will have to reconfigure the settings)

Does it make a difference?

Comparing the transfer speed before and afterwards does yield a significant improvement in write speed on the device.

The test configuration

The Source

  • 3.4GB ISO
  • Windows Server 2012 R2
  • From an NTFS formatted 5-disk RAID 5 array on an LSI MegaRaid 8260-8i with caching and optimisations enabled
  • A Quad Port Intel I350-T4 Gigabit Server NIC with Jumbo Frames set to 9014 bytes
  • Cat 5e cabling
  • 3 intermediate switches all supporting Jumbo Frames

The Destination

  • ReadyNAS firmware 5.3.12
  • Dual WD Red 3TB WD30EFRX-68AX9N0 with 64MB cache (only 5400 RPM)
  • Write caching enabled on the ReadyNAS
  • The ReadyNAS is running in Flex-RAID RAID 0

To enable write caching over SSH:

# -W1 switches the drive's write cache on (-W0 switches it off)
hdparm -W1 /dev/sda
hdparm -W1 /dev/sda1
hdparm -W1 /dev/sda2
hdparm -W1 /dev/sda3
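hdparm can also read the flag back, which is a useful sanity check:

# Query the write cache state; expect "write-caching = 1 (on)"
hdparm -W /dev/sda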

Transferring the same 3.4GB ISO from the same Windows Server 2012 R2 NTFS volume over the same NIC/Network/Switches with the MTU set at the default 1500 resulted in a non-burst transfer speed of around 43MB/s (344Mbps).

Repeating the transfer with Jumbo Frames set to 9000 increased this noticeably to 53MB/s (424Mbps), roughly a 23% increase in write speed.

Given that 53MB/s isn’t that stellar in the first place, this improvement is certainly worth having.

Comparison of IOCrest SI-PEX40071 SATA III Controller with onboard Intel RST SATA II

System Requirements:

  • A free PCIe 4x slot
  • IOCrest SI-PEX40071

The Problem:

This came about not because I needed or intended to benchmark the controller, but because I had a large number of 1TB drives that I wanted to string together into a Dynamic Disk volume and didn’t have enough ports on the motherboard to connect all of the drives up.

The cheapest solution that I could find was the £45 IOCREST 8 Channel PCI-Express Serial ATA Host Controller Card, a non-RAID HBA for up to 8 SATA III drives, model number SI-PEX40071.

The controller is a cheap 8 port SATA III interface running on a Marvell chipset. The device uses a PCIe 4x slot and presents two controllers to the system bus, not one. This is significant because it means that a) special IOCrest drivers are required for Windows to see the second controller and b) only the disks on the first controller, i.e. ports 0-3, are presented to the BIOS.

Please keep that in mind if you need boot support! The card can boot under BIOS or UEFI if the drive is attached to ports 0-3.

As I had it anyway, and before I put it to use in the dynamic disk, I thought that it would be interesting to see what sort of a difference it would make to a system that only shipped with SATA II on the motherboard. While rotational hard drives cannot saturate a SATA II bus, let alone SATA III, an SSD might come close; consequently SATA III plus an SSD in a PCIe slot (with boot support) would seem like a way to achieve higher transfer speeds.

More Info

I did not have long to test it, so I only performed some rudimentary testing.

The test compared a Samsung 840 Pro 250GB SSD running on Port 0 of the IOCrest controller vs. Port 0 of an Intel SATA II controller on an X58 chipset running in AHCI mode. In all three tests the same SSD was used with the write cache enabled and cache control set to write back, i.e. optimal. All tests were performed on Windows 10 Enterprise and the SSD was the boot drive and the only drive present in the system.

The IOCrest controller was tested twice: one batch with the default Microsoft Windows 10 Marvell driver (only ports 0-3 working) and a second batch with the latest driver from IOCrest (not an actual Windows 10 driver and in fact fairly old, dating from late 2012).

Each configuration was tested 3 times, with the data generated by the latest version of Samsung Magician. The mean of the three runs is presented in the table below.

| Test | Sequential Read (MB/s) | Sequential Write (MB/s) | Random Read (IOPS) | Random Write (IOPS) |
|---|---|---|---|---|
| Intel RST SATA II (Microsoft driver) | 285 | 273 | 41497 | 47480 |
| IOCrest SATA III (Microsoft driver) | 203 | 172 | 32472 | 26492 |
| IOCrest SATA III (IOCrest driver) | 209 | 166 | 35803 | 29177 |

In all cases higher values are preferable.

The onboard Intel storage controller running in AHCI mode outperformed the SATA III PCIe controller by a considerable margin: roughly 80MB/s on read and 100MB/s on write! These values aren’t even close.

Simply put, cheap controllers, especially ones labelled SATA III, are a false economy. I wouldn’t have expected to see something close to enterprise-level hardware; however, I was expecting to see at the very least a moderate performance increase over a SATA II controller.

The sad thing is that with this controller installed, Samsung Magician stops complaining that the SSD is running on a SATA II controller, citing the SATA III controller’s presence as meaning that it is running optimally, even though its performance has been frankly nothing short of crippled.