Installing Pi-hole in Hyper-V

Pi-hole is a DNS-based ad and tracking blocker. Originally intended to run on a Raspberry Pi, Pi-hole is officially supported on almost any Linux distribution. This article discusses the process of installing Pi-hole in Hyper-V for Windows Server. By using Hyper-V, you can fully virtualise your Pi-hole environment for both scale-up and scale-out deployments.

View: Pi-hole Project Home Page

System Requirements

Pi-hole’s requirements are extremely modest. The main factor in defining system requirements is the minimum requirement of the Linux distribution that you choose. Unlike other third-party virtualisation platforms, Hyper-V is highly selective about which distributions will work well with its various Microsoft-orchestrated foibles. Consequently, for Windows Server 2016 and 2019 users, CentOS 7 currently offers the best feature support.

Using CentOS 7, a basic Pi-hole installation will require:

  • 1 CPU core
  • 700 MB RAM
  • 2.5 GB storage space
  • A virtual DVD drive


Understand your Pi-hole Design Architecture

The complexity of your existing network and requirements may mean that there are several design decisions that you should make before installing Pi-hole. It is important that you understand these design requirements before implementing Pi-hole, to prevent service disruption.

How should Pi-hole integrate with existing DNS servers?

There are two options for where you place the Pi-hole Server in the DNS lookup chain: before or after your internal DNS server.

  • Before: Client Device > Pi-hole > DNS Server/Router > Public DNS
  • After: Client Device > DNS Server > Pi-hole > Public DNS


Before the DNS Server

Under the ‘Before’ design, if the Pi-hole goes down, the client device can communicate directly with the existing DNS server/router to make DNS lookups. To use this design, all client devices must be re-configured to perform DNS lookups to the Pi-hole. This is either through adjustment of manual DNS Server assignments on all clients, or by using the DNS options on the DHCP server.

The ‘Before’ design is more flexible as it is possible to have some devices bypass Pi-hole completely (e.g. servers).

If you wish to use the ‘Before’ design, the DHCP Server should advertise the Pi-hole as the client DNS Server. The Pi-hole should in turn specify the router’s DNS address, a public DNS service (or both) as its upstream DNS during installation.

After the DNS Server

In the ‘After’ design, it isn’t possible for any traffic to bypass the Pi-hole filtering. All DNS Servers will rely on the Pi-hole Server for resolution. In this design, no reconfiguration is necessary on clients or the DHCP Server. The administrator must instead reconfigure the DNS Servers so that the Pi-hole server is their preferred forwarder. The Pi-hole should then only be configured to forward requests to the ISP’s DNS Servers or directly to Public DNS.

The ‘After’ design is less flexible, and will be more prone to issues caused by Pi-hole service outages. The advantage is that for secure environments, it is not possible for DNS traffic to bypass filtering.


Redundancy & Failover

If you only have a single Pi-hole server and it fails, what will be the impact to your network? The Pi-hole server could crash, be turned off or even hacked. More likely is the impact of you performing servicing and maintenance on the Pi-hole, creating downtime. Most business deployments should therefore make use of virtualisation to provide failover and redundancy for the Pi-hole VM. Additionally, most business deployments should have at least 2 Pi-hole servers, both of which should be made available to service clients using either load balancing or Primary/Secondary DNS Server preferences.

In most ‘Before’ environments (see above), you should continue to advertise the existing, non-Pi-hole DNS Servers to your clients. For example, your DHCP Server’s primary DNS option should point to the Pi-hole Server and its secondary to the router. By advertising a secondary, you give the client a chance to recover (with ads) if the Pi-hole Server fails.
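
If your DHCP role runs on Windows Server, the scope DNS options can be set from PowerShell. This is a sketch only – the scope ID, Pi-hole address and router address below are placeholders that you must replace with your own values:

# Primary DNS = Pi-hole, secondary = router; the order determines client preference
Set-DhcpServerv4OptionValue -ScopeId -DnsServer,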


IP Addressing

You must use a static IP / reserved IP address for your Pi-hole installation. Ensure that you know what this address is before commencing the installation and that any intermediate firewalls allow HTTP (TCP 80) and DNS (UDP 53) traffic.


Integration with Active Directory

Under ‘Before’ deployments, an issue may occur for businesses using Directory Services (e.g. Active Directory). The Pi-hole Server will receive an internal request for your company domain, but will forward this directly to Public DNS. As the AD domain is private, Public DNS will fail to resolve the request. Within a few minutes of deploying Pi-hole, network clients will become unable to communicate with resources on your business network.

To ensure that there is no DNS route susceptible to this issue, you must ensure that a conditional forwarder is configured before going live. You add the conditional forwarder to Pi-hole via Settings > DNS > Conditional Forwarding.

Warning: At the current time, Pi-hole only supports a single Conditional Forwarder. This may be a problem in multi-domain Active Directory forests.
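
If you do need to forward more than one internal domain, one possible workaround (a sketch, not an official Pi-hole feature) is to add extra dnsmasq ‘server’ entries in a custom configuration file on the Pi-hole server. The file name, domain names and DNS server address below are placeholders:

nano /etc/dnsmasq.d/05-extra-forwarders.conf

# Forward each additional internal domain to an authoritative internal DNS server
server=/corp.example.com/
server=/child.corp.example.com/

Reload the resolver afterwards with pihole restartdns.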


Create the Virtual Machine

In Hyper-V Manager, create a new Virtual Machine with the following characteristics:

  • Generation 2
  • Secure Boot Disabled
  • 512 MB startup RAM
    • Enable dynamic memory with 256 MB minimum RAM and 1024 MB maximum RAM
  • 2 virtual processors
  • A virtual hard drive – the defaults are fine, but I suggest a minimum size of 10 GB
  • A virtual DVD drive
    • Mount the CentOS 7 installer ISO into the virtual DVD Drive
  • A virtual network adapter, connected to your client facing LAN
  • Automatic start action: Always start this virtual machine automatically
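
If you prefer to script the build, the following PowerShell sketch creates a VM to roughly this specification. The VM name, file paths and switch name are placeholders – adjust them for your host:

# Create the Generation 2 VM with a 10 GB VHDX on the client-facing switch
New-VM -Name 'PiHole01' -Generation 2 -MemoryStartupBytes 512MB -NewVHDPath 'D:\Hyper-V\PiHole01\PiHole01.vhdx' -NewVHDSizeBytes 10GB -SwitchName 'LAN'

# Dynamic memory, vCPU count, Secure Boot and automatic start
Set-VMMemory -VMName 'PiHole01' -DynamicMemoryEnabled $true -MinimumBytes 256MB -StartupBytes 512MB -MaximumBytes 1024MB
Set-VMProcessor -VMName 'PiHole01' -Count 2
Set-VMFirmware -VMName 'PiHole01' -EnableSecureBoot Off
Set-VM -Name 'PiHole01' -AutomaticStartAction Start

# Attach the CentOS 7 installer ISO
Add-VMDvdDrive -VMName 'PiHole01' -Path 'D:\ISO\CentOS-7-x86_64-Minimal.iso'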


Installing CentOS

Download the ~950MB Minimal CentOS ISO from the CentOS website. Using the Minimal install ensures that as little RAM and disk space as possible will be used for your Pi-hole environment. A consequence of this is that you will not have an X11 GUI to work with. If this is something that you find off-putting, be reassured that there is not much that you need to do on the Linux command line to get Pi-hole up and running.

Download: CentOS 7

Boot the Hyper-V virtual machine and, from the DVD boot menu, choose to install CentOS 7.

Ensure that you:

  1. Select your correct region, time zone and keyboard layout
  2. Configure your Virtual Machine’s network adapter to use a static IP address, subnet prefix (e.g. ‘24’) and default gateway appropriate for your network
  3. Ensure that you set the correct upstream DNS servers for your system; this could be your internal DNS servers, your router or a public DNS service such as Google DNS on ‘,’ (note the comma)
  4. If you are on a business network, set the DNS search domain to match your network domain name
  5. Once ready, begin the CentOS 7 installation. While it is installing, ensure that you set a new ‘root’ password. You do not need to add any additional user accounts to the CentOS install to use Pi-hole
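
If you missed or need to change the network settings after installation, the same configuration can be applied from the console with nmcli once you have logged in as root. This is a sketch only: the connection name (eth0), addresses and search domain are placeholders for your own values.

# Placeholder addresses – substitute your own static IP, gateway, DNS servers and domain
nmcli connection modify eth0 ipv4.method manual ipv4.addresses ipv4.gateway ipv4.dns ',' ipv4.dns-search 'mydomain.local'
nmcli connection up eth0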


Complete CentOS Configuration

Once the installation has finished, you will be presented with the white on black CentOS logon screen. There are a small number of pre-configuration steps to perform before you can install Pi-hole.

Note: Your Virtual Machine will need access to the Internet from this point forward.

  1. Log-in as ‘root’ (lower case), using the password you entered during setup
  2. Type the following command, confirming that you accept the installation of the new application. Please note that ALL Linux command line commands are case sensitive:
    yum install nano
  3. Once nano is installed, execute the command:
    nano /etc/sysconfig/selinux
  4. Nano will load the ‘selinux’ configuration file. Use the cursor keys and delete/backspace to locate the line beginning ‘SELINUX=enforcing’
  5. Edit the ‘SELINUX=enforcing’ line to read:
    SELINUX=disabled
    Note: This is case sensitive.
    By changing the SELINUX configuration value, you will lower the system security level to permit Pi-hole to run. Pi-hole cannot load unless this step has been performed.
    Disable SELinux
  6. To save the file and exit Nano, enter the following commands on the keyboard:
    Ctrl + O (to save the file)
    Press Enter (to accept the file name and complete the save)
    Ctrl + X (to exit Nano)
  7. You must now reboot the Virtual Machine. Do this by entering the following command:
    shutdown -r now
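
If you would rather not edit the file interactively, the same change can be scripted. On CentOS 7, /etc/sysconfig/selinux is normally a symlink to /etc/selinux/config, so the sketch below targets the real file; verify the result before rebooting:

sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config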


Installing Pi-hole

Once CentOS is configured, you can install Pi-hole.

  1. Once the Virtual Machine has rebooted, log in once again as ‘root’
  2. Enter the following command to download the Pi-hole installer and begin setup:
    curl -sSL https://install.pi-hole.net | bash
  3. Follow through the wizard, choosing options suitable for your environment. There are no special configuration steps required for running Pi-hole specifically under Hyper-V
  4. You can take a note of the randomly generated admin account password; however, I recommend following the steps in the Post-install Tasks section to change the password immediately


Post-install Tasks

Before you start to use Pi-hole, you should perform some basic configuration tasks.

  1. Change the randomly assigned Pi-hole admin account password to something more memorable. From the command line enter:
    pihole -a -p
    When you press return, you will be asked to enter a new password
  2. Update and patch CentOS
    yum update
    Accept any updates that are available to install and allow CentOS to self-update
  3. Once CentOS has finished updating reboot the server a final time by issuing the reboot command:
    shutdown -r now
  4. [Optional] Consider enabling HTTPS (TCP 443) support for the Pi-hole admin interface.
    View: Enabling HTTPS for your Pi-hole Web Interface
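
On a CentOS 7 minimal install, firewalld is normally enabled and may block client traffic to the Pi-hole. The installer usually offers to open the required ports for you; if clients cannot resolve names or reach the admin page, the following sketch opens DNS and HTTP:

firewall-cmd --permanent --add-service=dns
firewall-cmd --permanent --add-service=http
firewall-cmd --reload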


Web Interface Configuration Tasks

After the installation has completed successfully, you can log into your Pi-hole installation from any web browser on any device via:

  • http://pi.hole/admin
  • http://<your IP address>/admin


[Optional] If you are using Active Directory, or another X.500 DNS-bound directory service, add a conditional forwarder for your domain:

  1. Navigate to: Settings > DNS > Conditional Forwarding
  2. Tick the “Use Conditional Forwarding” check box
  3. In the “IP of your router” text box, enter the IP address of an authoritative DNS server for your domain
  4. In the “Local domain name” text box, enter your fully qualified domain name


Maintenance Tasks

As with any software application, Pi-hole will periodically receive updates. Similarly, so will CentOS. Unfortunately, at the time of writing, Pi-hole does not support self-updating via the web interface. Consequently, you should periodically log into the CentOS VM and perform maintenance from the command line. This is a fairly simple process, requiring only two commands:

  1. Update CentOS:
    yum update
  2. Update Pi-hole:
    pihole -up
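
If you want to see which component versions are currently installed before updating, the pihole -v command lists the core, web interface and FTL versions:

pihole -v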

You may also wish to periodically reboot the server after updating by issuing the reboot command:
shutdown -r now

You can also reboot the Pi-hole server via the Settings page on the web interface.

Hyper-V Error Message Cheat-sheet

The aim of this article is to create a slowly growing cheat-sheet of Hyper-V Errors and known fixes.

Errors without Error Codes

These errors are UI errors that do not provide a Hex Error Code / Win32 Error Code.

There was an error during move operation.

Migration operation on ‘<VM Name>’ failed.

See 0x80041024

There was an error during move operation.

Virtual machine migration operation failed at migration source.

Failed to create folder.

See 0x80070005


Errors with Error Codes

The following errors have error codes either stated on GUI error messages, as part of PowerShell error messages or in the Hyper-V event logs in Event Viewer.


Event ID: 16000

The Hyper-V Virtual Machine Management service encountered an unexpected error: Provider is not capable of the attempted operation (0x80041024).

  1. One or both of the Hyper-V servers has an invalid or inaccessible IP address/network address on the “Incoming live migrations” list on the “Live Migrations” tab in Hyper-V settings
  2. The live migration network is down, or a cable is unplugged or faulty

0x80070002

Could not create backup checkpoint for virtual machine ‘<VM Name>’: The system cannot find the file specified. (0x80070002). (Virtual machine ID <VM GUID>).

For more on this, see my in-depth article.

  1. Check that you have enough free disk space on the volume to perform the VSS snapshot. If your volume has less than 15% free, try using Hyper-V Manager to change the snapshot directory to another volume – plug in an external NTFS-formatted hard drive if you have to.
  2. Check the permissions of the VHD stated in the error.
    icacls “C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\<VHD File>” /grant “NT VIRTUAL MACHINE\Virtual Machines”:F
  3. Check that you can manually checkpoint/snapshot the VM while it is running.
    In Hyper-V Manager or in PowerShell, force a checkpoint on the VM and then delete it and wait for it to merge back (a PowerShell sketch follows this list). If this works, you are not having a physical VSS issue. If it fails, you need to troubleshoot this and not the WSB error.
  4. Live Migrate the VM off of the current server and onto a different Hypervisor, attempt the backup here, then bring it back to the original server and try again. This process will reset the permissions on the VM file set. If you cannot live or offline migrate the VM, then you need to troubleshoot this and not the WSB error
  5. Ensure that all VHDX files associated with the VM are on the same storage volume and not spread across multiple Volumes/LUNs (be it individual disks, logical RAID or iSCSI disks). If they are, move them to a common location and retry the operation.
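
For step 3, the checkpoint test can be run from PowerShell. The VM name ‘MyVM’ is a placeholder, and note that whether the checkpoint exercises VSS inside the guest depends on the VM’s checkpoint type (see Set-VM -CheckpointType):

# Create, list and then remove a test checkpoint
Checkpoint-VM -Name 'MyVM' -SnapshotName 'VSS test'
Get-VMSnapshot -VMName 'MyVM'
Remove-VMSnapshot -VMName 'MyVM' -Name 'VSS test'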

0x80070005

‘General access denied error’ (‘0x80070005’)

For more on this, see my in-depth article.

  1. If you are using a management console (not the local Hyper-V console) to perform the migration, you must set up Kerberos Constrained Delegation for all of your Hyper-V hosts’ machine accounts in Active Directory (a PowerShell sketch follows this list)
  2. Ensure that both NetBIOS and fully qualified DNS entries exist in the constrained delegation list (click the “Expanded” check box to verify this) e.g. myserver1 and myserver1.mydomain.local for both “CIFS” and “Microsoft Virtual System Migration Service”
  3. For Hyper-V 2008, 2008 R2, 2012 and 2012 R2 “Trust this computer for delegation to specified services only” should be used along with “Use Kerberos only”
  4. For Hyper-V 2016 and 2019 or a mixed environment “Trust this computer for delegation to specified services only” must be used along with “Use any authentication protocol”
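
The delegation can also be scripted with the ActiveDirectory PowerShell module. The following is a sketch only – the host names HV01/HV02 and the domain mydomain.local are placeholders, and it mirrors the 2016/2019 “any authentication protocol” configuration; many administrators will prefer to configure this through Active Directory Users and Computers instead:

Import-Module ActiveDirectory

# SPNs that HV01 may delegate to on HV02 (repeat for each pair of hosts)
$spns = 'cifs/HV02', 'cifs/HV02.mydomain.local',
        'Microsoft Virtual System Migration Service/HV02',
        'Microsoft Virtual System Migration Service/HV02.mydomain.local'

Set-ADComputer -Identity 'HV01' -Add @{'msDS-AllowedToDelegateTo' = $spns}

# 'Use any authentication protocol' (protocol transition)
Set-ADAccountControl -Identity (Get-ADComputer 'HV01') -TrustedToAuthForDelegation $true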

Event ID: 20306

The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host ‘<Hypervisor FQDN>’: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (0x8007274C).

  1. The Live Migration firewall rules have not been enabled in Windows Firewall (or your third-party firewall). This is especially easy to forget on Server Core and Hyper-V Server installs (see the PowerShell sketch after this list)
  2. No DNS server(s) or invalid DNS server(s) have been specified on the network configuration for the hypervisor
  3. Specified DNS server(s) are unreachable
  4. An intermediate firewall or IDS blocked the request
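
On Server Core and Hyper-V Server, where there is no firewall GUI, the rules can be checked and enabled from PowerShell. Rule and group names vary between Windows versions, so treat this as a sketch and review what is returned before enabling anything:

Get-NetFirewallRule -DisplayGroup 'Hyper-V' | Select-Object DisplayName, Enabled
Enable-NetFirewallRule -DisplayGroup 'Hyper-V'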

Sniffing the Parent Partitions Network Traffic in a Hyper-V Virtual Machine

This article discusses a situation whereby you want to monitor/mirror/sniff network port traffic on a Hyper-V Parent Partition inside one of its own child VMs.

Why would you need to do this?

Under a traditional architecture you have the flexibility to tell your switch to mirror all traffic into or out of Port 6 onto Port 21. You then connect a laptop to Port 21 and promiscuously monitor the traffic coming into that port. Under a modern Converged/Software Defined Network architecture, this will not work.

In a modern Converged Fabric design, physical NICs are teamed. The parent partition on the hypervisor no longer uses the physical NICs directly, but logically uses its own synthetic NICs for data transfers.

  1. Link Aggregation/LACP/EtherChannel will split the traffic at the switch
  2. Teaming/LBFO will split the traffic at the hypervisor
  3. Data security will fire a red flag as you will be monitoring too much unrelated traffic
  4. If you combine them, you will overload the monitoring Port with aggregated traffic, causing performance issues and packet loss
  5. You may impact the performance of tenant VMs and mission-critical services

Fortunately, the Parent Partition’s own virtual NICs are identical to the vNICs in any Hyper-V virtual machine. Consequently, you can use the same Hyper-V functionality on the Parent Partition as you would on any VM.



In order to sniff traffic on the Parent Partition you must ensure the following:

  1. The Parent Partition and the VM must be connected to the same Virtual Switch
  2. The “Microsoft NDIS Capture” extension must be enabled on the Virtual Switch (this is enabled by default)
    Enable the Microsoft NDIS Capture Extensions
  3. The monitoring VM should have 2 vNICs. The vNIC used to monitor traffic should be configured onto the same VLAN as the vNIC on the Parent Partition. The monitoring vNIC should have all of its service and protocol bindings disabled to ensure that only port-mirrored traffic appears in the Wireshark logs (see the PowerShell sketch after this list)
    Disabling service and protocol bindings on the vNIC
  4. Wireshark, Microsoft Network Monitor or another promiscuous network traffic monitor
  5. If you are in a corporate environment, ensure that you have approvals from your Information Security team. In some jurisdictions port sniffing can be considered an offence
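
For point 3, the bindings on the monitoring vNIC can be removed from inside the VM with PowerShell. The adapter name ‘Ethernet 2’ is a placeholder for whatever the second vNIC appears as in the guest:

# Review the current bindings, then disable the common services and protocols
Get-NetAdapterBinding -Name 'Ethernet 2'
Disable-NetAdapterBinding -Name 'Ethernet 2' -ComponentID ms_tcpip, ms_tcpip6, ms_msclient, ms_server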


Enabling Port Sniffing

You cannot enable Port Sniffing on the Parent Partition using the Hyper-V Manager GUI. Open a PowerShell session on (or to) the Parent Partition.

Execute Get-NetAdapter

Identify the name of the vNIC that you will sniff traffic to/from, e.g. vEthernet (Management)

Taking only the value inside the parentheses, e.g. "Management", enter the following command:

Get-VMNetworkAdapter -ManagementOS 'Management' | Set-VMNetworkAdapter -PortMirroring Source

Substituting WireSharkVm for the name of your monitoring VM, execute Get-VMNetworkAdapter 'WireSharkVm'

Identify the MAC address of the vNIC that you will use to receive the Port Mirror from the Hyper-V host and enable it as the recipient for the mirror:

Get-VMNetworkAdapter 'WireSharkVm' | ?{$_.MacAddress -eq '001512AB34CD'} | Set-VMNetworkAdapter -PortMirroring Destination

If the Parent Partition and VM vNICs are in the same VLAN, you should now be able to sniff traffic inbound to / outbound from the Parent Partition.


Disabling Port Sniffing

When using Port Mirroring, remember that it consumes CPU time and network resources on the hypervisor. To disable the port mirror, repeat the above commands, substituting ‘None’ as the keyword for the PortMirroring parameter, e.g.

Get-VMNetworkAdapter -ManagementOS 'Management' | Set-VMNetworkAdapter -PortMirroring None
Get-VMNetworkAdapter 'WireSharkVm' | ?{$_.MacAddress -eq '001512AB34CD'} | Set-VMNetworkAdapter -PortMirroring None

Scanning and repairing drive 9% complete – the curse of chkdsk

This article discusses an issue of a computer getting stuck at boot with the message “Scanning and repairing drive 9% complete” with chkdsk hanging at 9%.

The hypervisor was 12 months overdue for a BIOS update. Updating the UEFI should be simple enough; however, SuperMicro have a nasty habit of clearing the CMOS during BIOS updates. Why most other OEMs are able to transfer settings while SuperMicro insists on not doing so is one of only a few gripes that I have ever had with the firm, yet it is a persistent one that goes back to 1998.

The Fault

After the successful update, I reset the BIOS to the previous values as best I could recall. Unfortunately I also enabled the firmware watchdog timer.

SuperMicro’s firmware level watchdog timer does not operate as you might expect. It requires a daemon or service to be present within the running operating system that polls the watchdog interrupt periodically. If the interrupt isn’t polled, the firmware forces a soft reboot. Supermicro do not provide a driver to do this for Windows, although their IPMI implementation can do so.

Five minutes after POST, the hypervisor performed an ungraceful, uninitiated reset. Following the first occurrence, I assumed it was completing Windows Update. Subsequent to the second, I was looking for a problem, and after the third (and a carefully placed stopwatch) I had a suspicion that I must have turned on the UEFI watchdog.

I was correct and, after disabling it, the issue was resolved.

This particular hypervisor has SSD block storage for VMs internally and large block storage for backup via an external USB 3.1 enclosure – a lot of it. Without giving it any thought, I told the system to

chkdsk <mountPoint> /F

Note that this does not include the /R switch to perform a 5-step surface scan. I told chkdsk not to dismount the volumes, but to bundle all of the scans together during the reboot required to scan the C: drive. Doing it this way meant that I could walk away from the system. In theory, when chkdsk finished, the server would rejoin the Hyper-V cluster on its own and become available to receive workloads.

… and restarted.


Scanning and repairing drive 9% complete

chkdsk skipped the SSD storage as it is all configured as ReFS. Under ReFS, disk checking is not required as it performs journaling activities in the background to preserve data integrity. Unfortunately, the external backup enclosure volume was NTFS. It would be scanned – and it was also quite full.

The system rebooted and sat at the intermediate chkdsk stage of the NT boot process. It zipped through the SSD NTFS boot volume in a few seconds before hitting the external enclosure. Within around 5 minutes it had arrived at the magic “9% complete” threshold.

1 hour, 2 hours, 4 hours… 8 hours. That turned into 24 hours, and the message was still the same.

Scanning and repairing drive (F:): 9% complete.

Crashing the chkdsk

The insanity of waiting over 24 hours had to come to an end, and I used IPMI to forcefully shut down the server.

After a minute or two, I powered it back on, only to be met with a black screen of death from Windows after the POST.

The c:\pagefile.sys was corrupt and unreadable; Windows offered either to perform a system recovery or to press Enter to load the boot menu. On pressing Enter, the single option to boot Windows Server 2019 was present and, after a few moments, Windows self-deleted the corrupt pagefile.sys, recreated it and booted – to much relief.

I then ran

chkdsk c: /f

and rebooted, which completed within a few seconds and marked the volume as clean, with no reported anomalies.

The Windows System Event Log contained no errors (in fact, as you might expect, no data) for the 24-hour period that the server had been ‘down’. There were no ‘after the event’ errors added to the System log or any of the Hardware or Disk logs either. For all intents and purposes, the system reported as fine.


Trying chkdsk for a second time

I decided to brave running chkdsk on the external enclosure again, initially in read-only mode:

chkdsk F:

Note the absence of the /F switch here.

It zipped through the process in a few seconds stating

Windows has scanned the file system and found no problems.
No further action is required.

Next I ran a full 3-phase scan

chkdsk F: /F

Again, it passed the scan in a few seconds without reporting any errors. So much for the last 24 hours!



The corruption in the page file indicates that Windows was doing something. The disk array was certainly very active: disk activity was visible via LED, audible acoustically and confirmed by data from the power monitor on the server, all indicating that “something” was happening. Forcibly shutting down the system killed the page file during a write. Had it been a 5-step chkdsk F: /f /r scan, I could understand the length of time that it was taking.

With chkdsk /f /r – assuming a hard drive with 512-byte sectors – the system has to test 1,953,125,000 sectors for each terabyte of disk space (1,000,000,000,000 ÷ 512). Depending on the drive speed, CPU speed and RAM involved, it isn’t uncommon to hear of systems taking 5 hours per terabyte to scan. This scan was not a 5-step scan, just a 3-step one, and a live Windows environment could scan the same disk correctly in a few seconds.

Resources were not an issue in this system. Being a hypervisor, it had 128GB of RAM and was running with 2018 manufactured processors.

My suspicion is that the problem exists because of a bad interaction between the boot level USB driver and the USB enclosure. The assumption is that Windows fell into either a race condition or a deadlocked loop. During this fault, chkdsk was genuinely scanning the disk and diagnostic data was being tested in virtual memory (i.e. in the page file) but it was never able to successfully exit.

The lesson that I will take away from this experience is to avoid using a boot-cycle chkdsk to perform a scan on a USB disk enclosure.