Installing Pi-hole in Hyper-V

Pi-hole is a DNS-based ad and tracking blocker. Originally intended to run on a Raspberry Pi, Pi-hole officially supports almost any Linux distribution. This article discusses the process of installing Pi-hole in Hyper-V for Windows Server. By using Hyper-V, you can fully virtualise your Pi-hole environment for both scale-up and scale-out deployments.

View: Pi-hole Project Home Page

System Requirements

Pi-hole’s requirements are extremely modest. The main factor in sizing the system is the minimum requirement of the Linux distribution that you choose. Unlike other third-party virtualisation platforms, Hyper-V is highly selective over which distributions will work well with its various Microsoft-orchestrated foibles. Consequently, for Windows Server 2016 and 2019 users, CentOS 7 currently offers the best feature support.

Using CentOS 7, a basic Pi-hole installation will require:

  • 1 CPU core
  • 700 MB RAM
  • 2.5 GB storage space
  • A virtual DVD drive


Understand your Pi-hole Design Architecture

The complexity of your existing network and requirements may mean that there are several design decisions that you should make before installing Pi-hole. It is important that you understand these design requirements before implementing Pi-hole, to prevent service disruption.

How should Pi-hole integrate with existing DNS servers?

There are two options for where you place the Pi-hole server in the DNS lookup chain: before or after your internal DNS server.

  • Before: Client Device > Pi-hole > DNS Server/Router > Public DNS
  • After: Client Device > DNS Server > Pi-hole > Public DNS


Before the DNS Server

Under the ‘Before’ design, if the Pi-hole goes down, the client device can communicate directly with the existing DNS server/router to make DNS lookups. To use this design, all client devices must be re-configured to perform DNS lookups to the Pi-hole. This is either through adjustment of manual DNS Server assignments on all clients, or by using the DNS options on the DHCP server.

The ‘Before’ design is more flexible as it is possible to have some devices bypass Pi-hole completely (e.g. servers).

If you wish to use the ‘Before’ design, the DHCP Server should advertise the Pi-hole as the client DNS Server. The Pi-hole should in turn specify the router’s DNS address, public DNS or both as its upstream DNS servers during installation.

After the DNS Server

In the ‘After’ design, it isn’t possible for any traffic to bypass the Pi-hole filtering. All DNS Servers will rely on the Pi-hole Server for resolution. In this design, no reconfiguration is necessary on clients or the DHCP Server. The administrator must instead reconfigure the DNS Servers so that the Pi-hole server is their preferred forwarder. The Pi-hole should then only be configured to forward requests to the ISP’s DNS Servers or directly to Public DNS.

The ‘After’ design is less flexible and will be more prone to issues caused by Pi-hole service outages. The advantage is that, in secure environments, it is not possible for DNS traffic to bypass filtering.
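If your internal DNS runs on Windows Server, the forwarder change can be scripted. A minimal sketch, assuming the Pi-hole is at 192.168.1.2 and that you run this on each internal DNS server; note that the cmdlet replaces the existing forwarder list:

# Replace this DNS server's forwarders with the Pi-hole (assumed address)
Set-DnsServerForwarder -IPAddress 192.168.1.2 -PassThru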


Redundancy & Failover

If you only have a single Pi-hole server and it fails, what will be the impact on your network? The Pi-hole server could crash, be turned off or even hacked. More likely is the impact of you performing servicing and maintenance on the Pi-hole, creating downtime. Most business deployments should therefore make use of virtualisation to provide failover and redundancy for the Pi-hole VM. Additionally, most business deployments should have at least two Pi-hole servers, both of which should be made available to service clients using either load balancing or Primary/Secondary DNS Server preferences.

In most ‘Before’ environments (see above), you should continue to advertise the existing, non-Pi-hole DNS Servers to your clients. For example, if your Router’s IP address is 192.168.1.254 and the Pi-hole Server’s is 192.168.1.2, your DHCP Server’s primary DNS option should be 192.168.1.2 (for the Pi-hole) and its secondary should be 192.168.1.254. By advertising a secondary, you give the client a chance to recover (with ads) if the Pi-hole Server fails.
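On a Windows Server DHCP server, this is scope option 6 and can be set in one line. A sketch using the example addresses above and an assumed scope ID of 192.168.1.0:

# Pi-hole first, router second (option 6; -Force skips validation if the Pi-hole is not yet online)
Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 -DnsServer 192.168.1.2,192.168.1.254 -Force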


IP Addressing

You must use a static or DHCP-reserved IP address for your Pi-hole installation. Ensure that you know what this address is before commencing the installation, and that any intermediate firewalls allow HTTP (TCP 80) and DNS (UDP 53) traffic.
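From any Windows machine on the client side of the firewall, you can verify both before go-live. A quick check, assuming the example Pi-hole address used above:

# Confirm the admin interface (TCP 80) answers and that DNS (UDP 53) resolves
Test-NetConnection -ComputerName 192.168.1.2 -Port 80
Resolve-DnsName -Name example.com -Server 192.168.1.2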


Integration with Active Directory

Under ‘Before’ deployments, an issue may occur for businesses using Directory Services (e.g. Active Directory). The Pi-hole Server will receive an internal request for your company domain, e.g. ad.mycompany.co.uk, but will forward this directly to Public DNS. As the AD domain is private, the Public DNS will fail to resolve the request. Within a few minutes of deploying Pi-hole, network clients will become unable to communicate with resources on your business network.

To ensure that no DNS route is susceptible to this issue, you must configure a conditional forwarder before going live. You add the conditional forwarder to Pi-hole via Settings > DNS > Conditional Forwarding.

Warning: At the current time, Pi-hole only supports a single Conditional Forwarder. This may be a problem in multi-domain Active Directory forests.


Create the Virtual Machine

In Hyper-V Manager, create a new Virtual Machine with the following characteristics (a PowerShell equivalent is sketched after the list):

  • Generation 2
  • Secure Boot Disabled
  • 512 MB startup RAM
    • Enable dynamic memory with 256 MB minimum RAM and 1024 MB maximum RAM
  • 2 virtual processors
  • A virtual hard drive – the defaults are fine, but I would suggest a minimum size of 10 GB
  • A virtual DVD drive
    • Mount the CentOS 7 installer ISO into the virtual DVD Drive
  • A virtual network adapter, connected to your client facing LAN
  • Automatic start action: Always start this virtual machine automatically
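Alternatively, the same VM can be created from PowerShell. A sketch with assumed names and paths – adjust the VHDX path, ISO path and switch name for your environment:

New-VM -Name 'Pi-hole' -Generation 2 -MemoryStartupBytes 512MB `
    -NewVHDPath 'D:\Hyper-V\Pi-hole\Pi-hole.vhdx' -NewVHDSizeBytes 10GB `
    -SwitchName 'LAN'
Set-VMProcessor -VMName 'Pi-hole' -Count 2
Set-VMMemory -VMName 'Pi-hole' -DynamicMemoryEnabled $true `
    -MinimumBytes 256MB -StartupBytes 512MB -MaximumBytes 1024MB
Set-VMFirmware -VMName 'Pi-hole' -EnableSecureBoot Off
Add-VMDvdDrive -VMName 'Pi-hole' -Path 'D:\ISO\CentOS-7-x86_64-Minimal.iso'
Set-VM -Name 'Pi-hole' -AutomaticStartAction Start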


Installing CentOS

Download the ~950 MB Minimal CentOS ISO from the CentOS website. Using the Minimal install ensures that as little RAM and disk space as possible is used for your Pi-hole environment. A consequence of this is that you will not have an X11 GUI to work with. If this is something that you find off-putting, be reassured that there is not much that you need to do on the Linux command line to get Pi-hole up and running.

Download: CentOS 7

Boot the Hyper-V virtual machine and, from the DVD boot menu, choose to Install CentOS 7.

Ensure that you:

  1. Select your correct region, time zone and keyboard layout
  2. Configure your Virtual Machines network adapter to use a static IP address e.g. ‘192.168.1.2’ with a ’24’ mask and a gateway of ‘192.168.1.254’
  3. Ensure that you set the correct upstream DNS servers for your system; this could be your internal DNS servers, your router or a public DNS service such as Google DNS on ‘8.8.8.8, 8.8.4.4’ (note the comma)
  4. If you are on a business network, set the DNS search alias to match your network domain name e.g. ad.mycompany.co.uk
  5. Once ready, begin the CentOS 7 installation. While it is installing, ensure that you set a new ‘root’ password. You do not need to add any additional user accounts to the CentOS install to use Pi-hole


Complete CentOS Configuration

Once the installation has finished, you will be presented with the white on black CentOS logon screen. There are a small number of pre-configuration steps to perform before you can install Pi-hole.

Note: Your Virtual Machine will need access to the Internet from this point forward.

  1. Log-in as ‘root’ (lower case), using the password you entered during setup
  2. Type the following command, confirming that you accept the installation of the new application. Please note that ALL Linux command line commands are case sensitive:
    yum install nano
  3. Once nano is installed, execute the command:
    nano /etc/sysconfig/selinux
  4. Nano will load the ‘selinux’ configuration file. Using the cursor keys and delete/backspace, locate the line
    SELINUX=enforcing
  5. Edit the ‘SELINUX=enforcing’ line to read:
    SELINUX=permissive
    Note: This is case sensitive.
    By changing the SELINUX configuration value, you will lower the system security level to permit Pi-hole to run. Pi-hole cannot load unless this step has been performed.
  6. To save the file and exit Nano, enter the following commands on the keyboard:
    Ctrl + O (to save the file)
    Press Enter (to accept the file name and complete the save)
    Ctrl + X (to exit Nano)
  7. You must now reboot the Virtual Machine. Do this by entering the following command:
    shutdown -r now


Installing Pi-hole

Once CentOS is configured, you can install Pi-hole.

  1. Once the Virtual Machine has rebooted, log in once again as ‘root’
  2. Enter the following command to download the Pi-hole installer and begin setup:
    curl -sSL https://install.pi-hole.net | bash
  3. Follow through the wizard, choosing options suitable for your environment. There are no special configuration steps required for running Pi-hole specifically under Hyper-V
  4. You can take a note of the randomly-generated admin account password; however, I recommend following the steps in the Post-install Tasks section to change the password immediately


Post-install Tasks

Before you start to use Pi-hole, you should perform some basic configuration tasks.

  1. Change the randomly assigned Pi-hole admin account password to something more memorable. From the command line enter:
    pihole -a -p
    When you press return, you will be asked to enter a new password
  2. Update and patch CentOS
    yum update
    Accept any updates that are available to install and allow CentOS to self-update
  3. Once CentOS has finished updating, reboot the server a final time by issuing the reboot command:
    shutdown -r now
  4. [Optional] Consider enabling HTTPS (TCP 443) support for the Pi-hole admin interface.
    View: Enabling HTTPS for your Pi-hole Web Interface


Web Interface Configuration Tasks

After the installation has completed successfully, you can log into your Pi-hole installation from any web browser on any device via:

  • http://pi.hole/admin

-or-

  • http://<your IP address>/admin
    e.g. http://192.168.1.2/admin


[Optional] If you are using Active Directory, or another X.500, DNS-bound directory service, add a conditional forwarder for your domain:

  1. Navigate to: Settings > DNS > Conditional Forwarding
  2. Tick the “Use Conditional Forwarding” check box
  3. In the “IP of your router” text box, enter the IP address of an authoritative DNS server for your domain
  4. In the “Local domain name” text box, enter your fully qualified domain name e.g. ad.mycompany.co.uk


Maintenance Tasks

As with any software application, Pi-hole will periodically receive updates; so too will CentOS. Unfortunately, at the time of writing, Pi-hole does not support self-updating via the web interface. Consequently, you should periodically log into the CentOS VM and perform maintenance from the command line. This is a fairly simple process, requiring only two commands:

  1. Update CentOS:
    yum update
  2. Update Pi-hole:
    pihole -up

You may also wish to periodically reboot the server after updating by issuing the reboot command:
shutdown -r now

You can also reboot the Pi-hole server via the Settings page on the web interface.
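Because the Pi-hole is fully virtualised, you can also guard a maintenance run with a host-side checkpoint. A minimal sketch, run from the Hyper-V host and assuming the VM is named ‘Pi-hole’:

Checkpoint-VM -Name 'Pi-hole' -SnapshotName 'Pre-maintenance'
# ...run yum update and pihole -up inside the guest, then verify DNS resolution...
Remove-VMSnapshot -VMName 'Pi-hole' -Name 'Pre-maintenance'   # discard once happy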

Hyper-V Error Message Cheat-sheet

The aim of this article is to create a slowly growing cheat-sheet of Hyper-V Errors and known fixes.

Errors without Error Codes

These errors are UI errors that do not provide a Hex Error Code / Win32 Error Code.

There was an error during move operation.

Migration operation on ‘<VM Name>’ failed.

See 0x80041024

There was an error during move operation.

Virtual machine migration operation failed at migration source.

Failed to create folder.

See 0x80070005


Errors with Error Codes

The following errors have error codes either stated on GUI error messages, as part of PowerShell error messages or in the Hyper-V event logs in Event Viewer.

0x80041024

Event ID: 16000

The Hyper-V Virtual Machine Management service encountered an unexpected error: Provider is not capable of the attempted operation (0x80041024).

  1. One or both of the Hyper-V servers has an invalid or inaccessible IP address/network address on the “Incoming live migrations” list on the “Live Migrations” tab in Hyper-V settings
  2. The live migration network is down, or a cable is unplugged or faulty
0x80070002

Could not create backup checkpoint for virtual machine ‘<VM Name>’: The system cannot find the file specified. (0x80070002). (Virtual machine ID <VM GUID>).

For more on this, see my in-depth article.

  1. Check that you have enough disk space on the volume to perform the VSS snapshot. If free space on the volume is below 15%, try using Hyper-V Manager to change the snapshot directory to another volume – plug in an external NTFS-formatted hard drive if you have to.
  2. Check the permissions of the VHD stated in the error.
    icacls "C:\Users\Public\Documents\Hyper-V\Virtual Hard Disks\<VHD File>" /grant "NT VIRTUAL MACHINE\Virtual Machines":F
  3. Check that you can manually checkpoint/snapshot the VM while it is running.
    In Hyper-V Manager or in PowerShell, force a checkpoint on the VM and then delete it and wait for it to merge back. If this works, you are not having a physical VSS issue. If it fails, you need to troubleshoot this and not the WSB error.
  4. Live migrate the VM off of the current server and onto a different hypervisor, attempt the backup there, then bring it back to the original server and try again. This process will reset the permissions on the VM file set. If you cannot live or offline migrate the VM, then you need to troubleshoot this and not the WSB error
  5. Ensure that all VHDX files associated with the VM are on the same storage volume and not spread across multiple Volumes/LUNs (be it individual disks, logical RAID or iSCSI disks). If they are, move them to a common location and retry the operation.
0x80070005

‘General access denied error’ (0x80070005)

For more on this, see my in-depth article.

  1. If you are using a management console (not the local Hyper-V console) to perform the migration, you must set up Kerberos Constrained Delegation for all of your Hyper-V hosts’ machine accounts in Active Directory.
  2. Ensure that both NetBIOS and fully qualified DNS entries exist in the constrained delegation (click the “Expanded” check box to verify this) e.g. myserver1 and myserver1.mydomain.local for both “CIFS” and “Microsoft Virtual System Migration Service”
  3. For Hyper-V 2008, 2008 R2, 2012 and 2012 R2 “Trust this computer for delegation to specified services only” should be used along with “Use Kerberos only”
  4. For Hyper-V 2016 and 2019 or a mixed environment “Trust this computer for delegation to specified services only” must be used along with “Use any authentication protocol”
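Delegation is normally configured on the Delegation tab of each host’s computer account in Active Directory Users and Computers, but it can also be scripted. A rough sketch with hypothetical host names HV01 (source) and HV02 (target), run with the ActiveDirectory RSAT module loaded:

# SPNs that HV01 may delegate to on HV02 (hypothetical names - repeat in reverse for HV02)
$services = 'cifs/HV02', 'cifs/HV02.mydomain.local',
            'Microsoft Virtual System Migration Service/HV02',
            'Microsoft Virtual System Migration Service/HV02.mydomain.local'
Get-ADComputer 'HV01' | Set-ADObject -Add @{'msDS-AllowedToDelegateTo' = $services}
# For 2016/2019 ("Use any authentication protocol"), also set this account flag
Set-ADAccountControl -Identity 'HV01$' -TrustedToAuthForDelegation $true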
0x8007274C

Event ID: 20306

The Virtual Machine Management Service failed to establish a connection for a Virtual Machine migration with host ‘<Hypervisor FQDN>’: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (0x8007274C).

  1. The Live Migration firewall rules have not been enabled in Windows Firewall (or your third party firewall). This is especially easy to forget on Server Core and Hyper-V Server installs.
  2. No DNS server(s) or invalid DNS server(s) have been specified on the network configuration for the hypervisor
  3. Specified DNS server(s) are unreachable
  4. An intermediate firewall or IDS blocked the request
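For cause 1, the built-in rules can be checked and enabled from PowerShell, which is particularly useful on Server Core and Hyper-V Server. A sketch – rule names vary between Windows versions, so list them and enable by display group rather than by name:

# List the built-in Hyper-V rules (the group includes the live migration MIG-TCP-In rules)
Get-NetFirewallRule -DisplayGroup 'Hyper-V' | Select-Object DisplayName, Enabled
Enable-NetFirewallRule -DisplayGroup 'Hyper-V'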

How to compact VHDX files in the most efficient way

Have you ever run the Hyper-V “Edit Disk…” tool’s ‘compact’ action, to discover that your VHD or VHDX disk file didn’t shrink? This article discusses how to optimise the chances for success when compacting your virtual hard drives on Windows Server.


Why Compact VHDX files?

Compacting virtual disk files is an activity that most Hyper-V administrators seldom, if ever, perform. In most cases the space savings will be nominal and are outweighed by the effort involved. A well designed Hyper-V deployment will account for future storage optimisation and loading during the capacity planning phase, with scale-out design considerations being made during this stage too.

Compaction is not a fix for poor design.

If you are unfamiliar with what compaction does at the technical level, Altaro has a good introduction guide.

View: Altaro “Why You Should Be Compacting Your Hyper-V Virtual Disks”

In practice, compaction can help in two scenarios:

  1. Improving the speed and disk space use for non-VSS block-based backups
  2. Reducing the amount of data transferred during live migration


Why doesn’t Edit Disk… work?

It is common for the Hyper-V Manager Compact tool to achieve no reduction in the size of the VHDX file, putting many administrators off running it.

Two problems exist with the GUI compaction wizard:

  1. The VHDX file is not in a pre-optimised state prior to compaction, reducing the success rate
  2. For performance reasons, execution through the wizard does not use the full range of compaction tools available to the system. These can only be accessed (currently) via PowerShell.

So how do you properly shrink a virtual disk?


Optimise your VM

Inevitably there will be some data inside the VM which it is not necessary to retain. Removing this prior to compaction will directly increase the amount of space that you save.

Optimising Windows

Under Windows, running the Disk Clean-up tool is a good place to start. Additional places to clear-out include:

  1. C:\Windows\Temp
  2. C:\Windows\SoftwareDistribution\Download
    The SoftwareDistribution\Download folder will usually save over 1GB of space as it contains the installation sources for Windows Updates. Microsoft Update will re-download the files again if it needs them on the next update scan.
  3. C:\Users\<user>\AppData\Local\Temp

You can run defrag from within the VM; however, if you are in a position where you can compact the virtual disk offline, it is more time-efficient to defrag the VHDX while offline.
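The clear-out of the locations listed above can also be scripted inside the guest. A cautious sketch (run elevated; the Windows Update service is stopped while its Download folder is emptied):

Remove-Item "$env:SystemRoot\Temp\*" -Recurse -Force -ErrorAction SilentlyContinue
Stop-Service -Name wuauserv
Remove-Item "$env:SystemRoot\SoftwareDistribution\Download\*" -Recurse -Force -ErrorAction SilentlyContinue
Start-Service -Name wuauserv
Remove-Item "$env:LOCALAPPDATA\Temp\*" -Recurse -Force -ErrorAction SilentlyContinue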

Optimising Linux

Optimisation should also be performed on Linux systems. Unlike Windows, Linux clears its own temp space on restart. You can also free space left behind by the update process using your package manager. The example below frees space using apt:

sudo apt-get update
sudo apt-get autoremove
sudo apt-get autoclean
sudo apt-get clean

Under native Linux file systems, there is no concept of running a defragmentation; ext4 performs this automatically on behalf of the system. This optimisation does not, however, release free space in a “physical” fashion on the hard drive. Unless this “physical” space is released (zeroed out), Hyper-V will be unable to compact the disk.

Fortunately, it is possible to force Linux to zero out unused disk space in the VM using the following command:

su
cd /
cat /dev/zero > zero.dat ; sync ; sleep 1 ; sync ; rm -f zero.dat

This command creates a file on the drive mounted in the “cd /” line. It then writes 0’s into this file until the system literally runs out of disk space. The new 0’s file is then deleted. If you are compacting a VHDX that is not the root partition, you must change the “cd /” line to represent the correct drive e.g. “cd /mnt/myDisk“.

Note: You will completely fill the volume for a few seconds. This will impact other write activities occurring on the disk and so can be considered to be a dangerous process.


Shrinking an Online VHDX

It is possible to perform an online compaction of a virtual disk. Hyper-V itself performs light-touch background optimisation automatically. If you cannot shut down a VM and can only optimise an online VHDX, then you must:

  1. Delete temp files
  2. Internally defragment the disk from within the VM itself
  3. Use Disk Management or diskpart to shrink the size of the mounted partition currently in use on the VHDX:
    diskpart
    list vol
    sel vol #
    shrink
  4. Perform the compact using Hyper-V Manager or PowerShell:
    Optimize-VHD <path to VHDX> -Mode Full
  5. Reverse the reduction in the size of the partition using Disk Management or diskpart:
    diskpart
    list vol
    sel vol #
    extend
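Steps 3 and 5 can also be carried out with the Storage PowerShell module instead of diskpart. A sketch, assuming the volume inside the VM is C::

# Shrink the in-use partition to its minimum supported size (step 3)
$min = (Get-PartitionSupportedSize -DriveLetter C).SizeMin
Resize-Partition -DriveLetter C -Size $min
# ...perform the compact (step 4), then grow the partition back (step 5)...
$max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $max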


Shrinking an Offline VHDX

The most efficient method to free space from a VHDX file is to perform the compact while it is offline. The disadvantage of an offline compaction is that the VM will need to be shut down prior to the operation. The amount of time that the VM will be down for is directly proportional to the amount of work required to complete the optimisation process. You can improve this through online defragmentation prior to starting; however, this takes significant additional administrative effort.

The following steps outline the process that I use to greatest effect. I use X:\ as the example drive letter in this scenario and the use of PowerShell is expected.

  1. Get the initial VHDX size
    $sizeBefore = (Get-Item <path to VHDX file>).length
  2. [Optional] Defrag the VM while online
  3. Shutdown the VM
  4. Mount the VHDX in read/write mode (not read only mode)
    Mount-VHD <path to VHDX file>
  5. Purge Temp files and the contents of X:\Windows\SoftwareDistribution\Download
  6. Defragment the VHDX while mounted to the management system
    1. If the VHDX is stored on SSD drives
      defrag x: /x
      defrag x: /k /l
      defrag x: /x
      defrag x: /k
    2. If the VHDX is stored on Hard Drives (/x /k /l are retained for Trim capable SANs)
      defrag x: /d
      defrag x: /x
      defrag x: /k /l
      defrag x: /x
      defrag x: /k /l

      Note: Defrag /d  can be particularly time consuming
  7. Dismount the VHDX
    Dismount-VHD <path to VHDX file>
  8. Perform a full optimisation of the VHD file
    Optimize-VHD <path to VHDX file> -Mode Full
  9. Get the new size of the VHDX
    $sizeAfter = (Get-Item <path to VHDX file>).length
  10. Start the VM
  11. Find out how much disk space you saved (if any)
    Write-Host "Total disk space Saved: $(($sizeBefore - $sizeAfter) /1Mb) MB" -Foreground Yellow
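Scripted end-to-end, the sequence looks something like the following condensed sketch, assuming a single-volume VHDX on SSD storage and hypothetical VM name/path (run elevated on the host):

$vmName = 'MyVM'                                   # hypothetical VM name
$vhd    = 'D:\Hyper-V\MyVM\MyVM.vhdx'              # hypothetical path
$sizeBefore = (Get-Item $vhd).Length
Stop-VM -Name $vmName
Mount-VHD -Path $vhd                               # read/write mount
$diskNumber = (Get-VHD -Path $vhd).DiskNumber
$letter = (Get-Partition -DiskNumber $diskNumber | Get-Volume |
           Where-Object DriveLetter).DriveLetter | Select-Object -First 1
defrag "$($letter):" /x                            # free space consolidation
defrag "$($letter):" /k /l                         # slab consolidation and retrim
defrag "$($letter):" /x                            # repeat pass (see table below)
defrag "$($letter):" /k
Dismount-VHD -Path $vhd
Optimize-VHD -Path $vhd -Mode Full
Start-VM -Name $vmName
$sizeAfter = (Get-Item $vhd).Length
Write-Host "Total disk space saved: $(($sizeBefore - $sizeAfter) / 1MB) MB"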


You will note that I repeat the steps of running defrag /x and defrag /k /l. In experimenting, the repetition appears to allow a small amount of additional space to be freed in some situations, as shown in the table below.


Results

Operation                      Size (bytes)      Efficiency   Purgeable slabs
Size at start                  73,991,716,864    100%         11
/k Slab consolidation          69,965,185,024    100%         11
/l Retrim                      69,965,185,024    100%         11
/x Free space consolidation    69,965,185,024    100%         9
/x /l /k (repeat)              69,898,076,160    100%         9

The table shows the first /x free space consolidation pass reducing the number of purgeable slabs from 11 to 9, after which the repeated pass frees a further 67,108,864 bytes (64 MB) of space – the two 32 MB slabs.


Why not run Defrag /d on an SSD?

Defrag /d, aka “traditional defrag”, physically moves file data to the end of its home partition, reconstructs each file as a contiguous series of blocks and moves the file back to the start of the disk. This process is unnecessary on an SSD: there is virtually no likelihood that the data is stored in a contiguous fashion on the SSD NAND flash, and there is no performance benefit to the file being stored contiguously. While you can perform defrag /d on an SSD, in reality you are needlessly shortening its cell-write life, and the step should be skipped.


Conclusion

It is unfortunate that the process of compacting a VHDX file is not a seamless one. To realise the highest returns, it is necessary to shut down the VM, which may not be practical in many scenarios. Equally, the amount of time required to perform the offline compact scales with the utilised size of the VHDX, the number of files and the number of tasks performed as part of the maintenance.

Done right, and with the help of script automation, it can be a valuable task – especially before planned VM moves. I regularly save over 130GB in total when draining a hypervisor for maintenance in my home lab – around 25-30 minutes less file copy time over 1Gbps Ethernet. A worthwhile saving as it only takes 20 seconds to execute the automation script that does the work for me.

Sniffing the Parent Partition’s Network Traffic in a Hyper-V Virtual Machine

This article discusses a situation whereby you want to monitor/mirror/sniff network port traffic on a Hyper-V Parent Partition inside one of its own child VMs.

Why would you need to do this?

Under a traditional architecture you have the flexibility to tell your switch to mirror all traffic into or out of Port 6 onto Port 21. You then connect a laptop to Port 21 and promiscuously monitor the traffic coming into that port. Under a modern Converged/Software Defined Network architecture, this will not work.

In a modern Converged Fabric design, physical NICs are teamed. The parent partition on the hypervisor no longer uses the physical NICs, but logically uses its own synthetic NICs for data transfers.

  1. Link Aggregation/LACP/EtherChannel will split the traffic at the switch
  2. Teaming/LBFO will split the traffic at the hypervisor
  3. Data security will fire a red flag as you will be monitoring too much unrelated traffic
  4. If you combine them, you will overload the monitoring Port with aggregated traffic, causing performance issues and packet loss
  5. You may impact the performance of tenant VM’s and mission critical services

Fortunately, the Parent Partition’s own virtual NICs are identical to the vNICs in any Hyper-V virtual machine. Consequently, you can use the same Hyper-V functionality on the Parent Partition as you would on any VM.


Requirements

In order to sniff traffic on the Parent Partition you must ensure the following:

  1. The Parent Partition and the VM must be connected to the same Virtual Switch
  2. The “Microsoft NDIS Capture” extension must be enabled on the Virtual Switch (this is enabled by default)
  3. The monitoring VM should have 2 vNICs. The vNIC used to monitor traffic should be configured onto the same VLAN as the vNIC on the Parent Partition. The monitoring NIC should have all of its service and protocol bindings disabled to ensure that only port-mirrored traffic appears in the WireShark logs (a PowerShell sketch follows this list)
  4. Wireshark, Microsoft NetMonitor or another promiscuous network traffic monitor
  5. If you are in a corporate environment, ensure that you have approvals from your Information Security team. In some jurisdictions port sniffing can be considered an offence
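To satisfy point 3, the bindings can be cleared from inside the monitoring VM. A sketch, assuming the capture adapter is named ‘Monitor’ in the guest – capture drivers such as Npcap do not need any bindings to see traffic:

# Unbind every service and protocol from the capture NIC
Get-NetAdapterBinding -Name 'Monitor' |
    ForEach-Object { Disable-NetAdapterBinding -Name $_.Name -ComponentID $_.ComponentID }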


Enabling Port Sniffing

You cannot enable Port Sniffing on the Parent Partition using the Hyper-V Manager GUI. Open PowerShell on, or remotely to, the Parent Partition.

Execute Get-NetAdapter

Identify the name of the vNIC that you will sniff traffic to/from e.g. vEthernet (Management)

Taking only the value inside the parentheses, "Management", enter the following command:

Get-VMNetworkAdapter -ManagementOS 'Management' | Set-VMNetworkAdapter -PortMirroring Source

Substituting WireSharkVm for the name of your monitoring VM, execute Get-VMNetworkAdapter 'WireSharkVm'

Identify the MAC address of the vNIC that you will use to receive the Port Mirror from the Hyper-V host and enable it as the recipient for the mirror:

Get-VMNetworkAdapter 'WireSharkVm' | ?{$_.MacAddress -eq '001512AB34CD'} | Set-VMNetworkAdapter -PortMirroring Destination

Provided that the Parent Partition and VM vNICs are in the same VLAN, you should now be able to sniff traffic inbound to and outbound from the Parent Partition.


Disabling Port Sniffing

When using Port Mirroring, remember that it consumes CPU time and network resources on the hypervisor. To disable the port mirror, repeat the above commands substituting ‘None’ as the keyword for the PortMirroring parameter, e.g.

Get-VMNetworkAdapter -ManagementOS 'Management' | Set-VMNetworkAdapter -PortMirroring None
Get-VMNetworkAdapter 'WireSharkVm' | ?{$_.MacAddress -eq '001512AB34CD'} | Set-VMNetworkAdapter -PortMirroring None