<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>David&#039;s Homelab</title>
	<atom:link href="https://davidshomelab.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://davidshomelab.com/</link>
	<description></description>
	<lastBuildDate>Wed, 31 Jul 2024 20:45:34 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.8.10</generator>

<image>
	<url>https://davidshomelab.com/wp-content/uploads/2020/03/cropped-faviconhighrescentre.png</url>
	<title>David&#039;s Homelab</title>
	<link>https://davidshomelab.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>AD Cross-Forest Migration With Entra ID Sync</title>
		<link>https://davidshomelab.com/ad-cross-forest-migration-with-entra-id-sync/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Wed, 31 Jul 2024 20:37:09 +0000</pubDate>
				<category><![CDATA[Active Directory]]></category>
		<category><![CDATA[Entra ID]]></category>
		<category><![CDATA[Entra]]></category>
		<category><![CDATA[Windows]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=812</guid>

					<description><![CDATA[<p>Recently I came across a scenario where a customer had been running two on-premises AD forests. Each forest contained users from different sections of the organisation, with both synced to the same Entra tenant. Over time the administrative overhead of maintaining two forests became increasingly cumbersome, prompting the customer to investigate merging the two. ... <a title="AD Cross-Forest Migration With Entra ID Sync" class="read-more" href="https://davidshomelab.com/ad-cross-forest-migration-with-entra-id-sync/" aria-label="More on AD Cross-Forest Migration With Entra ID Sync">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/ad-cross-forest-migration-with-entra-id-sync/">AD Cross-Forest Migration With Entra ID Sync</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Recently I came across a scenario where a customer had been running two on-premises AD forests. Each forest contained users from different sections of the organisation, with both synced to the same Entra tenant. Over time the administrative overhead of maintaining two forests became increasingly cumbersome, prompting the customer to investigate merging the two.</p>



<p>While the process of moving users between AD forests using the Active Directory Migration Tool (ADMT) is relatively straightforward, I had struggled to find any documentation on how to ensure the migrated users would continue to match up with their Entra ID counterparts.</p>



<h2>How Entra Connect Identifies Users</h2>



<p>When the synchronization service identifies a new user account in Active Directory, it passes the user&#8217;s attributes to Entra ID to create a new user. One of these attributes is &#8220;sourceAnchor&#8221;, which by default is a base64-encoded copy of the on-premises user&#8217;s GUID.</p>



<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="689" height="540" src="https://davidshomelab.com/wp-content/uploads/2024/07/image.png" alt="Shows the connector space object properties window of the synced user, showing the sourceAnchor attribute which will be used to match the account." class="wp-image-813" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image.png 689w, https://davidshomelab.com/wp-content/uploads/2024/07/image-300x235.png 300w" sizes="(max-width: 689px) 100vw, 689px" /><figcaption>When the user is created in Entra ID it is given a sourceAnchor attribute based on the on premises GUID</figcaption></figure></div>



<p>While the sourceAnchor is based on the user&#8217;s GUID, it does not map directly to it. Instead, the sourceAnchor maps to the &#8220;mS-DS-ConsistencyGuid&#8221; attribute in on-premises AD, which will be useful later as it allows us to match the account up with a new on-premises account. The value here does not look identical to &#8220;sourceAnchor&#8221; because &#8220;sourceAnchor&#8221; is base64 encoded while &#8220;mS-DS-ConsistencyGuid&#8221; is displayed as hex, but converted to the same format they are the same. Note that &#8220;mS-DS-ConsistencyGuid&#8221; is only populated after the second sync cycle, so for the process below to work the user must have existed for at least two sync cycles. This will be true in almost all production cases but is worth bearing in mind when testing.</p>
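


<p>As a quick illustration (using a made-up GUID rather than the account shown above), the following PowerShell sketch shows that &#8220;sourceAnchor&#8221; and &#8220;mS-DS-ConsistencyGuid&#8221; are just two encodings of the same 16 bytes:</p>



<pre class="wp-block-code"><code># Example GUID - substitute your user's actual objectGUID
$guid = [Guid]"526b5b0e-7d32-4d30-9d1b-d9e858b3f9a9"

# sourceAnchor form: base64 of the GUID's byte array
[Convert]::ToBase64String($guid.ToByteArray())
# DltrUjJ9ME2dG9noWLP5qQ==

# mS-DS-ConsistencyGuid as shown in Attribute Editor: the same bytes as hex pairs
($guid.ToByteArray() | ForEach-Object { $_.ToString("X2") }) -join " "
# 0E 5B 6B 52 32 7D 30 4D 9D 1B D9 E8 58 B3 F9 A9</code></pre>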



<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="687" height="544" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-1.png" alt="Shows the connector space object properties for the second sync, showing that the mS-DS-ConsistencyGuid has been populated" class="wp-image-814" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-1.png 687w, https://davidshomelab.com/wp-content/uploads/2024/07/image-1-300x238.png 300w" sizes="(max-width: 687px) 100vw, 687px" /><figcaption>The second time the object is synced, the &#8220;mS-DS-ConsistencyGuid&#8221; attribute is written back to the on premises directory.</figcaption></figure></div>



<p>One other important factor to note about the second sync is that in the on-premises delta import we also see an Add operation corresponding to our user account. If we look at the attributes for this sync, we see that the object does not contain &#8220;mS-DS-ConsistencyGuid&#8221;. This is because the purpose of this operation is not to write data back to AD; it is to confirm to the sync service that the previous add operation was successful. If the service does not see the original object returned, it will assume that the original export failed and attempt it again.</p>



<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="690" height="544" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-2.png" alt="Connector space object properties showing Entra ID confirming to the on premises domain that the add has been successful." class="wp-image-815" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-2.png 690w, https://davidshomelab.com/wp-content/uploads/2024/07/image-2-300x237.png 300w" sizes="(max-width: 690px) 100vw, 690px" /><figcaption>When the Synchronization Service exports an object to Entra ID, it will expect to see Entra ID export the same object on the next sync cycle.</figcaption></figure></div>



<h2>Migrating a User</h2>



<p>As we have seen, when the Entra Sync Service runs it matches on-premises and cloud users based on their &#8220;mS-DS-ConsistencyGuid&#8221;. This means that if we want to sync an Entra ID user with a different on-premises user, we simply need to ensure that the new on-premises user has the same &#8220;mS-DS-ConsistencyGuid&#8221;. Our other concern is that we do not want the Synchronization Service in multiple directories to be syncing the same user. If this were to happen, each time a sync ran that directory&#8217;s attributes would overwrite the other directory&#8217;s, causing the account to flip-flop between two states.</p>



<h3>Prerequisites</h3>



<p>Prior to beginning a user migration there are some prerequisites to check. These will simplify the process and reduce the risk of errors or duplicated accounts.</p>



<p>The first prerequisite to check is that the source anchor is actually set to &#8220;mS-DS-ConsistencyGuid&#8221;. This is the default, but the source anchor is chosen when Entra Sync is first set up. While there are ways to change it later, they are complex and beyond the scope of this article. In Microsoft Azure Active Directory Connect, select &#8220;View or export current configuration&#8221; and click &#8220;Next&#8221;. Under Synchronization Settings you should see the source anchor attribute; it should be set to &#8220;mS-DS-ConsistencyGuid&#8221;. If it is set to something else, this migration method will likely not work.</p>



<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="330" height="94" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-4.png" alt="Image of the synchronization settings page. Text:

Synchronization Settings
SOURCE ANCHOR
mS-DS-ConsistencyGuid" class="wp-image-817" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-4.png 330w, https://davidshomelab.com/wp-content/uploads/2024/07/image-4-300x85.png 300w" sizes="(max-width: 330px) 100vw, 330px" /></figure></div>



<p>In the original AD forest, you will need an OU which is not synced to Entra ID. This will allow you to remove users from the sync without deleting them. If no such OU exists, open Microsoft Azure Active Directory Connect on the sync server in the original domain and click &#8220;Customize synchronization options&#8221;. Sign in when prompted and press &#8220;Next&#8221;. In &#8220;Domain and OU filtering&#8221;, ensure that at least one OU is unchecked.</p>



<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="878" height="620" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-3.png" alt="AD Connect configuration box showing one synced OU with all other OUs unsynced" class="wp-image-816" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-3.png 878w, https://davidshomelab.com/wp-content/uploads/2024/07/image-3-300x212.png 300w, https://davidshomelab.com/wp-content/uploads/2024/07/image-3-768x542.png 768w" sizes="(max-width: 878px) 100vw, 878px" /><figcaption>In this example, I am only syncing a single OU but for most production scenarios most or all OUs will be synced so it may be preferable to create a dedicated excluded OU</figcaption></figure></div>



<p>You should also check that all users you wish to sync have a value set for &#8220;mS-DS-ConsistencyGuid&#8221;. If a user does not have this set, it may still be possible for them to sync using soft matching based on the UPN or proxy addresses, but such a configuration will break in a scenario like a domain move as those values would change. The following PowerShell command will identify all users with an empty &#8220;mS-DS-ConsistencyGuid&#8221;. If any of these are on the list to be synced, you will need to fix them before proceeding.</p>



<pre class="wp-block-code"><code>Get-ADUser -Filter * -Properties "mS-DS-ConsistencyGuid" | Where-Object "mS-DS-ConsistencyGuid" -eq $null</code></pre>



<p>Finally, before making any changes I would recommend disabling the automatic sync cycle in both the source and target domains. This is not strictly necessary, but it ensures that syncs only occur when you plan for them and that an object won&#8217;t be synced at an inopportune moment. To disable syncing, run the following PowerShell command:</p>



<pre class="wp-block-code"><code>Set-ADSyncScheduler -SyncCycleEnabled $false</code></pre>



<p>When you are done and want to re-enable automatic syncing, run:</p>



<pre class="wp-block-code"><code>Set-ADSyncScheduler -SyncCycleEnabled $true</code></pre>



<h3>Detaching the Source User</h3>



<p>Before creating the user in the target domain, we must break the syncing relationship between the Entra ID user and the old on-premises user. To do this, we can simply move the user out of a synced OU and then run a sync:</p>



<pre class="wp-block-code"><code>Start-ADSyncSyncCycle -PolicyType Delta</code></pre>



<p>If we now review the sync logs we can see that the user has been deleted from Entra ID.</p>



<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="690" height="544" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-5.png" alt="connector space object properties showing that user has been deleted" class="wp-image-818" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-5.png 690w, https://davidshomelab.com/wp-content/uploads/2024/07/image-5-300x237.png 300w" sizes="(max-width: 690px) 100vw, 690px" /></figure></div>



<p>We can also see in the Entra ID portal that the user has been soft deleted and shows up in the deleted users list.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="397" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-6-1024x397.png" alt="Entra ID portal deleted users page showing that user has been soft deleted" class="wp-image-819" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-6-1024x397.png 1024w, https://davidshomelab.com/wp-content/uploads/2024/07/image-6-300x116.png 300w, https://davidshomelab.com/wp-content/uploads/2024/07/image-6-768x298.png 768w, https://davidshomelab.com/wp-content/uploads/2024/07/image-6.png 1527w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>One caveat to bear in mind here is that, as we discovered during account creation, the Synchronization Service will not fully trust that the user has been deleted until the next sync cycle. If we were to migrate the user now, they would be deleted again on the next sync. The trick is to run the sync command a second time. When we do, we see the deletion reflected in the Delta Import, confirming that the deletion was successful.</p>



<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="338" height="147" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-7.png" alt="Sync status screen showing that Delta Import has processed 1 deletion" class="wp-image-820" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-7.png 338w, https://davidshomelab.com/wp-content/uploads/2024/07/image-7-300x130.png 300w" sizes="(max-width: 338px) 100vw, 338px" /></figure></div>



<p>We will also see that if we try to view the deleted object, we get an error message.</p>



<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="406" height="155" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-8.png" alt="Error message confirming that properties of deleted object cannot be viewed." class="wp-image-821" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-8.png 406w, https://davidshomelab.com/wp-content/uploads/2024/07/image-8-300x115.png 300w" sizes="(max-width: 406px) 100vw, 406px" /></figure></div>



<p>This occurs because the Synchronization Service has now fully removed the object from its database. As long as the object doesn&#8217;t come back into scope, the service will not modify it again.</p>



<h3>Creating the Target User</h3>



<p>There are several ways to create the target user. For a production migration, the most likely scenario is that you would use the Active Directory Migration Tool (ADMT). While this is certainly recommended, an in-depth explanation of ADMT is beyond the scope of this article, so I will create the new user manually. This method has some pitfalls which are not present in ADMT, so I will point these out as we go.</p>



<p>Create a new user account in the target domain. The username and details can be the same as in the source domain, but this is not necessary. The UPN suffix will almost certainly be different unless you have the same UPN suffixes configured in both domains. If you are not using ADMT and automatic sync is enabled in the target domain, create the user in a non-synced OU.</p>



<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="436" height="380" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-9.png" alt="User creation screen showing creation of target user" class="wp-image-822" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-9.png 436w, https://davidshomelab.com/wp-content/uploads/2024/07/image-9-300x261.png 300w" sizes="(max-width: 436px) 100vw, 436px" /></figure></div>



<p>In Active Directory Users and Computers, click &#8220;View&#8221; and ensure &#8220;Advanced Features&#8221; is checked. Open the newly created user account and go to the &#8220;Attribute Editor&#8221; tab. Scroll down to &#8220;mS-DS-ConsistencyGuid&#8221;; this should currently be blank. Repeat the same process for the user in the source domain and you should see a hex string. Copy and paste this hex string into the new account. This is not necessary when using ADMT, as it will sync the attribute automatically.</p>
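


<p>If you prefer to copy the attribute with PowerShell rather than through the GUI, a sketch along these lines should work; the server names and the username below are placeholders for your own environment:</p>



<pre class="wp-block-code"><code># Read the attribute from the source user (server and identity are examples)
$sourceUser = Get-ADUser -Identity jbloggs -Server dc01.source.example -Properties "mS-DS-ConsistencyGuid"

# Write the same bytes onto the new user in the target domain
Set-ADUser -Identity jbloggs -Server dc01.target.example -Replace @{
    "mS-DS-ConsistencyGuid" = $sourceUser."mS-DS-ConsistencyGuid"
}</code></pre>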



<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="409" height="559" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-10.png" alt="Attribute editor dialogue showing source anchor for newly created object." class="wp-image-823" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-10.png 409w, https://davidshomelab.com/wp-content/uploads/2024/07/image-10-219x300.png 219w" sizes="(max-width: 409px) 100vw, 409px" /><figcaption>Matching up the source anchor in both domains will cause Entra ID to see both users as the same.</figcaption></figure></div>



<p>Once the source anchor is updated, we can now safely sync this account with Entra ID. If it was created in an unsynced OU, move it to a synced OU.</p>



<p>If we run the sync command and look in Entra ID, the user account no longer appears under deleted users. Returning to the All Users page, we see that the user has been re-added as a synced account. The account now uses the UPN suffix we selected in the target domain, confirming it has been synced from the new domain.</p>



<div class="wp-block-image"><figure class="aligncenter size-large"><img loading="lazy" width="1024" height="41" src="https://davidshomelab.com/wp-content/uploads/2024/07/image-11-1024x41.png" alt="" class="wp-image-829" srcset="https://davidshomelab.com/wp-content/uploads/2024/07/image-11-1024x41.png 1024w, https://davidshomelab.com/wp-content/uploads/2024/07/image-11-300x12.png 300w, https://davidshomelab.com/wp-content/uploads/2024/07/image-11-768x31.png 768w, https://davidshomelab.com/wp-content/uploads/2024/07/image-11.png 1140w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure></div>



<p>If we were using ADMT, the above process would be simpler as we wouldn&#8217;t need to set the source anchor manually. Instead, we could simply migrate the user to a synced OU and run a sync.</p>



<h2>Conclusion</h2>



<p>Having gone through this process, it turns out not to be as complex as I initially expected. The main pitfall is that the sync must be run twice on the source domain before setting the user up on the target. Another potential complication is that the user in the source domain could conflict with the migrated user if it ever came back into sync scope. While this is unlikely in a scenario where one domain is to be decommissioned, you can guard against it by clearing the &#8220;mS-DS-ConsistencyGuid&#8221; attribute on the source user. If that user were then synced again, a new user would be created in Entra ID rather than matching the existing one.</p>
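


<p>Clearing the attribute on the source user is a one-liner; as before, the identity and server are placeholders:</p>



<pre class="wp-block-code"><code># Remove the consistency GUID so the old account can never re-match in Entra ID
Set-ADUser -Identity jbloggs -Server dc01.source.example -Clear "mS-DS-ConsistencyGuid"</code></pre>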



<p>In summary, the steps to perform the process are:</p>



<ol><li>Move the user out of a synced OU</li><li>Run the sync command once and verify that the user is soft deleted in Entra ID</li><li>Run the sync command a second time</li><li>Migrate the user to the new domain by whatever means you have chosen</li><li>Ensure that the &#8220;mS-DS-ConsistencyGuid&#8221; attribute is the same for the source and target users</li><li>Run the sync command on the target domain and verify that the Entra ID user is undeleted</li></ol>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/ad-cross-forest-migration-with-entra-id-sync/">AD Cross-Forest Migration With Entra ID Sync</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Homelab Upgrade Project #2: A New Start with Proxmox</title>
		<link>https://davidshomelab.com/homelab-upgrade-project-2-a-new-start-with-proxmox/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Sat, 29 Jul 2023 21:53:43 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Virtualisation]]></category>
		<category><![CDATA[homelab upgrade]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[virtualisation]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=784</guid>

					<description><![CDATA[<p>It&#8217;s been several months since I first started planning the homelab upgrade but I&#8217;ve finally got round to taking the first steps. The first step was to replace my Ubuntu-based hypervisor with Proxmox. As the rest of my home network runs on the same host I couldn&#8217;t afford to leave it offline for long. ... <a title="Homelab Upgrade Project #2: A New Start with Proxmox" class="read-more" href="https://davidshomelab.com/homelab-upgrade-project-2-a-new-start-with-proxmox/" aria-label="More on Homelab Upgrade Project #2: A New Start with Proxmox">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/homelab-upgrade-project-2-a-new-start-with-proxmox/">Homelab Upgrade Project #2: A New Start with Proxmox</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>It&#8217;s been several months since I first started <a href="https://davidshomelab.com/homelab-upgrade-project/" target="_blank" rel="noreferrer noopener">planning the homelab upgrade</a> but I&#8217;ve finally got round to taking the first steps. The first of these was to replace my Ubuntu-based hypervisor with Proxmox. As the rest of my home network runs on the same host, I couldn&#8217;t afford to leave it offline for long, which required some careful planning to ensure a quick and smooth transition. One of the main focuses of the upgrade was to increase separation between home and lab infrastructure, which means the rest of the upgrades should be a lot quicker and less disruptive to implement.</p>



<p>As mentioned in my last post, I decided to go with Proxmox. While there are plenty of other options out there, I found Proxmox to be the simplest to run on a single host while still maintaining clustering ability, and I like the built-in backup capabilities. The fact that it&#8217;s based on Debian is also appealing as it&#8217;s a platform I am already familiar with. I&#8217;d prefer not to have to rebuild the server again for the life of the hardware, so having a stable and reliable base platform is a priority.</p>



<h2>Planning For The Move</h2>



<p>My key priorities during the move were to keep the internet up as much as possible and to avoid losing data. For the first part, I built a new router VM on my spare (and very noisy) Supermicro 6017R. I chose VyOS for this and was sufficiently impressed that it has now replaced pfSense as my main router OS. I&#8217;ll have a lot more to say about it in future posts.</p>



<p>I do eventually want to rebuild all my home infrastructure and move more services into containers. However, for now I just wanted to move everything over as-is. I can then tackle one service at a time after researching the best options for each situation.</p>



<p>The migration process actually turned out to be much simpler than I expected. First, I powered down the VMs and copied the contents of <code>/var/lib/libvirt/images</code> onto an external hard drive. I would later import these images into newly created VMs after migrating to Proxmox.</p>
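


<p>The copy itself amounted to something like the following; the VM name and the mount point are examples rather than my exact commands:</p>



<pre class="wp-block-code"><code># Shut down each VM cleanly before copying its disk image
virsh shutdown fw1        # repeat for each VM, or loop over 'virsh list --name'

# Copy the image directory to the external drive mounted at /mnt/external
rsync -a --progress /var/lib/libvirt/images/ /mnt/external/images/</code></pre>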



<h2>Installing Proxmox</h2>



<p>The Proxmox installation process is pretty typical for any Linux distribution: just download the <a href="https://www.proxmox.com/en/downloads" target="_blank" rel="noreferrer noopener">ISO</a>, write it to a USB drive and boot from it. You&#8217;ll need to select your boot drive, set a root password and configure a static IP address. Once you boot into the new installation, the console will show the IP/port combination for the web interface.</p>



<h3>Configuring Proxmox updates</h3>



<p>By default, Proxmox expects an enterprise subscription and has the enterprise update repository enabled. In a production environment, a subscription is highly recommended to get access to more thoroughly tested packages and support, but for a homelab this is overkill, so we need to enable the community repository instead. In the node settings, go to Updates > Repositories. You will see two repositories under the enterprise.proxmox.com domain; select these and disable them. Next, click Add and click OK on the warning that you don&#8217;t have a subscription. Select &#8220;No-subscription&#8221; and click OK. In the Updates tab, click Refresh and then Upgrade. Once the updates are installed, reboot the host.</p>



<h3>Creating Storage Pools</h3>



<p>Once you have signed back in to the web interface, you&#8217;ll want to create some storage pools. Proxmox will create a pool automatically on your boot disk, but storing VMs on separate disks will make life easier if you ever need to reinstall. I have five additional disks in my host: two 1TB SSDs, two 4TB SATA HDDs and one 8TB HDD. I used the 1TB disks for a mirrored ZFS pool for VM boot disks and did the same with the 4TB disks for bulk storage, which can be used for archival data and for VMs which require a lot of space. The 8TB disk will be used for backup storage. Unfortunately I don&#8217;t have enough SATA ports to add a second disk to mirror it, so the backup pool can&#8217;t be made redundant. That is something I&#8217;d definitely look to reconsider for the next hardware iteration.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="289" src="https://davidshomelab.com/wp-content/uploads/2023/07/image-1024x289.png" alt="Screenshot of Proxmox web interface showing the existing ZFS pools, three pools are created. BACKUP, is 7.99TB with 6.19TB free and 1.80TB used, HDD is 3.99TB with 1.58TB free and 2.40TB allocated, SSD is 996.43GB with 928.40GB free and 68.04GB allocated" class="wp-image-788" srcset="https://davidshomelab.com/wp-content/uploads/2023/07/image-1024x289.png 1024w, https://davidshomelab.com/wp-content/uploads/2023/07/image-300x85.png 300w, https://davidshomelab.com/wp-content/uploads/2023/07/image-768x217.png 768w, https://davidshomelab.com/wp-content/uploads/2023/07/image-1536x434.png 1536w, https://davidshomelab.com/wp-content/uploads/2023/07/image.png 1539w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption>The disk configuration is set under the host settings, in Disks > ZFS. If the disk already has a partition on it, go to Disks and wipe it before trying to create the ZFS pool.</figcaption></figure>



<h3>Configuring the Network</h3>



<p>Next, I set up the network configuration. As the router will be running as a VM, I needed to make a WAN bridge and a LAN bridge. I could have used the existing vmbr0 interface as the LAN, but in the future I may want to move the management interface to a dedicated VLAN, so to retain flexibility I created a separate LAN bridge. I also made a separate Lab bridge to keep any lab traffic well away from anything important. I didn&#8217;t add any uplinks to this bridge as it will only carry traffic between VMs.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="148" src="https://davidshomelab.com/wp-content/uploads/2023/07/image-1-1024x148.png" alt="Screenshot of Proxmox network configuration" class="wp-image-789" srcset="https://davidshomelab.com/wp-content/uploads/2023/07/image-1-1024x148.png 1024w, https://davidshomelab.com/wp-content/uploads/2023/07/image-1-300x43.png 300w, https://davidshomelab.com/wp-content/uploads/2023/07/image-1-768x111.png 768w, https://davidshomelab.com/wp-content/uploads/2023/07/image-1.png 1512w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>






<h3>Separating Resources</h3>



<p>One of my main aims for the homelab rebuild was to have plenty of separation between home infrastructure and lab. One way to do this is with separate resource pools, which provide a simple way to assign different permissions to different types of VMs. This will come in particularly handy if I want to experiment with automation in the Proxmox environment: I can create an account which only has permission to touch lab VMs and be confident it won&#8217;t touch infrastructure. Resource pools are configured in Datacentre > Permissions > Pools. I created two pools, Home and Lab.</p>



<h2>Importing Existing VMs to Proxmox</h2>



<p>I had worried about importing the existing VMs, but it turned out to be far simpler than I expected. As I had only copied the disk images, I had to re-create the rest of the VM configuration.</p>



<p>To do this, I created the VMs as normal through the web UI. As I already had the boot images, on the OS tab I selected &#8220;Do not use any media&#8221;. On the Disks tab I deleted the default scsi0 device, then configured the CPU, memory and network as appropriate for each VM.</p>



<h3>Import the Disk</h3>



<p>Once the VM was created, all that remained was to import the disk images. I plugged in the external drive and checked the device name in the node disk settings. In my case it came up as <code>/dev/sdg</code>. As I hadn&#8217;t partitioned it, this was the device I needed to mount. I went to the shell tab and mounted the disk as follows:</p>



<pre class="wp-block-code"><code>mount /dev/sdg /mnt</code></pre>



<p>We then need to import the image into the VM, which we can do as follows:</p>



<pre class="wp-block-code"><code>qm disk import &lt;vm id> &lt;image path> &lt;storage pool></code></pre>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" src="https://davidshomelab.com/wp-content/uploads/2023/07/image-3.png" alt="Screenshot of disk import process.

root@pve01:/HDD/archive# qm disk import 116 fw1.qcow2 PVE01-SSD 
importing disk 'fw1.qcow2' to VM 116 ...
transferred 0.0 B of 20.0 GiB (0.00%)
transferred 204.8 MiB of 20.0 GiB (1.00%)
transferred 415.7 MiB of 20.0 GiB (2.03%)
transferred 620.5 MiB of 20.0 GiB (3.03%)" class="wp-image-791" width="807" height="151" srcset="https://davidshomelab.com/wp-content/uploads/2023/07/image-3.png 742w, https://davidshomelab.com/wp-content/uploads/2023/07/image-3-300x56.png 300w" sizes="(max-width: 807px) 100vw, 807px" /></figure>



<p>The disk import process makes a copy of the image in the storage pool. It is created as a new disk, so don&#8217;t worry about overwriting any existing disks. Depending on the image size and storage speed, the import may take a while. If you use the shell interface in the web UI, navigating away from the shell will cancel the import, so you may find it easier to run the commands over SSH or use the <code>nohup</code> command if you have large disks to import.</p>
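


<p>For example, using the same import shown above, a detached run over SSH might look like this:</p>



<pre class="wp-block-code"><code># nohup keeps the import running even if the session disconnects
nohup qm disk import 116 fw1.qcow2 PVE01-SSD > import.log 2>&amp;1 &amp;

# Follow the progress from the log file
tail -f import.log</code></pre>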



<h3>Attach the Disk</h3>



<p>In the VM settings, you should now see an unused disk in the Hardware tab. Click on this disk and click Edit. If it is a Linux VM, change the Bus/Device setting to VirtIO Block for improved disk performance. Windows requires additional drivers to support VirtIO, so if you don&#8217;t have them installed, stick with one of the other options. Once you are happy with the settings, click Add.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" src="https://davidshomelab.com/wp-content/uploads/2023/07/image-4.png" alt="Screenshot of disk add screen. The only changed setting is Bus/Device: VirtIO Block. All other settings are default. The Add button is in the bottom right corner of the box." class="wp-image-793" width="790" height="419" srcset="https://davidshomelab.com/wp-content/uploads/2023/07/image-4.png 747w, https://davidshomelab.com/wp-content/uploads/2023/07/image-4-300x159.png 300w" sizes="(max-width: 790px) 100vw, 790px" /></figure>



<p>Go to the Options tab and double click on Boot Order. You should now see your new disk at the bottom of the list. Check the box to enable the disk and drag it to the top of the list. You are now ready to boot the VM.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="799" height="211" src="https://davidshomelab.com/wp-content/uploads/2023/07/image-5.png" alt="Screenshot of the boot order editing screen. The virtio0 disk is enabled and dragged above the ide2 and net0 devices" class="wp-image-794" srcset="https://davidshomelab.com/wp-content/uploads/2023/07/image-5.png 799w, https://davidshomelab.com/wp-content/uploads/2023/07/image-5-300x79.png 300w, https://davidshomelab.com/wp-content/uploads/2023/07/image-5-768x203.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></figure>



<p>For now I have significantly scaled back what was running so I just have a router, UniFi controller and file server. I will probably add PiHole back at a later date but for now I want to keep things simple. I also made the decision to not host email at home any more. It&#8217;s stressful enough having to manage mail servers at work and I don&#8217;t want to have to do it at home too. Hosting email in the cloud should mean it&#8217;s online more often. If you&#8217;ve sent me an email and got a delivery failure, I&#8217;m sorry and this shouldn&#8217;t happen in future.</p>



<h2>Next steps</h2>



<p>So far I have got to the point that my existing infrastructure is running on a more manageable platform. This will make it easier to make incremental changes later, so expect more updates in future. I have already replaced my pfSense router with VyOS and implemented Proxmox Backup Server, but this post is long enough already so these will be covered in future updates.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/homelab-upgrade-project-2-a-new-start-with-proxmox/">Homelab Upgrade Project #2: A New Start with Proxmox</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Homelab Upgrade Project #1: The Plan</title>
		<link>https://davidshomelab.com/homelab-upgrade-project/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Sun, 26 Mar 2023 18:37:47 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[ansible]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[homelab upgrade]]></category>
		<category><![CDATA[kubernetes]]></category>
		<category><![CDATA[networking]]></category>
		<category><![CDATA[virtualisation]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=776</guid>

					<description><![CDATA[<p>When I first started the homelab it was intended to be used mainly for learning. However, as time has gone on I&#8217;ve ended up using it more for home infrastructure. This has resulted in an infrastructure that is fairly fragile and difficult to make changes in, defeating the purpose of using it as a learning ... <a title="Homelab Upgrade Project #1: The Plan" class="read-more" href="https://davidshomelab.com/homelab-upgrade-project/" aria-label="More on Homelab Upgrade Project #1: The Plan">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/homelab-upgrade-project/">Homelab Upgrade Project #1: The Plan</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>When I first started the homelab it was intended to be used mainly for learning. However, as time has gone on, I&#8217;ve ended up using it more for home infrastructure. This has resulted in an infrastructure that is fairly fragile and difficult to make changes in, defeating the purpose of using it as a learning tool. I&#8217;ve therefore decided that now is the time for a complete rebuild. While this won&#8217;t be an overnight process and I&#8217;ll probably need to retain the existing infrastructure and replace individual components one at a time, I aim to go in with a plan this time, rather than allowing things to develop organically.</p>



<p>To this end, I have identified some pain points with the existing set up and have a plan for how to improve them in the rebuild. Each step will be documented in future articles.</p>





<h2>Virtualisation</h2>



<p>When I initially started, I used libvirt on Ubuntu as my hypervisor. While this worked pretty well, there are definitely a few pain points. There is no Windows management client, so a Linux PC is required for managing VMs. The networking is also a bit of a pain to set up compared to a more dedicated hypervisor OS. As I&#8217;m running on a single host I don&#8217;t want to deal with the additional complication of oVirt, so I plan to use Proxmox. This will allow me to use grouping to keep production and lab infrastructure separate. I can also use the built-in LXC support to create containers for workloads where a full VM isn&#8217;t necessary.</p>



<p>To make matters even better, it also has modules for Ansible and Terraform, which will be useful further down the line when I implement configuration management.</p>



<h2>IPv6</h2>



<p>As my ISP-provided IPv4 address is behind CGN, IPv6 has proved useful for allowing external access to internal services. However, running IPv4 and IPv6 on the internal LAN has proved overly cumbersome and usually resulted in me just sticking with IPv4. This time I plan to go IPv6-only for the LAN, using NAT64 to enable access to IPv4 services.</p>



<h2>Configuration Management</h2>



<p>Most of my existing resources have been configured manually. This lack of standardisation has resulted in a mix of different operating systems with different base configurations on each device. Going forward, all configuration will be set up using configuration management and stored in source control. This will make it easier to rebuild services, either for disaster recovery or just to make a copy for testing. My current plan is to provision all VMs using Terraform and then manage configuration through Ansible. As much as possible will be run on Docker or Kubernetes.</p>
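<p>As an illustrative sketch of the provisioning half of this plan, the community Telmate Proxmox provider lets Terraform clone VMs from a template. All names and values below are placeholders rather than my actual configuration:</p>

<pre class="wp-block-code"><code># Hypothetical sketch using the community Telmate/proxmox Terraform provider.
# The node, template and VM names here are placeholders.
resource "proxmox_vm_qemu" "web" {
  name        = "web01"
  target_node = "pve01"
  clone       = "ubuntu-template"
  cores       = 2
  memory      = 2048
}</code></pre>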



<h2>Redundancy</h2>



<p>An advantage to using containers is that stateless workloads can be parallelised, making updates possible with no downtime. While I only have a single host and so can&#8217;t avoid downtime for host updates, it should be possible to restart or update anything else without interrupting anything critical such as the firewall, DNS or file access.</p>





<h2>Monitoring</h2>



<p><a href="https://davidshomelab.com/zabbix-server-setup/" target="_blank" rel="noreferrer noopener">I had previously run Zabbix</a> but due to the lack of standardisation, managing alert policies for all the different VMs posed an excessive maintenance burden so it started to fall by the wayside. I will rebuild it using a standard set of basic alerts for disk space, updates, expiring SSL certificates etc.</p>



<h2>The plan</h2>



<p>Initially I will back up all existing VMs and restore them as-is onto Proxmox. I will then begin developing the Kubernetes cluster and migrate all possible workloads onto that. This will require me to come up with some plan for persistent HA SQL and file storage. At this point I will also set up a new Zabbix server and figure out exactly what I want to monitor.</p>



<p>Once all internal workloads are migrated, I will replace the firewall and disable IPv4. Finally I will do a full test of the infrastructure as code by setting up a clone of the network which can be used as the test network. Come back soon for future updates.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/homelab-upgrade-project/">Homelab Upgrade Project #1: The Plan</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Handling multiple auto-incrementing progress bars in PowerShell</title>
		<link>https://davidshomelab.com/handling-multiple-auto-incrementing-progress-bars-in-powershell/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Wed, 16 Feb 2022 19:14:48 +0000</pubDate>
				<category><![CDATA[Uncategorised]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=747</guid>

					<description><![CDATA[<p>When writing scripts it is often useful to know which step is currently being executed. We can do this by periodically printing a status message to the screen but PowerShell provides a more elegant way in the form of the Write-Progress cmdlet. Using this cmdlet we can print the current status to the screen only ... <a title="Handling multiple auto-incrementing progress bars in PowerShell" class="read-more" href="https://davidshomelab.com/handling-multiple-auto-incrementing-progress-bars-in-powershell/" aria-label="More on Handling multiple auto-incrementing progress bars in PowerShell">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/handling-multiple-auto-incrementing-progress-bars-in-powershell/">Handling multiple auto-incrementing progress bars in PowerShell</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>When writing scripts it is often useful to know which step is currently being executed. We can do this by periodically printing a status message to the screen but PowerShell provides a more elegant way in the form of the <code>Write-Progress</code> cmdlet. Using this cmdlet we can print the current status to the screen only to have it replaced by the next status when we move on to the next task. However, the basic use of this cmdlet can become difficult to maintain as scripts expand. Ideally we would prefer to have an auto-incrementing progress bar which can track the number of steps already completed.</p>



<p>We can demonstrate the basic behaviour with the following code:</p>



<pre class="wp-block-code"><code>Write-Progress -Activity "MyActivity" -Status "Performing actions" -PercentComplete 25
Start-Sleep -Milliseconds 250
Write-Progress -Activity "MyActivity" -Status "Performing actions" -PercentComplete 50
Start-Sleep -Milliseconds 250
Write-Progress -Activity "MyActivity" -Status "Performing actions" -PercentComplete 75
Start-Sleep -Milliseconds 250
Write-Progress -Activity "MyActivity" -Status "Performing actions" -PercentComplete 100
Start-Sleep -Milliseconds 250</code></pre>



<p>Here, we create a progress bar and increment it by 25 percent every 0.25 seconds. In real scripts you wouldn&#8217;t sleep between steps; this is just for demonstration purposes and would be replaced by your actual tasks.</p>



<p>If we want to have multiple progress bars (perhaps a main bar for the overall script progress and then additional bars for sub-tasks), we just need to provide an <code>-Id</code> parameter to differentiate between the bars. If we don&#8217;t provide this, the second bar to be created will replace the first.</p>



<pre class="wp-block-code"><code>Write-Progress -Activity "MyActivity" -Status "Performing actions" -PercentComplete 25 -Id 0
Write-Progress -Activity "MySecondActivity" -Status "Performing more actions" -PercentComplete 25 -Id 1
Start-Sleep -Milliseconds 250
Write-Progress -Activity "MyActivity" -Status "Performing actions" -PercentComplete 50 -Id 0
Write-Progress -Activity "MySecondActivity" -Status "Performing more actions" -PercentComplete 50 -Id 1
Start-Sleep -Milliseconds 250
Write-Progress -Activity "MyActivity" -Status "Performing actions" -PercentComplete 75 -Id 0
Write-Progress -Activity "MySecondActivity" -Status "Performing more actions" -PercentComplete 75 -Id 1
Start-Sleep -Milliseconds 250
Write-Progress -Activity "MyActivity" -Status "Performing actions" -PercentComplete 100 -Id 0
Write-Progress -Activity "MySecondActivity" -Status "Performing more actions" -PercentComplete 100 -Id 1
Start-Sleep -Milliseconds 250</code></pre>



<p>A disadvantage of this method is that we need to manually specify the percentage complete, meaning that if we change our code, we will need to remember to update how the progress bars are generated. In his <a href="https://adamtheautomator.com/write-progress/" target="_blank" rel="noreferrer noopener">blog</a>, Adam Bertram provides a fantastic way to automatically calculate the percentage completion using the PsParser .NET class. I&#8217;d highly recommend reading Bertram&#8217;s article as it provides a lot of background about how PsParser works that I won&#8217;t repeat here. To summarise Adam&#8217;s method, he simply parses the script file and counts how many calls are made to <code>Write-Progress</code> and treats this as the number of steps the progress bar needs to handle. For scripts containing only a single progress bar this is fine but if we want to track the progress of multiple tasks we will need something more advanced.</p>



<h2>Creating a ProgressBar class</h2>



<p>As we are looking to create multiple progress bars which need to keep track of their own internal state (specifically the number of steps completed) we can create a class from which we can generate as many progress bars as we need.</p>



<pre class="wp-block-code"><code>class ProgressBar {

    $completedSteps = 0
    $activity
    $id = 0
    $totalSteps

    ProgressBar(
        &#91;string]$activity,
        &#91;int]$id
    ){
        $this.activity = $activity
        $this.id = $id
    }

    &#91;void] UpdateProgress($status) {
        $percentComplete = ($this.completedSteps / $this.totalSteps) * 100
        Write-Progress -Activity $this.activity -Status $status -PercentComplete $percentComplete -Id $this.id
        $this.completedSteps += 1
    }  
}</code></pre>



<p>This code doesn&#8217;t yet automatically count the total number of steps, but once it is provided, it will do most of the rest of the work of tracking the progress. Whenever the <code>UpdateProgress</code> method is called, it will update the status message and increment the completed steps by 1. It will also automatically calculate the percentage completion.</p>



<p>To use this class to create a progress bar we need to create an instance and, for now, set the total number of steps manually:</p>



<pre class="wp-block-code"><code>$progressbar = &#91;ProgressBar]::new("MyProgressBar",1)
$progressbar.totalSteps = 10</code></pre>



<p>We can now call the <code>UpdateProgress</code> method on this object to create and increment the progress bar:</p>



<pre class="wp-block-code"><code>$progressbar.UpdateProgress("Task1")
Start-Sleep -Milliseconds 250
$progressbar.UpdateProgress("Task2")
Start-Sleep -Milliseconds 250
$progressbar.UpdateProgress("Task3")
Start-Sleep -Milliseconds 250
$progressbar.UpdateProgress("Task4")
Start-Sleep -Milliseconds 250
$progressbar.UpdateProgress("Task5")
Start-Sleep -Milliseconds 250
$progressbar.UpdateProgress("Task6")
Start-Sleep -Milliseconds 250
$progressbar.UpdateProgress("Task7")
Start-Sleep -Milliseconds 250
$progressbar.UpdateProgress("Task8")
Start-Sleep -Milliseconds 250
$progressbar.UpdateProgress("Task9")
Start-Sleep -Milliseconds 250
$progressbar.UpdateProgress("Task10")
Start-Sleep -Milliseconds 250
</code></pre>



<p>If we wanted to create a second progress bar we could simply create a second instance and we would then be able to update the two progress bars independently. We would of course need to make sure both bars have separate ID fields to allow them to be shown independently.</p>
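<p>For example, assuming the class above, two bars with distinct IDs can then be updated independently (the variable names here are arbitrary):</p>

<pre class="wp-block-code"><code># Separate -Id values keep the two bars on separate lines
$outer = &#91;ProgressBar]::new("Overall progress", 0)
$inner = &#91;ProgressBar]::new("Current sub-task", 1)
$outer.totalSteps = 3
$inner.totalSteps = 5
$outer.UpdateProgress("Stage 1")
$inner.UpdateProgress("Copying files")</code></pre>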



<h2>Detecting the number of steps</h2>



<p>To detect the number of steps we do something similar to Bertram&#8217;s method but instead of counting the number of calls to <code>Write-Progress</code> we need to count the calls to our custom ProgressBar object. I was unable to find a reliable way to determine the variable names we choose for our progress bars so we need to add an additional property to our class to store the name of the variable. We simply add a <code>$variableName</code> property to the class and add it to the constructor so we can use it when creating an object.</p>



<p>Once we have this, we can use Bertram&#8217;s method to locate instances of our progress bar variable. One issue I had getting this to work is that <code>$($MyInvocation.MyCommand.Name)</code> doesn&#8217;t appear to populate correctly when called inside a class definition, so we need to save the script path as a variable outside of the class:</p>



<pre class="wp-block-code"><code>$ScriptFile = Get-Content "$PSScriptRoot\$($MyInvocation.MyCommand.Name)"</code></pre>



<p>We can then scan the script and automatically set the value of <code>$totalSteps</code>:</p>



<pre class="wp-block-code"><code>&#91;int]$totalSteps = (&#91;System.Management.Automation.PsParser]::Tokenize(( $ScriptFile ), &#91;ref]$null) | Where-Object { $_.Type -eq 'Variable' } | Where-Object { $_.Content -eq $variableName }).Count - 1</code></pre>



<h2>Putting it all together</h2>



<p>Now that we are able to detect the names of our progress trackers and scan the script for instances of them, we can put it all together.</p>



<h3>Defining our auto-incrementing ProgressBar class</h3>



<pre class="wp-block-code"><code>$ScriptFile = Get-Content "$PSScriptRoot\$($MyInvocation.MyCommand.Name)"

class ProgressBar {

    &#91;int]$completedSteps = 0
    &#91;string]$activity
    &#91;int]$id
    &#91;string]$variableName
    &#91;int]$totalSteps = (&#91;System.Management.Automation.PsParser]::Tokenize(( $ScriptFile ), &#91;ref]$null) | Where-Object { $_.Type -eq 'Variable' } | Where-Object { $_.Content -eq $variableName }).Count - 1

    ProgressBar(
        &#91;string]$activity,
        &#91;int]$id,
        &#91;string]$variableName
    ){
        $this.activity = $activity
        $this.id = $id
        $this.variableName = $variableName
    }

    &#91;void] UpdateProgress($status) {
        $percentComplete = ($this.completedSteps / $this.totalSteps) * 100
        Write-Progress -Activity $this.activity -Status $status -PercentComplete $percentComplete -Id $this.id
        $this.completedSteps += 1
    }
}</code></pre>



<h3>Creating our auto-incrementing progress bar objects</h3>



<pre class="wp-block-code"><code>$progressbar = &#91;ProgressBar]::new("MyProgressBar",1,"progressbar")
$progressbar2 = &#91;ProgressBar]::new("MySecondProgressBar",2,"progressbar2")</code></pre>



<p>Note that the third argument is the name of the variable we will use to refer to the progress bar. This is how we determine the number of steps our progress bar needs.</p>



<h3>Performing our tasks</h3>



<figure class="wp-block-image size-full"><img loading="lazy" width="841" height="242" src="https://davidshomelab.com/wp-content/uploads/2022/02/image.png" alt="Progress bar illustration in Windows PowerShell 5.1" class="wp-image-752" srcset="https://davidshomelab.com/wp-content/uploads/2022/02/image.png 841w, https://davidshomelab.com/wp-content/uploads/2022/02/image-300x86.png 300w, https://davidshomelab.com/wp-content/uploads/2022/02/image-768x221.png 768w" sizes="(max-width: 841px) 100vw, 841px" /></figure>



<p>The following are some example tasks simply to demonstrate how the progress bar can be updated. In a real script, the <code>progressbar</code> and <code>progressbar2</code> objects need not be updated at the same time and you would perform actual tasks instead of sleeping.</p>



<pre class="wp-block-code"><code>$progressbar.UpdateProgress("Task")
$progressbar2.UpdateProgress("Task2")
Start-Sleep 1
$progressbar.UpdateProgress("Task")
$progressbar2.UpdateProgress("Task2")
Start-Sleep 1
$progressbar.UpdateProgress("Task")
$progressbar2.UpdateProgress("Task2")
Start-Sleep 1
$progressbar.UpdateProgress("Task")
$progressbar2.UpdateProgress("Task2")
Start-Sleep 1
$progressbar.UpdateProgress("Task")
$progressbar2.UpdateProgress("Task2")
Start-Sleep 1
$progressbar.UpdateProgress("Task")
$progressbar2.UpdateProgress("Task2")
Start-Sleep 1
$progressbar.UpdateProgress("Task")
$progressbar2.UpdateProgress("Task2")
Start-Sleep 1</code></pre>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/handling-multiple-auto-incrementing-progress-bars-in-powershell/">Handling multiple auto-incrementing progress bars in PowerShell</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Pi-hole failover using Gravity Sync and Keepalived</title>
		<link>https://davidshomelab.com/pi-hole-failover-with-keepalived/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Mon, 30 Aug 2021 14:07:22 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Networking]]></category>
		<category><![CDATA[networking]]></category>
		<category><![CDATA[pi-hole]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=646</guid>

					<description><![CDATA[<p>Traditionally when we assign DNS servers to hosts we assign more than one. For example, we may assign 8.8.8.8 as a primary an 8.8.4.4 as a secondary so in the unlikely event that one is ever down, we have a second one to fall back on. When we set up a single Pi-hole server with ... <a title="Pi-hole failover using Gravity Sync and Keepalived" class="read-more" href="https://davidshomelab.com/pi-hole-failover-with-keepalived/" aria-label="More on Pi-hole failover using Gravity Sync and Keepalived">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/pi-hole-failover-with-keepalived/">Pi-hole failover using Gravity Sync and Keepalived</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Traditionally when we assign DNS servers to hosts we assign more than one. For example, we may assign 8.8.8.8 as a primary and 8.8.4.4 as a secondary so in the unlikely event that one is ever down, we have a second one to fall back on. When we set up a single Pi-hole server with no failover we lose this redundancy, meaning that if our Pi-hole ever goes down or we want to restart it for updates, we lose DNS until it is back.</p>


<p>The simplest way to regain this redundancy is just to run 2 independent Pi-hole servers and set them as the primary and fallback DNS servers. However, this approach has a couple of disadvantages. The main problem is that we have no settings synchronisation between our servers. If we update the block lists on one, we have to remember to update them on the other. We also can&#8217;t guarantee which server clients will connect to, so if one is down, clients may still try to connect and then have to wait for a timeout before querying the second server. This can cause performance drops when one of the servers is offline.</p>


<h2>Using failover to provide a highly available Pi-hole cluster</h2>


<p>We can resolve these problems by linking our pair of Pi-hole servers into a unified failover cluster. The original idea for this came from <a href="https://www.reddit.com/user/Panja0/" target="_blank" rel="noreferrer noopener">u/Panja0</a> on <a href="https://www.reddit.com/r/pihole/comments/d5056q/tutorial_v2_how_to_run_2_pihole_servers_in_ha/" target="_blank" rel="noreferrer noopener">Reddit</a>. However, this approach doesn&#8217;t quite work on Pi-hole 5 and above as the way the data is saved changed. This article is therefore an updated version of that basic idea to remain compatible with the latest Pi-hole versions.</p>


<p>To accomplish our goal we need to solve 2 problems. Firstly, we need a way to keep the blocklists and settings in sync between the 2 Pi-hole servers. We then need a mechanism to monitor the servers and hand over the IP address if the primary fails. If you are unconcerned about DNS timeouts, you can skip the IP failover setup and just have 2 servers on their own IP addresses. For the greatest reliability, you will want to configure both stages.</p>


<h2>What do we need?</h2>


<p>You will need 2 Pi-hole servers, ideally updated to the latest version, but anything greater than 5.0 should work. You can update an existing instance to the latest version by running <code>sudo pihole updatePihole</code>. Note that this will restart the DNS resolver. I will not cover how to set up a Pi-hole server <a href="https://davidshomelab.com/pi-hole-an-open-source-ad-blocker-for-your-home-or-company/" target="_blank" rel="noreferrer noopener">as I have described it previously</a>. This article is a few years old so you will probably want to use a more recent OS such as Ubuntu 20.04 LTS, but the installation process is more or less unchanged. For a more recent guide, you can look at <a href="https://unixcop.com/how-to-install-pi-hole-ubuntu-20-04/" target="_blank" rel="noreferrer noopener">this article</a> or the <a href="https://github.com/pi-hole/pi-hole/#one-step-automated-install" target="_blank" rel="noreferrer noopener">Pi-hole documentation</a>. I will assume the use of a Debian/Ubuntu based distro for this guide but it should be easily adaptable for CentOS.</p>


<p>You will also need a third IP address which will be shared between the 2 servers.</p>


<h2>Synchronising Pi-hole blocklists</h2>


<p>Pi-hole 5 no longer stores the blocklists in plain text files. Instead, they are stored in a SQLite database in <code>/etc/pihole/gravity.db</code>. This means we can&#8217;t use Panja0&#8217;s original method of using rsync to synchronise the list files. Instead, we use Michael Stanclift&#8217;s (<a href="https://github.com/vmstan/" target="_blank" rel="noreferrer noopener">vmstan on GitHub</a>) <a href="https://github.com/vmstan/gravity-sync" target="_blank" rel="noreferrer noopener">gravity-sync script</a>.</p>


<p>This script will synchronise blocklists, exclusions and local DNS records. It won&#8217;t synchronise admin passwords, upstream DNS servers, DHCP leases or statistics. We will therefore need to manually configure the admin password and upstream DNS servers to be the same on both servers. There also isn&#8217;t a practical way to keep DHCP leases in sync between the Pi-holes, so DHCP will have to run on the router or some other device.</p>


<h3>Preparing the Pi-hole servers</h3>


<p>To begin with, ensure all dependencies are installed. Most will already be installed but, if not, we can pick up any missing ones by running:</p>


<pre class="wp-block-code"><code>apt update &amp;&amp; apt install sqlite3 sudo git rsync ssh</code></pre>


<p>On both servers, we will need a service account with sudo privileges. Create this account by running the following command on each server:</p>


<pre class="wp-block-code"><code>sudo useradd -G sudo -m pi
sudo passwd pi</code></pre>


<p>On some distros (e.g. CentOS), the privileged group is called wheel instead of sudo so you may need to adjust this command.</p>


<h3>Install Gravity-Sync on the primary Pi-hole</h3>


<p>Pick one of the servers to act as the primary and log in to it as the <code>pi</code> user account. Run the installer script:</p>


<pre class="wp-block-code"><code>export GS_INSTALL=primary &amp;&amp; curl -sSL https://gravity.vmstan.com/ | bash</code></pre>


<h3>Install Gravity-Sync on the secondary Pi-hole</h3>


<p>Log in to the other server as the <code>pi</code> account and run:</p>


<pre class="wp-block-code"><code>export GS_INSTALL=secondary &amp;&amp; curl -sSL https://gravity.vmstan.com/ | bash</code></pre>


<p>While the installer on the primary just verifies pre-requisites, the secondary needs additional configuration as it performs the active role of replication. When prompted, provide the name of the service account on the primary, the IP address of the primary and configure password-less SSH login.</p>


<h3>Verify connectivity and enable synchronisation</h3>


<p>On the secondary Pi-hole, navigate into the <code>gravity-sync</code> directory and run:</p>


<pre class="wp-block-code"><code>./gravity-sync.sh compare</code></pre>


<p>This should give you output similar to the example below. It will not actually perform a sync but will verify connectivity and detect whether a sync is required.</p>


<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="418" height="212" src="https://davidshomelab.com/wp-content/uploads/2021/08/image-7.png" alt="$ ./gravity-sync.sh compare
[∞] Initalizing Gravity Sync (3.4.5)
[✓] Loading gravity-sync.conf
[✓] Evaluating arguments: COMPARE
[i] Remote Pi-hole: pi@10.100.4.53
[✓] Connecting to 10.100.4.53
[✓] Hashing the primary Domain Database
[✓] Comparing to the secondary Domain Database
[✓] Hashing the primary Local DNS Records
[✓] Comparing to the secondary Local DNS Records
[i] No replication is required at this time
[✓] Purging redundant backups on secondary Pi-hole instance
[i] 3 days of backups remain (11M)
[∞] Gravity Sync COMPARE aborted after 0 seconds" class="wp-image-680" /></figure></div>


<p>If the primary already has configuration but the secondary is a fresh install you will want to ensure that the first sync is a one-way sync from the primary to the secondary. If you don&#8217;t do this, the script may detect the configuration on the secondary as newer and overwrite the primary. To force a one way sync, run:</p>


<pre class="wp-block-code"><code>./gravity-sync.sh pull</code></pre>


<p>Finally, enable automatic synchronisation by running:</p>


<pre class="wp-block-code"><code>./gravity-sync.sh automate</code></pre>


<p>Set an update frequency from the available options. Setting a lower frequency will increase the risk of changes not syncing but will result in reduced server load so may be more appropriate on busy servers.</p>


<p>Synchronisation should now be working. If you do not want to configure IP failover, the setup is now complete. If you have additional servers you want to keep in sync, use the steps for adding a secondary server.</p>


<h2>Configure IP failover</h2>


<p>Now our Pi-hole instances are in sync, we can configure IP failover to direct traffic towards the primary when it is available and switch over to the secondary if the primary ever fails. On both servers, install the required packages:</p>


<pre class="wp-block-code"><code>sudo apt install keepalived libipset13 -y</code></pre>


<p>Next, we need to download a script on both servers to monitor the status of the pihole-FTL service so we can fail over if it ever stops running:</p>


<pre class="wp-block-code"><code>sudo mkdir /etc/scripts
sudo sh -c "curl https://pastebin.com/raw/npw6tcuk | tr -d '\r' &gt; /etc/scripts/chk_ftl"
sudo chmod +x /etc/scripts/chk_ftl</code></pre>


<p>Now, we need to add our keepalived configuration. On the primary, run:</p>


<pre class="wp-block-code"><code>sudo curl https://pastebin.com/raw/nsBnkShi -o /etc/keepalived/keepalived.conf</code></pre>


<p>and on the secondary:</p>


<pre class="wp-block-code"><code>sudo curl https://pastebin.com/raw/HbdsUc07 -o /etc/keepalived/keepalived.conf</code></pre>


<p>We now need to edit the configuration on both servers. On each server, set the following properties:</p>


<figure class="wp-block-table"><table><tbody><tr><td><strong>Property</strong></td><td><strong>Description</strong></td><td><strong>Example Server 1</strong></td><td><strong>Example Server 2</strong></td></tr><tr><td><code>interface</code></td><td>The LAN network interface name. Run <code>ip link</code> to view available interfaces if you are unsure.</td><td>eth0</td><td>eth0</td></tr><tr><td><code>unicast_src_ip</code></td><td>The IP address of the server you are currently configuring.</td><td>192.168.1.21</td><td>192.168.1.22</td></tr><tr><td><code>unicast_peer</code></td><td>The IP address of the other server.</td><td>192.168.1.22</td><td>192.168.1.21</td></tr><tr><td><code>virtual_ipaddress</code></td><td>The virtual IP address shared between the 2 servers, provided in CIDR notation. This must be the same on both servers.</td><td>192.168.1.20/24</td><td>192.168.1.20/24</td></tr><tr><td><code>auth_pass</code></td><td>A shared password (max 8 characters). This must be the same on both servers.</td><td>P@$$w05d</td><td>P@$$w05d</td></tr></tbody></table></figure>
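<p>For reference, the <code>vrrp_instance</code> section of the primary&#8217;s <code>keepalived.conf</code> ends up looking roughly like the sketch below. This is an illustrative fragment using the example values from the table, not the exact Pastebin configuration; the router ID, priority and check interval are placeholder values.</p>

<pre class="wp-block-code"><code># Illustrative fragment only; addresses match the example column above
vrrp_script chk_ftl {
    script "/etc/scripts/chk_ftl"
    interval 5
}

vrrp_instance PIHOLE {
    state MASTER
    interface eth0
    virtual_router_id 55
    priority 150
    advert_int 1
    unicast_src_ip 192.168.1.21
    unicast_peer {
        192.168.1.22
    }
    authentication {
        auth_type PASS
        auth_pass P@$$w05d
    }
    virtual_ipaddress {
        192.168.1.20/24
    }
    track_script {
        chk_ftl
    }
}</code></pre>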


<p>We are now ready to start and enable keepalived. On both servers, run:</p>


<pre class="wp-block-code"><code>sudo systemctl enable --now keepalived.service
sudo systemctl status keepalived.service</code></pre>


<p>You should see a status of <code>active</code>. If you don&#8217;t see this, you most likely have an error in your config and the status message should give you a hint as to where it is. Additionally, on the primary server, you should see that it has placed itself into the <code>master</code> role.</p>


<div class="wp-block-image"><figure class="aligncenter size-full"><img src="https://davidshomelab.com/wp-content/uploads/2021/08/image-2.png" alt="● keepalived.service - Keepalive Daemon (LVS and VRRP)
     Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2021-08-30 09:37:45 UTC; 9s ago
   Main PID: 82740 (keepalived)
      Tasks: 2 (limit: 4617)
     Memory: 2.1M
     CGroup: /system.slice/keepalived.service
             ├─82740 /usr/sbin/keepalived --dont-fork
             └─82752 /usr/sbin/keepalived --dont-fork
Aug 30 09:37:45 pihole1 Keepalived_vrrp[82752]: Registering Kernel netlink command channel
Aug 30 09:37:45 pihole1 Keepalived_vrrp[82752]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 30 09:37:45 pihole1 Keepalived_vrrp[82752]: Registering gratuitous ARP shared channel
Aug 30 09:37:45 pihole1 Keepalived_vrrp[82752]: (PIHOLE) Entering BACKUP STATE (init)
Aug 30 09:37:45 pihole1 Keepalived_vrrp[82752]: VRRP_Script(chk_ftl) succeeded
Aug 30 09:37:46 pihole1 Keepalived_vrrp[82752]: (PIHOLE) received lower priority (145) advert from 10.100.4.54 - discarding
Aug 30 09:37:47 pihole1 Keepalived_vrrp[82752]: (PIHOLE) received lower priority (145) advert from 10.100.4.54 - discarding
Aug 30 09:37:48 pihole1 Keepalived_vrrp[82752]: (PIHOLE) received lower priority (145) advert from 10.100.4.54 - discarding
Aug 30 09:37:49 pihole1 Keepalived_vrrp[82752]: (PIHOLE) received lower priority (145) advert from 10.100.4.54 - discarding
Aug 30 09:37:49 pihole1 Keepalived_vrrp[82752]: (PIHOLE) Entering MASTER STATE" class="wp-image-654" /><figcaption>If all is well the primary should see adverts from the secondary, detect itself as higher priority and elect itself as active.</figcaption></figure></div>


<h2>Testing Pi-hole failover</h2>


<p>You should now be able to reach the Pi-hole interface on the virtual IP we configured previously (e.g. http://192.168.1.20/admin). In the top right corner of the interface it will show the hostname and you should see that it is the primary server. To check that failover is working, shut down the primary server. If you reload the page you should see that the hostname in the top right has changed to the name of the secondary.</p>


<figure class="wp-block-gallery aligncenter columns-2 is-cropped"><ul class="blocks-gallery-grid"><li class="blocks-gallery-item"><figure><img src="https://davidshomelab.com/wp-content/uploads/2021/08/image-5.png" alt="Pi-hole console indicating we are logged in to pihole1" data-id="657" data-full-url="https://davidshomelab.com/wp-content/uploads/2021/08/image-5.png" data-link="https://davidshomelab.com/?attachment_id=657#main" class="wp-image-657" /></figure></li><li class="blocks-gallery-item"><figure><img src="https://davidshomelab.com/wp-content/uploads/2021/08/image-4.png" alt="Pi-hole console indicating we are logged in to pihole2" data-id="656" data-full-url="https://davidshomelab.com/wp-content/uploads/2021/08/image-4.png" data-link="https://davidshomelab.com/?attachment_id=656#main" class="wp-image-656" /></figure></li></ul><figcaption class="blocks-gallery-caption">The same IP address we used to access pihole1 can be used to access pihole2 once pihole1 has been powered off, demonstrating that the Pi-hole failover has worked. Once pihole1 is back online, when we reload the page, we see that pihole1 is once again active.</figcaption></figure>


<p>You can also test DNS requests using <code>nslookup</code>. The following script will make a DNS lookup every second, allowing you to verify that you receive responses even if the primary is offline:</p>


<pre class="wp-block-code"><code>while true; do
nslookup davidshomelab.com &#091;virtual IP of Pi-hole cluster]
sleep 1
done</code></pre>


<p>If you shut down pihole1 while this script is running you should see that DNS lookups are not interrupted. There may be a brief interruption at the exact moment the failover occurs, but lookups should resume within around a second.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/pi-hole-failover-with-keepalived/">Pi-hole failover using Gravity Sync and Keepalived</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Say goodbye to print() with the Python logging module</title>
		<link>https://davidshomelab.com/python-logging-module-introduction/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Sat, 14 Aug 2021 13:36:34 +0000</pubDate>
				<category><![CDATA[Coding]]></category>
		<category><![CDATA[Python]]></category>
		<category><![CDATA[coding]]></category>
		<category><![CDATA[logging]]></category>
		<category><![CDATA[python]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=623</guid>

					<description><![CDATA[<p>The logging module provides the capacity to create multiple logger objects each with different properties, allowing for flexibility over the routing of log messages. By using logging instead of print statements to provide feedback on the operation of our code we make it more maintainable and easier to home in on the specific information needed when investigating bugs.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/python-logging-module-introduction/">Say goodbye to print() with the Python logging module</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>When people start out with coding it is common to use print statements to investigate program state. This can be helpful as a quick way to check the value of a variable at a given point. However, as your project begins to grow this approach becomes difficult to scale. If a program is producing a lot of output or runs for a long time you may want to write the output to a file instead of the console. This will allow you to investigate the output at a later date and search for specific keywords. Additionally, the level of required detail changes depending on the point in the project lifecycle. All of these problems can be solved using the logging module.</p>


<p>When you first develop a feature you may want a lot of information about its state. Later, when it matures, you may only want to see serious errors and skip events that are purely informational. At that point we comment out the print statements, then re-add them the next time there is a bug and we want detailed output. This leads to the possibility that important messages could be commented out or informational messages could be left in, leading to messy output.</p>


<div class="wp-block-image"><figure class="aligncenter size-full is-resized"><img loading="lazy" src="https://davidshomelab.com/wp-content/uploads/2021/08/image.png" alt="Example of some code that is using print statements to interrogate program state. Print statements have been commented out as they are no-longer needed" class="wp-image-631" width="641" height="353" /><figcaption>Print statements that are no-longer needed but are kept in case they are useful later</figcaption></figure></div>


<p>We can use the logging module to set up custom event handlers to control which locations we log events to. For example, we may want to send all messages to a file but only send significant errors to the console. At a later date we may find a bug and want to output detailed debug messages. We can do this by making a small configuration change instead of un-commenting hundreds of lines of code.</p>


<p>The logging module has a lot of features and I will not cover all possible uses here. Instead, I will provide a general overview that you can adapt to implement basic logging functionality in scripts.</p>


<h2>Using the logging module to generate log messages</h2>


<p>The logging module provides the capacity to create multiple logger objects each with different properties, allowing for flexibility over the routing of log messages. Alternatively you can call the logging class directly to use the root logger. We can start using the logging module to print a message to the console using the default settings of the root logger as follows:</p>


<pre class="wp-block-code"><code>import logging
logging.error("this is a test log message")</code></pre>


<p>When we run this code we get the output on the console:</p>


<pre class="wp-block-code"><code>ERROR:root:this is a test log message</code></pre>


<p>This message shows the severity of the message (ERROR), the name of the logger (root) and the message to output. If we wanted to create a secondary logger we could use the <code>getLogger</code> method:</p>


<pre class="wp-block-code"><code>logging.getLogger("CustomLogger").error("This is an error message sent to a custom logger")</code></pre>


<pre class="wp-block-code"><code>ERROR:CustomLogger:This is an error message sent to a custom logger</code></pre>


<p>We can also assign a custom logger to a variable to simplify calling it in future:</p>


<pre class="wp-block-code"><code>myCustomLogger = logging.getLogger("CustomLogger")
myCustomLogger.error("This custom logger is mapped to a variable")</code></pre>


<h2>Handling different logging severity levels</h2>


<p>So far we have just generated log messages with a severity level of &#8220;ERROR&#8221;. There are 5 default severity levels, &#8220;DEBUG&#8221;, &#8220;INFO&#8221;, &#8220;WARNING&#8221;, &#8220;ERROR&#8221; and &#8220;CRITICAL&#8221;. We can use each of these levels by using the appropriate method of the logging module. In the following example we generate a log message with each severity level.</p>


<pre class="wp-block-code"><code>logging.critical("this is a test CRITICAL message")
logging.error("this is a test ERROR message")
logging.warning("this is a test WARNING message")
logging.info("this is a test INFO message")
logging.debug("this is a test DEBUG message")</code></pre>


<p>When you run this code, you see that not all of your messages appear on the console:</p>


<pre class="wp-block-code"><code>CRITICAL:root:this is a test CRITICAL message
ERROR:root:this is a test ERROR message
WARNING:root:this is a test WARNING message</code></pre>


<p>We only get messages with a severity greater than or equal to WARNING but our INFO and DEBUG messages are ignored. This is because by default the logger object has a severity level set to WARNING. We can customise this level on the logger object and at the handler level (more on handlers later), but a message must pass the threshold of both the logger and the handler. For example, if we set the logger to &#8220;WARNING&#8221; and the handler to &#8220;DEBUG&#8221;, the handler would still only receive WARNING or higher.</p>
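<p>To see how the two levels interact, consider a logger set to WARNING whose handler is set to DEBUG. The following sketch uses an <code>io.StringIO</code> buffer in place of the console so the output can be inspected; the logger name LevelDemo is just an example:</p>


<pre class="wp-block-code"><code>import io
import logging

buffer = io.StringIO()

logger = logging.getLogger("LevelDemo")
logger.setLevel("WARNING")             # the logger filters first

handler = logging.StreamHandler(buffer)
handler.setLevel("DEBUG")              # the handler would accept anything
logger.addHandler(handler)

logger.debug("never reaches the handler")
logger.warning("passes both levels")

print(buffer.getvalue())               # only the WARNING message appears</code></pre>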


<p>We can use the setLevel method to change the severity level for the logger object:</p>


<pre class="wp-block-code"><code>logging.getLogger().setLevel("DEBUG")
logging.critical("this is a test CRITICAL message")
logging.error("this is a test ERROR message")
logging.warning("this is a test WARNING message")
logging.info("this is a test INFO message")
logging.debug("this is a test DEBUG message")</code></pre>


<p>Now the output contains all messages:</p>


<pre class="wp-block-code"><code>CRITICAL:root:this is a test CRITICAL message
ERROR:root:this is a test ERROR message
WARNING:root:this is a test WARNING message
INFO:root:this is a test INFO message
DEBUG:root:this is a test DEBUG message</code></pre>


<p>This means we can now adjust the severity of the messages we receive without needing to comment out any print statements. We just need to change the parameter in our call to <code>setLevel</code> and the change will take effect throughout our code.</p>


<h2>Redirecting logging module output</h2>


<p>So far we have just looked at printing log output to the console but the logging module provides various handlers to redirect the output. For example, we could write the output to a file, a syslog server or even to email. We can also customise the severity of logs that go to different locations and the level of detail that goes into the logs. For example, we may want only errors to appear on the console but all detail to appear in the log file. We may also want critical alerts to generate an email message or we might want the log file to contain timestamps that do not appear in the console.</p>


<h3>Outputting to a file</h3>


<p>We create handlers by creating a new handler object and assigning it to the logger object. For example, we could create a new file handler and set its severity level to INFO as follows:</p>


<pre class="wp-block-code"><code>filehandler = logging.FileHandler("test.log")
filehandler.setLevel("INFO")</code></pre>


<p>We can then assign that handler to our logger object:</p>


<pre class="wp-block-code"><code>logging.getLogger().addHandler(filehandler)</code></pre>


<p>If you are not using the root logger then you will need to specify the name of the logger as a parameter to the getLogger method. If you have assigned the logger object to a variable you can also use the name of that variable to reference the object. For example:</p>


<pre class="wp-block-code"><code>mylogger = logging.getLogger("ExampleLogger")
mylogger.addHandler(filehandler)</code></pre>


<p>If we now run our example code we will see that it does not print anything to the screen. Instead, we can find a file called test.log which contains:</p>


<pre class="wp-block-code"><code>this is a test CRITICAL message
this is a test ERROR message
this is a test WARNING message
this is a test INFO message</code></pre>


<h3>Outputting to the console</h3>


<p>The reason we don&#8217;t see anything on the screen is that the logging module only creates a default stream handler when there are no other handlers. If we want to have a stream handler as well as a file handler then we must explicitly define both. The method for adding a stream handler is similar to adding a file handler:</p>


<pre class="wp-block-code"><code>streamhandler = logging.StreamHandler()
streamhandler.setLevel("WARNING")
logging.getLogger().addHandler(streamhandler)</code></pre>


<p>If we run this now we get</p>


<pre class="wp-block-code"><code>this is a test CRITICAL message
this is a test ERROR message
this is a test WARNING message</code></pre>


<p>printed to the console and </p>


<pre class="wp-block-code"><code>this is a test CRITICAL message
this is a test ERROR message
this is a test WARNING message
this is a test INFO message</code></pre>


<p>appended to test.log. The INFO message is skipped from the console as we only care about WARNING or higher.</p>


<h3>Controlling disk usage when logging to a file</h3>


<p>There are several other types of handler that I will not cover here, as the aim is to provide a general overview; the full details of all handlers are in the <a href="https://docs.python.org/3/library/logging.handlers.html" target="_blank" rel="noreferrer noopener">module documentation</a>. One handler I will mention here is the rotating file handler. This works similarly to the file handler we have discussed previously but also allows us to delete stale log files.</p>


<p>Suppose your application continuously generates log messages and never deletes them. Eventually the logs will end up taking up a significant amount of space and may even become impossible to open. The rotating handler specifies the maximum size a log file is allowed to reach before a new one is created. We can also specify how many historical log files to keep before deleting the old ones.</p>


<p>To use the rotating file handler, along with the other advanced handlers, we also need to import the logging.handlers module. This is in addition to the main logging module:</p>


<pre class="wp-block-code"><code>import logging.handlers</code></pre>


<p>We can now define our handler as follows:</p>


<pre class="wp-block-code"><code>rotatingfilehandler = logging.handlers.RotatingFileHandler(
    "Rotating.log",maxBytes=1024,backupCount=3)
rotatingfilehandler.setLevel("DEBUG")
logging.getLogger().addHandler(rotatingfilehandler)</code></pre>


<p>This handler will output to a file called Rotating.log and will archive the file whenever it reaches 1kB in size. When the handler rotates the file, it renames it to Rotating.log.1; on the next rotation, Rotating.log.1 becomes Rotating.log.2, and so on. We have set it to retain 3 archived copies.</p>


<p>As we do not need to worry about the log filling up the disk space we can, if we choose, store all detail including DEBUG messages. RotatingFileHandler and FileHandler are not mutually exclusive. If you wanted to save informational messages to a log that gets rotated but save ERROR and CRITICAL to a secondary log file that is never replaced, you could use both types of handler at the same time. This is the same as how we used both FileHandler and StreamHandler in the previous example.</p>
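<p>As a concrete sketch of that combination (the file names app.log and errors.log and the logger name are arbitrary examples):</p>


<pre class="wp-block-code"><code>import logging
import logging.handlers

logger = logging.getLogger("DualHandlerDemo")
logger.setLevel("DEBUG")

# Full detail goes to a capped, rotated log
rotating = logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1024, backupCount=3)
rotating.setLevel("DEBUG")
logger.addHandler(rotating)

# Serious problems also go to a log that is never rotated away
errors = logging.FileHandler("errors.log")
errors.setLevel("ERROR")
logger.addHandler(errors)

logger.info("routine detail: rotated log only")
logger.error("serious problem: recorded in both files")</code></pre>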


<h2>Formatting logging output</h2>


<p>You may recall from our first example, that the log messages we generated didn&#8217;t just contain the message. They also showed the severity and the name of the logger object that generated the message.</p>


<p>We can control how to display messages by using formatters. These specify the extra information that should go into a log message. As we assign the formatter to a specific handler, not to the overall logging object, we can specify different format options for each handler. For example, we may want to show a timestamp in the log file but not on the console.</p>


<p>Formatter objects are created as strings containing placeholder variables for specific data. You can find a full list of all the possible variables <a href="https://docs.python.org/3/library/logging.html#logrecord-attributes" target="_blank" rel="noreferrer noopener">here</a>. For a simple example, suppose we wanted the log file to contain the timestamp, severity level, and log message and the console to just show the severity and the message. We could do:</p>


<pre class="wp-block-code"><code>fileformatter = logging.Formatter("%(asctime)s - %(levelname)s: %(message)s")
streamformatter = logging.Formatter("%(levelname)s - %(message)s")
streamhandler.setFormatter(streamformatter)
filehandler.setFormatter(fileformatter)</code></pre>


<p>If we now generate some example log messages we can see the following on the console:</p>


<pre class="wp-block-code"><code>CRITICAL - this is a test CRITICAL message
ERROR - this is a test ERROR message
WARNING - this is a test WARNING message</code></pre>


<p>And this in our log file:</p>


<pre class="wp-block-code"><code>2021-08-14 13:14:50,731 - ERROR: this is a test ERROR message
2021-08-14 13:14:50,731 - WARNING: this is a test WARNING message
2021-08-14 13:14:50,731 - INFO: this is a test INFO message</code></pre>


<h2>Next steps</h2>


<p>We have now seen how to generate logging messages and filter by severity level. We have also seen how to redirect our log messages to a file while also ensuring high priority messages are still visible on the console. I hope this should be enough to gain a general understanding of how the logging module works, allowing the reader to research the more advanced features directly from the <a href="https://docs.python.org/3/library/logging.html" target="_blank" rel="noreferrer noopener">library documentation</a>. By using logging instead of print statements to provide feedback on the operation of our code we make it more maintainable and easier to home in on the specific information needed when investigating bugs.</p>


<p></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/python-logging-module-introduction/">Say goodbye to print() with the Python logging module</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Tar on Linux &#8211; File Storage and Retrieval</title>
		<link>https://davidshomelab.com/tar-on-linux-file-storage-and-retrieval/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Sat, 05 Jun 2021 04:46:00 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[tar]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=596</guid>

					<description><![CDATA[<p>Tar is a common file archive format on Linux and other UNIX or UNIX-like systems. Like the zip file format, the tar format allows you to combine multiple files in to a single archive. Tar provides many options for compression and file formats making it suitable for many use cases. However, it has gained a ... <a title="Tar on Linux &#8211; File Storage and Retrieval" class="read-more" href="https://davidshomelab.com/tar-on-linux-file-storage-and-retrieval/" aria-label="More on Tar on Linux &#8211; File Storage and Retrieval">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/tar-on-linux-file-storage-and-retrieval/">Tar on Linux &#8211; File Storage and Retrieval</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Tar is a common file archive format on Linux and other UNIX or UNIX-like systems. Like the zip file format, the tar format allows you to combine multiple files into a single archive. Tar provides many options for compression and file formats making it suitable for many use cases. However, it has gained a reputation for being a difficult command to learn and use. This guide contains examples for many common (and some less common) use cases for tar and can serve as a reference if you want to make sure you are using the correct syntax.</p>


<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://imgs.xkcd.com/comics/tar.png" alt="XKCD comic depicting the confusing nature of tar flags. If you are ever in this situation, tar --help is a valid tar command which requires very little memorisation" /><figcaption>Image copyright Randall Munroe, originally published on <a href="https://xkcd.com/1168/" target="_blank" rel="noreferrer noopener">XKCD</a></figcaption></figure></div>


<h2>What is tar?</h2>


<p>Tar, originally standing for <strong>t</strong>ape <strong>ar</strong>chive, is the traditional archiving utility on most UNIX-like systems. On modern systems without tape drives, you would usually save archives to a file instead of tape. Unlike the zip format, which both combines and compresses files, tar only combines files. Compression is performed separately using a utility such as gzip or bzip2. However, you can use the tar command to perform both the archiving and compression in one go.</p>


<h2>Creating an archive</h2>


<p>You can create an archive using the <code>--create</code> flag (<code>c</code> in the short format). One caveat is that if the archive file already exists, attempting to create a new archive will overwrite it. This can cause problems if you invert the order of your source files and your archive as you may end up replacing your source files. The easiest way to remember this is to think about the command using long form flags. For example:</p>


<pre class="wp-block-code"><code>tar --create --file example.tar file1 file2 file3</code></pre>


<p>In this example it should be clear that the <code>--file</code> (<code>f</code>) flag indicates the output file. The input files are specified at the end of the command. Once you understand the syntax, you can use the short form version of the command:</p>


<pre class="wp-block-code"><code>tar cf example.tar file1 file2 file3</code></pre>


<p>Files will be added to the archive using the full provided path excluding the leading /. For example, if you ran</p>


<pre class="wp-block-code"><code>tar cf example.tar /tmp/{file1,file2}</code></pre>


<p>and then extracted the archive, a folder called &#8220;tmp&#8221; would be created in your extraction location. The files would be placed in that folder.</p>


<h2>Extracting an archive</h2>


<p>Once you have created an archive, the next thing you will want to do is extract it. The command to do this is almost the same as the command to create an archive, except the <code>--extract</code> (<code>x</code>) flag is used:</p>


<pre class="wp-block-code"><code>tar xf example.tar</code></pre>


<p>If you just need to extract certain files, you can specify them at the end of the command:</p>


<pre class="wp-block-code"><code>tar xf example.tar file1 file2</code></pre>


<p>If you are specifying files you will need to specify the full path location in the archive. For example:</p>


<pre class="wp-block-code"><code>tar xf example.tar tmp/example/file1</code></pre>


<p>Extracted file paths are always relative to the current working directory.</p>


<h2>All about compression</h2>


<p>As discussed, tar archives are uncompressed by default but compression can be enabled if desired. The short options for some of the compression algorithms can be a little confusing, but there are also long options. The table below shows all the available compression algorithms.</p>


<figure class="wp-block-table"><table><tbody><tr><td><strong>Algorithm</strong></td><td><strong>Short Option</strong></td><td><strong>Long Option</strong></td></tr><tr><td>gzip</td><td>z</td><td><code>--gzip</code></td></tr><tr><td>bzip2</td><td>j</td><td><code>--bzip2</code></td></tr><tr><td>xz</td><td>J</td><td><code>--xz</code></td></tr><tr><td>lzip</td><td></td><td><code>--lzip</code></td></tr><tr><td>lzma</td><td></td><td><code>--lzma</code></td></tr><tr><td>lzop</td><td></td><td><code>--lzop</code></td></tr><tr><td>zstd</td><td></td><td><code>--zstd</code></td></tr></tbody></table></figure>


<p>Additionally, if you use the <code>--auto-compress</code> <code>(a)</code> flag, tar will automatically detect the right compression algorithm based on the file extension:</p>


<pre class="wp-block-code"><code>tar caf example.tar.gz file1</code></pre>


<p>When extracting an archive, the compression algorithm can be specified but if omitted will be detected from the file name.</p>


<h2>Inspecting the contents of tar archives</h2>


<p>If you want to list the contents of an archive you can use the <code>--list</code> (<code>t</code>) flag. You will still need to use the <code>f</code> flag to indicate that the archive you want to list is a file:</p>


<pre class="wp-block-code"><code>tar tf example.tar
------------------------------------
file1
file2
file3</code></pre>


<p>If the archive contains subfolders, the full path will be listed. This can be useful if you need to modify an archive or extract only a couple of files.</p>


<h2>Modifying tar archives</h2>


<h3>Adding more files</h3>


<p>You can add additional files to an existing archive using the <code>--append</code> (<code>r</code>) flag. This flag uses the same syntax as <code>--create</code>:</p>


<pre class="wp-block-code"><code>tar rf example.tar file3</code></pre>


<h3>Updating an archive</h3>


<p>The <code>--update</code> (<code>u</code>) flag is used to update an archive by only appending files that either don&#8217;t exist in the archive or which have been modified since the archive was created. Updated files will not actually replace the old copies of the file in the archive. Instead you will get multiple copies of the file with the same name. This raises the question of which copy of the file you will get when extracting the archive. When an archive is extracted, files are processed in the order that they appear in the archive. This means that the newest copy of the file will always be extracted unless another version is specified. You can use the <code>--occurrence</code> flag to specify which version of the file to use. For example, the following command will extract the 3rd instance of file1:</p>


<pre class="wp-block-code"><code>tar xf example.tar file1 --occurrence=3</code></pre>


<h3>Removing a file</h3>


<p>You can remove files from an <em>uncompressed</em> archive using the <code>--delete</code> flag. There is no way to remove a file from a compressed archive.</p>


<pre class="wp-block-code"><code>tar f example.tar --delete file1</code></pre>


<h2>Other useful actions</h2>


<p>When creating or extracting an archive, you might want to see the names of the files being processed. You can do this using the <code>--verbose</code> (<code>v</code>) flag.</p>


<p>You can combine multiple archives together using <code>--concatenate</code> (<code>A</code>):</p>


<pre class="wp-block-code"><code>tar Af example1.tar example2.tar example3.tar</code></pre>


<p>In the above example, the contents of example2.tar and example3.tar will be appended to example1.tar</p>


<p>There are many other options which can be used to accomplish different tasks but the above commands should cover most common situations. You can find the full list of commands in the <a href="https://man7.org/linux/man-pages/man1/tar.1.html" target="_blank" rel="noreferrer noopener">man page</a> and, even if you are using different options, you can use the above commands as skeletons to be modified for the more advanced options.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/tar-on-linux-file-storage-and-retrieval/">Tar on Linux &#8211; File Storage and Retrieval</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>UniFi Controller Setup on Ubuntu 20.04LTS</title>
		<link>https://davidshomelab.com/unifi-controller-setup-on-ubuntu-20-04lts/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Sat, 29 May 2021 20:23:54 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Networking]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=571</guid>

					<description><![CDATA[<p>Ubiquiti&#8217;s UniFi product lineup has seen enormous growth in popularity due to its range of high quality access points. While you will usually find professional grade access points in businesses instead of homes, they provide a benefit in any building. This is especially true for large homes or older buildings with thick walls where a ... <a title="UniFi Controller Setup on Ubuntu 20.04LTS" class="read-more" href="https://davidshomelab.com/unifi-controller-setup-on-ubuntu-20-04lts/" aria-label="More on UniFi Controller Setup on Ubuntu 20.04LTS">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/unifi-controller-setup-on-ubuntu-20-04lts/">UniFi Controller Setup on Ubuntu 20.04LTS</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Ubiquiti&#8217;s UniFi product lineup has seen enormous growth in popularity due to its range of high quality access points. While you will usually find professional grade access points in businesses instead of homes, they provide a benefit in any building. This is especially true for large homes or older buildings with thick walls where a single AP isn&#8217;t enough.</p>



<p>Many larger homes end up using multiple separate access points with a mix of repeaters. This results in a confusing mix of networks, with devices connecting to a sub-optimal AP and suffering weak signal. UniFi resolves this by managing all access points from a central controller and treating them as a single network. This saves you from having to join your devices to several different networks and allows the APs to intelligently hand devices off to each other as you roam around the house, ensuring that you are always communicating with the AP that has the strongest signal.</p>



<p>UniFi can act solely as an access point without performing NAT. This means that, unlike the mesh WiFi systems traditionally used to expand coverage in a home setting, you shouldn&#8217;t run into communication issues between the wireless and wired devices in your home.</p>



<h2>Why not another product?</h2>



<p>While there are plenty of other good products on the market, there are several reasons why UniFi is a strong contender. When compared to other commercial solutions, UniFi hardware is priced very reasonably and is widely available from consumer outlets. This means you don&#8217;t need to procure hardware through trade-specific distribution networks. Another advantage is the simplicity of setting up devices. Once you have the controller set up, you can add new devices simply by connecting them to the network. They will appear in the dashboard, and you can configure them in just a few clicks.</p>



<p>For me, the flexibility around the controller software is the key selling point. Some providers require you to buy an expensive hardware controller in addition to the APs. Other systems can only be managed from the cloud, which some people may view as a security risk. The UniFi controller can instead be installed on any Windows, Mac or Ubuntu PC (or VM), allowing you to run it on hardware you already have. That&#8217;s not to say that you can&#8217;t run it in the cloud or have a dedicated controller. Ubiquiti provides various models of <a href="https://www.amazon.co.uk/gp/product/B017T2QB22/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=davidshomelab-21&amp;creative=6738&amp;linkCode=as2&amp;creativeASIN=B017T2QB22&amp;linkId=61d8043b767c287dd50062f2565f1f26" target="_blank" rel="noreferrer noopener sponsored nofollow">CloudKey</a>(paid link) for users who wish to avoid the effort of building their own controller. The basic model will be sufficient for any home or office with fewer than a couple of dozen managed devices. Additionally, while not affiliated with Ubiquiti, <a href="https://hostifi.com/" target="_blank" rel="noreferrer noopener">HostiFi</a> offers cloud hosted controllers requiring no on-premises management hardware.</p>



<h2>Setting up the UniFi controller on Ubuntu</h2>



<p>While the controller software can be installed on any PC, a dedicated server will simplify management. Windows and Ubuntu are both supported but Ubuntu is preferred due to its lack of licensing costs and smaller footprint. The instructions provided here are for Ubuntu Server 20.04. While an LTS version of Ubuntu Server is preferred, any recent version of Ubuntu Server or Desktop can be used.</p>



<p>The system requirements depend on the number of managed devices, but 1 CPU core, 2GB of RAM and 25GB of storage should be enough in most cases. The UniFi controller software isn&#8217;t in the main Ubuntu repos, so we need to add Ubiquiti&#8217;s repo and install its GPG key so that the repo is trusted:</p>



<pre class="wp-block-code"><code>echo 'deb https://www.ui.com/downloads/unifi/debian stable ubiquiti' | sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list
sudo wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg https://dl.ui.com/unifi/unifi-repo.gpg </code></pre>



<p>Next, update the apt cache and install the UniFi controller along with its prerequisites:</p>



<pre class="wp-block-code"><code>sudo apt update &amp;&amp; sudo apt install ca-certificates openjdk-8-jdk apt-transport-https unifi -y</code></pre>



<p>Once the install is finished, check that the service is running:</p>



<pre class="wp-block-code"><code>systemctl status unifi.service</code></pre>



<p>If the service shows as failed or not running, restart the service with:</p>



<pre class="wp-block-code"><code>sudo systemctl restart unifi.service</code></pre>



<h2>Configuring the UniFi controller</h2>



<p>Check the status again and verify that the service is running. Once everything is up and running, open a web browser and go to https://[server&#8217;s IP address]:8443. You will need to accept the self-signed certificate warning.</p>
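<p>If you prefer to check from the shell first, a quick probe will tell you whether the controller is answering on port 8443. This is a minimal sketch: &#8220;localhost&#8221; is a stand-in for your server&#8217;s IP address, and -k accepts the self-signed certificate.</p>

```shell
# Probe the controller's HTTPS port. "localhost" is a placeholder
# for the server's IP; -k accepts the self-signed certificate.
code=$(curl -ks -o /dev/null -w '%{http_code}' --connect-timeout 3 https://localhost:8443/ || true)
echo "HTTP status: $code"   # 000 means nothing answered on the port
```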



<p>Next, choose a name for your controller and accept the terms and conditions.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1093" height="389" src="https://davidshomelab.com/wp-content/uploads/2021/05/image.png" alt="Unifi Controller Setup Step 1 of 6
Name your Controller" class="wp-image-577"/></figure>



<p>On the next screen, sign in with your Ubiquiti account. If you want to be able to access your controller through Ubiquiti&#8217;s cloud, enter your login details here.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1093" height="389" src="https://davidshomelab.com/wp-content/uploads/2021/05/image-1.png" alt="Unifi Controller Setup Step 2 of 6
Sign in with your Ubiquiti Account" class="wp-image-578"/></figure>



<p>If you would rather keep your controller local to your network with a local account, click &#8220;Switch to Advanced Setup&#8221;, uncheck both checkboxes and set up a local username and password.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1093" height="682" src="https://davidshomelab.com/wp-content/uploads/2021/05/image-2.png" alt="Unifi Controller Setup Step 2 of 6 (Advanced)
Advanced remote and local access" class="wp-image-579"/></figure>



<p>On the next screen, leave auto backup and network optimisation enabled.</p>



<p>Installing on an Ubuntu server is one of the simplest and cheapest ways to deploy the UniFi controller.</p>



<p>If you already have your devices, you can now choose to set them up. If you are just setting up the controller in preparation for receiving the devices, you can add them later.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1061" height="479" src="https://davidshomelab.com/wp-content/uploads/2021/05/image-4.png" alt="Unifi Controller Setup Step 4 of 6
Devices Setup" class="wp-image-581"/></figure>



<p>Enter a WiFi network name and password. If you plan to have multiple SSIDs, just enter your primary one here; you can add the rest later.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1061" height="348" src="https://davidshomelab.com/wp-content/uploads/2021/05/image-5.png" alt="Unifi Controller Setup Step 5 of 6
WiFi Setup" class="wp-image-582"/></figure>



<p>Finally, confirm your settings and set your location and time zone.</p>



<figure class="wp-block-image size-large is-style-default"><img loading="lazy" width="1061" height="348" src="https://davidshomelab.com/wp-content/uploads/2021/05/image-6.png" alt="Unifi Controller Setup Step 6 of 6
Review Configuration" class="wp-image-583"/></figure>



<p>The wizard will redirect you to the main dashboard and your network will be set up. </p>



<p>There is plenty more you can do with UniFi hardware, such as multiple SSIDs on separate VLANs, captive portals and MAC-address-based VLAN assignments. Come back soon for more guides.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/unifi-controller-setup-on-ubuntu-20-04lts/">UniFi Controller Setup on Ubuntu 20.04LTS</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Install Zabbix Proxy on pfSense to Monitor Hosts in Remote Sites</title>
		<link>https://davidshomelab.com/install-zabbix-proxy-on-pfsense-to-monitor-hosts-in-remote-sites/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Tue, 23 Jun 2020 17:28:53 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Networking]]></category>
		<category><![CDATA[monitoring]]></category>
		<category><![CDATA[pfsense]]></category>
		<category><![CDATA[zabbix]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=528</guid>

					<description><![CDATA[<p>In a multi-site network you will most likely have VPNs connecting your sites, allowing remote connectivity to your main site. However, as these links are not necessarily 100% reliable, a dropped link could potentially result in the loss of monitoring data for a large number of hosts as the agent does not store any data ... <a title="Install Zabbix Proxy on pfSense to Monitor Hosts in Remote Sites" class="read-more" href="https://davidshomelab.com/install-zabbix-proxy-on-pfsense-to-monitor-hosts-in-remote-sites/" aria-label="More on Install Zabbix Proxy on pfSense to Monitor Hosts in Remote Sites">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/install-zabbix-proxy-on-pfsense-to-monitor-hosts-in-remote-sites/">Install Zabbix Proxy on pfSense to Monitor Hosts in Remote Sites</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In a multi-site network you will most likely have VPNs connecting your sites, allowing remote connectivity to your main site. However, as these links are not necessarily 100% reliable, a dropped link could result in the loss of monitoring data for a large number of hosts, because the agent does not store any data by itself. By using Zabbix Proxies on remote sites it is possible to configure the agents to communicate only with the local proxy, which stores the data and relays it back to the main server when a link is available. Zabbix Proxy can be installed on any operating system that the Zabbix Server can, but it is also available as a package on pfSense. This avoids maintaining an additional server at each site and provides a simpler interface for configuring the proxy than a manual install on a Linux server would.</p>


<h2>Install the Zabbix Proxy plug-in</h2>


<p>In the pfSense management console, go to System &gt; Package Manager &gt; Available Packages and search for &#8220;zabbix-proxy&#8221;. You will see several versions. Unlike the agent, the proxy must match the server&#8217;s version, so select the one appropriate for your server. In this case we will install zabbix-proxy5.</p>


<h2>Connect the proxy to the server</h2>


<p>In the pfSense console go to Services &gt; Zabbix Proxy 5.0 and configure the proxy. Set the server to the IP address or hostname of the Zabbix server, and the hostname to whatever you will call the proxy within the Zabbix console. The other options can be left at their defaults but can be modified if desired. For example, if you only want the proxy to listen on one interface, set the listen IP to the pfSense IP address for that interface. If you have a large number of proxies you can increase the config frequency to reduce load on the server, but doing so will increase the propagation time for any changes you make to your setup.</p>


<p>While testing or for small deployments you may wish to lower the config frequency from the default of 1 hour. This value determines how often the proxy will contact the server to get the current configuration, including the hosts to monitor and the data it is supposed to collect.</p>


<p>Finally, enable the service and save the settings.</p>


<figure class="wp-block-image size-large"><img loading="lazy" width="1002" height="763" src="https://davidshomelab.com/wp-content/uploads/2020/06/image.png" alt="Screenshot of Zabbix proxy settings. Non-default settings:
Enable: Yes
Server: 172.16.24.8
Hostname: proxy-site-a
Config Frequency: 1025" class="wp-image-550" /></figure>


<p>Now the proxy is configured it is time to tell the server where to find it. From the Zabbix console, go to Administration &gt; Proxies. Click &#8220;Create proxy&#8221; to add the new proxy. Enter the name (this is the hostname you configured in the proxy) and the IP address of the proxy.</p>


<figure class="wp-block-image size-large"><img loading="lazy" width="567" height="286" src="https://davidshomelab.com/wp-content/uploads/2020/06/image-1.png" alt="Screenshot of add-proxy screen in Zabbix console.
Proxy Name: proxy-site-a
Proxy Mode: Active
Proxy Address: 172.16.24.1
Description: Site A Zabbix Proxy" class="wp-image-552" /></figure>


<p>Once you have added the proxy, reload the page and check that the proxy shows up in the list and it has a &#8220;last seen&#8221; value. If all is well and the listing shows something similar to the below, the proxy is set up and we are ready to configure hosts to talk to it.</p>


<figure class="wp-block-image size-large"><img loading="lazy" width="853" height="75" src="https://davidshomelab.com/wp-content/uploads/2020/06/image-2.png" alt="Screenshot showing proxy in Zabbix proxy list with last seen value of 1 second" class="wp-image-553" /></figure>


<h2>Configure the proxy to monitor hosts</h2>


<p>Configuring an agent to use a proxy is very similar to configuring it to talk to the server directly. For this example I will modify the CentOS host described in <a href="https://davidshomelab.com/install-zabbix-agent-to-monitor-windows-and-linux-hosts/" target="_blank" rel="noreferrer noopener">my post on configuring Zabbix agents</a> to get it to talk via the proxy.</p>


<p>Log in to the server and open <code>/etc/zabbix/zabbix_agentd.conf</code>. Edit the Server and ServerActive values to the IP address or hostname of the Zabbix proxy, then save and close the file. Restart the Zabbix agent:</p>


<pre class="wp-block-code"><code>systemctl restart zabbix-agent.service</code></pre>
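<p>If you are repointing many agents, the edit can be scripted with sed. The snippet below is a minimal sketch using a temporary copy of the file with placeholder values; the proxy address 172.16.24.1 is hypothetical, and on a real host you would run the sed command against /etc/zabbix/zabbix_agentd.conf instead.</p>

```shell
# Work on a temporary copy of the agent config; on a real host,
# point sed at /etc/zabbix/zabbix_agentd.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
Server=zabbix.example.com
ServerActive=zabbix.example.com
Hostname=centos-host
EOF

# Point both Server and ServerActive at the proxy (placeholder IP).
sed -i 's/^Server=.*/Server=172.16.24.1/; s/^ServerActive=.*/ServerActive=172.16.24.1/' "$conf"
grep '^Server' "$conf"
```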


<p>Now log in to the Zabbix console and go to Configuration &gt; Hosts and select the host you want to monitor via the proxy. In the host settings, click the drop-down next to &#8220;Monitored by proxy&#8221; and select the proxy you have configured the host to talk to. Click &#8220;Update&#8221; to save your settings.</p>


<figure class="wp-block-image size-large"><img loading="lazy" width="847" height="500" src="https://davidshomelab.com/wp-content/uploads/2020/06/image-3.png" alt="Zabbix host configuration screen with Monitored by proxy setting set to proxy-site-a" class="wp-image-554" /></figure>


<p>When you return to the hosts list you will probably see that the host appears to be offline. This is because the proxy will not find out that it is meant to be monitoring the host until the next time it syncs its settings with the server. If you have set a reasonable config frequency, you will just need to wait out that interval, after which the host should come online and begin collecting data automatically.</p>


<p>If you need to force the proxy to update immediately, log in to pfSense and go to Status &gt; Services and restart the zabbix_proxy service. When the service starts up this will force a config sync and you should see the host come online within a couple of seconds after the service restarts.</p>
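<p>Alternatively, if you have shell access to the firewall, the proxy binary supports a runtime control option that reloads its configuration cache without a full service restart. The sketch below is guarded so it only attempts the reload where zabbix_proxy is actually installed.</p>

```shell
# Ask a running proxy to re-fetch its configuration from the server.
# Guarded so it is a harmless no-op on machines without the proxy.
if command -v zabbix_proxy >/dev/null 2>&1; then
  msg=$(zabbix_proxy -R config_cache_reload 2>&1 || true)
else
  msg="zabbix_proxy not installed on this machine"
fi
echo "$msg"
```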
<p>The post <a rel="nofollow" href="https://davidshomelab.com/install-zabbix-proxy-on-pfsense-to-monitor-hosts-in-remote-sites/">Install Zabbix Proxy on pfSense to Monitor Hosts in Remote Sites</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Upgrade Zabbix Server to 5.0 LTS</title>
		<link>https://davidshomelab.com/upgrade-zabbix-server-to-5-0-lts/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Sat, 30 May 2020 16:29:34 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[centos8]]></category>
		<category><![CDATA[Linux monitoring]]></category>
		<category><![CDATA[zabbix]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=535</guid>

					<description><![CDATA[<p>Earlier this month, Zabbix server 5.0 LTS was released. Version 5.0 brings an updated UI, additional cloud integrations and support for the latest Zabbix agent (the old agent is still supported so you don&#8217;t need to upgrade the agent on all your hosts immediately). If you are using Zabbix Proxy to monitor some of your ... <a title="Upgrade Zabbix Server to 5.0 LTS" class="read-more" href="https://davidshomelab.com/upgrade-zabbix-server-to-5-0-lts/" aria-label="More on Upgrade Zabbix Server to 5.0 LTS">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/upgrade-zabbix-server-to-5-0-lts/">Upgrade Zabbix Server to 5.0 LTS</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Earlier this month, Zabbix server 5.0 LTS was released. Version 5.0 brings an updated UI, additional cloud integrations and support for the latest Zabbix agent (the old agent is still supported so you don&#8217;t need to upgrade the agent on all your hosts immediately). If you are using Zabbix Proxy to monitor some of your hosts, you will need to update it at the same time, as the proxy must have the same major version number as the server.</p>



<p>I will be upgrading the Zabbix server I set up in my previous tutorials on Zabbix so this guide assumes a CentOS 8 operating system. With the exception of the package manager commands, the steps should be more or less the same on any Linux distribution.</p>



<h2>Update your system</h2>



<p>Before making major changes it is worth bringing the base system up to date so that no issues arise from older packages. Run the following commands to update and reboot the system.</p>



<pre class="wp-block-code"><code>dnf upgrade
reboot</code></pre>



<h2>Back up your current installation</h2>



<p>While the upgrade process shouldn&#8217;t cause too many problems, it will make changes to your database and overwrite the old installation. In the event that this process runs into an error or the machine crashes or loses power while the upgrade is in progress, having a backup of your database and configuration files will make recovery a lot easier.</p>



<p>Create a directory to store all your backups:</p>



<pre class="wp-block-code"><code>mkdir /opt/zabbix-backup
cd /opt/zabbix-backup</code></pre>



<p>Now we will need to stop the Zabbix server to ensure that the database remains in a consistent state while we are performing the backups. Run the following commands to stop Zabbix and write the database to a file:</p>



<pre class="wp-block-code"><code>systemctl stop zabbix-server.service
mysqldump -u&#91;zabbix-user] -p &#91;zabbix database] &gt; zabbix-database.sql</code></pre>



<p>For the <code>mysqldump</code> command, you will need to enter the names of the Zabbix database user and the database in the appropriate places. If you do not remember what these are, check in <code>/etc/zabbix/zabbix_server.conf</code>. The lines you are looking for are <code>DBName</code>, <code>DBUser</code> and <code>DBPassword</code>.</p>
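<p>Rather than copying the values out by hand, you can pull them straight from the config file. This is a minimal sketch run against a temporary sample file with placeholder values; on the real server you would point it at /etc/zabbix/zabbix_server.conf instead.</p>

```shell
# Sample config standing in for /etc/zabbix/zabbix_server.conf;
# the values here are placeholders.
conf=$(mktemp)
cat > "$conf" <<'EOF'
DBName=zabbix
DBUser=zabbix
DBPassword=changeme
EOF

# Extract the values and assemble the mysqldump command.
dbname=$(awk -F= '/^DBName=/{print $2}' "$conf")
dbuser=$(awk -F= '/^DBUser=/{print $2}' "$conf")
echo "mysqldump -u${dbuser} -p ${dbname} > zabbix-database.sql"
```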



<p>You will also want to back up your configuration files as follows:</p>



<pre class="wp-block-code"><code>cp /etc/zabbix/zabbix_server.conf /opt/zabbix-backup/
cp /etc/httpd/conf.d/zabbix.conf  /opt/zabbix-backup/
cp -R /usr/share/zabbix/ /opt/zabbix-backup/
cp -R /usr/share/doc/zabbix-* /opt/zabbix-backup/</code></pre>



<h2>Update Zabbix</h2>



<p>Install the new repo by running the following commands:</p>



<pre class="wp-block-code"><code>rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/8/x86_64/zabbix-release-5.0-1.el8.noarch.rpm
dnf clean all </code></pre>



<p>If you are not running CentOS 8, check the <a href="https://www.zabbix.com/download" target="_blank" rel="noreferrer noopener">Zabbix download page</a> for the link to the correct repo for your distribution and the required commands to install it.</p>



<p>This will remove the old 4.0 repo from your system and replace it with the 5.0 repo. Now you can just upgrade your Zabbix packages as follows:</p>



<pre class="wp-block-code"><code>dnf upgrade 'zabbix*'</code></pre>



<p>If the update installs without errors, restart the Zabbix server and agent as follows:</p>



<pre class="wp-block-code"><code>systemctl restart zabbix-server.service
systemctl restart zabbix-agent.service</code></pre>



<p>Copy the Apache config file back to its original location and restart Apache:</p>



<pre class="wp-block-code"><code>cp /opt/zabbix-backup/zabbix.conf /etc/httpd/conf.d/
systemctl restart httpd</code></pre>



<p>You should now be able to log in to the Zabbix web console. You&#8217;ll see that the UI now has the navigation menus on the left hand side and the version number at the bottom has changed to 5.0.1. If the page doesn&#8217;t load correctly, clear your browser cache and cookies and try again.</p>
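<p>You can also confirm the upgrade from the shell. The sketch below prints the installed server version where the binary is present, and degrades gracefully elsewhere.</p>

```shell
# Print the installed Zabbix server version, if the binary exists.
if command -v zabbix_server >/dev/null 2>&1; then
  ver=$(zabbix_server -V 2>&1 | head -n1)
else
  ver="zabbix_server not installed on this machine"
fi
echo "$ver"
```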



<figure class="wp-block-image size-large"><img loading="lazy" width="1920" height="785" src="https://davidshomelab.com/wp-content/uploads/2020/05/image.png" alt="Screenshot of Zabbix 5.0.1 dashboard" class="wp-image-536"/></figure>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/upgrade-zabbix-server-to-5-0-lts/">Upgrade Zabbix Server to 5.0 LTS</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
