<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Linux &#8211; David&#039;s Homelab</title>
	<atom:link href="https://davidshomelab.com/category/linux/feed/" rel="self" type="application/rss+xml" />
	<link>https://davidshomelab.com/</link>
	<description></description>
	<lastBuildDate>Sat, 29 Jul 2023 22:09:09 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.8.10</generator>

<image>
	<url>https://davidshomelab.com/wp-content/uploads/2020/03/cropped-faviconhighrescentre.png</url>
	<title>Linux &#8211; David&#039;s Homelab</title>
	<link>https://davidshomelab.com/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Homelab Upgrade Project #2: A New Start with Proxmox</title>
		<link>https://davidshomelab.com/homelab-upgrade-project-2-a-new-start-with-proxmox/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Sat, 29 Jul 2023 21:53:43 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Virtualisation]]></category>
		<category><![CDATA[homelab upgrade]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[Proxmox]]></category>
		<category><![CDATA[virtualisation]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=784</guid>

				<description><![CDATA[<p>It&#8217;s been several months since I first started planning the homelab upgrade but I&#8217;ve finally got round to taking the first steps. The initial step was to replace my Ubuntu-based hypervisor with Proxmox. As the rest of my home network runs on the same host I couldn&#8217;t afford to leave it offline for long. ... <a title="Homelab Upgrade Project #2: A New Start with Proxmox" class="read-more" href="https://davidshomelab.com/homelab-upgrade-project-2-a-new-start-with-proxmox/" aria-label="More on Homelab Upgrade Project #2: A New Start with Proxmox">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/homelab-upgrade-project-2-a-new-start-with-proxmox/">Homelab Upgrade Project #2: A New Start with Proxmox</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>It&#8217;s been several months since I first started <a href="https://davidshomelab.com/homelab-upgrade-project/" target="_blank" rel="noreferrer noopener">planning the homelab upgrade</a> but I&#8217;ve finally got round to taking the first steps. The first of these was to replace my Ubuntu-based hypervisor with Proxmox. As the rest of my home network runs on the same host, I couldn&#8217;t afford to leave it offline for long. This required some careful planning to ensure a quick and smooth transition. One of the main focuses behind the upgrade was to increase separation between home and lab infrastructure. This means that the rest of the upgrades should be a lot quicker and less disruptive to implement.</p>



<p>As mentioned in my last post, I decided to go with Proxmox. While there are plenty of other options out there, I found Proxmox to be the simplest to run on a single host while still maintaining clustering ability. I also like the built-in backup capabilities. The fact that it&#8217;s based on Debian is also appealing, as it&#8217;s a platform I am already familiar with. I&#8217;d prefer not to have to rebuild the server again for the life of the hardware, so having a stable and reliable base platform is a priority.</p>



<h2>Planning For The Move</h2>



<p>My key priorities during the move were to keep the internet up as much as possible and to not lose data. For the first part, I built a new router VM on my spare (and very noisy) Supermicro 6017R. I chose to use VyOS for this and was sufficiently impressed with it that it has now replaced pfSense as my main router OS. I&#8217;ll have a lot more to say about it in future posts.</p>



<p>I do eventually want to rebuild all my home infrastructure and move more services into containers. However, for now I just wanted to move everything over as-is. I can then tackle one service at a time after researching the best options for each situation.</p>



<p>The migration process actually turned out to be much simpler than I expected. Initially I powered down the VMs and copied the contents of <code>/var/lib/libvirt/images</code> onto an external hard drive. I would then later import these into newly created VMs after migrating to Proxmox.</p>
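<p>For reference, the copy step looked roughly like the following. The mount point and destination path here are examples rather than my exact commands:</p>

<pre class="wp-block-code"><code># Confirm every VM is shut off before copying its disk image
virsh list --all
# Copy the images to the external drive (mounted at /mnt/external here)
rsync -avh --progress /var/lib/libvirt/images/ /mnt/external/vm-images/</code></pre>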



<h2>Installing Proxmox</h2>



<p>The Proxmox installation process is pretty typical for any Linux installation: download the <a href="https://www.proxmox.com/en/downloads" target="_blank" rel="noreferrer noopener">ISO</a>, write it to a USB drive and boot from it. You&#8217;ll just need to select your boot drive, set a root password and configure a static IP address. Once you boot into the new installation, the console will show the IP/port combination for the web interface.</p>
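<p>If you are writing the USB drive from a Linux machine, <code>dd</code> does the job. The ISO name here is an example and <code>/dev/sdX</code> is a placeholder &#8211; double check the device name with <code>lsblk</code> first, as this will destroy whatever is on the target disk:</p>

<pre class="wp-block-code"><code>lsblk
sudo dd if=proxmox-ve_8.0-2.iso of=/dev/sdX bs=4M status=progress conv=fsync</code></pre>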



<h3>Configuring Proxmox updates</h3>



<p>By default, Proxmox expects an enterprise subscription and has the enterprise update repo enabled. In a production environment, a subscription is highly recommended to get access to more thoroughly tested packages and support. For a homelab this is overkill, so we need to enable the community repo instead. In the node settings, go to Updates > Repositories. You will see two repositories under the enterprise.proxmox.com domain. Select each of these repositories and disable them. Next, click Add and click OK on the warning that you don&#8217;t have a subscription. Select &#8220;No-Subscription&#8221; and click OK. In the Updates tab, click Refresh and then Upgrade. Once the updates are installed, reboot the host.</p>
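<p>If you prefer the shell, the same repository change can be made by editing the apt sources directly. This sketch assumes Proxmox VE 8 on Debian bookworm &#8211; substitute the codename (e.g. bullseye for Proxmox 7) as appropriate:</p>

<pre class="wp-block-code"><code># Comment out the enterprise repository
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
# Add the community (no-subscription) repository
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update &amp;&amp; apt dist-upgrade</code></pre>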



<h3>Creating Storage Pools</h3>



<p>Once you have signed back in to the web interface, you&#8217;ll want to create some storage pools. Proxmox will create a pool automatically on your boot disk. However, storing VMs on separate disks will make life easier in the event that you need to reinstall. I have 5 additional disks in my host: two 1TB SSDs, two 4TB SATA HDDs and one 8TB HDD. I used the 1TB disks for a mirrored ZFS pool for VM boot disks. Then I did the same with the 4TB disks for bulk storage. This can be used for any archival data and for VMs which require a lot of space. The 8TB disk will be used for backup storage. Unfortunately I don&#8217;t have enough SATA ports to add a second disk to mirror it, so this pool can&#8217;t be made redundant. That is something I&#8217;d definitely look to reconsider for the next hardware iteration.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="289" src="https://davidshomelab.com/wp-content/uploads/2023/07/image-1024x289.png" alt="Screenshot of Proxmox web interface showing the existing ZFS pools, three pools are created. BACKUP, is 7.99TB with 6.19TB free and 1.80TB used, HDD is 3.99TB with 1.58TB free and 2.40TB allocated, SSD is 996.43GB with 928.40GB free and 68.04GB allocated" class="wp-image-788" srcset="https://davidshomelab.com/wp-content/uploads/2023/07/image-1024x289.png 1024w, https://davidshomelab.com/wp-content/uploads/2023/07/image-300x85.png 300w, https://davidshomelab.com/wp-content/uploads/2023/07/image-768x217.png 768w, https://davidshomelab.com/wp-content/uploads/2023/07/image-1536x434.png 1536w, https://davidshomelab.com/wp-content/uploads/2023/07/image.png 1539w" sizes="(max-width: 1024px) 100vw, 1024px" /><figcaption>The disk configuration is set under the host settings, in Disks > ZFS. If the disk already has a partition on it, go to Disks and wipe it before trying to create the ZFS pool.</figcaption></figure>
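<p>Equivalent pools could also be built from the shell with <code>zpool</code>. The device names below are placeholders &#8211; for real pools use the stable <code>/dev/disk/by-id/</code> paths:</p>

<pre class="wp-block-code"><code># Mirrored 1TB SSDs for VM boot disks
zpool create SSD mirror /dev/disk/by-id/ata-ssd1 /dev/disk/by-id/ata-ssd2
# Mirrored 4TB disks for bulk storage
zpool create HDD mirror /dev/disk/by-id/ata-hdd1 /dev/disk/by-id/ata-hdd2
# Single 8TB backup disk, no redundancy
zpool create BACKUP /dev/disk/by-id/ata-hdd3</code></pre>

<p>Note that pools created this way still need to be registered as Proxmox storage, either in Datacentre > Storage or with <code>pvesm add zfspool</code>.</p>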



<h3>Configuring the Network</h3>



<p>Next, I set up the network configuration. As the router will be running as a VM, I needed to create a WAN bridge and a LAN bridge. I could have used the existing vmbr0 interface as the LAN, but in the future I may want to move the management interface to a dedicated VLAN. To retain flexibility, I created a separate LAN bridge. I also made a separate Lab bridge to keep any lab traffic well away from anything important. I didn&#8217;t add any uplinks to this bridge as it will only be communicating with other VMs.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="148" src="https://davidshomelab.com/wp-content/uploads/2023/07/image-1-1024x148.png" alt="Screenshot of Proxmox network configuration" class="wp-image-789" srcset="https://davidshomelab.com/wp-content/uploads/2023/07/image-1-1024x148.png 1024w, https://davidshomelab.com/wp-content/uploads/2023/07/image-1-300x43.png 300w, https://davidshomelab.com/wp-content/uploads/2023/07/image-1-768x111.png 768w, https://davidshomelab.com/wp-content/uploads/2023/07/image-1.png 1512w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
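<p>Under the hood, the bridge configuration ends up in <code>/etc/network/interfaces</code>. A simplified version of my setup looks something like this &#8211; the physical interface names and addresses are illustrative:</p>

<pre class="wp-block-code"><code>auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
# WAN bridge, uplinked to the modem
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
# Lab bridge with no physical uplink
auto vmbr2
iface vmbr2 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0</code></pre>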






<h3>Separating Resources</h3>



<p>One of my main aims for the homelab rebuild was to have plenty of separation between home infrastructure and the lab. One way to do this is to have separate resource pools. This creates a simple way to apply different permissions to different types of VMs. This will come in particularly handy if I want to experiment with automation in the Proxmox environment. I can create an account which only has permission to touch lab VMs and be confident it won&#8217;t touch infrastructure. Resource pools are configured in Datacentre > Permissions > Pools. I created two pools, Home and Lab.</p>



<h2>Importing Existing VMs to Proxmox</h2>



<p>I had worried about importing the existing VMs, but it turned out to be far simpler than I expected. As I had only copied the disk images, I had to recreate the rest of the VM configuration.</p>



<p>To do this I created the VMs as normal through the web UI. As we already have the boot image, on the OS tab I selected &#8220;Do not use any media&#8221;. On the Disks tab I deleted the default scsi0 device. I configured the CPU, memory and network as appropriate for the particular VM.</p>



<h3>Import the Disk</h3>



<p>Once the VM was created, all we need to do is import the disk images. I plugged in the external drive and checked the device name in the node disk settings. In my case it came up as <code>/dev/sdg</code>. As I hadn&#8217;t partitioned the drive, this was the device I needed to mount. I went to the shell tab and mounted the disk as follows:</p>



<pre class="wp-block-code"><code>mount /dev/sdg /mnt</code></pre>



<p>We then need to import the image into the VM, which we can do as follows:</p>



<pre class="wp-block-code"><code>qm disk import &lt;vm id> &lt;image path> &lt;storage pool></code></pre>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" src="https://davidshomelab.com/wp-content/uploads/2023/07/image-3.png" alt="Screenshot of disk import process.

root@pve01:/HDD/archive# qm disk import 116 fw1.qcow2 PVE01-SSD 
importing disk 'fw1.qcow2' to VM 116 ...
transferred 0.0 B of 20.0 GiB (0.00%)
transferred 204.8 MiB of 20.0 GiB (1.00%)
transferred 415.7 MiB of 20.0 GiB (2.03%)
transferred 620.5 MiB of 20.0 GiB (3.03%)" class="wp-image-791" width="807" height="151" srcset="https://davidshomelab.com/wp-content/uploads/2023/07/image-3.png 742w, https://davidshomelab.com/wp-content/uploads/2023/07/image-3-300x56.png 300w" sizes="(max-width: 807px) 100vw, 807px" /></figure>



<p>The disk import process makes a copy of the image in the storage pool. It is created as a new disk, so don&#8217;t worry about overwriting any existing disks. Depending on the image size and storage speed, the import may take a while. If you use the shell interface in the web UI, navigating away from the shell will cancel the import. You may find it easier to run the commands over SSH or use the <code>nohup</code> command if you have large disks to import.</p>
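<p>For example, using the same VM and pool names as the screenshot above, a long-running import over SSH could be kept alive with:</p>

<pre class="wp-block-code"><code># nohup keeps the import running even if the SSH session drops
nohup qm disk import 116 fw1.qcow2 PVE01-SSD > import.log 2>&amp;1 &amp;
# Watch the transfer progress
tail -f import.log</code></pre>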



<h3>Attach the Disk</h3>



<p>In the VM settings, you should now see an unused disk in the Hardware tab. Click on this disk and click Edit. If it is a Linux VM, change the Bus/Device setting to VirtIO Block for improved disk performance. Windows requires additional drivers to support VirtIO, so if you don&#8217;t have them installed, stick with one of the other options. Once you are happy with the settings, click Add.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" src="https://davidshomelab.com/wp-content/uploads/2023/07/image-4.png" alt="Screenshot of disk add screen. The only changed setting is Bus/Device: VirtIO Block. All other settings are default. The Add button is in the bottom right corner of the box." class="wp-image-793" width="790" height="419" srcset="https://davidshomelab.com/wp-content/uploads/2023/07/image-4.png 747w, https://davidshomelab.com/wp-content/uploads/2023/07/image-4-300x159.png 300w" sizes="(max-width: 790px) 100vw, 790px" /></figure>



<p>Go to the Options tab and double click on Boot Order. You should now see your new disk at the bottom of the list. Check the box to enable the disk and drag it to the top of the list. You are now ready to boot the VM.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="799" height="211" src="https://davidshomelab.com/wp-content/uploads/2023/07/image-5.png" alt="Screenshot of the boot order editing screen. The virtio0 disk is enabled and dragged above the ide2 and net0 devices" class="wp-image-794" srcset="https://davidshomelab.com/wp-content/uploads/2023/07/image-5.png 799w, https://davidshomelab.com/wp-content/uploads/2023/07/image-5-300x79.png 300w, https://davidshomelab.com/wp-content/uploads/2023/07/image-5-768x203.png 768w" sizes="(max-width: 799px) 100vw, 799px" /></figure>
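<p>If you prefer the CLI, the boot order can also be set with <code>qm set</code>. Using the example VM ID from earlier:</p>

<pre class="wp-block-code"><code># Point the boot order at the newly attached VirtIO disk
qm set 116 --boot order=virtio0</code></pre>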



<p>For now I have significantly scaled back what was running, so I just have a router, UniFi controller and file server. I will probably add Pi-hole back at a later date but for now I want to keep things simple. I also decided not to host email at home any more. It&#8217;s stressful enough having to manage mail servers at work and I don&#8217;t want to have to do it at home too. Hosting email in the cloud should mean it&#8217;s online more often. If you&#8217;ve sent me an email and got a delivery failure, I&#8217;m sorry and this shouldn&#8217;t happen in future.</p>



<h2>Next steps</h2>



<p>So far I have got to the point that my existing infrastructure is running on a more manageable platform. This will make it easier to make incremental changes later, so expect more updates in future. I have already replaced my pfSense router with VyOS and implemented the Proxmox Backup Server, but this post is long enough already, so these will be covered in future updates.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/homelab-upgrade-project-2-a-new-start-with-proxmox/">Homelab Upgrade Project #2: A New Start with Proxmox</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Pi-hole failover using Gravity Sync and Keepalived</title>
		<link>https://davidshomelab.com/pi-hole-failover-with-keepalived/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Mon, 30 Aug 2021 14:07:22 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Networking]]></category>
		<category><![CDATA[networking]]></category>
		<category><![CDATA[pi-hole]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=646</guid>

					<description><![CDATA[<p>Traditionally when we assign DNS servers to hosts we assign more than one. For example, we may assign 8.8.8.8 as a primary and 8.8.4.4 as a secondary so in the unlikely event that one is ever down, we have a second one to fall back on. When we set up a single Pi-hole server with ... <a title="Pi-hole failover using Gravity Sync and Keepalived" class="read-more" href="https://davidshomelab.com/pi-hole-failover-with-keepalived/" aria-label="More on Pi-hole failover using Gravity Sync and Keepalived">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/pi-hole-failover-with-keepalived/">Pi-hole failover using Gravity Sync and Keepalived</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Traditionally, when we assign DNS servers to hosts, we assign more than one. For example, we may assign 8.8.8.8 as a primary and 8.8.4.4 as a secondary so in the unlikely event that one is ever down, we have a second one to fall back on. When we set up a single Pi-hole server with no failover we lose this redundancy, meaning that if our Pi-hole ever goes down or we want to restart it for updates, we lose DNS until it is back.</p>


<p>The simplest way to regain this redundancy is just to run 2 independent Pi-hole servers and set them as the primary and fallback DNS servers. However, this approach has a couple of disadvantages. The main problem is that there is no synchronisation of settings between our servers. If we update the block lists on one, we have to remember to update them on the other. We also can&#8217;t guarantee which server clients will connect to, so if one is down, clients may still try to connect and then have to wait for a timeout before querying the second server. This can cause performance drops when one of the servers is offline.</p>


<h2>Using failover to provide a highly available Pi-hole cluster</h2>


<p>We can resolve these problems by linking our pair of Pi-hole servers into a unified failover cluster. The original idea for this came from <a href="https://www.reddit.com/user/Panja0/" target="_blank" rel="noreferrer noopener">u/Panja0</a> on <a href="https://www.reddit.com/r/pihole/comments/d5056q/tutorial_v2_how_to_run_2_pihole_servers_in_ha/" target="_blank" rel="noreferrer noopener">Reddit</a>. However, this approach doesn&#8217;t quite work on Pi-hole 5 and above as the way the data is saved changed. This article is therefore an updated version of that basic idea to remain compatible with the latest Pi-hole versions.</p>


<p>To accomplish our goal we need to solve 2 problems. Firstly, we need a way to keep the blocklists and settings in sync between the 2 Pi-hole servers. We then need a mechanism to monitor the servers and hand over the IP address if the primary fails. If you are unconcerned about DNS timeouts, you can skip the IP failover setup and just have 2 servers on their own IP addresses. For the greatest reliability, you will want to configure both stages.</p>


<h2>What do we need?</h2>


<p>You will need 2 Pi-hole servers, ideally updated to the latest version, but anything greater than 5.0 should work. You can update an existing instance to the latest version by running <code>sudo pihole -up</code>. Note that this will restart the DNS resolver. I will not cover how to set up a Pi-hole server <a href="https://davidshomelab.com/pi-hole-an-open-source-ad-blocker-for-your-home-or-company/" target="_blank" rel="noreferrer noopener">as I have described it previously</a>. That article is a few years old, so you will probably want to use a more recent OS such as Ubuntu 20.04 LTS, but the installation process is more or less unchanged. For a more recent guide, you can look at <a href="https://unixcop.com/how-to-install-pi-hole-ubuntu-20-04/" target="_blank" rel="noreferrer noopener">this article</a> or the <a href="https://github.com/pi-hole/pi-hole/#one-step-automated-install" target="_blank" rel="noreferrer noopener">Pi-hole documentation</a>. I will assume the use of a Debian/Ubuntu based distro for this guide but it should be easily adaptable for CentOS.</p>


<p>You will also need a third IP address which will be shared between the 2 servers.</p>


<h2>Synchronising Pi-hole blocklists</h2>


<p>Pi-hole 5 no longer stores the blocklists in plain text files. Instead, they are stored in a SQLite database at <code>/etc/pihole/gravity.db</code>. This means we can&#8217;t use Panja0&#8217;s original method of using rsync to synchronise the list files. We therefore use Michael Stanclift&#8217;s (<a href="https://github.com/vmstan/" target="_blank" rel="noreferrer noopener">vmstan on GitHub</a>) <a href="https://github.com/vmstan/gravity-sync" target="_blank" rel="noreferrer noopener">gravity-sync script</a>.</p>


<p>This script will synchronise blocklists, exclusions and local DNS records. It won&#8217;t synchronise admin passwords, upstream DNS servers, DHCP leases or statistics. We will therefore need to manually configure the admin password and upstream DNS servers to be the same on both servers. There also isn&#8217;t a practical way to fail over Pi-hole&#8217;s built-in DHCP server, so DHCP will have to run on the router or some other device.</p>
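<p>Setting the admin password to match is quick to do by hand &#8211; run the following on both servers and enter the same password when prompted:</p>

<pre class="wp-block-code"><code>pihole -a -p</code></pre>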


<h3>Preparing the Pi-hole servers</h3>


<p>To begin with, ensure all dependencies are installed. Most will already be installed but if not, we can catch them by running:</p>


<pre class="wp-block-code"><code>apt update &amp;&amp; apt install sqlite3 sudo git rsync ssh</code></pre>


<p>On both servers, we will need a service account with sudo privileges. Create this account by running the following command on each server:</p>


<pre class="wp-block-code"><code>sudo useradd -G sudo -m pi
sudo passwd pi</code></pre>


<p>On some distros (e.g. CentOS), the privileged group is called wheel instead of sudo so you may need to adjust this command.</p>


<h3>Install Gravity-Sync on the primary Pi-hole</h3>


<p>Pick one of the servers to act as the primary and log in to it as the <code>pi</code> user account. Run the installer script:</p>


<pre class="wp-block-code"><code>export GS_INSTALL=primary &amp;&amp; curl -sSL https://gravity.vmstan.com/ | bash</code></pre>


<h3>Install Gravity-Sync on the secondary Pi-hole</h3>


<p>Log in to the other server as the <code>pi</code> account and run:</p>


<pre class="wp-block-code"><code>export GS_INSTALL=secondary &amp;&amp; curl -sSL https://gravity.vmstan.com/ | bash</code></pre>


<p>While the installer on the primary just verifies prerequisites, the secondary needs additional configuration as it performs the active role in replication. When prompted, provide the name of the service account on the primary, the IP address of the primary and configure password-less SSH login.</p>
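<p>If the installer&#8217;s key exchange ever fails, or you prefer to configure the password-less login yourself, the usual approach is the following, run as the <code>pi</code> user on the secondary. The IP address is an example &#8211; use your primary&#8217;s address:</p>

<pre class="wp-block-code"><code>ssh-keygen -t ed25519
ssh-copy-id pi@192.168.1.21
# Should now log in without a password prompt
ssh pi@192.168.1.21 true</code></pre>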


<h3>Verify connectivity and enable synchronisation</h3>


<p>On the secondary Pi-hole, navigate in to the <code>gravity-sync</code> directory and run:</p>


<pre class="wp-block-code"><code>./gravity-sync.sh compare</code></pre>


<p>This should give you output similar to the below. It will not actually perform a sync but will verify connectivity and detect if a sync is required.</p>


<div class="wp-block-image"><figure class="aligncenter size-full"><img loading="lazy" width="418" height="212" src="https://davidshomelab.com/wp-content/uploads/2021/08/image-7.png" alt="$ ./gravity-sync.sh compare
[∞] Initalizing Gravity Sync (3.4.5)
[✓] Loading gravity-sync.conf
[✓] Evaluating arguments: COMPARE
[i] Remote Pi-hole: pi@10.100.4.53
[✓] Connecting to 10.100.4.53
[✓] Hashing the primary Domain Database
[✓] Comparing to the secondary Domain Database
[✓] Hashing the primary Local DNS Records
[✓] Comparing to the secondary Local DNS Records
[i] No replication is required at this time
[✓] Purging redundant backups on secondary Pi-hole instance
[i] 3 days of backups remain (11M)
[∞] Gravity Sync COMPARE aborted after 0 seconds" class="wp-image-680" /></figure></div>


<p>If the primary already has configuration but the secondary is a fresh install you will want to ensure that the first sync is a one-way sync from the primary to the secondary. If you don&#8217;t do this, the script may detect the configuration on the secondary as newer and overwrite the primary. To force a one way sync, run:</p>


<pre class="wp-block-code"><code>./gravity-sync.sh pull</code></pre>


<p>Finally, enable automatic synchronisation by running:</p>


<pre class="wp-block-code"><code>./gravity-sync.sh automate</code></pre>


<p>Set an update frequency from the available options. Setting a lower frequency will increase the risk of changes not syncing but will result in reduced server load so may be more appropriate on busy servers.</p>
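<p>Behind the scenes, <code>automate</code> simply installs a crontab entry for the current user, so you can confirm (or tweak) the schedule with <code>crontab -l</code>. Depending on the script version and the frequency you chose, it should look something like:</p>

<pre class="wp-block-code"><code>*/15 * * * * bash /home/pi/gravity-sync/gravity-sync.sh pull > /dev/null 2>&amp;1</code></pre>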


<p>Synchronisation should now be working. If you do not want to configure IP failover, the setup is now complete. If you have additional servers you want to keep in sync, use the steps for adding a secondary server.</p>


<h2>Configure IP failover</h2>


<p>Now our Pi-hole instances are in sync, we can configure IP failover to direct traffic towards the primary when it is available and switch over to the secondary if the primary ever fails. On both servers, install the required packages:</p>


<pre class="wp-block-code"><code>sudo apt install keepalived libipset13 -y</code></pre>


<p>Next, we need to download a script on both servers to monitor the status of the pihole-FTL service so we can fail over if it ever stops running:</p>


<pre class="wp-block-code"><code>sudo mkdir /etc/scripts
sudo sh -c "curl https://pastebin.com/raw/npw6tcuk | tr -d '\r' &gt; /etc/scripts/chk_ftl"
sudo chmod +x /etc/scripts/chk_ftl</code></pre>


<p>Now, we need to add our keepalived configuration. On the primary, run:</p>


<pre class="wp-block-code"><code>sudo curl https://pastebin.com/raw/nsBnkShi -o /etc/keepalived/keepalived.conf</code></pre>


<p>and on the secondary:</p>


<pre class="wp-block-code"><code>sudo curl https://pastebin.com/raw/HbdsUc07 -o /etc/keepalived/keepalived.conf</code></pre>


<p>We now need to edit the configuration on both servers. On each server, set the following properties:</p>


<figure class="wp-block-table"><table><tbody><tr><td><strong>Property</strong></td><td><strong>Description</strong></td><td><strong>Example Server 1</strong></td><td><strong>Example Server 2</strong></td></tr><tr><td><code>interface</code></td><td>The LAN network interface name. Run <code>ip link</code> to view available interfaces if you are unsure.</td><td>eth0</td><td>eth0</td></tr><tr><td><code>unicast_src_ip</code></td><td>The IP address of the server you are currently configuring.</td><td>192.168.1.21</td><td>192.168.1.22</td></tr><tr><td><code>unicast_peer</code></td><td>The IP address of the other server.</td><td>192.168.1.22</td><td>192.168.1.21</td></tr><tr><td><code>virtual_ipaddress</code></td><td>The virtual IP address shared between the 2 servers, provided in CIDR notation. This must be the same on both servers.</td><td>192.168.1.20/24</td><td>192.168.1.20/24</td></tr><tr><td><code>auth_pass</code></td><td>A shared password (max 8 characters). This must be the same on both servers.</td><td>P@$$w05d</td><td>P@$$w05d</td></tr></tbody></table></figure>
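<p>To give an idea of how those properties fit together, here is a sketch of what the primary&#8217;s <code>keepalived.conf</code> looks like with the table&#8217;s example values filled in. The <code>virtual_router_id</code> and <code>priority</code> values are illustrative &#8211; the downloaded configs set their own, with the primary&#8217;s priority higher than the secondary&#8217;s:</p>

<pre class="wp-block-code"><code>vrrp_script chk_ftl {
    script "/etc/scripts/chk_ftl"
    interval 5
}

vrrp_instance PIHOLE {
    state MASTER
    interface eth0
    virtual_router_id 55
    priority 150
    advert_int 1
    unicast_src_ip 192.168.1.21
    unicast_peer {
        192.168.1.22
    }
    authentication {
        auth_type PASS
        auth_pass P@$$w05d
    }
    virtual_ipaddress {
        192.168.1.20/24
    }
    track_script {
        chk_ftl
    }
}</code></pre>

<p>On the secondary, the state becomes BACKUP, the priority is lowered and the source/peer addresses are swapped.</p>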


<p>We are now ready to start and enable keepalived. On both servers, run:</p>


<pre class="wp-block-code"><code>systemctl enable --now keepalived.service
systemctl status keepalived.service</code></pre>


<p>You should see a status of <code>active</code>. If you don&#8217;t see this, you most likely have an error in your config and the status message should give you a hint as to where it is. Additionally, on the primary server, you should see that it has placed itself into the <code>master</code> role.</p>


<div class="wp-block-image"><figure class="aligncenter size-full"><img src="https://davidshomelab.com/wp-content/uploads/2021/08/image-2.png" alt="● keepalived.service - Keepalive Daemon (LVS and VRRP)
     Loaded: loaded (/lib/systemd/system/keepalived.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2021-08-30 09:37:45 UTC; 9s ago
   Main PID: 82740 (keepalived)
      Tasks: 2 (limit: 4617)
     Memory: 2.1M
     CGroup: /system.slice/keepalived.service
             ├─82740 /usr/sbin/keepalived --dont-fork
             └─82752 /usr/sbin/keepalived --dont-fork
Aug 30 09:37:45 pihole1 Keepalived_vrrp[82752]: Registering Kernel netlink command channel
Aug 30 09:37:45 pihole1 Keepalived_vrrp[82752]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 30 09:37:45 pihole1 Keepalived_vrrp[82752]: Registering gratuitous ARP shared channel
Aug 30 09:37:45 pihole1 Keepalived_vrrp[82752]: (PIHOLE) Entering BACKUP STATE (init)
Aug 30 09:37:45 pihole1 Keepalived_vrrp[82752]: VRRP_Script(chk_ftl) succeeded
Aug 30 09:37:46 pihole1 Keepalived_vrrp[82752]: (PIHOLE) received lower priority (145) advert from 10.100.4.54 - discarding
Aug 30 09:37:47 pihole1 Keepalived_vrrp[82752]: (PIHOLE) received lower priority (145) advert from 10.100.4.54 - discarding
Aug 30 09:37:48 pihole1 Keepalived_vrrp[82752]: (PIHOLE) received lower priority (145) advert from 10.100.4.54 - discarding
Aug 30 09:37:49 pihole1 Keepalived_vrrp[82752]: (PIHOLE) received lower priority (145) advert from 10.100.4.54 - discarding
Aug 30 09:37:49 pihole1 Keepalived_vrrp[82752]: (PIHOLE) Entering MASTER STATE" class="wp-image-654" /><figcaption>If all is well the primary should see adverts from the secondary, detect itself as higher priority and elect itself as active.</figcaption></figure></div>


<h2>Testing Pi-hole failover</h2>


<p>You should now be able to reach the Pi-hole interface on the virtual IP we configured previously (e.g. http://192.168.1.20/admin). In the top right corner of the interface it will show the hostname and you should see that it is the primary server. To check that failover is working, shut down the primary server. If you reload the page you should see that the hostname in the top right has changed to the name of the secondary.</p>


<figure class="wp-block-gallery aligncenter columns-2 is-cropped"><ul class="blocks-gallery-grid"><li class="blocks-gallery-item"><figure><img src="https://davidshomelab.com/wp-content/uploads/2021/08/image-5.png" alt="Pi-hole console indicating we are logged in to pihole1" data-id="657" data-full-url="https://davidshomelab.com/wp-content/uploads/2021/08/image-5.png" data-link="https://davidshomelab.com/?attachment_id=657#main" class="wp-image-657" /></figure></li><li class="blocks-gallery-item"><figure><img src="https://davidshomelab.com/wp-content/uploads/2021/08/image-4.png" alt="Pi-hole console indicating we are logged in to pihole2" data-id="656" data-full-url="https://davidshomelab.com/wp-content/uploads/2021/08/image-4.png" data-link="https://davidshomelab.com/?attachment_id=656#main" class="wp-image-656" /></figure></li></ul><figcaption class="blocks-gallery-caption">The same IP address we used to access pihole1 can be used to access pihole2 once pihole1 has been powered off, demonstrating that the Pi-hole failover has worked. Once pihole1 is back online, when we reload the page, we see that pihole1 is once again active.</figcaption></figure>


<p>You can also test DNS requests using <code>nslookup</code>. The following script will make a DNS lookup every second, allowing you to verify that you receive responses even if the primary is offline:</p>


<pre class="wp-block-code"><code>while true; do
nslookup davidshomelab.com &#091;virtual IP of Pi-hole cluster]
sleep 1
done</code></pre>


<p>If you shut down pihole1 while this script is running you should see that DNS lookups are not interrupted. There may be a brief moment of interruption at the exact moment that the failover occurs but it should resume within around a second.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/pi-hole-failover-with-keepalived/">Pi-hole failover using Gravity Sync and Keepalived</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Tar on Linux &#8211; File Storage and Retrieval</title>
		<link>https://davidshomelab.com/tar-on-linux-file-storage-and-retrieval/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Sat, 05 Jun 2021 04:46:00 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[tar]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=596</guid>

					<description><![CDATA[<p>Tar is a common file archive format on Linux and other UNIX or UNIX-like systems. Like the zip file format, the tar format allows you to combine multiple files into a single archive. Tar provides many options for compression and file formats making it suitable for many use cases. However, it has gained a ... <a title="Tar on Linux &#8211; File Storage and Retrieval" class="read-more" href="https://davidshomelab.com/tar-on-linux-file-storage-and-retrieval/" aria-label="More on Tar on Linux &#8211; File Storage and Retrieval">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/tar-on-linux-file-storage-and-retrieval/">Tar on Linux &#8211; File Storage and Retrieval</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Tar is a common file archive format on Linux and other UNIX or UNIX-like systems. Like the zip file format, the tar format allows you to combine multiple files into a single archive. Tar provides many options for compression and file formats, making it suitable for many use cases. However, it has gained a reputation for being a difficult command to learn and use. This guide contains examples for many common (and some less common) use cases for tar and can serve as a reference if you want to make sure you are using the correct syntax.</p>


<div class="wp-block-image"><figure class="aligncenter size-large"><img src="https://imgs.xkcd.com/comics/tar.png" alt="XKCD comic depicting the confusing nature of tar flags. If you are ever in this situation, tar --help is a valid tar command which requires very little memorisation" /><figcaption>Image copyright Randall Munroe, originally published on <a href="https://xkcd.com/1168/" target="_blank" rel="noreferrer noopener">XKCD</a></figcaption></figure></div>


<h2>What is tar?</h2>


<p>Tar, originally standing for <strong>t</strong>ape <strong>ar</strong>chive, is the traditional archiving utility on most UNIX-like systems. On modern systems without tape drives, you would usually save archives to a file instead of tape. Unlike the zip format, which both combines and compresses files, tar only combines files. Compression is performed separately using a utility such as gzip or bzip2. However, you can use the tar command to perform both the archiving and compression in one go.</p>
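<p>As a minimal sketch of that separation, the following performs the archiving and the compression as two explicit steps. Filenames are illustrative and the commands assume a scratch directory:</p>

```shell
# Sketch: tar combines, gzip compresses - two separate steps.
# Run in a scratch directory; file names are illustrative.
cd "$(mktemp -d)"
echo "hello" > file1

tar cf - file1 | gzip > example.tar.gz   # cf - writes the archive to stdout
gunzip -c example.tar.gz | tar tf -      # decompress the stream, list the archive
```

<p>The same round trip is what <code>tar czf</code> / <code>tar xzf</code> do for you in a single command.</p>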


<h2>Creating an archive</h2>


<p>You can create an archive using the <code>--create</code> flag (<code>c</code> in the short format). One caveat is that if the archive file already exists, attempting to create a new archive will overwrite it. This can cause problems if you invert the order of your source files and your archive as you may end up replacing your source files. The easiest way to remember this is to think about the command using long form flags. For example:</p>


<pre class="wp-block-code"><code>tar --create --file example.tar file1 file2 file3</code></pre>


<p>In this example it should be clear that the <code>--file</code> (<code>f</code>) flag indicates the output file. The input files are specified at the end of the command. Once you understand the syntax, you can use the short form version of the command:</p>


<pre class="wp-block-code"><code>tar cf example.tar file1 file2 file3</code></pre>


<p>Files will be added to the archive using the full path as provided, excluding the leading /. For example, if you ran</p>


<pre class="wp-block-code"><code>tar cf example.tar /tmp/{file1,file2}</code></pre>


<p>and then extracted the archive, a folder called &#8220;tmp&#8221; would be created in your extraction location. The files would be placed in that folder.</p>


<h2>Extracting an archive</h2>


<p>Once you have created an archive, the next thing you will want to do is extract it. The command to do this is almost the same as the command to create an archive, except that the <code>--extract</code> (<code>x</code>) flag is used:</p>


<pre class="wp-block-code"><code>tar xf example.tar</code></pre>


<p>If you just need to extract certain files, you can specify them at the end of the command:</p>


<pre class="wp-block-code"><code>tar xf example.tar file1 file2</code></pre>


<p>If you are specifying files, you will need to give the full path of each file as it is stored in the archive. For example:</p>


<pre class="wp-block-code"><code>tar xf example.tar tmp/example/file1</code></pre>


<p>Extracted file paths are always relative to the current working directory.</p>
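<p>If you want the files to land somewhere other than the current directory, the <code>--directory</code> (<code>C</code>) flag changes the extraction root. A small sketch with illustrative directory names:</p>

```shell
# Sketch: extract into a different directory with -C (--directory).
# src/ and dest/ are illustrative names.
cd "$(mktemp -d)"
mkdir src dest
touch src/file1
tar cf archive.tar src/file1
tar xf archive.tar -C dest   # paths are now relative to dest/ instead of the cwd
ls dest/src
```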


<h2>All about compression</h2>


<p>As discussed, tar archives are uncompressed by default but compression can be enabled if desired. The short options for some of the compression algorithms can be a little confusing, but there are also long options. The table below shows all the available compression algorithms.</p>


<figure class="wp-block-table"><table><tbody><tr><td><strong>Algorithm</strong></td><td><strong>Short Option</strong></td><td><strong>Long Option</strong></td></tr><tr><td>gzip</td><td>z</td><td><code>--gzip</code></td></tr><tr><td>bzip2</td><td>j</td><td><code>--bzip2</code></td></tr><tr><td>xz</td><td>J</td><td><code>--xz</code></td></tr><tr><td>lzip</td><td></td><td><code>--lzip</code></td></tr><tr><td>lzma</td><td></td><td><code>--lzma</code></td></tr><tr><td>lzop</td><td></td><td><code>--lzop</code></td></tr><tr><td>zstd</td><td></td><td><code>--zstd</code></td></tr></tbody></table></figure>


<p>Additionally, if you use the <code>--auto-compress</code> (<code>a</code>) flag, tar will automatically detect the right compression algorithm based on the file extension:</p>


<pre class="wp-block-code"><code>tar caf example.tar.gz file1</code></pre>


<p>When extracting an archive, the compression algorithm can be specified explicitly but, if omitted, will be detected automatically.</p>
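<p>A short round trip illustrating both behaviours (filenames are illustrative):</p>

```shell
# Sketch: compression chosen from the .gz suffix on create,
# detected automatically on extract with no compression flag at all.
cd "$(mktemp -d)"
touch file1
tar caf example.tar.gz file1   # a: gzip inferred from the .gz extension
rm file1
tar xf example.tar.gz          # compression detected from the archive itself
ls file1
```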


<h2>Inspecting the contents of tar archives</h2>


<p>If you want to list the contents of an archive you can use the <code>--list</code> (<code>t</code>) flag. You will still need to use the <code>f</code> flag to indicate that the archive you want to list is a file:</p>


<pre class="wp-block-code"><code>tar tf example.tar
------------------------------------
file1
file2
file3</code></pre>


<p>If the archive contains subfolders, the full path will be listed. This can be useful if you need to modify an archive or extract only a couple of files.</p>
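<p>Adding the <code>v</code> flag to the listing produces a long, <code>ls -l</code> style listing with permissions, owner, size and modification time. A small self-contained sketch:</p>

```shell
# Sketch: tvf gives a detailed listing (permissions, owner, size, mtime, name).
cd "$(mktemp -d)"
touch file1
tar cf example.tar file1
tar tvf example.tar
```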


<h2>Modifying tar archives</h2>


<h3>Adding more files</h3>


<p>You can add additional files to an existing archive using the <code>--append</code> (<code>r</code>) flag. This flag uses the same syntax as <code>--create</code>:</p>


<pre class="wp-block-code"><code>tar rf example.tar file3</code></pre>


<h3>Updating an archive</h3>


<p>The <code>--update</code> (<code>u</code>) flag is used to update an archive by only appending files that either don&#8217;t exist in the archive or which have been modified since the archive was created. Updated files will not actually replace the old copies in the archive; instead, you will get multiple copies of the file with the same name. This makes it ambiguous which copy of the file you will get when extracting. When an archive is extracted, files are processed in the order that they appear in the archive, so the newest copy of a file will always be the one you end up with unless another version is specified. You can use the <code>--occurrence</code> flag to specify which version of the file to use. For example, the following command will extract the 3rd instance of file1:</p>


<pre class="wp-block-code"><code>tar xf example.tar file1 --occurrence=3</code></pre>
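<p>The behaviour is easy to demonstrate with a scratch archive (the <code>sleep</code> just guarantees a newer timestamp so <code>u</code> sees the file as modified):</p>

```shell
# Sketch: u appends a newer copy rather than replacing the old one.
cd "$(mktemp -d)"
echo one > file1
tar cf example.tar file1
sleep 1                    # ensure the next write gets a newer timestamp
echo two > file1
tar uf example.tar file1   # appends the newer copy; the old one stays
tar tf example.tar         # file1 is now listed twice
```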


<h3>Removing a file</h3>


<p>You can remove files from an <em>uncompressed</em> archive using the <code>--delete</code> flag. There is no way to remove a file from a compressed archive without decompressing it first.</p>


<pre class="wp-block-code"><code>tar f example.tar --delete file1</code></pre>


<h2>Other useful actions</h2>


<p>When creating or extracting an archive, you might want to see the names of the files being processed. You can do this using the <code>--verbose</code> (<code>v</code>) flag.</p>


<p>You can combine multiple archives together using <code>--concatenate</code> (<code>A</code>):</p>


<pre class="wp-block-code"><code>tar Af example1.tar example2.tar example3.tar</code></pre>


<p>In the above example, the contents of example2.tar and example3.tar will be appended to example1.tar.</p>
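<p>Sketched end to end with scratch archives:</p>

```shell
# Sketch: A appends one archive's contents to another.
cd "$(mktemp -d)"
touch file1 file2
tar cf first.tar file1
tar cf second.tar file2
tar Af first.tar second.tar   # append second.tar's contents to first.tar
tar tf first.tar              # now lists file1 and file2
```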


<p>There are many other options which can be used to accomplish different tasks but the above commands should cover most common situations. You can find the full list of commands in the <a href="https://man7.org/linux/man-pages/man1/tar.1.html" target="_blank" rel="noreferrer noopener">man page</a> and, even if you are using different options, you can use the above commands as skeletons to be modified for the more advanced options.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/tar-on-linux-file-storage-and-retrieval/">Tar on Linux &#8211; File Storage and Retrieval</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>UniFi Controller Setup on Ubuntu 20.04LTS</title>
		<link>https://davidshomelab.com/unifi-controller-setup-on-ubuntu-20-04lts/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Sat, 29 May 2021 20:23:54 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Networking]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=571</guid>

					<description><![CDATA[<p>Ubiquiti&#8217;s UniFi product lineup has seen enormous growth in popularity due to its range of high quality access points. While you will usually find professional grade access points in businesses instead of homes, they provide a benefit in any building. This is especially true for large homes or older buildings with thick walls where a ... <a title="UniFi Controller Setup on Ubuntu 20.04LTS" class="read-more" href="https://davidshomelab.com/unifi-controller-setup-on-ubuntu-20-04lts/" aria-label="More on UniFi Controller Setup on Ubuntu 20.04LTS">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/unifi-controller-setup-on-ubuntu-20-04lts/">UniFi Controller Setup on Ubuntu 20.04LTS</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Ubiquiti&#8217;s UniFi product lineup has seen enormous growth in popularity due to its range of high quality access points. While you will usually find professional grade access points in businesses instead of homes, they provide a benefit in any building. This is especially true for large homes or older buildings with thick walls where a single AP isn&#8217;t enough.</p>



<p>Many larger homes end up using multiple separate access points with a mix of repeaters. This results in a confusing mix of networks with devices connecting to a sub-optimal AP, causing weak signal. UniFi resolves this by managing all access points from a central controller and treating them as a single network. This saves you having to join your devices to several different networks and allows the APs to intelligently hand devices off to each other as you roam around the house. This ensures that you are always communicating with the AP that has the strongest signal.</p>



<p>UniFi can act solely as an access point without performing NAT. This means that, unlike mesh WiFi systems which are traditionally used to expand coverage in a home setting, you shouldn&#8217;t run into communication issues between wireless and wired devices in your home.</p>



<h2>Why not another product?</h2>



<p>While there are plenty of other good products on the market, there are several reasons why UniFi is a strong contender. When compared to other commercial solutions, UniFi hardware is priced very reasonably and is widely available from consumer outlets. This means you don&#8217;t need to procure hardware through trade-specific distribution networks. One other advantage is the simplicity of setting up devices. Once you have the controller set up, you can add new devices simply by connecting them to the network. They will appear in the dashboard and you can configure them in just a few clicks.</p>



<p>For me, the flexibility around the controller software is the key selling point. Some providers require you to buy an expensive hardware controller in addition to the APs. Other systems can only be managed from the cloud, which some people may view as a security risk. The UniFi controller can instead be installed on any Windows, Mac or Ubuntu PC (or VM), allowing you to run it on hardware you already have. That&#8217;s not to say that you can&#8217;t run it in the cloud or have a dedicated controller. Ubiquiti provides various models of <a href="https://www.amazon.co.uk/gp/product/B017T2QB22/ref=as_li_qf_asin_il_tl?ie=UTF8&amp;tag=davidshomelab-21&amp;creative=6738&amp;linkCode=as2&amp;creativeASIN=B017T2QB22&amp;linkId=61d8043b767c287dd50062f2565f1f26" target="_blank" rel="noreferrer noopener sponsored nofollow">CloudKey</a> (paid link) for users who wish to avoid the effort of building their own controller. The basic model will be sufficient for any home or office with fewer than a couple of dozen managed devices. Additionally, while not owned by Ubiquiti, the <a href="https://hostifi.com/" target="_blank" rel="noreferrer noopener">HostiFi</a> company offers cloud-hosted controllers requiring no on-premises management hardware.</p>



<h2>Setting up the UniFi controller on Ubuntu</h2>



<p>While the controller software can be installed on any PC, a dedicated server will simplify management. Windows and Ubuntu are both supported but Ubuntu is preferred due to its lack of licensing costs and smaller footprint. The instructions provided here are for Ubuntu Server 20.04. While an LTS version of Ubuntu Server is preferred, any recent version of Ubuntu Server or Desktop can be used.</p>



<p>The system requirements depend on the number of managed devices but 1 CPU core, 2GB of RAM and 25GB of storage should be enough in most cases. The UniFi controller software isn&#8217;t in the main Ubuntu repos so we need to add the correct repo. We must also install the GPG keys so the repo is trusted:</p>



<pre class="wp-block-code"><code>echo 'deb https://www.ui.com/downloads/unifi/debian stable ubiquiti' | sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list
sudo wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg https://dl.ui.com/unifi/unifi-repo.gpg </code></pre>



<p>Next, update the apt cache and install the UniFi controller along with its prerequisites:</p>



<pre class="wp-block-code"><code>sudo apt update &amp;&amp; sudo apt install ca-certificates openjdk-8-jdk apt-transport-https unifi -y</code></pre>



<p>Once the install is finished, check that the service is running:</p>



<pre class="wp-block-code"><code>systemctl status unifi.service</code></pre>



<p>If the service shows as failed or not running, restart the service with:</p>



<pre class="wp-block-code"><code>sudo systemctl restart unifi.service</code></pre>



<h2>Configuring the UniFi controller</h2>



<p>Check the status again and verify that the service is running. Once everything is up and running, open a web browser and go to https://[server&#8217;s IP address]:8443. You will need to accept the self-signed certificate warning.</p>



<p>Next, choose a name for your controller and accept the terms and conditions.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1093" height="389" src="https://davidshomelab.com/wp-content/uploads/2021/05/image.png" alt="Unifi Controller Setup Step 1 of 6
Name your Controller" class="wp-image-577"/></figure>



<p>On the next screen, sign in with your UniFi account. If you want to be able to access your controller through Unifi&#8217;s cloud enter your login details here.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1093" height="389" src="https://davidshomelab.com/wp-content/uploads/2021/05/image-1.png" alt="Unifi Controller Setup Step 2 of 6
Sign in with your Ubiquiti Account" class="wp-image-578"/></figure>



<p>If you want to keep your controller local to your network and use a local account instead, click &#8220;Switch to Advanced Setup&#8221;, uncheck both checkboxes and set up a local username and password.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1093" height="682" src="https://davidshomelab.com/wp-content/uploads/2021/05/image-2.png" alt="Unifi Controller Setup Step 2 of 6 (Advanced)
Advanced remote and local access" class="wp-image-579"/></figure>



<p>On the next screen, leave auto backup and network optimisation enabled.</p>



<p>Installing on an Ubuntu server is one of the simplest and cheapest ways to deploy the UniFi controller.</p>



<p>If you already have your devices, you can now choose to set them up. If you are just setting up the controller in preparation for receiving the devices, you can add them later.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1061" height="479" src="https://davidshomelab.com/wp-content/uploads/2021/05/image-4.png" alt="Unifi Controller Setup Step 4 of 6
Devices Setup" class="wp-image-581"/></figure>



<p>Enter a WiFi network name and password. If you plan to have multiple SSIDs you can add the rest later, just enter your primary one here.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1061" height="348" src="https://davidshomelab.com/wp-content/uploads/2021/05/image-5.png" alt="Unifi Controller Setup Step 5 of 6
WiFi Setup" class="wp-image-582"/></figure>



<p>Finally, confirm your settings and set your location and time zone.</p>



<figure class="wp-block-image size-large is-style-default"><img loading="lazy" width="1061" height="348" src="https://davidshomelab.com/wp-content/uploads/2021/05/image-6.png" alt="Unifi Controller Setup Step 6 of 6
Review Configuration" class="wp-image-583"/></figure>



<p>The wizard will redirect you to the main dashboard and your network will be set up. </p>



<p>There is plenty more you can do with UniFi hardware such as having multiple SSIDs on separate vlans, captive portal and MAC address based vlan assignments. Come back soon for more guides.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/unifi-controller-setup-on-ubuntu-20-04lts/">UniFi Controller Setup on Ubuntu 20.04LTS</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Install Zabbix Proxy on pfSense to Monitor Hosts in Remote Sites</title>
		<link>https://davidshomelab.com/install-zabbix-proxy-on-pfsense-to-monitor-hosts-in-remote-sites/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Tue, 23 Jun 2020 17:28:53 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Networking]]></category>
		<category><![CDATA[monitoring]]></category>
		<category><![CDATA[pfsense]]></category>
		<category><![CDATA[zabbix]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=528</guid>

					<description><![CDATA[<p>In a multi-site network you will most likely have VPNs connecting your sites, allowing remote connectivity to your main site. However, as these links are not necessarily 100% reliable, a dropped link could potentially result in the loss of monitoring data for a large number of hosts as the agent does not store any data ... <a title="Install Zabbix Proxy on pfSense to Monitor Hosts in Remote Sites" class="read-more" href="https://davidshomelab.com/install-zabbix-proxy-on-pfsense-to-monitor-hosts-in-remote-sites/" aria-label="More on Install Zabbix Proxy on pfSense to Monitor Hosts in Remote Sites">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/install-zabbix-proxy-on-pfsense-to-monitor-hosts-in-remote-sites/">Install Zabbix Proxy on pfSense to Monitor Hosts in Remote Sites</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In a multi-site network you will most likely have VPNs connecting your sites, allowing remote connectivity to your main site. However, as these links are not necessarily 100% reliable, a dropped link could potentially result in the loss of monitoring data for a large number of hosts as the agent does not store any data by itself. By using Zabbix Proxies on remote sites it is possible to configure the agents to communicate only with the local proxy which will then store the data and relay it back to the main server when a link is available. Zabbix Proxy can be installed on any operating system that the Zabbix Server can be installed on but it can also be installed as a package on pfSense. This has the advantages that it does not require maintaining an additional server for each site and provides a simpler interface for configuring the proxy than installing it on a Linux server would.</p>


<h2>Install the Zabbix Proxy plug-in</h2>


<p>In the pfSense management console, go to System &gt; Package Manager &gt; Available Packages and search for &#8220;zabbix-proxy&#8221;. You will see multiple different versions. Unlike the agent, the proxy must be the same version as the server so select the appropriate one for your server. In this case we will install zabbix-proxy5.</p>


<h2>Connect the proxy to the server</h2>


<p>In the pfSense console go to Services &gt; Zabbix Proxy 5.0 and configure the proxy. Set the server to the IP address or hostname of the Zabbix server and the hostname to whatever you will call the proxy within the Zabbix console. The other options can be left as default but can be modified if desired. For example, if you only want the proxy to listen on one interface, set the listen IP to the pfSense IP address for the interface you want to listen on. If you have a large number of proxies you can increase the config frequency to reduce load on the server, but doing so will increase the propagation time for any changes you make to your setup.</p>


<p>While testing or for small deployments you may wish to lower the config frequency from the default of 1 hour. This value determines how often the proxy will contact the server to get the current configuration, including the hosts to monitor and the data it is supposed to collect.</p>


<p>Finally, enable the service and save the settings.</p>


<figure class="wp-block-image size-large"><img loading="lazy" width="1002" height="763" src="https://davidshomelab.com/wp-content/uploads/2020/06/image.png" alt="Screenshot of Zabbix proxy settings. Non-default settings:
Enable: Yes
Server: 172.16.24.8
Hostname: proxy-site-a
Config Frequency: 1025" class="wp-image-550" /></figure>


<p>Now the proxy is configured it is time to tell the server where to find it. From the Zabbix console, go to Administration &gt; Proxies. Click &#8220;Create proxy&#8221; to add the new proxy. Enter the name (this is the hostname you configured in the proxy) and the IP address of the proxy.</p>


<figure class="wp-block-image size-large"><img loading="lazy" width="567" height="286" src="https://davidshomelab.com/wp-content/uploads/2020/06/image-1.png" alt="Screenshot of add-proxy screen in Zabbix console.
Proxy Name: proxy-site-a
Proxy Mode: Active
Proxy Address: 172.16.24.1
Description: Site A Zabbix Proxy" class="wp-image-552" /></figure>


<p>Once you have added the proxy, reload the page and check that the proxy shows up in the list and it has a &#8220;last seen&#8221; value. If all is well and the listing shows something similar to the below, the proxy is set up and we are ready to configure hosts to talk to it.</p>


<figure class="wp-block-image size-large"><img loading="lazy" width="853" height="75" src="https://davidshomelab.com/wp-content/uploads/2020/06/image-2.png" alt="Screenshot showing proxy in Zabbix proxy list with last seen value of 1 second" class="wp-image-553" /></figure>


<h2>Configure the proxy to monitor hosts</h2>


<p>Configuring an agent to use a proxy is very similar to configuring it to talk to the server directly. For this example I will modify the CentOS host described in <a href="https://davidshomelab.com/install-zabbix-agent-to-monitor-windows-and-linux-hosts/" target="_blank" rel="noreferrer noopener">my post on configuring Zabbix agents</a> to get it to talk via the proxy.</p>


<p>Log in to the server and open <code>/etc/zabbix/zabbix_agentd.conf</code>. Change the Server and ServerActive values to the IP address or hostname of the Zabbix proxy, then save and close the file. Restart the Zabbix agent:</p>


<pre class="wp-block-code"><code>systemctl restart zabbix-agent.service</code></pre>
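<p>If you prefer to script the change, the edit can be sketched with <code>sed</code>. The snippet below runs against a throwaway copy of the config; the addresses match the examples in the screenshots and are placeholders for your own (on the real host you would target the agent config file under <code>/etc/zabbix/</code>):</p>

```shell
# Sketch of the two-line change, applied to a throwaway copy of the config.
# 172.16.24.8 (server) and 172.16.24.1 (proxy) are placeholder addresses.
conf=$(mktemp)
printf 'Server=172.16.24.8\nServerActive=172.16.24.8\n' > "$conf"
sed -i 's/^Server=.*/Server=172.16.24.1/; s/^ServerActive=.*/ServerActive=172.16.24.1/' "$conf"
cat "$conf"
```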


<p>Now log in to the Zabbix console and go to Configuration &gt; Hosts and select the host you want to monitor via the proxy. In the host settings, click the drop-down next to &#8220;Monitored by proxy&#8221; and select the proxy you have configured the host to talk to. Click &#8220;Update&#8221; to save your settings.</p>


<figure class="wp-block-image size-large"><img loading="lazy" width="847" height="500" src="https://davidshomelab.com/wp-content/uploads/2020/06/image-3.png" alt="Zabbix host configuration screen with Monitored by proxy setting set to proxy-site-a" class="wp-image-554" /></figure>


<p>When you return to the hosts list you will probably see that the host appears to be offline. This is because the proxy will not find out that it is meant to be monitoring the host until the next time it syncs its settings with the server. If you have set a reasonable update frequency you will just need to wait for that period of time and it should start to appear and collect data automatically.</p>


<p>If you need to force the proxy to update immediately, log in to pfSense and go to Status &gt; Services and restart the zabbix_proxy service. When the service starts up this will force a config sync and you should see the host come online within a couple of seconds after the service restarts.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/install-zabbix-proxy-on-pfsense-to-monitor-hosts-in-remote-sites/">Install Zabbix Proxy on pfSense to Monitor Hosts in Remote Sites</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Upgrade Zabbix Server to 5.0 LTS</title>
		<link>https://davidshomelab.com/upgrade-zabbix-server-to-5-0-lts/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Sat, 30 May 2020 16:29:34 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[centos8]]></category>
		<category><![CDATA[Linux monitoring]]></category>
		<category><![CDATA[zabbix]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=535</guid>

					<description><![CDATA[<p>Earlier this month, Zabbix server 5.0 LTS was released. Version 5.0 brings an updated UI, additional cloud integrations and support for the latest Zabbix agent (the old agent is still supported so you don&#8217;t need to upgrade the agent on all your hosts immediately). If you are using Zabbix Proxy to monitor some of your ... <a title="Upgrade Zabbix Server to 5.0 LTS" class="read-more" href="https://davidshomelab.com/upgrade-zabbix-server-to-5-0-lts/" aria-label="More on Upgrade Zabbix Server to 5.0 LTS">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/upgrade-zabbix-server-to-5-0-lts/">Upgrade Zabbix Server to 5.0 LTS</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Earlier this month, Zabbix server 5.0 LTS was released. Version 5.0 brings an updated UI, additional cloud integrations and support for the latest Zabbix agent (the old agent is still supported so you don&#8217;t need to upgrade the agent on all your hosts immediately). If you are using Zabbix Proxy to monitor some of your hosts, you will need to update it at the same time, as the proxy must have the same major version number as the server.</p>



<p>I will be upgrading the Zabbix server I set up in my previous tutorials on Zabbix so this guide assumes a CentOS 8 operating system. With the exception of the package manager commands, the steps should be more or less the same on any Linux distribution.</p>



<h2>Update your system</h2>



<p>Before making major changes it is worth making sure the base system is up to date to make sure there aren&#8217;t any issues arising from using older packages. Run the following commands to update and reboot the system.</p>



<pre class="wp-block-code"><code>dnf upgrade
reboot</code></pre>



<h2>Back up your current installation</h2>



<p>While the upgrade process shouldn&#8217;t cause too many problems, it will make changes to your database and overwrite the old installation. In the event that this process runs into an error or the machine crashes or loses power while the upgrade is in progress, having a backup of your database and configuration files will make recovery a lot easier.</p>



<p>Create a directory to store all your backups:</p>



<pre class="wp-block-code"><code>mkdir /opt/zabbix-backup
cd /opt/zabbix-backup</code></pre>



<p>Now we will need to stop the Zabbix server to ensure that the database remains in a consistent state while we are performing the backups. Run the following commands to stop Zabbix and write the database to a file:</p>



<pre class="wp-block-code"><code>systemctl stop zabbix-server.service
mysqldump -u&#91;zabbix-user] -p &#91;zabbix database] &gt; zabbix-database.sql</code></pre>



<p>For the <code>mysqldump</code> command, you will need to enter the names of the Zabbix database user and the database in the appropriate places. If you do not remember what these are, check in <code>/etc/zabbix/zabbix_server.conf</code>. The lines you are looking for are <code>DBName</code>, <code>DBUser</code> and <code>DBPassword</code>.</p>
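<p>If you just want to see those three values quickly, a <code>grep</code> along these lines works. It is shown here against an inline sample config; on the real server you would point it at <code>/etc/zabbix/zabbix_server.conf</code>:</p>

```shell
# Sketch: pull the three database settings out of the server config.
# The sample file and its values are illustrative.
cd "$(mktemp -d)"
cat > zabbix_server.conf <<'EOF'
DBName=zabbix
DBUser=zabbix
DBPassword=changeme
EOF
grep -E '^DB(Name|User|Password)=' zabbix_server.conf
```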



<p>You will also want to back up your configuration files as follows:</p>



<pre class="wp-block-code"><code>cp /etc/zabbix/zabbix_server.conf /opt/zabbix-backup/
cp /etc/httpd/conf.d/zabbix.conf  /opt/zabbix-backup/
cp -R /usr/share/zabbix/ /opt/zabbix-backup/
cp -R /usr/share/doc/zabbix-* /opt/zabbix-backup/</code></pre>



<h2>Update Zabbix</h2>



<p>Install the new repo by running the following commands:</p>



<pre class="wp-block-code"><code>rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/8/x86_64/zabbix-release-5.0-1.el8.noarch.rpm
dnf clean all </code></pre>



<p>If you are not running CentOS 8, check the <a href="https://www.zabbix.com/download" target="_blank" rel="noreferrer noopener">Zabbix download page</a> for the link to the correct repo for your distribution and the required commands to install it.</p>



<p>This will remove the old 4.0 repo from your system and replace it with the 5.0 repo. Now you can just upgrade your Zabbix packages as follows:</p>



<pre class="wp-block-code"><code>yum upgrade 'zabbix*'</code></pre>



<p>If the update installs without errors, restart the Zabbix server and agent as follows:</p>



<pre class="wp-block-code"><code>systemctl restart zabbix-server.service
systemctl restart zabbix-agent.service</code></pre>



<p>Copy the Apache config file back to its original location and restart Apache:</p>



<pre class="wp-block-code"><code>cp /opt/zabbix-backup/zabbix.conf /etc/httpd/conf.d/
systemctl restart httpd</code></pre>



<p>You should now be able to log in to the Zabbix web console. You&#8217;ll see that the UI now has the navigation menus on the left hand side and the version number at the bottom has changed to 5.0.1. If the page doesn&#8217;t load correctly, clear your browser cache and cookies and try again.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1920" height="785" src="https://davidshomelab.com/wp-content/uploads/2020/05/image.png" alt="Screenshot of Zabbix 5.0.1 dashboard" class="wp-image-536"/></figure>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/upgrade-zabbix-server-to-5-0-lts/">Upgrade Zabbix Server to 5.0 LTS</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Updating Nextcloud to 17.0.2</title>
		<link>https://davidshomelab.com/updating-nextcloud-to-17-0-2/</link>
		
		<dc:creator><![CDATA[David]]></dc:creator>
		<pubDate>Thu, 26 Dec 2019 09:28:18 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[Networking]]></category>
		<category><![CDATA[files]]></category>
		<category><![CDATA[groupware]]></category>
		<category><![CDATA[nextcloud]]></category>
		<category><![CDATA[storage]]></category>
		<category><![CDATA[web]]></category>
		<guid isPermaLink="false">https://davidshomelab.com/?p=238</guid>

					<description><![CDATA[<p>Last week Nextcloud released version 17.0.2. While the X.0.2 release doesn&#8217;t have as many flashy new features as the first couple of releases in a major version it typically marks the point at which the Nextcloud team consider the release stable for production. For that reason it is sensible to hold off on feature updates ... <a title="Updating Nextcloud to 17.0.2" class="read-more" href="https://davidshomelab.com/updating-nextcloud-to-17-0-2/" aria-label="More on Updating Nextcloud to 17.0.2">Read more</a></p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/updating-nextcloud-to-17-0-2/">Updating Nextcloud to 17.0.2</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Last week Nextcloud released version 17.0.2. While the X.0.2 release doesn&#8217;t have as many flashy new features as the first couple of releases in a major version it typically marks the point at which the Nextcloud team consider the release stable for production. For that reason it is sensible to hold off on feature updates until this point. Fortunately updating Nextcloud is typically a fairly simple and painless experience.</p>


<h2>Preparing to update</h2>


<p>The most important thing to do before updating is to create a backup. There are a few ways to do this: if you are running Nextcloud in a virtual machine, you can simply create a snapshot of the VM. If you are running on a physical server, the two things you need to back up are the webroot and the database.</p>


<p>The first step is to enable maintenance mode. This stops users from logging in to Nextcloud and reduces the risk of anything changing while you&#8217;re running the backup, which would leave the installation in an inconsistent state in the event that you need to restore. You can enter maintenance mode by running the following command:</p>


<pre class="wp-block-code"><code>sudo -u www-data php /var/www/html/nextcloud/occ maintenance:mode --on</code></pre>


<div class="wp-block-image"><figure class="alignright size-large"><img loading="lazy" width="342" height="423" src="https://davidshomelab.com/wp-content/uploads/2019/12/maintenance-mode.png" alt="" class="wp-image-239" /></figure></div>


<p>Note that this must be run as the user the web server runs as. The above command will work for Ubuntu, but if you are using a different distribution the web server user might not be &#8220;www-data&#8221;, so you will need to adjust accordingly. If you are unsure, run <code>ls -l /var/www/html/nextcloud</code> and see who owns the files. Additionally, the occ program lives in the Nextcloud webroot, so if you have installed Nextcloud somewhere other than /var/www/html/nextcloud you will also need to adjust that part of the command.</p>
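<p>If you would rather not hard-code the user name, you can read it from the ownership of the <code>occ</code> file itself. This is a sketch using GNU <code>stat</code>; adjust the webroot path if yours differs:</p>


<pre class="wp-block-code"><code># Determine the web server user from the owner of occ, then enable maintenance mode
WEB_USER=$(stat -c '%U' /var/www/html/nextcloud/occ)
sudo -u "$WEB_USER" php /var/www/html/nextcloud/occ maintenance:mode --on</code></pre>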


<p>Once you have run this command go to your Nextcloud URL in a browser and verify that logins have been disabled. You will see a warning that the host is in maintenance mode.</p>


<p>You can now run the following commands to back up your files and database. As before, take note to adjust any file paths or usernames:</p>


<pre class="wp-block-code"><code>sudo tar -czf NextCloud-backup-$(date -I).tar.gz /var/www/html/nextcloud/</code></pre>


<p>The above command will create a compressed archive of your full Nextcloud install directory and place it in your current working directory. If you wish, you can offload this file to another system using rsync or sftp, but as long as it was not created inside /var/www/html/nextcloud the installer shouldn&#8217;t touch it, so it should be safe.</p>
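<p>It is also worth checking that the archive is readable before relying on it. Listing its contents (without extracting) will catch a truncated or corrupt file; run this just after creating the archive so the date in the file name still matches:</p>


<pre class="wp-block-code"><code># List the archive contents; a corrupt or truncated file makes tar exit non-zero
tar -tzf NextCloud-backup-$(date -I).tar.gz &gt; /dev/null &amp;&amp; echo "archive OK"</code></pre>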


<p>We now run the next command to back up our database to a file in the current directory:</p>


<pre class="wp-block-code"><code>sudo mysqldump --single-transaction -u&#091;username] -p&#091;password] &#091;database name] &gt; NextCloud-db-$(date -I).sqlbak</code></pre>


<p>If you have set up your Nextcloud installation using my previous guide, your username and database name will probably both be &#8220;nextcloud&#8221;, but if your setup is different you will need to adjust accordingly. Additionally, if your database is not running on localhost, you will need to specify its location using the <code>-h [hostname]</code> flag. If the command is successful you should see a file in your current directory with a .sqlbak extension. This is a text file containing all the SQL statements needed to recreate the database in its current form. If a restore is needed, you will need to drop and recreate the existing database and then pipe the SQL statements into the MySQL client as follows:</p>


<pre class="wp-block-code"><code>sudo mysql -u&#091;username] -p&#091;password] -e "DROP DATABASE nextcloud"
sudo mysql -u&#091;username] -p&#091;password] -e "CREATE DATABASE nextcloud"
sudo mysql -u&#091;username] -p&#091;password] &#091;db_name] &lt; &#091;backupfilename].sqlbak</code></pre>
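<p>As with the file backup, a quick sanity check on the dump pays off before you need it. A valid <code>mysqldump</code> output contains a CREATE TABLE statement for each table, so counting them confirms the file isn&#8217;t empty or truncated (adjust the file name to match whatever your dump was called):</p>


<pre class="wp-block-code"><code># Count the table definitions in the dump; zero means something went wrong
grep -c 'CREATE TABLE' NextCloud-db-$(date -I).sqlbak</code></pre>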


<h2>Installing the update</h2>


<p>Now our files and database are safely backed up, we can proceed with the update process. The easiest way to do this is via the web interface, meaning we will need to disable maintenance mode:</p>


<pre class="wp-block-code"><code>sudo -u www-data php /var/www/html/nextcloud/occ maintenance:mode --off</code></pre>


<p>Now when we go to the web page, functionality has returned to normal and we can log in. You will need to log in as the admin user to perform the update, so if you normally work as a non-admin user (as you should be) you will need to log out and back in as the admin.</p>


<p>Click your user icon in the top right corner to go to the settings page. From here you will need to go to the &#8220;Overview&#8221; screen and click &#8220;Open updater&#8221;.</p>


<div class="wp-block-group"><div class="wp-block-group__inner-container">
<div class="wp-block-image"><figure class="alignright size-large is-resized"><img loading="lazy" src="https://davidshomelab.com/wp-content/uploads/2019/12/openupdater.png" alt="" class="wp-image-243" width="433" height="260" /></figure></div>


<figure class="wp-block-image size-large is-resized"><img loading="lazy" src="https://davidshomelab.com/wp-content/uploads/2019/12/settingsmenu.png" alt="" class="wp-image-242" width="186" height="298" /></figure>
</div></div>


<p>From the update screen you will be able to install the update. For the most part the installation is automatic and it will show you the steps it is taking and warn you if there are any problems which need manual intervention. You most likely won&#8217;t have any issues but if you do you should be able to find a solution on Google or leave me a comment and I will do my best to assist.</p>


<p>If all goes well you will see a screen similar to the one below. You will then be asked if you wish to keep maintenance mode active. As we want to use the web-based updater to finish the update, select &#8220;no&#8221; for this.</p>


<figure class="wp-block-image size-large"><img loading="lazy" width="559" height="725" src="https://davidshomelab.com/wp-content/uploads/2019/12/updatefinished.png" alt="" class="wp-image-244" /></figure>


<p>You will then be taken to a screen with a list of the apps that need to be updated. As we are updating to a new major version there will be a lot of apps in this list. Just click &#8220;Start Update&#8221; to begin the installation. Note that at the bottom of the list you will see a list of incompatible apps which will be disabled if you go ahead with the update. If you rely on any of these apps you should not proceed with the update and should instead restore your Nextcloud deployment from the backups we created.</p>


<p>Once complete you will be given the option to view a detailed log of the installation process or just continue to your new Nextcloud installation. If everything has gone well with the update, you should be returned to the main login screen or your main home screen if you are already logged in. The look and feel of Nextcloud 17 is very similar to Nextcloud 16 as most of the changes focus on additional performance and security improvements. For the full list of changes, check out <a href="https://nextcloud.com/blog/nextcloud-17-scales-up-and-improves-data-protection-with-remote-wipe-collaborative-text-editor-2fa-updates-ibm-spectrum-scale-support-and-global-scale-improvements/">this</a> article on the Nextcloud blog.</p>
<p>The post <a rel="nofollow" href="https://davidshomelab.com/updating-nextcloud-to-17-0-2/">Updating Nextcloud to 17.0.2</a> appeared first on <a rel="nofollow" href="https://davidshomelab.com/">David&#039;s Homelab</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
