<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Daniel Hofman</title>
    <description>The latest articles on Forem by Daniel Hofman (@danhof).</description>
    <link>https://forem.com/danhof</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1419405%2F3c2a3dbb-deda-4508-9779-05a4bcba58c8.jpg</url>
      <title>Forem: Daniel Hofman</title>
      <link>https://forem.com/danhof</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/danhof"/>
    <language>en</language>
    <item>
      <title>Just shipped v2.11 of the CLI manager I've been building for Windows</title>
      <dc:creator>Daniel Hofman</dc:creator>
      <pubDate>Mon, 23 Mar 2026 23:22:28 +0000</pubDate>
      <link>https://forem.com/danhof/just-shipped-v211-of-the-cli-manager-ive-been-building-for-windows-5aan</link>
      <guid>https://forem.com/danhof/just-shipped-v211-of-the-cli-manager-ive-been-building-for-windows-5aan</guid>
      <description>&lt;p&gt;I've been building TerminalNexus for a while now. It's a Windows terminal emulator that tries to solve the "I have 40 commands I run constantly and they live in 6 different places" problem. Sticky notes, Notepad files, .bat scripts scattered across the desktop, a OneNote with commands that stopped working six months ago.&lt;/p&gt;

&lt;p&gt;The idea was simple: one place, organized by project, run anything with a click. It grew from there.&lt;/p&gt;

&lt;p&gt;This week I shipped 2.11. Here's what's in it.&lt;/p&gt;

&lt;h2&gt;The Windows Terminal dependency is gone&lt;/h2&gt;

&lt;p&gt;This was the big one, honestly. Earlier versions depended on Windows Terminal being installed. That created a whole category of support headaches. Different WT versions behaved differently, some corporate machines had it blocked, and it added setup friction that shouldn't exist.&lt;/p&gt;

&lt;p&gt;2.11 ships with its own built-in terminal engine. Nothing extra to install. It just works.&lt;/p&gt;

&lt;h2&gt;Variables Manager&lt;/h2&gt;

&lt;p&gt;This one's been on the list for a long time.&lt;/p&gt;

&lt;p&gt;You can now define variables at global, project, or session scope and use them in any command with {{variable_name}} syntax. So instead of hardcoding a server IP or SSH key path everywhere, you set it once and reference it. When you're onboarding someone, you hand them a command set and they fill in the variables specific to their environment.&lt;/p&gt;
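&lt;p&gt;To make the mechanic concrete, here is the substitution idea sketched in plain Bash. The variable names and values are made up for illustration; TerminalNexus handles this internally, so this shows the general pattern, not its implementation:&lt;/p&gt;

```shell
# Sketch of {{variable}} substitution in Bash (names and values are illustrative):
template='ssh -i {{key_path}} deploy@{{server_ip}}'   # a stored command template
server_ip='203.0.113.10'                              # project-scope value
key_path='/home/dan/.keys/prod.pem'                   # secret-scope value
resolved="${template//\{\{server_ip\}\}/$server_ip}"  # replace each placeholder
resolved="${resolved//\{\{key_path\}\}/$key_path}"
echo "$resolved"
```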

&lt;p&gt;Secrets (API keys, passwords, tokens) are stored encrypted and masked in the UI. They don't show up in logs or exports unless you explicitly allow that.&lt;/p&gt;

&lt;p&gt;It also supports multi-line values, which matters more than it sounds. SSH keys, certificates, JSON blobs. Paste them in once, reference them anywhere.&lt;/p&gt;

&lt;h2&gt;Shell Conversion&lt;/h2&gt;

&lt;p&gt;Right-click a command in the terminal, pick a target shell, and the AI converts it. Bash to PowerShell, PowerShell to CMD, whatever direction you need. The panel shows a confidence score and lets you re-run the conversion if the first result isn't right.&lt;/p&gt;

&lt;p&gt;I stopped having to Google "PowerShell equivalent of chmod" three times a day.&lt;/p&gt;

&lt;h2&gt;Scheduled output panels&lt;/h2&gt;

&lt;p&gt;You can already schedule commands to run on a timer. What 2.11 adds is proper visibility into what happened.&lt;/p&gt;

&lt;p&gt;Each scheduled command gets a panel in the assistant sidebar that shows the output history. The AI reads each run's output and classifies it as healthy, warning, or critical, so at a glance you can see whether your automated checks are passing without actually reading the output every time.&lt;/p&gt;

&lt;p&gt;You can reorder the panels, set header colors to visually separate them, and the history persists across restarts.&lt;/p&gt;

&lt;h2&gt;The rest&lt;/h2&gt;

&lt;p&gt;Per-provider AI config is new. You can now set different API keys and models for each provider (OpenAI, Anthropic, OpenRouter, Ollama, LM Studio) independently instead of one global setting. Useful if you use different models for different tasks.&lt;/p&gt;

&lt;p&gt;Quick Action Buttons on the toolbar for common AI commands. All dialogs got a visual refresh with a proper dark theme.&lt;/p&gt;

&lt;h2&gt;The product&lt;/h2&gt;

&lt;p&gt;TerminalNexus is Windows-only. There's a free version with 15 command buttons, one project, 400+ built-in command presets, SSH manager, and multi-shell tabs. It's a real tool, not a crippled demo.&lt;/p&gt;

&lt;p&gt;If you outgrow that, there's a paid version with unlimited projects, AI features, scheduling, and shell conversion.&lt;/p&gt;

&lt;p&gt;It supports PowerShell, CMD, Git Bash, WSL, and custom shells. The AI integration is bring-your-own-provider, including local models via Ollama or LM Studio. No telemetry, no usage data sent anywhere.&lt;/p&gt;

&lt;p&gt;If you're on Windows and find yourself re-typing the same commands every day, it's worth a look.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://safesoftwaresolutions.com" rel="noopener noreferrer"&gt;https://safesoftwaresolutions.com&lt;/a&gt; (&lt;a href="https://safesoftwaresolutions.com/" rel="noopener noreferrer"&gt;https://safesoftwaresolutions.com/&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;Happy to answer questions in the comments.&lt;/p&gt;

</description>
      <category>windows</category>
      <category>devtools</category>
      <category>productivity</category>
      <category>automation</category>
    </item>
    <item>
      <title>Securing Your Site: Obtain an SSL Certificate with Let’s Encrypt When Your ISP Blocks Port 80</title>
      <dc:creator>Daniel Hofman</dc:creator>
      <pubDate>Thu, 25 Apr 2024 03:34:30 +0000</pubDate>
      <link>https://forem.com/danhof/securing-your-site-obtain-an-ssl-certificate-with-lets-encrypt-when-your-isp-blocks-port-80-390g</link>
      <guid>https://forem.com/danhof/securing-your-site-obtain-an-ssl-certificate-with-lets-encrypt-when-your-isp-blocks-port-80-390g</guid>
      <description>&lt;p&gt;Wildcard certificates are highly beneficial because they secure all subdomains of your main domain with a single certificate. This simplifies domain management by eliminating the need to handle individual certificates for each subdomain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://letsencrypt.org/docs/challenge-types/#dns-01-challenge" rel="noopener noreferrer"&gt;DNS-01 challenge&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I chose the DNS-01 challenge for validation for my homelab setup because my Internet Service Provider (ISP) blocks port 80, which is necessary for the &lt;a href="https://letsencrypt.org/docs/challenge-types/#http-01-challenge" rel="noopener noreferrer"&gt;HTTP-01 challenge&lt;/a&gt;. If your ISP imposes similar restrictions, the DNS-01 challenge might be your best option for obtaining an SSL certificate from Let's Encrypt.&lt;/p&gt;

&lt;h2&gt;Setting Up&lt;/h2&gt;

&lt;p&gt;First, I created a directory to store the Let's Encrypt logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir /var/log/letsencrypt/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, I installed Certbot, which simplifies the SSL certificate issuance and management process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install certbot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To initiate the certificate request, I ran the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;certbot certonly --manual
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;During this setup, Certbot prompted me for an email address for important notifications and to agree to the Let's Encrypt Terms of Service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;youremail@example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once that was done, I entered my domain name in the following format to request a wildcard certificate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;*.yourdomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
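&lt;p&gt;For reference, the same request can be made in one go instead of answering prompts. These are standard Certbot flags; the email address and domain below are placeholders:&lt;/p&gt;

```shell
# Non-interactive form of the request above (placeholder email and domain):
sudo certbot certonly --manual --preferred-challenges dns \
  --agree-tos -m youremail@example.com \
  -d "*.yourdomain.com"
```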



&lt;h2&gt;DNS-01 Challenge Configuration&lt;/h2&gt;

&lt;p&gt;For the DNS-01 challenge, Certbot provided me with a specific TXT record that needed to be added to my domain's DNS settings under the name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_acme-challenge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The record value looked something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;o7mU8KwvI7A1_phmxzrHOIA9jaGSOjkI-ngCRbSdhpc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bjcdhizeaoe3b68vxc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bjcdhizeaoe3b68vxc1.png" alt=" " width="800" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Search for your TXT record under _acme-challenge.yourdomain.com and verify that the record's value matches what you added.&lt;/p&gt;

&lt;p&gt;It's crucial NOT to proceed with the SSL setup until this TXT record has fully propagated across DNS servers worldwide. Depending on your DNS provider, this propagation process can take anywhere from a few minutes to an hour. If you proceed too early, you will have to repeat this process.&lt;/p&gt;

&lt;h2&gt;Check DNS Propagation - Version 1&lt;/h2&gt;

&lt;p&gt;To check if the record has propagated, you can use online tools like the &lt;a href="https://toolbox.googleapps.com/apps/dig/#TXT/_acme-challenge.yourdomain.com" rel="noopener noreferrer"&gt;Google Admin Toolbox&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fcvf923ye8f1h5olpxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fcvf923ye8f1h5olpxc.png" alt=" " width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Check DNS Propagation - Version 2&lt;/h2&gt;

&lt;p&gt;Use this command to check whether the record has propagated:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dig -t txt _acme-challenge.yourdomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output of this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;;; ANSWER SECTION:
_acme-challenge.yourdomain.com. 0 IN  TXT     "AxSzdAxR3yyJYok3KkuIRwod82Ld5MhYuH4oJ8"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
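&lt;p&gt;If you'd rather not re-run the check by hand, a small loop can poll until the record shows up. This is just a sketch built on the same dig query; the domain is a placeholder:&lt;/p&gt;

```shell
# Poll every 30 seconds until the TXT record is visible (placeholder domain):
until dig +short -t txt _acme-challenge.yourdomain.com | grep -q '"'; do
  echo "Not propagated yet, waiting..."
  sleep 30
done
echo "TXT record found; safe to continue with Certbot."
```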



&lt;h2&gt;Certificate Renewal&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open the crontab for editing:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo crontab -e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Add a line to the crontab file to schedule the task. Here, the renewal check runs twice daily (at 4:47 AM and 4:47 PM in this example), which is frequent enough to catch any issue well before the certificate expires; the odd minute is staggered to avoid peak times on Let's Encrypt's servers. The --post-hook option ensures that the specified command, such as reloading Nginx, runs only after a successful renewal, which safeguards against service disruptions if the renewal encounters an issue.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;47 4,16 * * * certbot renew --quiet --post-hook "systemctl reload nginx"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Dealing with an ISP that blocks port 80 can make securing your website with an SSL certificate a bit tricky. The DNS-01 challenge comes to the rescue, providing a workaround for this hiccup. Just follow these steps, and you'll be able to obtain and manage an SSL certificate from Let's Encrypt without needing port 80.&lt;/p&gt;

</description>
      <category>letsencrypt</category>
      <category>certificate</category>
      <category>security</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Homelab Adventures: Crafting a Personal Tech Playground</title>
      <dc:creator>Daniel Hofman</dc:creator>
      <pubDate>Mon, 22 Apr 2024 23:20:58 +0000</pubDate>
      <link>https://forem.com/danhof/homelab-adventures-crafting-a-personal-tech-playground-54b</link>
      <guid>https://forem.com/danhof/homelab-adventures-crafting-a-personal-tech-playground-54b</guid>
      <description>&lt;p&gt;&lt;em&gt;Disclaimer: I am not affiliated with any of the applications mentioned in this post. I have chosen to discuss them solely based on my positive experiences and the benefits they have offered in my personal tech projects.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;No, I am not talking about a server rack in your closet, or an email server in the guest bathroom. This is about an old spare PC or laptop that can be repurposed into a Linux home server and provide a lot of fun if you are into tech.&lt;/p&gt;

&lt;p&gt;Why Linux, you wonder? Linux is often recommended because, unlike its Windows cousin, it can run for long periods without needing constant attention. OK, it's also free, which is a nice feature. Yes, Windows servers are stable, but definitely not free.&lt;/p&gt;

&lt;p&gt;This is a story of my own venture into the homelab world and what I learned along the way.&lt;/p&gt;

&lt;p&gt;A long, long time ago, last year, I finally realized that I had reached the end of my patience with being unable to access my notes from all my devices. Thousands of my notes were stuck in the past using the desktop version of OneNote 2010. The OneNote upgrade prompts also increased over the last couple of years and kept nagging me to switch to the new and shiny cloud-connected OneNote. Since I am overprotective of my life’s collective stash of information, I was not going to put that in the cloud. Microsoft must already have a copy of it somewhere in a data center saved under my barcode, but that’s probably a story for another time. In any case, I was not going to make it easier for them if I could help it.&lt;/p&gt;

&lt;p&gt;So I went on a quest to find a good, reliable replacement for my old OneNote, one that would be accessible from anywhere and from any device.&lt;/p&gt;

&lt;p&gt;Another serious problem I was facing was that OneNote 2010 does not offer a good way to export all the notes. I tried all forms of exporting, and none of them really worked well. Either my laptop crashed or the notes were unreadable. It was a terrible experience, and I felt very locked into OneNote at that point.&lt;/p&gt;

&lt;h2&gt;Hello Trilium!&lt;/h2&gt;

&lt;p&gt;After spending a significant amount of time searching and testing different apps, I finally settled on an app and an idea to solve all my issues. Welcome to Trilium, an open-source application that feels perfect. It has all the features I could dream of and more. What’s more important, it has a nested tree structure, which I have loved and missed since my days of using TreePad in the 2000s. When TreePad was discontinued, I made the switch to OneNote and have missed the old nested tree ever since. Trilium handles thousands of notes seamlessly, has very good search, and keeps all the notes in a SQLite database that, as a software developer, I can appreciate. It even has a query page for the database, so you can run SQL queries against your notes all day long. You can even update your notes via SQL. How cool is that?&lt;/p&gt;

&lt;h2&gt;Finding a home for the Trilium app&lt;/h2&gt;

&lt;p&gt;I mentioned earlier that I wasn't going to store my notes in the cloud. That still holds true, but the catch is that I would consider it if it were my own cloud, with a solid security layer around it. As a software developer, this seemed like an exciting new technology on the horizon, and I was eager to dive in, learn new shiny tech, and perhaps even need to buy new hardware. Along with a host of delightful new issues and bugs to tackle, and more notes to jot down. This was great, and I was prepared to face the challenge. If successful, I would have a homelab server and access to all my notes from anywhere in the world. A long-awaited dream, but now I was determined to make it a reality.&lt;/p&gt;

&lt;h2&gt;First attempt at homelab server: Raspberry Pi 4 and 5&lt;/h2&gt;

&lt;p&gt;So I watched many tutorials online, and it seemed that the Raspberry Pi would be a good fit. Small, quiet, and very well-equipped: on-board Wi-Fi, USB 3, Ethernet, and even two 4K monitor connections. All the online videos and tutorials were raving about it, so I purchased version 4, and put version 5, which was not shipping yet, on backorder for 4 months.&lt;/p&gt;

&lt;p&gt;I received the new Raspberry Pi 4 and started tinkering with it. I copied the latest Raspberry Pi OS onto the micro SD card and booted the Pi. I connected a 4K monitor to it and, to my surprise, realized that one thing all those YouTube videos did not mention or emphasize was that using it with a monitor would feel like being stuck in molasses. Not exactly a pleasant user experience. I had plans to install Docker and containerized versions of Trilium and NGINX on it, but the Raspberry Pi really felt underpowered. After some quick research, I found references to overclocking it.&lt;/p&gt;

&lt;p&gt;Since it seemed very simple to do by modifying a text file, I tried to max it out to just before the point of crashing. Going overboard simply meant the device would not boot at all.&lt;/p&gt;

&lt;p&gt;The parameters that worked for me and were at the max I could push it to were:&lt;/p&gt;

&lt;p&gt;/boot/firmware/config.txt&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;over_voltage=6
arm_freq=2000
gpu_freq=750
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Yes, I bumped up the voltage a bit as well. I did purchase the intercooler fan and a large heat sink for the Pi, so my temps were reasonable, staying at around 60 degrees Celsius. Not great for the long term, but good for quick tests to see if I could even make this thing work.&lt;/p&gt;

&lt;p&gt;I installed Docker, Trilium, and a few more containers with some admin tools. It was still too slow, and I started to get disappointed. Even after overclocking it, it was still sluggish, especially when navigating through thousands of notes.&lt;/p&gt;

&lt;p&gt;Some time later, the new Raspberry Pi 5 showed up at my doorstep. At that point, I did not have much hope, since I had read all the specs and knew that it was a bit faster, but not a groundbreaking improvement.&lt;/p&gt;

&lt;p&gt;To no surprise, the new Pi 5 was just a bit quicker, even after, again, overclocking it to the max. I had already made some other plans for it, so I was not disappointed about the purchase.&lt;/p&gt;

&lt;p&gt;The new Raspberry Pi was repurposed as a portable device I carry with my iPad for remote development while on the go. I configured it with a static IP, installed VS Code, and use rsync to copy data between the home PC and the Linux OS on the Pi. Connecting to the iPad via USB-C is super reliable, and actually using it on the small screen of the 12.9-inch iPad M2 is a blast. I went through some of the remote desktop apps as well before finally settling on the winner, RealVNC. I tried JumpDesktop and MS Remote Desktop as well. They all work great, so RealVNC is just a personal preference.&lt;/p&gt;
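&lt;p&gt;For the curious, the sync itself is a one-liner. The host name and paths here are placeholders rather than my actual setup:&lt;/p&gt;

```shell
# Mirror a project folder from the PC to the Pi over SSH (placeholder host and paths);
# -a preserves permissions/timestamps, -z compresses, --delete removes stale files on the Pi
rsync -avz --delete ~/projects/ pi@raspberrypi.local:~/projects/
```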

&lt;p&gt;Why the USB-C connected Raspberry Pi, you ask?&lt;/p&gt;

&lt;p&gt;For the occasions when I have no internet access while traveling, or when schools and universities block protocols like RDP and VNC. It’s nice to have a device with all your stuff that can be connected whenever needed via an old-fashioned cable.&lt;/p&gt;

&lt;h2&gt;VSCode Server and the iPad&lt;/h2&gt;

&lt;p&gt;Yes, I did try VS Code server in a Docker container. The issue with it is that it does not work with all the extensions I needed. What was even worse, the keyboard mappings in the browser on the iPad are just terrible when it comes to VS Code. It was actually much better to simply connect the iPad to the Raspberry Pi over USB-C and then connect to the Pi via remote desktop and go with that. No issues with key mapping, and the speed was quite acceptable. Plus, it was truly a full-screen experience. The browser version, even when started as an app, is never quite the same. Another significant advantage of running remote desktop to the Pi was the addition of the function keys on the iPad. Yes, they are on-screen, but at least they are available. So, F3, F5, or F10 are there when I need them, and I don’t have to search through the menus.&lt;/p&gt;

&lt;p&gt;I also tried vscode.dev and the tunneling extension, but faced the same browser issues as listed above.&lt;/p&gt;

&lt;h2&gt;Learning experience&lt;/h2&gt;

&lt;p&gt;I did learn a few tricks while working on the Pi. Even though I worked on Linux in my day job, this was a great learning exercise. I realized, for example, that the ARM processor could only run certain software, and the software I had planned to run was not supported. I am referring to a SQL Server instance on ARM. I've seen some mentions on the Internet that Microsoft was working on a version that would run on the Raspberry Pi, but it wasn't available yet. What a bummer. I just needed a small DB for some testing that would be accessible from the web. Unfortunately, it had to be SQL Server at that time.&lt;/p&gt;

&lt;p&gt;I did have a good old laptop that I could use in place of the Pi. I was reluctant to use it before since it had Windows 11 Pro on it, and I didn’t want to wipe it out and install Linux.&lt;/p&gt;

&lt;p&gt;I finally had no choice. I decided to migrate all the apps I had installed on the Pi and use my old i7 laptop in its place.&lt;/p&gt;

&lt;p&gt;That worked well, the speed definitely improved, and I was quite happy. The now-retired Raspberry Pi 4 was put aside and is currently waiting for a new assignment.&lt;/p&gt;

&lt;p&gt;Here is my docker-compose that I used in Portainer, which worked for me:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'

services:
  trilium:
    image: zadam/trilium:latest
    container_name: trilium
    ports:
      - "8080:8080"
    environment:
      - USER_UID=1000
      - USER_GID=1000
    volumes:
      - ./data/trilium:/home/node/trilium-data
    networks:
      - dh_network
    restart: unless-stopped

networks:
  dh_network:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Docker, the endless containers&lt;/h2&gt;

&lt;p&gt;I worked with Docker on a daily basis in my day job and understood the power of containers, especially coming from a Windows environment where DLL hell and the ActiveX control referencing nightmare could drive someone crazy. Only minor improvements have been made to resolve this in the last couple of decades. Recently, .NET has made some strides to address it with its framework-dependent and self-contained install types.&lt;/p&gt;

&lt;p&gt;Let me tell you, this is still not that great. Knowing how frequently .NET receives updates, the framework-dependent type will swap out DLLs unexpectedly, leading your application to need another update. I am not excited about requiring so many .NET versions on my system just to run certain apps. On the flip side, the self-contained type seems excellent until you have to deploy the code and bundle around 300+ files, mostly .NET DLLs, each time. In my situation, I use the WiX Toolset installer. I cringe when I attempt to publish my code and have to create an installer. Most of the time, the .NET version gets automatically updated on my PC, and WiX complains about missing or outdated DLLs due to the recent .NET update. It's quite exhausting, and there doesn't seem to be any solution in sight.&lt;/p&gt;

&lt;p&gt;I also work with Golang and truly value the single executable output. It's incredibly convenient. I am aware that .NET can also compile into a single file; however, this feature has never functioned properly for me, and the resulting file is massive. Sometimes I wish I had written the API in Go, and other times I tell myself that I will rewrite it in Go. The truth is that I can't, due to some .NET encryption I share across my desktop app and the API. I tried a couple of times to make it work, but due to some padding issues I was unable to, and probably gave up too easily.&lt;/p&gt;

&lt;p&gt;Let's get back to the topic. Running apps in containers is super convenient. When something goes wrong, you can just restart it and move on. Of course, I am not referring to large Kubernetes deployments and container management; I am talking about a simple homelab environment. It's something you can tinker with without losing sleep over, well, maybe once or twice. Yes, I know some of us take homelabs to the next level, but I'm not talking about a swarm of interconnected Raspberry Pis either :)&lt;/p&gt;

&lt;p&gt;So the point here is that since it's so nice to run Docker containers, the next logical step is to add a bunch of apps to have more fun with it and learn something in the process as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.portainer.io/" rel="noopener noreferrer"&gt;Portainer&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My first app to manage the whole thing was Portainer, which is easy to follow and works really great. I used Rancher at work, and Portainer seemed like a super user-friendly version of Rancher.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/filebrowser/filebrowser" rel="noopener noreferrer"&gt;File Browser&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then thought it would be nice to have something that allows me to access the file system without needing to write CLI commands in WSL. The best app I could find was an app called File Browser. Yes, another container to play with. I could do some basic operations on the file system, and what was even cooler, I was able to simply create browser links to files I wanted to open with a single click. It's really a great solution, and the app is super responsive and fast. Some basic user management is also available. I highly recommend this application; it's really a joy to use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://webmin.com/" rel="noopener noreferrer"&gt;Webmin&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then came the time when I wanted something for managing the Linux OS and getting some basic stats. I tried many apps until I finally settled on Webmin. Unfortunately, this app does not run well in a container, but it was worth it. One very helpful feature it has is the built-in terminal. It proved to be quite helpful for the times I wanted to access the server while on the iPad.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://ish.app/" rel="noopener noreferrer"&gt;iSH&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I should probably also mention that there is an app available on mobile that is essentially a terminal emulator and works great for free. The app is called iSH. I tried some paid options, but they seem to have too much fluff. This is a very streamlined and focused app that works great!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://about.gitlab.com/releases/categories/releases/" rel="noopener noreferrer"&gt;GitLab-CE&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, an app that I've been eagerly waiting to install on my personal server is GitLab-CE (Community Edition). As I mentioned, I am a software developer and have written and collected a ton of source code. I have many personal projects and I am not ready to put them in the cloud, not even in a private repo. Call me paranoid. At my regular job, I use Bitbucket, Jira, and Confluence on a daily basis. For some personal projects, I use GitHub, and for others, I use GitLab. My most anticipated scenario is a self-hosted GitLab version, one that I have full control of.&lt;/p&gt;

&lt;p&gt;I was finally ready for GitLab. The install was fairly easy, and I was able to migrate all of my code and tickets into my long-awaited self-hosted GitLab. I must say the app ran well, but there was quite a bit of a configuration headache, and the memory footprint was larger than I would have liked. It consumed more memory than any of my other apps, but I had no performance issues at all.&lt;/p&gt;

&lt;p&gt;Here is my docker-compose that I used in Portainer, which worked for me:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.6'
services:
  gitlab:
    image: gitlab/gitlab-ce:16.6.2-ce.0
    container_name: gitlab
    restart: unless-stopped
    hostname: 'gitlab'
    user: "0:0"
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        gitlab_rails['gitlab_shell_ssh_port'] = &amp;lt;please_enter&amp;gt;
    ports:
      - '8282:80'
      - '2323:22'
    volumes:
      - ./data/gitlab/config:/etc/gitlab
      - ./data/gitlab/logs:/var/log/gitlab
      - ./data/gitlab/data:/var/opt/gitlab
    networks:
      - dh_network

networks:
  dh_network:
    external: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here are the GitLab settings from gitlab.rb file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;external_url 'http://192.168.0.9:8282'
nginx['redirect_http_to_https'] = false
gitlab_rails['gitlab_shell_ssh_port'] = 22
letsencrypt['enable'] = false
nginx['listen_https'] = false
nginx['listen_port'] = 80   # container port; Docker maps host 8282 to it
puma['worker_processes'] = 0

registry_nginx['proxy_set_headers'] = {
          "X-Forwarded-Proto" =&amp;gt; "https",
          "X-Forwarded-Ssl" =&amp;gt; "on"
        }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://nextcloud.com/" rel="noopener noreferrer"&gt;NextCloud&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After successfully adding GitLab, I ended up installing another containerized app worth mentioning: NextCloud. With the idea of replacing the standard cloud services and even iPhone photo sync, it was an interesting, almost too-good-to-be-true solution. The whole homelab server was really paying off and getting better by the minute. Since I had everything working just right, I spent all my free time researching new apps I could add to my Docker setup. It was a genuinely pleasurable experience and one that gave me a lot of technical satisfaction. At this point, I had about 15 containerized apps, and let me tell you, not all of them wanted to play nicely on the system. It took quite a bit of research and configuration to get them all to behave and work as expected. Along the way, I became even more familiar with Docker, Portainer, the OS, and the networking between containers and all the other components of the server. I had many happy moments when something finally clicked and started to work. Sometimes due to my tinkering, other times due to miracles.&lt;/p&gt;

&lt;p&gt;Back to NextCloud: I initially had many performance issues. It got to the point where I wanted to uninstall it and drop the idea of a personal dedicated cloud service altogether. After a lot of research and trial and error, I finally got the performance under control, and my eyes opened to the idea that I really can have a personal cloud, not shared with the big boys and not data-mined for someone else's benefit. There are some aspects of NextCloud I use on a daily basis: the calendar, the contacts list, and of course the iPhone photo sync.&lt;/p&gt;

&lt;p&gt;The actual file storage portion of the experience is not that great, though, and I use the File Browser app for that instead. The reason? Copying files to the cloud can take forever, which I believe is due to the file indexing NextCloud performs internally. A file copied directly into a folder that is one of NextCloud's external storage locations will not show up if it was not copied through the NextCloud interface. This might be an error on my part, I'm not entirely sure yet, but I do know that copying a large number of files can be extremely slow. I can't say the same for File Browser, which copies files very quickly. Overall, NextCloud is a good solution, except for this indexing problem.&lt;/p&gt;
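
&lt;p&gt;If you run into the same invisible-files issue, NextCloud's built-in occ tool can rescan storage and pick up files that were copied outside its interface. A minimal sketch, assuming the container is named nextcloud as in my compose file; usernames are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Rescan all users' files so externally copied files show up
docker exec -it nextcloud occ files:scan --all

# Or limit the scan to a single (placeholder) user
docker exec -it nextcloud occ files:scan daniel
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;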

&lt;p&gt;Here is my File Browser docker-compose that I used in Portainer, which worked for me:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.8'
services:
  filebrowser:
    image: filebrowser/filebrowser:latest
    container_name: filebrowser
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    ports:
      - "8083:80"
    volumes:
      - "/:/srv"  # Mounts the root of the host to /srv in the container
      - "./data/filebrowser:/data"
    networks:
      - dh_network
    restart: unless-stopped

networks:
  dh_network:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the docker-compose for the NextCloud app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "2"
services:
  nextcloud:
    image: linuxserver/nextcloud:latest
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - ./data/next_cloud/config:/config
      - ./data/next_cloud/data:/data
      - ./next_cloud_users/daniel_cloud:/daniel_cloud
    ports:
      - 444:443
      - 8082:80
    restart: unless-stopped
    depends_on:
      - nextcloud_db

  nextcloud_db:
    image: linuxserver/mariadb:latest
    container_name: nextcloud_db
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=&amp;lt;strong_password&amp;gt;
      - TZ=America/New_York
      - MYSQL_DATABASE=nextcloud_db
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=&amp;lt;strong_password&amp;gt;
    volumes:
      - ./data/next_cloud/db:/config
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/butlerx/wetty" rel="noopener noreferrer"&gt;WeTTY&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since I already alluded to accessing a terminal from the iPad via a browser through Webmin’s interface, I have to mention my favorite browser-based app for doing this: the free, open-source WeTTY. I have tried many alternatives, some paid outright and others subscription-based, but this app wins on simplicity and functionality. It also runs in a Docker container, which is a huge win for my environment and convenience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/nicolargo/glances" rel="noopener noreferrer"&gt;Glances&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I must also mention the open-source stats app I use, called Glances. It’s a great, to-the-point app that lets you keep an eye on system performance at a glance. :) It also provides a window into your Docker container stats, making it super useful for a homelab running Docker.&lt;/p&gt;

&lt;p&gt;As I loaded my laptop with so many containerized apps, I realized there was still one thing missing. I had all my notes connected via Trilium, and NextCloud provided photo sync and a very functional calendar with reminders. It would be cool if I could also connect my hard drives to the server and basically use it as a file server. Being an avid photographer with half a million high-res RAW images collected from my travels, I use Lightroom to organize them all, and I thought it would be great to have all those images and collections accessible from anywhere. I use Lightroom on the phone and on the iPad, so not having all my images with me while traveling is always a bummer. Yes, you can copy selected sets of images to the cloud or the iPad on the go, but it always feels limited and like more hassle than it should be. Having my entire life's collection always with me and at my fingertips has always been a dream.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.samba.org/" rel="noopener noreferrer"&gt;Samba&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After some research, I came across Samba, which allowed me to share folders from my server to any device. It can be installed in a Docker container, so it was a no-brainer. After a quick docker-compose stack creation and volume setup in Portainer, I had it working on the server. I created a shared folder and configured my hard drives to permanently mount there and become instantly available. Lightroom does work with shares, so repointing the main catalog was a breeze, and just like that, I had all my images available in Lightroom. I must say, mounting drives on Linux is a much nicer experience than on Windows. Once you have the correct UUID of the drive and have modified the "/etc/fstab" file, you can count on that drive always being there. I can't say the same for Windows, where external drives always seem to get a different drive letter. Even going through the Disk Management utility, there is no guarantee that a drive will keep its letter. Many times I opened Lightroom on my desktop only to find that the catalog could not locate the images because the drive letter had changed. Absolutely crazy. No more of this nonsense; it was a side benefit of my setup I had not thought about before. One thing I wish I could do is reuse my Windows PC Lightroom catalog on my iPad. Unfortunately, that is not possible.&lt;/p&gt;
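
&lt;p&gt;For reference, the permanent mounts boil down to one line per drive in "/etc/fstab". Here is a sketch; the UUID, mount point, and filesystem type are placeholders, and blkid prints the real values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Find the UUID of the drive
sudo blkid /dev/sdb1

# Example /etc/fstab entry (UUID and paths are placeholders);
# "nofail" lets the system boot even if the drive is unplugged
UUID=0a1b2c3d-1111-2222-3333-444455556666  /mnt/photos  ext4  defaults,nofail  0  2

# Mount everything in fstab without rebooting
sudo mount -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;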

&lt;p&gt;Here is the docker-compose for the Samba server:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: '3.7'

services:
  samba:
    image: dperson/samba
    container_name: samba_server
    restart: unless-stopped
    environment:
      - SAMBA_LOG_LEVEL=0
      - TZ=America/New_York
    volumes:
      - ./data/samba/:/data
      - ./data/docs:/samba/public
    ports:
      - "139:139"
      - "445:445"
    networks:
      - sambanet

networks:
  sambanet:
    driver: bridge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Outgrowing my laptop
&lt;/h2&gt;

&lt;p&gt;It seemed that my laptop was reaching a breaking point with all these Docker images, so I decided to up the game to a dedicated desktop server. I wanted more performance, more space, and more of everything for all my toys. This was especially true when I added the photo drives, where each file I tried to open on the iPad was about 30-100MB or more in size. This is the price you pay for high-res files and the ability to push and pull image boundaries in Lightroom. I shot a few weddings and other professional photo shoots, and preserving as much detail in the images as possible was particularly important.&lt;/p&gt;

&lt;p&gt;I also added a media server, Jellyfin, to my collection of Docker images. Serving music and movies with occasional transcoding required not only a fast CPU but also a better GPU.&lt;/p&gt;

&lt;p&gt;It was time to repurpose the laptop and get something a bit more performant that would serve all my data in a more satisfying way. I wanted to stop thinking about how long a file takes to open; I just wanted to use the data.&lt;/p&gt;

&lt;p&gt;I purchased a Dell i9 server with a pretty nice GPU and 128GB of RAM. I was all in on the homelab server thing, and I convinced my wife that it was for the betterment of the household. She completely understood that my life and well-being were going to improve when I got a better server.&lt;/p&gt;

&lt;p&gt;Thankfully, early on I decided that, with one or two exceptions, I would not install an app on the server unless it was containerized for Docker. That decision really paid off. I had all my docker-compose files, which became stacks in Portainer, and I kept all my server setup and configuration notes in Trilium.&lt;/p&gt;

&lt;p&gt;When it came to migrating everything, I promptly installed Ubuntu Server and recreated the same folder structure as on the laptop so my Docker volumes would snap into place. I auto-mounted the same hard drives, installed UFW (Uncomplicated Firewall), set up all the policies and rules from my Trilium notes, and I was almost back in business. I copied over my SSH key and disabled password access on the new Ubuntu install. All I had to do now was search the Internet for the correct rsync commands to safely copy my data from the laptop to the shiny new server.&lt;/p&gt;
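
&lt;p&gt;For anyone curious, the commands I ended up with looked roughly like this. The hostnames and paths are placeholders, not my actual layout:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Dry run first (-n) to see what would be copied
rsync -avzn /home/daniel/docker/data/ daniel@newserver:/home/daniel/docker/data/

# -a preserves permissions and timestamps, -z compresses in transit,
# --progress shows per-file progress; trailing slashes matter in rsync
rsync -avz --progress /home/daniel/docker/data/ daniel@newserver:/home/daniel/docker/data/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;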

&lt;p&gt;One thing I found out right away is that I had to go into the server's BIOS and configure it to restart after a power failure. Frankly, I was not aware this setting even existed until I accidentally unplugged my server and realized it was still dead after I plugged it back in. It felt like a slap in the face; how silly of me not to think about it beforehand. After using a battery-powered laptop, you forget what that plug is for. Plus, I thought, I write software for a living; I don't administer servers, even though I have built a few desktop PCs in my lifetime. I do remember one feature I loved on my 2008 MacBook Pro: the ability to start the laptop on a timer, a super useful feature I have missed ever since switching to Windows. The same goes for Time Machine backups. I don't know how I live without those as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  iPad as a server display
&lt;/h2&gt;

&lt;p&gt;I also found another purpose for my iPad. Since the server didn’t come with a monitor and there was no monitor in sight, I decided it would be great if I could connect my iPad to the server whenever I needed to work on it directly. As luck would have it, I had just recently purchased a $10 capture card on Amazon in order to play Steam Deck games on the iPad. The capture card adapter itself weighs nothing. As I mentioned, the iPad has a 12.9” display, and it is just perfect. Now it’s also perfect to grab and take to the server when needed. The software that works great with this capture card and the Steam Deck is Genki Studio, which is free on the Apple App Store. One awesome thing about it is that it lets me adjust the screen resolution on the fly, super seamlessly. I tried a few apps for this purpose, but the easiest to use, and the one that actually had the features I needed without extra bloat, was this one.&lt;/p&gt;

&lt;p&gt;I am happy that I found another purpose for the iPad. I always considered iPads to be just larger versions of the iPhone. Now, with the ability to access external drives, use it as a monitor, and run more and more real productivity apps, it turns out to be a pretty cool device. I must mention another feature of the iPad that is just not the same on a laptop: the ability to open it and be ready to go without waiting for the thing to wake up from sleep or hibernation. The updates are painless, and it just works. The battery life is excellent too. Add to this a full-featured development setup, and I just love it. I never would have thought I'd say that just a few months ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hello World!
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://letsencrypt.org/" rel="noopener noreferrer"&gt;Let's Encrypt&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What good is it when you have all those apps and files to access, but can only do it at home on the local network? Since I've already added NGINX to Docker and got it running, I thought I would configure it to access all my apps remotely. This is something I've always wanted to do, but was hesitant to actually go ahead with, fearing that malicious actors might discover me and exploit my security setup. This time, my desire to open my files to the outside overcame my security worries, and I got a Let's Encrypt certificate for my server.&lt;/p&gt;

&lt;p&gt;This did not go smoothly out of the box. The first issue was the Let's Encrypt certificate creation itself. Since I am on Cox Internet, I ran into a problem with port 80: it is essential for the default HTTP-01 challenge to work, but Cox blocks it. After some more research, I found good information on a workaround. Here is some info on the HTTP-01 challenge directly from the Let's Encrypt website:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Pros:

It’s easy to automate without extra knowledge about a domain’s configuration.
It allows hosting providers to issue certificates for domains CNAMEd to them.
It works with off-the-shelf web servers.
Cons:

It doesn’t work if your ISP blocks port 80 (this is rare, but some residential ISPs do this).
Let’s Encrypt doesn’t let you use this challenge to issue wildcard certificates.
If you have multiple web servers, you have to make sure the file is available on all of them.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://letsencrypt.org/docs/challenge-types/#dns-01-challenge" rel="noopener noreferrer"&gt;DNS-01 challenge&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The workaround is the DNS-01 challenge, which does not require port 80. This worked as expected, and I received my shiny new Let's Encrypt certificate. Luckily, Cox does not block port 443, so all was good now.&lt;/p&gt;
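
&lt;p&gt;For reference, a manual DNS-01 issuance with certbot looks roughly like this. The domain is a placeholder, and certbot pauses and asks you to create a TXT record at your DNS provider before continuing:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo certbot certonly --manual --preferred-challenges dns -d example.com

# certbot then prints a token and waits until you create a TXT record:
#   _acme-challenge.example.com  TXT  (token printed by certbot)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;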

&lt;p&gt;I started to reconfigure NGINX to allow traffic into my Docker containers. This went smoothly, with a few exceptions. Some containers did not like being addressed by their Docker service name and insisted on the container's IP address. It took me a bit of time to figure that out, and I had to tweak some of my Docker networking, but in the end it worked well. Another hiccup was the WebSocket configuration for some containers. Some apps required it (like Trilium), and the configuration was sometimes tricky due to different versions and setups.&lt;/p&gt;
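
&lt;p&gt;The server sections that finally worked for WebSocket apps like Trilium looked roughly like this. This is a sketch: the hostname, certificate paths, port, and upstream address are placeholders, and as mentioned, some containers only answered on their IP rather than the service name:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server {
    listen 443 ssl;
    server_name notes.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://172.18.0.5:8080;   # container IP and port
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        # The two headers below enable the WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;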

&lt;p&gt;Now that I had all this working, I added Apache-style basic auth to ease my paranoia, and I implemented some NGINX throttling for good measure. Even though most apps can authenticate on their own, and NextCloud even offers multi-factor authentication, not all do. I thought the extra step would be a good idea and that I could live with it. This didn't quite work out as I expected.&lt;/p&gt;
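
&lt;p&gt;The combination boiled down to an htpasswd file plus a rate-limit zone in NGINX. A sketch with placeholder names and limits:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the password file (htpasswd ships with apache2-utils)
sudo htpasswd -c /etc/nginx/.htpasswd daniel

# In the http block of nginx.conf: define a shared rate-limit zone
limit_req_zone $binary_remote_addr zone=homelab:10m rate=10r/s;

# In the protected server or location block:
auth_basic "Restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
limit_req zone=homelab burst=20 nodelay;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;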

&lt;p&gt;Since one of my objectives was to use my homelab apps from my iPad, I soon realized how much of a pain that extra layer of auth would be. The issue became apparent when, after logging into an app like Trilium (remember, I had already gone through two sets of authentication), the gesture to scroll to the top would prompt the app to log in again. Trilium wasn't the only app with this issue. The basic auth proved to be too much for me to handle, but what to replace it with? Not to mention, I didn't feel very comfortable with all the port forwards and extra records in my domain's DNS setup.&lt;/p&gt;

&lt;p&gt;NGINX did its job and forwarded all requests to my targets. The ISP allowed me to forward all the required ports, and even the basic auth did what it was supposed to. However, the iPad wouldn't stop me from scrolling past the top of the page, causing pages to refresh, and this wasn't isolated to Trilium; it occurred with many other apps as well. Maybe it had to do with how I set up my NGINX server sections, who knows; it was really getting on my nerves. More importantly, the open ports, with authentication as the only thing standing between my data and the rest of the world, were too worrisome for me. I started to look for alternatives. I loved the no-port-forwarding aspect of my local access. I thought about a VPN; would that solve my dilemma?&lt;/p&gt;

&lt;h2&gt;
  
  
  VPN forever
&lt;/h2&gt;

&lt;p&gt;After some digging, I came to the conclusion that a VPN was probably the best and safest option. I first looked at self-hosted OpenVPN and WireGuard. Both seemed like pretty good choices; I had used both for work without issues. Looking at reviews online, I quickly realized that most people found WireGuard to be the faster option, with OpenVPN several times slower. That's unfortunate, since most also mentioned that WireGuard's learning curve was much steeper.&lt;/p&gt;

&lt;p&gt;I kept searching for an even better solution that was secure and performant. I came across Cloudflare, and the reviews were extremely positive, almost convincing me. Then I watched Christian Lempa's video about how Cloudflare funnels all your data through its infrastructure, allowing it to see everything passing through. It felt like one of those countless VPN commercials advising users to use a VPN from a specific company to hide their identity. It's ridiculous, and many non-techy individuals might fall prey to it. It's insane how many online influencers promote this idea without considering their audience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://tailscale.com/blog/how-tailscale-works" rel="noopener noreferrer"&gt;Tailscale&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the final VPN solution? I found something that quickly struck me as an excellent idea: welcome to Tailscale. From a high-level overview, it's just like Cloudflare: no port forwarding required, a small app installed on each device, and IPs automatically synchronized with their server, so even a device that moves networks resolves to its new IP. Best of all, it uses WireGuard under the hood, so it's fast, minus a short handshake between devices. Let me explain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Tailscale works:&lt;/strong&gt; Tailscale uses the WireGuard protocol to establish a secure network that is easy to manage. It prioritizes simplicity and efficiency. Unlike OpenVPN, Tailscale creates direct, secure connections between your devices. This isn’t just convenient—it’s fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it’s better than traditional VPNs:&lt;/strong&gt; Traditional VPNs often route your data through a central server, slowing things down. Tailscale skips that step. It utilizes a central coordination node to assist in setting up connections initially but does not handle your traffic. Therefore, your data flows directly between your devices, resulting in less lag and enhanced security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Under the hood:&lt;/strong&gt; At its core, Tailscale is powered by WireGuard, renowned for its high-speed capabilities and modern cryptographic techniques. Not to forget the ease of use.&lt;/p&gt;

&lt;p&gt;I signed up for an account and surprisingly received an allowance for 100 connected devices. This is unbelievable. I installed the app on my Ubuntu server, iPad, iPhone, and my travel laptop. I quickly retired the port forwards on my router and the NGINX server routing. I could not be happier with this setup.&lt;/p&gt;
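
&lt;p&gt;Getting a machine onto the tailnet is refreshingly short. On the Ubuntu server it was essentially this (the install script is Tailscale's official one):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install Tailscale on Ubuntu
curl -fsSL https://tailscale.com/install.sh | sh

# Join the tailnet (prints a browser login URL on a headless server)
sudo tailscale up

# Check the device's tailnet IP and the state of other devices
tailscale ip -4
tailscale status
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;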

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building this homelab has been more than merely a technical project - it's been an adventure from start to finish. I struggled to run Trilium smoothly on an underpowered Raspberry Pi, then fine-tuned Docker containers on a beefy i9 server. Each step taught me some valuable lessons.&lt;/p&gt;

&lt;p&gt;What started as a quest for secure remote note access morphed into a full-fledged dive into networking, security, and server management knowhow. It's been a rollercoaster with ups and downs - hardware limits, software breakthroughs, Docker networking, you name it. But throughout, I developed a newfound appreciation for the freedom and control of owning my personal homelab setup.&lt;/p&gt;

&lt;p&gt;Introducing Tailscale was a total game-changer. It simplified remote access while locking down my data tightly and keeping it blazing fast no matter where I worked. It shows how the right tools can radically shift how we engage with tech, making even the most intricate stuff feel seamless.&lt;/p&gt;

&lt;p&gt;So if you're considering your own homelab journey, here's my two cents: just dive in headfirst. Yeah, the learning curve can seem daunting at times. But the payoff of building and mastering your tech environment from the ground up? Absolutely worth the effort, no question.&lt;/p&gt;

&lt;p&gt;Are you planning to start your own homelab journey, or do you have insights from your personal tech adventures? Please share your thoughts and questions in the comments below!&lt;/p&gt;

&lt;p&gt;Thanks for reading.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>homelab</category>
      <category>opensource</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Git/CLI Commands; One GUI to rule them all!</title>
      <dc:creator>Daniel Hofman</dc:creator>
      <pubDate>Sat, 20 Apr 2024 05:32:11 +0000</pubDate>
      <link>https://forem.com/danhof/gitcli-commands-one-gui-to-rule-them-20em</link>
      <guid>https://forem.com/danhof/gitcli-commands-one-gui-to-rule-them-20em</guid>
      <description>&lt;p&gt;After working with CLI commands including Git, Linux, WSL and even Windows, I realized it was impossible to keep track of them all across notes tools. Not to mention that copy and paste was getting tiresome. Besides replacing parameters proved to be quite error-prone, and I was always worried that I could inadvertently execute the wrong command or replace the wrong parameter (and this happened on many occasions).&lt;/p&gt;

&lt;p&gt;To many, command line interfaces have always been a bit overwhelming and unapproachable.&lt;/p&gt;

&lt;p&gt;I decided to try to solve this by implementing what everyone already loved in the Windows environment - the GUI. I wanted to create an app that used a configurable graphical interface with the ability to attach CLI commands to buttons and execute them directly in the Windows Terminal app.&lt;/p&gt;

&lt;p&gt;I've been literally using it on a daily basis for everything I can think of, from checking my Git repo state when I first open the app to switching projects and running commands on remote machines using WSL. I use it for checking the state of my servers as well as switching Docker containers on the fly for a given project. I start and stop my containerized databases depending on the project I'm working on. It helps that a single click on the tray icon opens my app and the Windows Terminal side by side and displays the results of project commands that already executed in the configured terminal tab.&lt;/p&gt;

&lt;p&gt;Since the possibilities are endless for what can be automated using this approach, I thought I'd post about it here just to let you guys know that it's available. I don't want to sound spammy, but it's been super useful, so I just wanted to get this out there.&lt;/p&gt;

&lt;p&gt;Here is a list of some of the things you can do with &lt;a href="https://commandgit.com" rel="noopener noreferrer"&gt;CommandGit&lt;/a&gt;. It took me a long time to get it to this point on my own, which is why I didn't open source it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj4fgym8timj6qw69b1u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpj4fgym8timj6qw69b1u.png" alt=" " width="800" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Distributing command output via popular platforms like Slack, Microsoft Teams, or email.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16y7cvrcoacytjaydr3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16y7cvrcoacytjaydr3k.png" alt=" " width="800" height="1064"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ability to insert user input at any point of a command via configurable user input prompts, with any number of custom screens per command. Super useful if you have a Git command where you need to pass in a Jira ticket#. You can configure a button with a Git command and add a user input capture to prompt for the ticket number when you execute the command.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A date picker capture screen for embedding dates into commands before they execute.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run multiple commands from a single button. This means that you can, for example, run git fetch and git status with one click. This applies to any CLI command, and also any number of commands can be grouped.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure scripts to run with a button click. Bash, Python, Go, etc...&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AI-powered (ChatGPT) features for explaining commands, listing similar commands, creating commands from descriptions, and correcting commit messages.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Grouping commands into projects and categories within projects.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Capture project notes and command descriptions. Command titles are also saved next to commands for easy recognition.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy and move commands across projects and categories. Sort commands alphabetically or as added to the category.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjan56x0lr6dfwkk8moff.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjan56x0lr6dfwkk8moff.png" alt=" " width="607" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Link commands across projects and categories to prevent duplication of commands. A report of all the linked commands across all projects and categories is also available.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zycabu2n5oudrpbvcqj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zycabu2n5oudrpbvcqj.png" alt=" " width="800" height="1072"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Since CommandGit works by inserting commands into the Windows Terminal, you can use any terminal screen to run commands in. You can, for example, use a button that will open a Linux tab and send commands to it. The next button can send commands to PowerShell by auto-opening that tab and running a Windows command. You can also put this in a single button and open tabs as the commands are executed. I also added a built-in sleep command that does not get interpreted by the terminal but only within CommandGit. It's ideal for waiting between commands. Let's say, wait 300 milliseconds to open another tab and start sending commands to it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Warning before executing sensitive commands. This saved me many times. Click a button and receive a message box warning, asking if you really want to execute that command. This is a configurable property for each command. The main screen has indicators for different command properties, so you can see at a glance which commands have warning prompts set or which have a user input box configured.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Execute an external application before and after command execution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Menu options for quickly creating Git repositories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Information screens showing all linked commands across projects and categories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;View reports of all scheduled commands. Easily see what commands run on a schedule across all your projects and categories. Super useful to find that pesky command that still runs after you thought you disabled it and forgot on which command it was set.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnrbg3tfkwhzdp3zr3l8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnrbg3tfkwhzdp3zr3l8.png" alt=" " width="800" height="700"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ability to set a default project to navigate to when the application starts or navigate to the last opened project on startup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ability to configure multiple commands on a project level that execute when you navigate to that project in CommandGit. This is great when you want to see the state of the project's repo just by navigating to it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Favorite category feature for easy access to frequently used commands.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Solo mode for clean navigation through categories. Always collapsed or expanded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I wrote and tested over 400 CLI commands that are now built-in and ready for use with any project. You can copy, move or modify them to your heart's content.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Supported distribution of command output based on configured criteria, such as search terms, regular expressions, and case sensitivity, with options for distribution via Teams, Slack, email, or a pop-up dialog box on the screen.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ability for the application to start with Windows startup and be ready for commands in seconds. Access to the main window from the system tray.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Option to save the screen position of the terminal window and the application main window for the next startup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tray icon notifications when commands execute on schedule in the background.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Spell checker for all user input boxes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save the state of all commands, projects, and category settings for the last 1000 application startups for peace of mind. This serves as a quick backup of the last 1000 changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easy access to most application features through an intuitive user interface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Intuitive search for commands across projects and categories.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optional background command desktop popup to view command results.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn09cwc1io8k2kw8fexsr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn09cwc1io8k2kw8fexsr.png" alt=" " width="800" height="1072"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope you find this useful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://commandgit.com" rel="noopener noreferrer"&gt;CommandGit.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks&lt;/p&gt;

</description>
      <category>git</category>
      <category>cli</category>
      <category>softwaredevelopment</category>
      <category>powershell</category>
    </item>
    <item>
      <title>Building CommandGit: A Developer's Tale of Passion and Innovation</title>
      <dc:creator>Daniel Hofman</dc:creator>
      <pubDate>Tue, 16 Apr 2024 21:05:15 +0000</pubDate>
      <link>https://forem.com/danhof/building-commandgit-a-developers-tale-of-passion-and-innovation-43p3</link>
      <guid>https://forem.com/danhof/building-commandgit-a-developers-tale-of-passion-and-innovation-43p3</guid>
      <description>&lt;h2&gt;
  
  
  &lt;a href="https://commandgit.com" rel="noopener noreferrer"&gt;CommandGit&lt;/a&gt; Journey
&lt;/h2&gt;

&lt;p&gt;For a number of years, I have been working as a software engineer and have always been interested in developer tools that make my job easier. During my free time, I have created several free automation applications and utilities, allowing me to explore technologies I may not have had the chance to use in my daily work. Many of these projects were driven by the hope that they could help my colleagues, while also providing me with the opportunity to stay up-to-date with the latest trends in the industry. It has been very rewarding to see my software being used by my work peers.&lt;/p&gt;

&lt;p&gt;In 2012, I attempted to introduce Git at my workplace as a replacement for the outdated Subversion version control system. Some of my colleagues were hesitant to adopt it due to its command line interface. To address this issue, I began developing CommandGit, a graphical user interface for Git that would make it easier and more user-friendly for my colleagues to use. By combining the power of the command line with the simplicity of a graphical interface, I hoped to help my team overcome their hesitation and fully embrace Git.&lt;/p&gt;

&lt;p&gt;As a developer, I understand the challenges that come with using a command line interface. I also recognize the benefits of using a graphical user interface to make certain tasks easier and more approachable. With these ideas in mind, I set out to create a tool that would combine the power and flexibility of a CLI with the simplicity and accessibility of a GUI.&lt;/p&gt;

&lt;p&gt;My goal with CommandGit was to create a transparent, user-friendly tool that would enable developers to easily create and execute Git commands and scripts, while still providing them with the knowledge and understanding of what those commands were doing behind the scenes. I wanted to create a tool that would not only serve as a replacement for the command line, but also as a learning tool that would help developers gain confidence and familiarity with the command line.&lt;/p&gt;

&lt;p&gt;By combining the power of the CLI with the simplicity of a GUI, I believed CommandGit could help developers overcome their hesitance to use the command line and fully embrace the benefits of Git.&lt;/p&gt;

&lt;p&gt;In the early stages of development, I used C++ and MFC to build CommandGit. However, after a year of slow progress, I decided to switch to C# and Windows Forms in order to speed up the development process. I was using these technologies in my day job and believed that .NET would continue to improve and address any deficiencies in the platform.&lt;/p&gt;

&lt;p&gt;As I continued to work on CommandGit, I realized that I had invested a significant amount of personal time into the project and began to consider monetizing it. At the same time, I recognized that the Windows Forms user interface was functional, but did not have the modern look and feel that I wanted for a commercial application.&lt;/p&gt;

&lt;p&gt;Throughout the development process, I relied heavily on Git and a bug tracking system to organize my work and track progress. As CommandGit became usable, I started using it for all of my Git commands related to the development of the application, which allowed me to test it extensively and quickly fix any bugs that I discovered. Overall, the development of CommandGit has been a valuable learning experience for me and has helped me to improve my skills as a developer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Back to coding
&lt;/h2&gt;

&lt;p&gt;To improve the appearance of my Windows Forms application without the use of third-party plugins, I decided to redesign and re-engineer the GUI layer from the ground up. This required additional time and effort on my part, but I believed that it would be worth it in the end. I chose to use WPF for the new GUI because it offered greater flexibility and ease of development compared to other options.&lt;/p&gt;

&lt;p&gt;The switch to WPF proved to be a valuable decision, as it enabled me to create more attractive and user-friendly screens for my application. The built-in support for DPI awareness and the ability to easily size and arrange screen controls made the development process much smoother. Additionally, the use of WPF allowed my application to look great on high-resolution screens without any fuzzy text elements, which was a significant improvement over the Windows Forms version.&lt;/p&gt;

&lt;p&gt;Throughout the development process, I remained committed to avoiding the use of external libraries in my code whenever possible. This allowed me to maintain control over the entire application and ensure that it met my standards for quality and performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  .NET 5
&lt;/h2&gt;

&lt;p&gt;When .NET 5 was released, I saw it as an opportunity to convert my application to the latest version of .NET. This would make the deployment and distribution of my application easier and more straightforward. Additionally, the performance improvements in .NET 5 were too significant to ignore.&lt;/p&gt;

&lt;p&gt;While I did miss the faster performance of my C++/MFC applications, the ease of development and deployment offered by .NET more than made up for it. The self-contained publishing option in .NET 5 was particularly useful, as it ensured that my application would run on my clients' computers regardless of the Windows environment.&lt;/p&gt;

&lt;p&gt;Overall, I am confident that the switch to .NET 5 was the right decision for my application, and I look forward to continuing to improve and optimize it in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  Backend API and Cloud Computing
&lt;/h2&gt;

&lt;p&gt;I started to look at a way to implement my licensing and application update model via a cloud service. There were a few options to consider. I ended up choosing Azure, since I used AWS at my day job and wanted to learn something new. Azure's serverless option seemed like a good fit: serverless functions are part of Azure's pay-as-you-go model, so you only pay for what you use. I spent all my free time implementing this technology on the back end.&lt;/p&gt;

&lt;p&gt;Since my app was a hybrid at this point, a desktop app with cloud components, I quickly ran into one downfall of serverless architecture: it falls asleep. Yep, my desktop application could take up to 30 seconds to start while checking the user's license or the state of the trial period. I found workarounds like Azure Logic Apps, which worked but defeated the purpose of the pay-as-you-go model. Basically, a logic app would run on a schedule and wake up the serverless API every five seconds, so my users would still experience loading delays tied to that five-second window, and the serverless cost went up. This was unacceptable, especially for a desktop application that is supposed to be snappy, unlike its web app cousins.&lt;/p&gt;

&lt;h2&gt;
  
  
  A New Beginning
&lt;/h2&gt;

&lt;p&gt;As I previously mentioned, I used CommandGit for all of my Git interactions and it worked well for me. I enjoyed using my app for development and being able to test it while continuing to work on it. However, the design and requirements for my project had shifted towards cloud and backend API development, which required cloud CLIs and deployment solutions that I was not comfortable with. As a single developer, these solutions seemed overly complex and like a waste of time. In addition, I was tired of constantly typing CLI commands. While CommandGit took care of my Git needs, the numerous other CLI commands were becoming tedious. My backend solution was still a work in progress and needed more research, which would likely involve learning new CLIs.&lt;/p&gt;

&lt;p&gt;As I continued to think about the problem, I had a breakthrough: I needed to make CommandGit work with any CLI. At first, I thought this was a crazy idea and that it wouldn't work. But as I considered it more carefully, I realized that it was not as crazy as it initially seemed and that I was actually closer to a viable solution than I had thought. With this realization, I became more confident that I could make CommandGit work with any CLI, and I began to explore how I could make this happen.&lt;/p&gt;

&lt;p&gt;With this new goal in mind, I set out to make it happen. The first step was to integrate other shells into CommandGit. I already loved using Git Bash for my Git commands, so I decided to add Windows PowerShell and the Command Prompt as well. This allowed me to write and execute PowerShell scripts or CMD commands from a single button, depending on the type of terminal I had configured for the project or opened from the toolbar. This added flexibility and made it easier for me to work with different types of CLIs from within CommandGit.&lt;/p&gt;
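&lt;p&gt;Dispatching one command string to whichever shell a project is configured for boils down to choosing an argv prefix per shell. Here is a rough Python sketch of the idea; the dispatcher and its names are hypothetical, not CommandGit's implementation, though the PowerShell and cmd flags are the shells' real ones, and "sh" is included so the example also runs outside Windows.&lt;/p&gt;

```python
import subprocess

# argv prefixes for each supported shell; the Windows entries assume the
# shells are on PATH
SHELLS = {
    "powershell": ["powershell", "-NoProfile", "-Command"],
    "cmd": ["cmd", "/c"],
    "bash": ["bash", "-c"],
    "sh": ["sh", "-c"],
}

def run_in_shell(command, shell="sh"):
    """Execute one command string under the configured shell, return stdout."""
    argv = SHELLS[shell] + [command]
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout.strip()

print(run_in_shell("echo hello", shell="sh"))
```

&lt;p&gt;Each button only needs to store its command text plus a shell key; the same command can then be rerun under a different shell by swapping the key.&lt;/p&gt;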

&lt;p&gt;To enable running any command at any time, I redesigned the flow of CommandGit. The new toolbar included buttons for switching between shells, which made it easier to use different CLIs. I also wrote over 400 commands and organized them into separate categories to help users get started. Initially, I hadn't planned on doing this, but since I was using the app regularly for my own development, I decided to share my knowledge with others by including some built-in commands. This added value to the app and made it even more useful for developers.&lt;/p&gt;

&lt;p&gt;In designing the main application screen, I tried to keep it slim and simple, with the idea of having it displayed on the left side of the terminal as an easy-to-use addition without taking up the full screen. I wanted the process of running commands to be straightforward and transparent, with a list of available commands on one side and the terminal screen on the other. I also added the ability to save the screen positions of the terminal and the CommandGit application's main screen, so that users could easily maintain a consistent and efficient workflow. Overall, I aimed to create a simple and intuitive interface that made it easy for developers to use CommandGit with any CLI.&lt;/p&gt;

&lt;p&gt;As I continued to expand the capabilities of CommandGit, I had another realization: in addition to executing individual commands or groups of commands with a single button click, users could also execute commands on a schedule in the background. This was a powerful idea with many potential uses, and I could see myself using it in my own CommandGit development, such as checking the health of my Linux instances in Azure or quickly pulling NGINX error logs from a web server. I didn't have time to set up and manage open-source telemetry tools, which seemed like too much work and increased the risk of failure or wasted time learning the wrong technology. The ability to schedule commands in CommandGit was exactly what I needed and I was excited to make it available to others who might find it useful.&lt;/p&gt;

&lt;p&gt;To be able to understand the outcome of scheduled commands, I created a command logging system for CommandGit. This system included a user interface and a logging mechanism that captured the output of all scheduled commands. I also added filters and log categories to make it easy to navigate the log data and quickly find the information that was relevant. This enabled users to track the results of their scheduled commands and understand the output of those commands more easily.&lt;/p&gt;

&lt;p&gt;In some cases, users may want to know the results of their commands at the time of their execution. To address this need, I added the ability to display the results of commands as they are running in CommandGit. This allows users to see the output of their commands in real time, rather than having to wait for the command to finish before viewing the results. This can be useful for monitoring the progress of long-running commands or for getting feedback on the results of commands as they are executed.&lt;/p&gt;
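&lt;p&gt;Showing output while a command is still running amounts to reading the child process's stdout line by line instead of waiting for the process to exit. A minimal Python sketch of that pattern (illustrative, not CommandGit's code):&lt;/p&gt;

```python
import subprocess

def stream_command(argv):
    """Yield a command's output line by line while it is still running."""
    proc = subprocess.Popen(
        argv, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    try:
        # each line is available as soon as the child flushes it
        for line in proc.stdout:
            yield line.rstrip("\n")
    finally:
        proc.wait()

for line in stream_command(["sh", "-c", "echo step 1; echo step 2"]):
    print(line)
```

&lt;p&gt;Because the lines arrive incrementally, a UI can append them to the terminal view as they come in, which is what makes long-running commands feel responsive.&lt;/p&gt;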

&lt;p&gt;The distribution portion of the scheduling was born. I designed and implemented a way to capture the command output, so any scheduled command could report its outcome and notify the user when it executed. Initially, this was a popup screen on the desktop, which was okay, but I then realized that working in a team environment required some form of distribution of command output. I quickly started work on the command distribution system and added the ability to send data via email, Slack, and Teams. I was quite happy with this approach, and the app became even more useful.&lt;/p&gt;

&lt;p&gt;That was great, but what if I only needed to distribute the command output sometimes? Let's assume the NGINX logs had no errors and the output of my scheduled command was boring and negligible most of the time. Distributing this information would be disruptive for my team if someone had to look at a new Slack message every five minutes, since I had configured the schedule for five-minute intervals. I did want to catch any issues within a very short time of them occurring; that part was correct and made sense. The distribution, on the other hand, did not meet the necessary need-to-know approach. No one needed to know that everything was running smoothly and there were no issues. It should be more in line with "no news is good news".&lt;/p&gt;

&lt;p&gt;The conditional distribution criteria portion of the scheduling was born. Since I already had the command output, I could scan that data and match it against the criteria on the distribution screen; for example, distribute the output of a scheduled git status command only when it indicated that my server branch had new commits, so I would hear about them before I had too many conflicts to wrestle through. This implementation worked well, another chapter of the application felt complete, and I could move on to the next breakthrough.&lt;/p&gt;
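&lt;p&gt;The conditional criteria described above, a search term or regular expression plus a case-sensitivity option, reduce to a small matching function. A hedged Python sketch (the function name and the sample git status line are illustrative, not CommandGit internals):&lt;/p&gt;

```python
import re

def should_distribute(output, criteria, use_regex=False, case_sensitive=False):
    """Return True when command output matches the configured criteria."""
    if use_regex:
        flags = 0 if case_sensitive else re.IGNORECASE
        return re.search(criteria, output, flags) is not None
    if not case_sensitive:
        output, criteria = output.lower(), criteria.lower()
    return criteria in output

status = "Your branch is behind 'origin/main' by 2 commits."
# notify the team only when the scheduled git status shows new commits
if should_distribute(status, r"behind .* by \d+ commits", use_regex=True):
    print("distribute: new commits on the server branch")
```

&lt;p&gt;Everything that does not match the criteria is simply logged and never distributed, which is what turns a noisy five-minute schedule into "no news is good news".&lt;/p&gt;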

&lt;p&gt;Another large portion of the design was the search capability. Since we are dealing with many buttons across multiple categories, I had to invent a way to easily search for and find relevant commands. This was not easy, and I tried many different solutions before arriving at what exists today. It is not the most straightforward, but I can always find what I'm looking for relatively quickly, even with many commands in a project.&lt;/p&gt;
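&lt;p&gt;At its core, searching commands across projects and categories is a walk over the command tree matching names and descriptions. A simplified Python sketch follows; the data shape is a hypothetical stand-in, since CommandGit's internal storage isn't described here.&lt;/p&gt;

```python
def search_commands(projects, term):
    """Case-insensitive search over command names and descriptions,
    across every project and category."""
    term = term.lower()
    hits = []
    for project, categories in projects.items():
        for category, commands in categories.items():
            for name, description in commands.items():
                if term in name.lower() or term in description.lower():
                    hits.append((project, category, name))
    return hits

projects = {
    "backend": {
        "git": {"git status": "Show working tree status"},
        "azure": {"az vm list": "List virtual machines"},
    },
    "web": {
        "nginx": {"tail error log": "Tail the NGINX error log"},
    },
}
print(search_commands(projects, "log"))
```

&lt;p&gt;Returning the project and category alongside each hit is what lets a UI jump straight to the matching button rather than just listing names.&lt;/p&gt;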

&lt;p&gt;Finally, I was able to configure and use any CLI I could get my hands on, right from within CommandGit. That was a great accomplishment, and I truly enjoyed the end result. Maybe it's not the last Coca-Cola in the desert, but it gets the job done and beats hand-typing or hunting for old commands for half the day, not to mention the many convenience functions like color-coding categories, scheduling commands, distributing command results to others, or setting up safety message screens before executing sensitive commands. And again, I started to move on to the next revelation as I began to use CommandGit with all its new features.&lt;/p&gt;

&lt;p&gt;I will spare you the rest of the details here; they are all covered in the help file. What I can tell you for sure is that development of this application will not stop for a very long time. I am always looking for new and innovative ways to improve it, and I welcome suggestions and constructive criticism. We all learn from our mistakes, and this is no exception. Drop me a line whether you love the app or hate it, and let me know what I can do to make it more useful for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  Payment Model
&lt;/h2&gt;

&lt;p&gt;Once the main application was developed, I had to get back to something that had been lingering in the background for a long time. How could I distribute the app as a desktop application, yet implement a payment model and still keep the application secure for me and for my customers? Sure, Adobe has done it with their suite of applications that I enjoy and use, but they have a gazillion developers and basically unlimited resources.&lt;/p&gt;

&lt;p&gt;I needed to come up with a payment system and a secure implementation of it to make the licensing model useful. Most SaaS implementations are straightforward: everything is web based, so you control access via user signup and you know who can log in and who cannot. In my hybrid model, it's not so clear, as CommandGit is a Windows application that needs to be installed locally while managing its paid users via a cloud backend.&lt;/p&gt;

&lt;p&gt;I also wanted a fully functional trial run of CommandGit without any signups or payment information. I know I don't like to give out my email or my card info just to try an application that I may not like or care to use after the trial ends. Not to mention that some apps make it really difficult to cancel such subscriptions, and I just didn't want to put my users through that.&lt;/p&gt;

&lt;p&gt;This was not the easiest thing to do, but the end result was an acceptable solution. I used Microsoft Azure for the cloud API, and with some JavaScript and C# code in the Windows application, I was able to implement something manageable. Nothing is hacker-proof, and my app is no exception, but my earlier objectives were accomplished and I am quite happy with the outcome.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Current State
&lt;/h2&gt;

&lt;p&gt;At the moment (2024), the main application is a C# and WPF implementation utilizing the .NET technology.&lt;/p&gt;

&lt;p&gt;The cloud API was moved from the serverless model to a Linux-based OS running on Azure, utilizing NGINX as the web server with all the goodness NGINX has to offer.&lt;/p&gt;

&lt;p&gt;The application is still a work in progress and being improved as much as possible. So many developers, DevOps engineers, and sysadmins are now working from home, and so many were thrown into the CLI world without a chance to adequately prepare. Then there is the group that never really subscribed to the CLI paradigm and never cared for it more than was necessary to accomplish tasks at work. I have to admit, I am partially in that group. This is why I thought of creating CommandGit in the first place. Sure, I wanted to help others, but I also wanted to help myself just as much. I think there are many of us who can benefit from this GUI/CLI duo, so give it a chance; it may be worth your time.&lt;/p&gt;

&lt;p&gt;All the application development and design is done by me. I am happy to learn new technologies and just as happy to dive in and start coding and implementing my visions. If there is a person you would like to blame for a badly implemented feature, that would be me. :) Feel free to send me your hate mail or a few words of encouragement. I read everything, so even if I don't have time to reply right away, rest assured that I read your message and am either learning from it or have deleted it. ;)&lt;/p&gt;

&lt;p&gt;And yes, there is a free trial, so please take a look and hopefully you will find it helpful.&lt;/p&gt;

&lt;p&gt;Thank you for reading!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://CommandGit.com" rel="noopener noreferrer"&gt;https://CommandGit.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>git</category>
      <category>cli</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>How Serverless Almost Killed my App</title>
      <dc:creator>Daniel Hofman</dc:creator>
      <pubDate>Sat, 13 Apr 2024 18:24:25 +0000</pubDate>
      <link>https://forem.com/danhof/how-serverless-almost-killed-my-app-27p2</link>
      <guid>https://forem.com/danhof/how-serverless-almost-killed-my-app-27p2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As a developer who has been around the block a few times, I have experienced everything from the molasses-like performance of Oracle's Java database tools to the lightning-fast responsiveness of C++ applications, and I've learned that every project comes with its own set of challenges and lessons. In my quest to monetize my personal project, CommandGit, a tool designed to manage and execute CLI commands, I embarked on a journey to integrate payment processing and license verification using Azure's serverless functions. Little did I know, this decision would lead me down a path fraught with performance pitfalls and the realization that sometimes even the most cutting-edge technologies are not the right fit for every scenario.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Initial Allure of Serverless
&lt;/h2&gt;

&lt;p&gt;A few years ago, after months of countless hours developing CommandGit, I decided it was time to monetize my hard work. Ready to expand my skillset and escape the punishment of working with the relatively sluggish C# (hey, at least it's not Java!), I chose to explore Azure's serverless offerings for handling payment processing and license verification. The free tier and scalability promises of serverless computing seemed too good to be true – I could potentially onboard numerous users without incurring significant infrastructure costs. Considering my familiarity with .NET, the decision to build the backend API using Azure Functions felt like a good choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Smooth Sailing (Initially)
&lt;/h2&gt;

&lt;p&gt;The initial development phase went remarkably well. I seamlessly integrated PayPal for payment processing, implemented NGINX for load balancing and throttling (who doesn't love a good throttling strategy to avoid waking up to a DDOS-induced serverless financial nightmare?), and even ventured into the world of NoSQL databases by utilizing Azure Cosmos DB with MongoDB. Everything was functioning as expected, and I felt confident in my technical choices, perhaps a bit too confident.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cold Start Conundrum
&lt;/h2&gt;

&lt;p&gt;My decision to validate licenses and trial periods during application startup would soon become a problem. As CommandGit users began reporting sluggish startup times, I delved deeper into the issue, only to discover the dreaded "cold start" problem associated with serverless functions. Azure's documentation revealed that cold starts could lead to delays of up to 30 seconds for functions on the free tier: an eternity in the world of desktop applications, and a stark reminder of my days working with those molasses-like Java tools.&lt;/p&gt;

&lt;p&gt;Disappointed, I set out on a quest to find a solution, unwilling to rewrite substantial portions of my codebase that I had poured my blood, sweat, and tears into (okay, maybe not blood, but you get the idea). After some research, I stumbled upon Azure Logic Apps as a potential workaround. By periodically invoking my serverless functions, I could theoretically keep them "warm" and reduce cold start times. While this approach yielded some improvement, the lingering delay was still unacceptable for a desktop application that demanded the snappy responsiveness I had come to expect from my beloved C++ days.&lt;/p&gt;
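&lt;p&gt;The keep-warm workaround amounts to pinging the serverless endpoint on a fixed schedule so it never idles long enough to go cold. A small Python sketch of that loop; the ping callable is injected (in practice it would be an HTTP GET against the function's endpoint), and the interval and round count here are illustrative rather than the Logic App's actual configuration.&lt;/p&gt;

```python
import time

def keep_warm(ping, interval_seconds=300, rounds=3):
    """Call `ping` on a fixed schedule to keep a serverless endpoint warm.

    A real keep-warm job would loop indefinitely; `rounds` bounds the loop
    so the sketch terminates.
    """
    for _ in range(rounds):
        ping()
        time.sleep(interval_seconds)

calls = []
keep_warm(ping=lambda: calls.append("pinged"), interval_seconds=0, rounds=3)
print(len(calls))  # 3
```

&lt;p&gt;The catch, as described above, is that you now pay for the pings themselves, which undercuts the pay-as-you-go appeal of serverless in the first place.&lt;/p&gt;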

&lt;h2&gt;
  
  
  The Ultimate Resolution
&lt;/h2&gt;

&lt;p&gt;Faced with the realization that serverless computing might not be the optimal solution for my use case, I made the difficult decision to migrate to a dedicated Azure Linux instance. By transitioning the API and NGINX components to a dedicated environment, I could eliminate cold starts entirely and maintain consistent performance.&lt;/p&gt;

&lt;p&gt;With only minor code changes required, I was able to preserve the majority of my existing codebase, including the Cosmos DB integration. It was a relief to know that my adventures into the world of NoSQL databases wouldn't go to waste, even if the serverless dream had been shattered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Looking back, my journey with Azure's serverless functions taught me a valuable lesson: no matter how cutting-edge a technology may be, it's crucial to thoroughly evaluate its suitability for your specific requirements. While serverless computing offers undeniable advantages in certain scenarios, its limitations became apparent when applied to a desktop application with stringent performance demands.&lt;/p&gt;

&lt;p&gt;It's worth noting that both Azure and AWS have since introduced solutions to address the cold start issue. Azure now offers a Premium plan for Azure Functions, which keeps instances warm and ready to handle requests immediately, reducing cold start times significantly, and it has made runtime optimizations that improve cold starts for .NET workloads using the isolated worker process model. AWS, for its part, offers provisioned concurrency for AWS Lambda functions, allowing users to keep their functions initialized and ready to respond to invocations without any cold starts. These advancements demonstrate the ongoing efforts by cloud providers to enhance the serverless experience and cater to a wider range of performance requirements.&lt;/p&gt;

&lt;p&gt;For new developers, my experience serves as a reminder to carefully research and weigh the pros and cons of any technology before committing to it fully. And for seasoned professionals like myself, it's a humbling reminder that even the experienced among us can stumble when venturing into unfamiliar territory. &lt;/p&gt;

&lt;p&gt;I hope you'll take something away from this post, or if nothing else, at least you got a few chuckles out of my misadventures.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>cli</category>
      <category>cloud</category>
      <category>dotnet</category>
    </item>
    <item>
      <title>Navigating the Resilience Landscape: Polly &amp; Temporal</title>
      <dc:creator>Daniel Hofman</dc:creator>
      <pubDate>Fri, 12 Apr 2024 20:34:15 +0000</pubDate>
      <link>https://forem.com/danhof/navigating-the-resilience-landscape-polly-temporal-okn</link>
      <guid>https://forem.com/danhof/navigating-the-resilience-landscape-polly-temporal-okn</guid>
      <description>&lt;p&gt;Both Polly and Temporal have become cornerstones in the quest for resilience in software systems. Polly is a library that boosts the resilience of .NET applications, effectively managing transient failures through strategies like retries, timeouts, and fallbacks. Temporal, on the other hand, manages long-running, reliable workflows, ensuring systems are fault-tolerant.&lt;/p&gt;

&lt;p&gt;Polly's fluent API allows developers to easily create and manage retry policies, complementing Temporal's capabilities in orchestrating complex workflows. Although they operate at different levels of abstraction, both frameworks are united in their goal to architect systems that are robust against failures.&lt;/p&gt;
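&lt;p&gt;Polly itself is a .NET library, so real policies are written in C# via its fluent API; as a language-neutral illustration, here is the retry-with-exponential-backoff pattern that a Polly WaitAndRetry policy expresses, sketched in Python.&lt;/p&gt;

```python
import time

def retry(operation, max_attempts=3, base_delay=0.5):
    """Retry a transient operation with exponential backoff, the same
    pattern Polly's WaitAndRetry policy expresses in .NET."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise
            # 0.5s, 1s, 2s, ... between attempts (with the default base_delay)
            time.sleep(base_delay * 2 ** (attempt - 1))

attempts = []
def flaky():
    # hypothetical operation that fails twice, then succeeds
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky, max_attempts=5, base_delay=0))  # "ok" after two failures
```

&lt;p&gt;Polly layers this kind of policy with timeouts, circuit breakers, and fallbacks; Temporal instead persists workflow state so retries survive process and host failures, which is the difference in abstraction level the surrounding paragraphs describe.&lt;/p&gt;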

&lt;p&gt;Polly's architecture allows for detailed policy configuration, which can dictate how often to retry an operation, or manage multiple bulkhead instances to prevent system overloads. Temporal's strength lies in its ability to maintain state across retries and even in the face of system failures, which is critical for applications requiring high reliability over long execution times.&lt;/p&gt;

&lt;p&gt;A technical deep dive reveals that both frameworks are not just about preventing failures but about managing them so efficiently that systems can continue to operate and recover without losing critical data or functionality. By leveraging both Polly and Temporal, developers gain a full spectrum of tools to enhance application reliability and resilience.&lt;/p&gt;

&lt;p&gt;Resources:&lt;br&gt;
&lt;a href="https://learn.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly" rel="noopener noreferrer"&gt;https://learn.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/implement-http-call-retries-exponential-backoff-polly&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://thepollyproject.azurewebsites.net/about/" rel="noopener noreferrer"&gt;https://thepollyproject.azurewebsites.net/about/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://temporal.io" rel="noopener noreferrer"&gt;https://temporal.io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>temporal</category>
      <category>development</category>
      <category>polly</category>
    </item>
  </channel>
</rss>
