<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Amit Singh</title>
    <description>The latest articles on Forem by Amit Singh (@semmet).</description>
    <link>https://forem.com/semmet</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3682841%2F21248b3c-8911-43c2-8e5c-56a182a45193.jpg</url>
      <title>Forem: Amit Singh</title>
      <link>https://forem.com/semmet</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/semmet"/>
    <language>en</language>
    <item>
      <title>Source Score: Using AI to automate addition of new sources</title>
      <dc:creator>Amit Singh</dc:creator>
      <pubDate>Mon, 11 May 2026 11:14:19 +0000</pubDate>
      <link>https://forem.com/semmet/source-score-using-ai-to-automate-addition-of-new-sources-ego</link>
      <guid>https://forem.com/semmet/source-score-using-ai-to-automate-addition-of-new-sources-ego</guid>
      <description>&lt;p&gt;&lt;em&gt;This post is a continuation of a microservice I've been building. You can check out my last post in the series &lt;a href="https://dev.to/semmet/continuing-the-microservice-journey-3n97"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;h1&gt;
  
  
  Automate News Source Ingestion with Firecrawl, OpenRouter, and GitHub Actions
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: I turned a manual, copy‑paste routine for adding news outlets into a fully automated, monthly GitHub Actions workflow. The pipeline scrapes a ranking page with Firecrawl, extracts clean URLs using three free‑tier LLMs on OpenRouter, generates &lt;code&gt;Source&lt;/code&gt; YAML files, and opens a PR that puts the new sources on the live dashboard as soon as it is merged.&lt;/p&gt;




&lt;h2&gt;
  
  
  How it all began
&lt;/h2&gt;

&lt;p&gt;When I first ingested sources into the &lt;em&gt;source‑score&lt;/em&gt; database, I seeded it with five manually added outlets. That was enough to verify the endpoints worked, but I kept thinking about the “real” world: the top global media brands that people actually read. I wanted the repo to stay fresh without someone constantly opening PRs to add new sources.&lt;/p&gt;

&lt;p&gt;The idea was simple on paper: &lt;strong&gt;fetch the latest list of popular English‑language news sites, turn each entry into a valid &lt;code&gt;Source&lt;/code&gt; document, and let the existing CI validate and ingest them&lt;/strong&gt;. In practice, though, I ran into three big hurdles (each of which breaks down into its own little challenges, as you'll see):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Finding a reliable source&lt;/strong&gt; – I discovered a &lt;a href="https://pressgazette.co.uk/media-audience-and-business-data/media_metrics/most-popular-websites-news-world-monthly-2/" rel="noopener noreferrer"&gt;page&lt;/a&gt; on the &lt;strong&gt;PressGazette&lt;/strong&gt; website that publishes the top 50 news sites each month, but the raw HTML was going to be a mess. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extracting just the URLs&lt;/strong&gt; – The page’s markup mixed headlines, ads, and footnotes. A plain regex wasn't going to cut it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keeping the process cheap&lt;/strong&gt; – I didn’t want to spin up a paid scraper or a paid LLM every month.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  The first breakthrough: Firecrawl does the heavy lifting
&lt;/h2&gt;

&lt;p&gt;Firecrawl’s Python SDK made the scraping step painless, and their free‑tier limits are high enough for my requirements. A tiny &lt;a href="https://github.com/SatyaLens/sources/blob/main/scripts/scrape_firecrawl.py" rel="noopener noreferrer"&gt;wrapper&lt;/a&gt; takes a URL and returns clean Markdown:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;main&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Usage: scrape_firecrawl.py URL&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stderr&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;argv&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FIRECRAWL_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;fc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Firecrawl&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;scrape&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;formats&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;markdown&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error scraping &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;markdown&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running it against the ranking page gives me a tidy Markdown blob that still contains a lot of noise, but it’s a far better starting point than raw HTML. I want to store the scraped content in plain text and forward it to the LLMs as-is, and for those requirements Markdown seemed the best of the available options.&lt;/p&gt;




&lt;h2&gt;
  
  
  Using LLMs to extract news source URLs from the scraped data
&lt;/h2&gt;

&lt;p&gt;Before diving into the technical details, I'd like to add that so far my AI usage had been limited to Claude CLI for high‑level project management, the Copilot VS Code extension for code generation, and various chat services (ChatGPT, Gemini, Kimi, etc.) for questions or for learning new stuff. None of them, however, was going to meet my new requirements. This, and the fact that I love free stuff, led me to &lt;a href="https://openrouter.ai/" rel="noopener noreferrer"&gt;OpenRouter&lt;/a&gt;, and I'm so glad I signed up. Shout‑out to them for their generous free‑tier limits and for giving people like me the option to use these new, super‑capable models for free.&lt;br&gt;&lt;br&gt;
Coming back to the main problem, the next step is where OpenRouter shines. I wrote a small &lt;a href="https://github.com/SatyaLens/sources/blob/main/scripts/openrouter.py" rel="noopener noreferrer"&gt;helper script&lt;/a&gt; that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Loads a &lt;strong&gt;Markdown‑processing skill&lt;/strong&gt; from my &lt;a href="https://raw.githubusercontent.com/semmet95/agent-skills/refs/heads/main/md-processing/SKILL.md" rel="noopener noreferrer"&gt;repo&lt;/a&gt;; I wrote it (using the power of googling) to help the model process MD files.
&lt;em&gt;I have some other skills there as well that I've been experimenting with to help me write these blogs; maybe I'll cover them later&lt;/em&gt; 😉&lt;/li&gt;
&lt;li&gt;Sends the scraped Markdown content to three free models (&lt;code&gt;gemma‑4‑31b‑it&lt;/code&gt;, &lt;code&gt;nemotron‑3‑nano‑omni‑30b&lt;/code&gt;, &lt;code&gt;gemma‑4‑26b&lt;/code&gt;) until one returns a non‑empty answer. Requests to the OpenRouter API sometimes fail because a model doesn't respond, so I iterate over these three models; so far at least one of them has always worked.&lt;/li&gt;
&lt;li&gt;Asks the model to list the &lt;em&gt;top‑10 latest most popular news outlets&lt;/em&gt; and output &lt;strong&gt;only URLs&lt;/strong&gt;, one per line.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;SOURCE_QUESTION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What are the top 10 latest most popular news outlets in the world listed in this document? Only output URLs of these news outlets separated by new lines. Do not output anything else.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process_raw_doc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;md_processing_skill&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;md_doc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;FREE_MODELS_DOC&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;md_processing_skill&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                 &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Here is a web page in Markdown:&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;md_doc&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                            &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Answer this question:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;SOURCE_QUESTION&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;raw_source_list&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;req_openrouter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;raw_source_list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;raw_source_list&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error: All models failed.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;raw_source_list&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
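&lt;p&gt;The &lt;code&gt;req_openrouter&lt;/code&gt; helper isn't shown above. Here's a minimal sketch of what such a helper might look like, assuming the standard OpenRouter chat‑completions endpoint; the function and error handling are my illustration, not the repo's actual implementation:&lt;/p&gt;

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def req_openrouter(payload: dict, api_key: str) -> str:
    """POST a chat-completion payload to OpenRouter and return the text reply.

    Returns an empty string on any failure so callers can fall through
    to the next model in their list.
    """
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    try:
        with urllib.request.urlopen(req, timeout=120) as resp:
            body = json.load(resp)
    except Exception:
        return ""
    return extract_reply(body)


def extract_reply(body: dict) -> str:
    # Chat-completion responses nest the text under choices[0].message.content.
    try:
        return body["choices"][0]["message"]["content"].strip()
    except (KeyError, IndexError, TypeError):
        return ""
```

Returning `""` on failure is what lets the `for model in FREE_MODELS_DOC` loop above treat a silent model as "try the next one".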


&lt;p&gt;The model’s answer still contains stray characters and occasional combined or shortened URLs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7de311ra4gfo6vz93ekj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7de311ra4gfo6vz93ekj.png" alt="New source URLs on the web page" width="734" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To get a clean list of URLs, I run a second OpenRouter call that uses the built‑in &lt;strong&gt;web‑search tool&lt;/strong&gt; to validate each line and verify that the URL is correct. Here is the payload for that second request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Here is a raw list of URLs of news outlets with each line containing one or more unformatted URLs:&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;raw_source_list&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use web search to access these URLs and discard those that are invalid. Do not scrape the web page, only check if the URL is valid&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Based on successful web searches, return a list of corresponding properly formatted URLs without any extra test. Keep only one URL per line.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tools&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openrouter:web_search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
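&lt;p&gt;Even after the web‑search pass, the reply is plain text that still needs normalizing before it can be trusted downstream. A small sketch of the kind of post‑processing involved (the function and its behavior are my illustration, not code from the repo):&lt;/p&gt;

```python
from urllib.parse import urlparse


def clean_url_list(raw_reply: str) -> list[str]:
    """Turn an LLM reply into a deduplicated list of http(s) URLs.

    Strips list markers and whitespace, drops lines that don't parse
    as absolute URLs, and removes trailing slashes for consistency.
    """
    urls = []
    for line in raw_reply.splitlines():
        candidate = line.strip().strip("*-• ").rstrip("/")
        parsed = urlparse(candidate)
        if parsed.scheme in ("http", "https") and parsed.netloc and candidate not in urls:
            urls.append(candidate)
    return urls
```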






&lt;h2&gt;
  
  
  From source URLs to proper &lt;code&gt;Source&lt;/code&gt; YAML
&lt;/h2&gt;

&lt;p&gt;With a verified list of URLs in hand, the next step is to generate a full &lt;code&gt;Source&lt;/code&gt; document for each outlet. To achieve that, yes, you guessed it: another OpenRouter API call.&lt;br&gt;
I add an already‑ingested &lt;a href="https://github.com/SatyaLens/sources/blob/main/sources/nyt.yaml" rel="noopener noreferrer"&gt;source YAML doc&lt;/a&gt; as a schema example, then ask the model to fill in the blanks, again using the web‑search tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Extract schema from the following yaml document and store it as source_schema:&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;sample&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Following is a list of urls of media outlets separated by new lines&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;sources&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use web search to fetch information about these media outlets and create yaml docs for each of them following the source_schema schema. Do not output anything except for the yaml documents for these medial outlets separated by ---.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tools&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openrouter:web_search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The response is a string of YAML manifests separated by &lt;code&gt;---&lt;/code&gt;. I split, filter, and &lt;strong&gt;deduplicate&lt;/strong&gt; against existing files to avoid overwriting anything.&lt;/p&gt;




&lt;h2&gt;
  
  
  Writing the new files (safely)
&lt;/h2&gt;

&lt;p&gt;The deduplication logic avoids creating ingestion docs for sources that already exist in the repository. It loads every YAML under &lt;code&gt;sources/&lt;/code&gt;, compares &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;uri&lt;/code&gt;, and returns only truly new entries. The final loop writes each doc to a sanitized filename:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc_str&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;unique_src_docs&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;parsed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;yaml&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;safe_load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;filename&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parsed&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;continue&lt;/span&gt;
    &lt;span class="n"&gt;filename&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sub&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[^A-Za-z0-9._-]+&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.yaml&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sources_dir&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exists&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;continue&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;w&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;encoding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc_str&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All files land in &lt;code&gt;sources/&lt;/code&gt; ready for the existing validation workflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  The actual automation: GitHub Actions workflow
&lt;/h2&gt;

&lt;p&gt;Now that all the scripts are ready, it's time for the final piece of the puzzle: the &lt;a href="https://github.com/SatyaLens/sources/blob/main/.github/workflows/fetch_new_sources.yml" rel="noopener noreferrer"&gt;scheduled CI job&lt;/a&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It runs on the first day of each month,&lt;/li&gt;
&lt;li&gt;checks out the repo,&lt;/li&gt;
&lt;li&gt;creates a new branch,&lt;/li&gt;
&lt;li&gt;sets up Python,&lt;/li&gt;
&lt;li&gt;runs a tiny wrapper &lt;a href="https://github.com/SatyaLens/sources/blob/main/scripts/source_scraper.sh" rel="noopener noreferrer"&gt;shell script&lt;/a&gt; that strings the three Python helpers together,&lt;/li&gt;
&lt;li&gt;commits any new YAML files,&lt;/li&gt;
&lt;li&gt;and opens a PR.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When the PR lands, the existing &lt;a href="https://github.com/SatyaLens/sources/blob/main/.github/workflows/validate.yml" rel="noopener noreferrer"&gt;validate.yml&lt;/a&gt; workflow validates the new YAML files, and once they are merged the &lt;a href="https://github.com/SatyaLens/sources/blob/main/.github/workflows/post_on_merge.yml" rel="noopener noreferrer"&gt;post_on_merge.yml&lt;/a&gt; workflow posts them to the API, ready to be fetched by the live dashboard.&lt;/p&gt;




&lt;h2&gt;
  
  
  The results so far
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Source fetch workflow&lt;/strong&gt;: &lt;a href="https://github.com/SatyaLens/sources/actions/runs/25617971999/job/75199251257" rel="noopener noreferrer"&gt;https://github.com/SatyaLens/sources/actions/runs/25617971999/job/75199251257&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated PR&lt;/strong&gt;: &lt;a href="https://github.com/SatyaLens/sources/pull/22" rel="noopener noreferrer"&gt;https://github.com/SatyaLens/sources/pull/22&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ingestion run&lt;/strong&gt;: &lt;a href="https://github.com/SatyaLens/sources/actions/runs/25628583734/job/75228007512" rel="noopener noreferrer"&gt;https://github.com/SatyaLens/sources/actions/runs/25628583734/job/75228007512&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live dashboard&lt;/strong&gt;: &lt;a href="https://satyalens.github.io/source-score-dashboard/" rel="noopener noreferrer"&gt;https://satyalens.github.io/source-score-dashboard/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All the new sources appear with correct URLs and short descriptions, and the CI passes without a hitch. The whole process now takes less than a minute of human time each month, and that too just to merge the PR 😁&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaoko8rpfq1wxv9kq5yv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxaoko8rpfq1wxv9kq5yv.png" alt="Updated live dashboard" width="800" height="705"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;Adding sources was the (relatively) low‑hanging fruit. The next challenge is to replicate the same pattern for &lt;strong&gt;claims&lt;/strong&gt; and &lt;strong&gt;proofs&lt;/strong&gt;: a larger data set, more complex validation, and a higher chance of model hallucination.&lt;br&gt;&lt;br&gt;
Completing this goal led me to discover OpenRouter and learn how to use AI agent skills. I can't wait to see what the next challenge brings.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By chaining Firecrawl, OpenRouter, and GitHub Actions, I turned a tedious, error‑prone task into a reliable monthly automation. The result is instantly visible on the live dashboard, and the repo stays in sync with the world’s most popular news outlets.&lt;br&gt;
If you try this yourself and have a suggestion or question, feel free to open an issue or drop a comment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foij8pb25wnpp3o0lpjor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foij8pb25wnpp3o0lpjor.png" alt="OpenRouter token usage" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Firecrawl docs – &lt;a href="https://www.firecrawl.dev/docs" rel="noopener noreferrer"&gt;https://www.firecrawl.dev/docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;OpenRouter API reference – &lt;a href="https://openrouter.ai/docs" rel="noopener noreferrer"&gt;https://openrouter.ai/docs&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




</description>
      <category>mcp</category>
      <category>python</category>
      <category>githubactions</category>
      <category>ai</category>
    </item>
    <item>
      <title>Continuing the Microservice Journey...</title>
      <dc:creator>Amit Singh</dc:creator>
      <pubDate>Tue, 05 May 2026 14:58:19 +0000</pubDate>
      <link>https://forem.com/semmet/continuing-the-microservice-journey-3n97</link>
      <guid>https://forem.com/semmet/continuing-the-microservice-journey-3n97</guid>
      <description>&lt;p&gt;&lt;em&gt;This post is a continuation of the microservice I've been building. You can check out my last post in this series &lt;a href="https://singhamit.medium.com/learning-microservices-from-scratch-part-2-2d022910459f" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Over the years, I've come across headlines that turned out to be half‑truths or outright hoaxes. Around the same time, I’ve also been spending a lot of time practicing microservice development in Golang, so I started wondering: why not build something that combines both interests? 🤷&lt;br&gt;
That idea became &lt;code&gt;source-score&lt;/code&gt;, a project that aims to rate news sources based on how often the claims they publish turn out to be true. It’s still very early and nowhere near finished 🫣, but I have a demo instance up and running. In this post, I’ll briefly walk through what the project is meant to do, how it currently works under the hood, and where I want to take it next.&lt;/p&gt;


&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Three repos work together to turn YAML documents into a live dashboard:  &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Repo&lt;/th&gt;
&lt;th&gt;What it does&lt;/th&gt;
&lt;th&gt;Key tech&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://github.com/SatyaLens/sources" rel="noopener noreferrer"&gt;sources&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Stores and validates YAML docs (sources, claims, proofs)&lt;/td&gt;
&lt;td&gt;Python, YAML, GitHub Actions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://github.com/SatyaLens/source-score" rel="noopener noreferrer"&gt;source‑score&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Go microservice that verifies claims and calculates scores&lt;/td&gt;
&lt;td&gt;Go, Gin, REST, Swagger&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://github.com/SatyaLens/source-score-dashboard" rel="noopener noreferrer"&gt;source‑score‑dashboard&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Static UI that polls the API and shows sources, claims, proofs&lt;/td&gt;
&lt;td&gt;HTML, CSS, vanilla JS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The demo instance is live at &lt;a href="https://satyalens.github.io/source-score-dashboard/" rel="noopener noreferrer"&gt;https://satyalens.github.io/source-score-dashboard/&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
I've manually added 5 news sources, with 2 claims for each and 1 proof backing up each claim.&lt;br&gt;
&lt;em&gt;The app is deployed on a Render free instance, so it might take a few seconds for it to come online and return data&lt;/em&gt; 🙏&lt;/p&gt;


&lt;h2&gt;
  
  
  The Idea
&lt;/h2&gt;

&lt;p&gt;The model is intentionally simple for now.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;source&lt;/strong&gt; is a media outlet or information provider. A &lt;strong&gt;claim&lt;/strong&gt; is something that source has published. A &lt;strong&gt;proof&lt;/strong&gt; is another link that either supports or refutes that claim.&lt;/p&gt;

&lt;p&gt;Once claims have proofs attached, the API can verify them. Right now, the verification logic is basic: if a claim has more supporting proofs than refuting proofs, it is marked valid. A source score is then calculated like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;valid checked claims / total checked claims
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So if a source has two checked claims, and both are valid, the score is &lt;code&gt;1&lt;/code&gt;.&lt;/p&gt;
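&lt;p&gt;The verification and scoring rules above can be sketched in a few lines of Python. This is an illustration only; the real logic lives in the Go service, and the function names here are made up.&lt;/p&gt;

```python
# Illustrative sketch of the current rules; the real implementation
# lives in the Go service, and these names are made up.

def verify_claim(supporting, refuting):
    """A claim is valid if it has more supporting than refuting proofs."""
    return supporting > refuting

def source_score(checked_validities):
    """Score = valid checked claims / total checked claims."""
    if not checked_validities:
        return None  # no checked claims yet, so the score is undefined
    return sum(checked_validities) / len(checked_validities)

# Two checked claims: one backed 2-to-1, one refuted 3-to-1.
claims = [verify_claim(2, 1), verify_claim(1, 3)]
print(source_score(claims))  # 0.5
```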

&lt;p&gt;This is not supposed to be the final credibility algorithm. I wanted the first version to be understandable, testable, and easy to argue with. A simple score gives me something concrete to improve instead of starting with a scoring model that looks impressive but is hard to explain.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;The project is split into three parts because I wanted the data, backend, and dashboard to stay separate.&lt;/p&gt;

&lt;h3&gt;
  
  
  sources: The Data Pipeline
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/SatyaLens/sources" rel="noopener noreferrer"&gt;&lt;code&gt;sources&lt;/code&gt;&lt;/a&gt; repo stores structured documents under three folders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sources/
claims/
proofs/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each document is written in YAML and validated against an OpenAPI schema. A source document describes the outlet. A claim document points back to a source using the source URI digest. A proof document points back to a claim and records whether the proof supports the claim.&lt;/p&gt;
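&lt;p&gt;As a rough illustration of that linking, a claim can point back to its source by the SHA-256 digest of the source's URI. The field names below are my assumption for the sketch; the actual schema is defined by the repo's OpenAPI spec.&lt;/p&gt;

```python
import hashlib

def uri_digest(uri):
    """Stable document ID: hex SHA-256 digest of the document's URI."""
    return hashlib.sha256(uri.encode("utf-8")).hexdigest()

# Hypothetical documents; field names are illustrative, not the repo's schema.
source = {"uri": "https://example-outlet.com"}
claim = {
    "uri": "https://example-outlet.com/some-article",
    "source_digest": uri_digest(source["uri"]),  # links the claim to its source
}

# The digest is a 64-character hex string identifying the source.
print(len(claim["source_digest"]))  # 64
```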

&lt;p&gt;This repo also has a small CI flow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/SatyaLens/sources/blob/main/.github/workflows/validate.yml" rel="noopener noreferrer"&gt;validate&lt;/a&gt; new YAML documents on pull requests&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/SatyaLens/sources/blob/main/.github/workflows/post_on_merge.yml" rel="noopener noreferrer"&gt;post&lt;/a&gt; newly added documents to the API after merge&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/SatyaLens/sources/blob/main/.github/workflows/refresh_scores.yml" rel="noopener noreferrer"&gt;refresh&lt;/a&gt; claim verification and source scores on demand or after the document post workflow is completed successfully&lt;/li&gt;
&lt;/ul&gt;
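&lt;p&gt;As a simplified stand-in for what the validate step does: the real workflow validates documents against the OpenAPI schema, while this sketch only checks a couple of assumed required fields.&lt;/p&gt;

```python
# Simplified stand-in for the validate workflow. The real check validates
# against the OpenAPI schema; the field names here are assumed.

REQUIRED_FIELDS = {"uri", "name"}  # assumption, not the repo's actual schema

def validate_doc(doc):
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    for field in sorted(REQUIRED_FIELDS):
        if field not in doc:
            errors.append(f"missing required field: {field}")
    if "uri" in doc and not doc["uri"].startswith("https://"):
        errors.append("uri must be an HTTPS URL")
    return errors

print(validate_doc({"uri": "https://example.com", "name": "Example"}))  # []
```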

&lt;p&gt;I like this setup because it makes the dataset reviewable. Instead of manually sending API requests every time I want to add a source or proof, I can add a structured file, validate it, and let GitHub Actions handle the posting step. This repo also acts as a user-friendly interface through which someone who is not super technical can add documents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsv0jjcw8y8qfj6ondwbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsv0jjcw8y8qfj6ondwbs.png" alt="Docs hierarchy" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  source-score: The API
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/SatyaLens/source-score" rel="noopener noreferrer"&gt;&lt;code&gt;source-score&lt;/code&gt;&lt;/a&gt; repo is the backend service.&lt;/p&gt;

&lt;p&gt;It is a Go API built with Gin, GORM, PostgreSQL, and OpenAPI-generated types. It exposes endpoints for creating and reading sources, claims, and proofs. It also has endpoints to verify claims and calculate source scores.&lt;/p&gt;

&lt;p&gt;The main flow looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source -&amp;gt; claim -&amp;gt; proof -&amp;gt; claim validation -&amp;gt; source score calculation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are a few technical choices in the first version that I wanted to keep simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTPS URIs identify sources, claims, and proofs.&lt;/li&gt;
&lt;li&gt;SHA-256 URI digests are used as stable IDs.&lt;/li&gt;
&lt;li&gt;OpenAPI defines the request shapes (I thank my past self for this choice because now I can use the OpenAPI schema to validate documents before ingesting them).&lt;/li&gt;
&lt;li&gt;The API can be protected with an &lt;code&gt;X-API-Key&lt;/code&gt; header.&lt;/li&gt;
&lt;li&gt;The dashboard is allowed through CORS for the live demo.&lt;/li&gt;
&lt;li&gt;Unit and acceptance tests cover the main source, claim, and proof flows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The scoring logic is intentionally small right now. Checked claims are grouped by source, valid claims are counted, and the source score is updated as a ratio.&lt;br&gt;
You can access the demo instance's Swagger UI, deployed on Render, &lt;a href="https://source-score.onrender.com/swagger" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Shout out to Render for helping people like me test their side projects with their free tier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyx5tygkoexwxk9ecyv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feyx5tygkoexwxk9ecyv4.png" alt="Claim verification and score update flow" width="369" height="715"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  source-score-dashboard: The Demo UI
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/SatyaLens/source-score-dashboard" rel="noopener noreferrer"&gt;&lt;code&gt;source-score-dashboard&lt;/code&gt;&lt;/a&gt; repo is a small static dashboard.&lt;/p&gt;

&lt;p&gt;No framework. Just HTML, CSS, and JavaScript. The code is mostly AI generated. I'm frontend-ally disabled so please cut me some slack here 🥲&lt;/p&gt;

&lt;p&gt;It has three pages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;index.html   -&amp;gt; sources
claims.html  -&amp;gt; claims for one source
proofs.html  -&amp;gt; proofs for one claim
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The dashboard calls the live API and refreshes every few seconds. Clicking a source opens its claims. Clicking a claim opens the proofs attached to it.&lt;/p&gt;

&lt;p&gt;It is plain on purpose. I wanted a working view of the data before spending time on UI polish.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7jfdtw8o17gsy0khid0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7jfdtw8o17gsy0khid0.png" alt="Proofs for a claim" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How the Score is Computed
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// pkg/domain/source/source_service.go&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Service&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;RecalculateScore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sourceID&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;claims&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ListClaimsForSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sourceID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;valid&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;total&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;claims&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Checked&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;continue&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="c"&gt;// skip unverified claims&lt;/span&gt;
        &lt;span class="n"&gt;total&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;c&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Validity&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;valid&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="kt"&gt;float64&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;valid&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="kt"&gt;float64&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;total&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UpdateSourceScore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sourceID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this matters&lt;/strong&gt;: The score is a simple ratio, but it gives a quick sanity check. A source with a score of 0.9 has 9 valid claims for every 10 verified claims – a strong signal that the outlet is generally reliable.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I want to improve next
&lt;/h2&gt;

&lt;p&gt;The next step is to keep adding more sources, claims, and proofs.&lt;/p&gt;

&lt;p&gt;After that, the scoring model needs to get smarter. A simple ratio is fine as a prototype, but it does not capture enough nuance. Some claims matter more than others. Some proofs are stronger than others. Some claims are too ambiguous and hard to prove right or wrong conclusively. Sources publish across different topics, and I probably should not treat a sports claim and a geopolitical claim as if they carry the same weight.&lt;/p&gt;

&lt;p&gt;Some improvements I want to work on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;better scoring rules&lt;/li&gt;
&lt;li&gt;topic-wise source scores&lt;/li&gt;
&lt;li&gt;richer proof metadata&lt;/li&gt;
&lt;li&gt;easier contribution flow for new YAML documents&lt;/li&gt;
&lt;li&gt;dashboard filtering and sorting&lt;/li&gt;
&lt;li&gt;clearer handling for conflicting proofs&lt;/li&gt;
&lt;li&gt;some score associated with unverified claims&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For now, Source Score is a personal side project born out of a very practical annoyance: I want a better way to know which news sources I should trust.&lt;/p&gt;

&lt;p&gt;Maybe, over time, it can help others wondering the same thing.&lt;/p&gt;

</description>
      <category>go</category>
      <category>openapi</category>
      <category>microservices</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Sharing music with Navidrome, Filebrowser and Civo</title>
      <dc:creator>Amit Singh</dc:creator>
      <pubDate>Thu, 05 Mar 2026 13:24:36 +0000</pubDate>
      <link>https://forem.com/semmet/sharing-music-with-navidrome-filebrowser-and-civo-17e5</link>
      <guid>https://forem.com/semmet/sharing-music-with-navidrome-filebrowser-and-civo-17e5</guid>
      <description>&lt;p&gt;&lt;em&gt;This post builds on top of my &lt;a href="https://singhamit.medium.com/learning-longhorn-64e0127d0314" rel="noopener noreferrer"&gt;last one&lt;/a&gt; where I deployed Navidrome on a K3S cluster with Longhorn volumes mounted to store data and music files.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Before we get started, all my code is stored in the following repo.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/semmet95" rel="noopener noreferrer"&gt;
        semmet95
      &lt;/a&gt; / &lt;a href="https://github.com/semmet95/navidrome-deployer" rel="noopener noreferrer"&gt;
        navidrome-deployer
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      A simple chart to deploy Navidrome on a Kubernetes cluster
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Navidrome Deployer&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://github.com/semmet95/navidrome-deployer/actions/workflows/e2e-tests.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/semmet95/navidrome-deployer/actions/workflows/e2e-tests.yml/badge.svg" alt="E2E Tests"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Prerequisites&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;Before deploying &lt;code&gt;navidrome-deployer&lt;/code&gt;, ensure the following tools are installed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://kubernetes.io/docs/tasks/tools/" rel="nofollow noopener noreferrer"&gt;kubectl&lt;/a&gt; - Kubernetes command-line tool&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://golang.org/doc/install" rel="nofollow noopener noreferrer"&gt;go&lt;/a&gt; - Go programming language&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://magefile.org/" rel="nofollow noopener noreferrer"&gt;mage&lt;/a&gt; - Go-based task runner&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://helm.sh/docs/intro/install/" rel="nofollow noopener noreferrer"&gt;helm&lt;/a&gt; - Kubernetes package manager&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/roboll/helmfile#installation" rel="noopener noreferrer"&gt;helmfile&lt;/a&gt; - Helm values file manager&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Installation on a cluster&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;To install the latest release&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;helmfile apply -f https://github.com/semmet95/navidrome-deployer/releases/latest/download/helmfile.yaml&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;To install a specific version&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;helmfile apply -f https://github.com/semmet95/navidrome-deployer/releases/download/&lt;span class="pl-k"&gt;&amp;lt;&lt;/span&gt;version&lt;span class="pl-k"&gt;&amp;gt;&lt;/span&gt;/helmfile.yaml&lt;/pre&gt;

&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Local Setup and Deployment&lt;/h2&gt;

&lt;/div&gt;
&lt;p&gt;To deploy navidrome-deployer locally, execute the following command:&lt;/p&gt;
&lt;div class="highlight highlight-source-shell notranslate position-relative overflow-auto js-code-highlight"&gt;
&lt;pre&gt;./scripts/local-deployment.sh&lt;/pre&gt;

&lt;/div&gt;
&lt;/div&gt;



&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/semmet95/navidrome-deployer" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;





&lt;p&gt;In this post I'll share how I deployed Navidrome on a Civo cluster and exposed interfaces to upload and play music.&lt;br&gt;
The last release I made of &lt;code&gt;navidrome-deployer&lt;/code&gt; deployed Navidrome on a K3S cluster and exposed Navidrome UI to play music. However, I was still using &lt;code&gt;kubectl&lt;/code&gt; commands to copy music files into Navidrome pods. This will be our starting point.&lt;br&gt;
To begin with, we need some way to mount the volume that stores these music files and provide a UI that lets users upload and manage them. This search led me to &lt;a href="https://github.com/filebrowser/filebrowser" rel="noopener noreferrer"&gt;Filebrowser&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I must say, integrating Filebrowser was a lot easier than I expected. All I had to do was create a Deployment to run the &lt;code&gt;filebrowser/filebrowser:v2.61.0&lt;/code&gt; image and mount the &lt;code&gt;music-volume&lt;/code&gt; PVC that was already being used by the Navidrome deployment. Filebrowser lets you manage files under the &lt;code&gt;/srv&lt;/code&gt; path; mount the music volume there, and voilà, you have a UI for uploading music to Navidrome.&lt;/p&gt;

&lt;p&gt;I did have to make some config changes though. By default, Filebrowser pod logs the &lt;code&gt;admin&lt;/code&gt; user password that you can use to log in and access admin-level settings, but letting everyone log in as the admin is probably not a good idea. Luckily, Filebrowser has an option to let users sign up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sj2tkq9ue8jd75jtitk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sj2tkq9ue8jd75jtitk.png" alt=" " width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I could deploy Filebrowser, log in as the admin, change the password, and enable user sign-up. But there's a way to slightly reduce the manual setup by using the &lt;code&gt;filebrowser&lt;/code&gt; CLI. When I run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;filebrowser config &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;--createUserDir&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running this command, Filebrowser lets users create new accounts and generates their home directories automatically. This comes with a catch, but we'll dive into that when we get to the code changes section.&lt;/p&gt;




&lt;h2&gt;
  
  
  Creating public endpoints
&lt;/h2&gt;

&lt;p&gt;Now that I had interfaces to play and upload music, ready to be exposed, I needed to figure out a way to expose them safely. This was a problem because networking is not my strongest suit. Thankfully Civo has a &lt;a href="https://www.civo.com/learn/exposing-applications-https-traefik" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt; that tackles this exact problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Traefik LoadBalancer
&lt;/h3&gt;

&lt;p&gt;When creating a Civo cluster, you can select the &lt;code&gt;Traefik v2 (LoadBalancer)&lt;/code&gt; addon, and it will provision a load balancer that handles all the public traffic. The part that surprised me, pleasantly, was how easy it was to get a valid TLS certificate: just combine &lt;code&gt;cert-manager&lt;/code&gt; and Let's Encrypt to issue a &lt;code&gt;Certificate&lt;/code&gt; and then use it in Traefik's &lt;code&gt;IngressRoute&lt;/code&gt; for incoming HTTPS traffic. I also learned about some "best practices" type middlewares that you can add to the &lt;code&gt;IngressRoute&lt;/code&gt; to make it more secure; we'll go over them down the line (as you scroll).&lt;/p&gt;




&lt;p&gt;Alright, before we jump into the code, time to see Filebrowser and Navidrome in action.&lt;br&gt;
&lt;a href="https://navidrome-uploader.cf1539b6-7d51-4af3-8c87-140a1a3252dd.lb.civo.com/login?redirect=/files/" rel="noopener noreferrer"&gt;This&lt;/a&gt; is the Filebrowser endpoint which is the uploader interface. In my head, it's like the interface artists could interact with to upload their music (not that it's even remotely close, but I like the idea). You can create a new account, log in and start uploading &lt;code&gt;.mp3&lt;/code&gt; files. It might take a few seconds, but the song will appear on Navidrome UI, ready for you to play and enjoy.&lt;br&gt;
&lt;a href="https://navidrome.cf1539b6-7d51-4af3-8c87-140a1a3252dd.lb.civo.com/app/#/login" rel="noopener noreferrer"&gt;This&lt;/a&gt; is the Navidrome interface where you can play the uploaded music. I was hoping to create an "open Spotify web" type experience where you can play music without having to create an account but since I couldn't achieve that, I created a guest account with the following credentials.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;username: guest
password: guest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Log in using the guest account, and you can play music uploaded by all the users through the Filebrowser interface.&lt;br&gt;
&lt;em&gt;P.S.: Don't worry, I have &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/charts/navidrome-deployer/templates/config.yml#L10" rel="noopener noreferrer"&gt;configured&lt;/a&gt; Navidrome so this password cannot be changed.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Putting it all together, this is what it looks like.&lt;br&gt;
  &lt;iframe src="https://www.youtube.com/embed/1JpiseVvjyI"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;




&lt;h2&gt;
  
  
  Into the code
&lt;/h2&gt;

&lt;p&gt;And now, we finally dive into the code changes. There's a lot to cover, so I'll group them into sections.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integrating Filebrowser
&lt;/h3&gt;

&lt;p&gt;When you use the Filebrowser UI to upload files, all of them are stored in the &lt;code&gt;/srv&lt;/code&gt; directory (or subdirectories under it). This is where our music files will be uploaded, meaning this is where Navidrome needs to fetch music from. The Navidrome deployment already &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/main/charts/navidrome-deployer/templates/deployment.yml#L53-L54" rel="noopener noreferrer"&gt;uses&lt;/a&gt; a Longhorn volume to store music files. If we set the access mode of this volume's &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/charts/navidrome-deployer/templates/music-volume.yml" rel="noopener noreferrer"&gt;PVC&lt;/a&gt;, &lt;code&gt;music-volume&lt;/code&gt;, to &lt;code&gt;ReadWriteMany&lt;/code&gt; and mount it in the Filebrowser deployment on the &lt;code&gt;/srv&lt;/code&gt; path, we are good to go. All the music files are shared by both deployments: Filebrowser manages them, Navidrome plays them.&lt;br&gt;
Now that we have figured out Filebrowser's integration point, it's time to configure it to our requirements. We need to run the following commands to allow user signups, create user directories, and lock the admin account password.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;filebrowser config &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;--createUserDir&lt;/span&gt;
filebrowser &lt;span class="nb"&gt;users &lt;/span&gt;update admin &lt;span class="nt"&gt;--lockPassword&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now here comes the tricky part: you cannot modify the Filebrowser database while it's in use, which is not surprising, but on top of that, you can run these commands only after Filebrowser has booted up once and created the admin account and the database. It was at this moment I remembered how useful Helm hooks are.&lt;br&gt;
The following is the strategy I ended up with:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Store Filebrowser database in a Longhorn &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/charts/navidrome-deployer/templates/filebrowser/db-volume.yml" rel="noopener noreferrer"&gt;volume&lt;/a&gt; and mount it in the Filebrowser &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/charts/navidrome-deployer/templates/filebrowser/deployment.yml#L52-L53" rel="noopener noreferrer"&gt;deployment&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Create a &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/charts/navidrome-deployer/templates/filebrowser/reconfig.yml" rel="noopener noreferrer"&gt;job&lt;/a&gt;, &lt;code&gt;filebrowser-reconfig&lt;/code&gt;, with &lt;code&gt;post-install,post-upgrade&lt;/code&gt; Helm hooks, that also mounts this volume. This way the job only runs after the Filebrowser deployment is ready and the database has been created.&lt;/li&gt;
&lt;li&gt;Run the following commands in the job container to safely update the Filebrowser database.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DEPLOYMENT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"filebrowser"&lt;/span&gt;
&lt;span class="c"&gt;# ensure filebrowser deployment is ready&lt;/span&gt;
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;condition&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Available deployment/&lt;span class="nv"&gt;$DEPLOYMENT&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nv"&gt;$NAMESPACE&lt;/span&gt; &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;300s

&lt;span class="c"&gt;# store the original replica count&lt;/span&gt;
&lt;span class="nv"&gt;ORIGINAL_REPLICAS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get deployment &lt;span class="nv"&gt;$DEPLOYMENT&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nv"&gt;$NAMESPACE&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'{.spec.replicas}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# scale the deployment down to 0 replicas&lt;/span&gt;
kubectl scale deployment &lt;span class="nv"&gt;$DEPLOYMENT&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nv"&gt;$NAMESPACE&lt;/span&gt; &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0
kubectl &lt;span class="nb"&gt;wait&lt;/span&gt; &lt;span class="nt"&gt;--for&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;delete pod &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;filebrowser &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nv"&gt;$NAMESPACE&lt;/span&gt; &lt;span class="nt"&gt;--timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;120s

&lt;span class="c"&gt;# run the filebrowser cli commands&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;database
filebrowser config &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;--createUserDir&lt;/span&gt;
filebrowser &lt;span class="nb"&gt;users &lt;/span&gt;update admin &lt;span class="nt"&gt;--lockPassword&lt;/span&gt;

&lt;span class="c"&gt;# scale the deployment back up to the original replica count&lt;/span&gt;
kubectl scale deployment &lt;span class="nv"&gt;$DEPLOYMENT&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="nv"&gt;$NAMESPACE&lt;/span&gt; &lt;span class="nt"&gt;--replicas&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$ORIGINAL_REPLICAS&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
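

&lt;p&gt;The hook wiring from step 2 looks roughly like this — a trimmed-down sketch of the job manifest, with an illustrative image name and a &lt;code&gt;helm.sh/hook-delete-policy&lt;/code&gt; annotation added as an assumption; the real definition is in the linked &lt;code&gt;reconfig.yml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: batch/v1
kind: Job
metadata:
  name: filebrowser-reconfig
  annotations:
    # run only after install/upgrade has finished, i.e. after the
    # Filebrowser deployment is ready and the database exists
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation   # assumption
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: reconfig
        image: filebrowser-with-kubectl   # illustrative; built from Dockerfile.filebrowser
        command: ["/bin/sh", "-c", "/scripts/reconfig.sh"]
        volumeMounts:
        - name: filebrowser-db            # same volume the deployment mounts
          mountPath: /database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;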


&lt;p&gt;One last thing: this job runs &lt;code&gt;filebrowser&lt;/code&gt; and &lt;code&gt;kubectl&lt;/code&gt; CLI commands, so I had to build an image that contains them both. &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/Dockerfile.filebrowser" rel="noopener noreferrer"&gt;Here&lt;/a&gt; is the Dockerfile, and the image is &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/.github/workflows/release-packages.yml#L57-L62" rel="noopener noreferrer"&gt;built and published&lt;/a&gt; every time a new release is created.&lt;/p&gt;

&lt;p&gt;There was one more issue I kept running into: Filebrowser pods would often crash with database access errors, so as a quick fix I added an &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/charts/navidrome-deployer/templates/filebrowser/deployment.yml#L20-L31" rel="noopener noreferrer"&gt;init-container&lt;/a&gt; that runs the following command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; 777 /database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And I'm running both this init-container and the filebrowser container as root. It's far from ideal, but it did fix the issue.&lt;/p&gt;
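
&lt;p&gt;For completeness, the quick-fix init-container looks something like this — a sketch mirroring the linked deployment snippet, with illustrative names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;initContainers:
- name: fix-db-permissions
  image: busybox                    # illustrative; any image with chmod works
  command: ["sh", "-c", "chmod -R 777 /database"]
  securityContext:
    runAsUser: 0                    # runs as root; far from ideal
  volumeMounts:
  - name: filebrowser-db            # the same Longhorn DB volume
    mountPath: /database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;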

&lt;h3&gt;
  
  
  Exposing Filebrowser and Navidrome deployments to public traffic
&lt;/h3&gt;

&lt;p&gt;Earlier I mentioned I was following a Civo tutorial to issue certificates with Let's Encrypt, which the Ingress was then supposed to use. However, I wanted to add some security to the public endpoints. After a little research I found that if I use Traefik's &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/charts/navidrome-deployer/templates/ingressroute.yml" rel="noopener noreferrer"&gt;IngressRoute&lt;/a&gt;, I can attach &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/charts/navidrome-deployer/templates/middleware.yml" rel="noopener noreferrer"&gt;Middleware&lt;/a&gt; layers to the routes that harden the endpoints against common browser-based attacks. Specifically, I added rate limiting and HSTS enforcement. I doubt that's enough, but it's a good starting point.&lt;/p&gt;
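
&lt;p&gt;As a rough idea of what those Middleware layers look like, here is a sketch of a rate-limiting and an HSTS middleware using Traefik's CRDs. The numbers and names are illustrative; the actual definitions are in the linked &lt;code&gt;middleware.yml&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
spec:
  rateLimit:
    average: 50        # average requests per second allowed
    burst: 100         # short-term burst ceiling
---
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: hsts-headers
spec:
  headers:
    stsSeconds: 31536000        # enforce HTTPS for one year
    stsIncludeSubdomains: true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;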

&lt;p&gt;Coming to the &lt;code&gt;IngressRoute&lt;/code&gt; itself, I have created two routes, one for the Filebrowser deployment and the other for Navidrome. Here I ran into a small challenge: the hostname for each route is dynamic, because the base domain includes the Traefik LoadBalancer ID. Luckily, this ID is stored as an annotation on the &lt;code&gt;traefik&lt;/code&gt; service in the &lt;code&gt;kube-system&lt;/code&gt; namespace. I'm using the &lt;code&gt;lookup&lt;/code&gt; function in &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/charts/navidrome-deployer/templates/_helpers.tpl#L17-L37" rel="noopener noreferrer"&gt;_helpers.tpl&lt;/a&gt; to fetch this service and, in turn, the LoadBalancer ID from its annotations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- define "baseDomain" -}}
{{- $svc := (lookup "v1" "Service" "kube-system" "traefik") -}}
{{- $annotations := $svc.metadata.annotations -}}
{{- $loadbalancerId := index $annotations "kubernetes.civo.com/loadbalancer-id" -}}
{{- printf "%s.lb.civo.com" $loadbalancerId -}}
{{- end -}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
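

&lt;p&gt;One caveat worth noting: &lt;code&gt;lookup&lt;/code&gt; only works against a live cluster and returns an empty value during &lt;code&gt;helm template&lt;/code&gt; or a client-side dry run, which would make the &lt;code&gt;$svc.metadata&lt;/code&gt; access fail. A defensive variant could guard against that — a sketch, with a made-up fallback domain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{{- define "baseDomain" -}}
{{- $svc := (lookup "v1" "Service" "kube-system" "traefik") -}}
{{- if $svc -}}
{{- $loadbalancerId := index $svc.metadata.annotations "kubernetes.civo.com/loadbalancer-id" -}}
{{- printf "%s.lb.civo.com" $loadbalancerId -}}
{{- else -}}
{{- /* fallback for helm template / dry runs; placeholder value */ -}}
{{- printf "placeholder.lb.civo.com" -}}
{{- end -}}
{{- end -}}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;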



&lt;p&gt;Then I refer to &lt;code&gt;baseDomain&lt;/code&gt; in my &lt;code&gt;IngressRoute&lt;/code&gt;. I also found out that you can specify path patterns in the routes to accept, deny, or redirect traffic. I'm currently using this to deny requests targeting API endpoints and admin-level settings. Putting it all together, this is how the routes are defined.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;routes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Host(`navidrome.{{ include "baseDomain" . }}`) &amp;amp;&amp;amp; ! PathPrefix(`/api/user`)&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Host(`navidrome-uploader.{{ include "baseDomain" . }}`) &amp;amp;&amp;amp; ! PathPrefix(`/api/settings`) &amp;amp;&amp;amp; ! PathPrefix(`/settings/global`) &amp;amp;&amp;amp; ! PathPrefix(`/settings/users`)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Minor test updates
&lt;/h3&gt;

&lt;p&gt;To make sure I have basic smoke tests covering these changes, I have &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/magefiles/test.go" rel="noopener noreferrer"&gt;updated&lt;/a&gt; them to verify that the cert-manager deployments are healthy and that the Filebrowser DB configuration job completes successfully.&lt;br&gt;
In my last article I mentioned that I was disabling the firewall service to create a K3s cluster and install the Navidrome Deployer chart locally. This was not ideal, and after reading the K3s docs I found out that I could simply add some exceptions to the &lt;code&gt;firewalld&lt;/code&gt; service to whitelist the K3s cluster and allow inter-pod communication, by running the following commands in my test setup &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/scripts/test-setup.sh" rel="noopener noreferrer"&gt;script&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--add-port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;6443/tcp
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;trusted &lt;span class="nt"&gt;--add-source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.42.0.0/16
firewall-cmd &lt;span class="nt"&gt;--permanent&lt;/span&gt; &lt;span class="nt"&gt;--zone&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;trusted &lt;span class="nt"&gt;--add-source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10.43.0.0/16
firewall-cmd &lt;span class="nt"&gt;--reload&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Having covered everything, this is the final version of my helmfile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;helmDefaults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600&lt;/span&gt;
  &lt;span class="na"&gt;wait&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;waitForJobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;repositories&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://charts.longhorn.io&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;quay.io/jetstack/charts&lt;/span&gt;
  &lt;span class="na"&gt;oci&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;navidrome&lt;/span&gt;
  &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ghcr.io/semmet95/navidrome-deployer/charts&lt;/span&gt;
  &lt;span class="na"&gt;oci&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;span class="na"&gt;releases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager&lt;/span&gt;
  &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;cert-manager/cert-manager&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1.19.4&lt;/span&gt;
  &lt;span class="na"&gt;createNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;crds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;cainjector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;50m&lt;/span&gt;
        &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;memory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;256Mi&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn-system&lt;/span&gt;
  &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;longhorn/longhorn&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.11.0&lt;/span&gt;
  &lt;span class="na"&gt;createNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;longhornUI&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;navidrome&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;navidrome-system&lt;/span&gt;
  &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;navidrome/navidrome-deployer&lt;/span&gt;
  &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.21.2&lt;/span&gt;
  &lt;span class="na"&gt;createNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;disableValidationOnInstall&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;cert-manager/cert-manager&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;longhorn-system/longhorn&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have a cluster with Traefik installed, you can install Navidrome Deployer with this one-liner.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helmfile apply -f https://github.com/semmet95/navidrome-deployer/releases/download/v0.21.2/helmfile.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Even though this is just meant to be a demo instance, I know there's a lot of room for improvement, so that's what I'll end this article with.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Currently, accessing Navidrome requires sharing credentials. I need to find a way either to let users access the music without requiring an account, or to automate that process behind the scenes in a way that scales well.&lt;/li&gt;
&lt;li&gt;Filebrowser and Navidrome containers are running as root. They are sharing a PVC so I need to switch to a non-root user without having them interfere with each other's access to the shared files.&lt;/li&gt;
&lt;li&gt;I have also observed a significant delay (up to 30 seconds) before Navidrome imports newly uploaded music. Some &lt;a href="https://gitmemories.com/navidrome/navidrome/issues/4354" rel="noopener noreferrer"&gt;discussions&lt;/a&gt; around this issue suggest it could be caused by auto-generated directories like &lt;code&gt;lost+found&lt;/code&gt; that the Navidrome watcher might not have access to, causing it to crash. I've added a &lt;code&gt;postStart&lt;/code&gt; &lt;a href="https://github.com/semmet95/navidrome-deployer/blob/v0.21.2/charts/navidrome-deployer/templates/deployment.yml#L26-L31" rel="noopener noreferrer"&gt;hook&lt;/a&gt; to the Navidrome and Filebrowser containers to delete this directory, but that doesn't seem to have any effect.&lt;/li&gt;
&lt;li&gt;I would also like to add some restrictions to Filebrowser so users may only upload MP3/audio files.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hopefully, by the next time I write about this project, all of these will be resolved. If you, the reader, have any suggestions for me, feel free to drop them in the comments section. Until next time 🫡&lt;/p&gt;

</description>
      <category>navidrome</category>
      <category>civo</category>
      <category>filebrowser</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
