<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Paderich</title>
    <description>The latest articles on Forem by Paderich (@paderich87).</description>
    <link>https://forem.com/paderich87</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1836765%2F20479daf-dcef-424c-8677-fc3d560ee335.jpeg</url>
      <title>Forem: Paderich</title>
      <link>https://forem.com/paderich87</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/paderich87"/>
    <language>en</language>
    <item>
      <title>Versioning Your AI Workflow with a Custom Claude Code Marketplace</title>
      <dc:creator>Paderich</dc:creator>
      <pubDate>Sat, 21 Mar 2026 08:36:15 +0000</pubDate>
      <link>https://forem.com/paderich87/versioning-your-ai-workflow-with-a-custom-claude-code-marketplace-4298</link>
      <guid>https://forem.com/paderich87/versioning-your-ai-workflow-with-a-custom-claude-code-marketplace-4298</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; You can host a Git repository that acts as a personal marketplace for Claude Code skills. Any project you work on can point to it, pin a version, and get your curated skills installed automatically. This is how you move from "chatting with an AI" to "engineering a workflow."&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why Build a Skill Marketplace?
&lt;/h2&gt;

&lt;p&gt;In my previous logs, I talked about the frustration of "black boxes" and the realization that I was only "scratching the surface" with basic setups. If you use &lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; regularly, you’ve probably noticed a pattern: you end up re-explaining the same architectural standards, user story formats, or code review checklists for every new project.&lt;/p&gt;

&lt;p&gt;A personal marketplace treats your AI prompts like &lt;strong&gt;managed infrastructure&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Portable&lt;/strong&gt;: Your skills live in one standalone Git repo. Every project you work on can reference it.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Versioned&lt;/strong&gt;: You pin to a Git tag (e.g., &lt;code&gt;v1.0.2&lt;/code&gt;). If you update a skill, it won't break your older projects until you're ready to upgrade.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Shareable&lt;/strong&gt;: Push the repo to GitHub and your entire team gets the same "Senior Engineer" level skills.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Zero Copy-Paste&lt;/strong&gt;: Claude Code handles the fetching and installation. No manual file management required.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Architecture at a Glance
&lt;/h2&gt;

&lt;p&gt;The setup is straightforward. You have the &lt;strong&gt;Marketplace Repo&lt;/strong&gt; (the source of truth) and your &lt;strong&gt;Client Repos&lt;/strong&gt; (the projects where you actually work).&lt;/p&gt;




&lt;h2&gt;
  
  
  Step 1: Create the Marketplace Structure
&lt;/h2&gt;

&lt;p&gt;Claude Code expects a specific folder hierarchy to discover plugins. Your marketplace repository should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;claude-skills/
├── .claude-plugin/
│   └── marketplace.json       # The "App Store" index
├── plugins/
│   └── user-story-skill/      # A folder for each specific tool
│       ├── .claude-plugin/
│       │   └── plugin.json    # Metadata for this specific plugin
│       ├── commands/
│       │   └── user-story.md  # Defines the /user-story slash command
│       └── skills/
│           └── user-story/
│               └── SKILL.md   # The "Brain" (The actual logic)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
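
&lt;p&gt;If you want to scaffold that skeleton without clicking folders together by hand, a few lines of Python will do it. The names below simply mirror the tree above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from pathlib import Path

# Scaffold the marketplace skeleton; names mirror the tree above.
root = Path("claude-skills")
plugin = root / "plugins" / "user-story-skill"

for folder in [
    root / ".claude-plugin",
    plugin / ".claude-plugin",
    plugin / "commands",
    plugin / "skills" / "user-story",
]:
    folder.mkdir(parents=True, exist_ok=True)

# Empty placeholders; the next steps fill them in.
(root / ".claude-plugin" / "marketplace.json").touch()
(plugin / ".claude-plugin" / "plugin.json").touch()
(plugin / "commands" / "user-story.md").touch()
(plugin / "skills" / "user-story" / "SKILL.md").touch()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;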






&lt;h2&gt;
  
  
  Step 2: Define the Manifests
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Marketplace Index
&lt;/h3&gt;

&lt;p&gt;At the root, &lt;code&gt;.claude-plugin/marketplace.json&lt;/code&gt; tells Claude what plugins are available in this repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"pat-skills"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"owner"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Pat"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"metadata"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Personal Claude Code skill library"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"plugins"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user-story-skill"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"source"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"./plugins/user-story-skill"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Generate agile stories from plain text"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
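
&lt;p&gt;One thing I noticed: it is easy for this index to drift out of sync with the actual plugin folders. A tiny sanity check helps. This is my own convention, not an official validator; the field names simply follow the manifest above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
from pathlib import Path

def check_marketplace(repo_root):
    """Cross-check marketplace.json against the plugin folders on disk."""
    root = Path(repo_root)
    index = json.loads((root / ".claude-plugin" / "marketplace.json").read_text())
    problems = []
    for entry in index.get("plugins", []):
        manifest = root / entry["source"] / ".claude-plugin" / "plugin.json"
        if not manifest.exists():
            problems.append(f"{entry['name']}: missing {manifest}")
    return problems
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run it against the repo root before tagging a release; an empty list means every listed plugin actually ships a manifest.&lt;/p&gt;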



&lt;h3&gt;
  
  
  The Plugin Metadata
&lt;/h3&gt;

&lt;p&gt;Inside &lt;code&gt;plugins/user-story-skill/.claude-plugin/plugin.json&lt;/code&gt;, define the individual plugin:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"user-story-skill"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Reference skill for agile user stories"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"author"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Pat"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 3: Writing the "Brain" (SKILL.md)
&lt;/h2&gt;

&lt;p&gt;This is the most critical file. As I noted in Log 003, vague instructions lead to hallucinations. A professional skill file needs a strict contract, including few-shot examples.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Location:&lt;/strong&gt; &lt;code&gt;plugins/user-story-skill/skills/user-story/SKILL.md&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Purpose&lt;/span&gt;
Generate a well-formed agile user story from a plain-language feature request.

&lt;span class="gu"&gt;## Context&lt;/span&gt;
This skill follows standard agile coaching patterns. Stories must be independent, 
negotiable, and small enough to fit in a single sprint.

&lt;span class="gu"&gt;## Instructions&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; Identify the role from the input. Infer the most likely role if not stated.
&lt;span class="p"&gt;2.&lt;/span&gt; Format the story: &lt;span class="gs"&gt;**As a [role], I want [goal], so that [benefit].**&lt;/span&gt;
&lt;span class="p"&gt;3.&lt;/span&gt; Generate 3–5 acceptance criteria using the &lt;span class="gs"&gt;**Given / When / Then**&lt;/span&gt; pattern.
&lt;span class="p"&gt;4.&lt;/span&gt; If the input is ambiguous, use a &lt;span class="sb"&gt;`[CLARIFY: ...]`&lt;/span&gt; marker rather than guessing.
&lt;span class="p"&gt;5.&lt;/span&gt; Output must be concise: no preamble, just the story and criteria.

&lt;span class="gu"&gt;## Examples (Few-Shot)&lt;/span&gt;

&lt;span class="gu"&gt;### Example 1&lt;/span&gt;
&lt;span class="gs"&gt;**Input:**&lt;/span&gt; Users should be able to reset their password via email.
&lt;span class="gs"&gt;**Output:**&lt;/span&gt; 
As a registered user, I want to reset my password via email, so that I can 
regain access to my account if I forget my credentials.

Acceptance criteria:
&lt;span class="p"&gt;-&lt;/span&gt; Given I am on the login page, when I click "Forgot password", then I am 
  prompted to enter my email address.
&lt;span class="p"&gt;-&lt;/span&gt; Given I submit a valid email, when the system processes it, then I receive 
  a reset link within 2 minutes.

&lt;span class="gu"&gt;## Quality Criteria&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; The role is specific (not "the system" or "a person").
&lt;span class="p"&gt;-&lt;/span&gt; The benefit explains &lt;span class="ge"&gt;*why*&lt;/span&gt; the user wants this, not just what they want.
&lt;span class="p"&gt;-&lt;/span&gt; Every acceptance criterion is testable by a QA engineer.
&lt;span class="p"&gt;-&lt;/span&gt; Ambiguous inputs are flagged with &lt;span class="sb"&gt;`[CLARIFY]`&lt;/span&gt;.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Step 4: Connecting a Project
&lt;/h2&gt;

&lt;p&gt;Once you’ve pushed your marketplace to GitHub (or a local Git path), head to your project directory and start Claude Code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Type &lt;code&gt;/plugin&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; Select &lt;strong&gt;Add marketplace&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; Enter your Git URL (e.g., &lt;a href="https://github.com/your-username/claude-skills.git" rel="noopener noreferrer"&gt;https://github.com/your-username/claude-skills.git&lt;/a&gt;).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Claude handles the rest, updating your &lt;code&gt;.claude/settings.json&lt;/code&gt;. Commit that file so your whole team shares the same skill set.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Pro-tip:&lt;/strong&gt; In your &lt;code&gt;settings.json&lt;/code&gt;, always ensure the &lt;code&gt;ref&lt;/code&gt; field points to a specific tag (like &lt;code&gt;v1.0.0&lt;/code&gt;) rather than &lt;code&gt;main&lt;/code&gt;. This is how you ensure an update to your library doesn't silently change the behavior of an older project mid-flight.&lt;/p&gt;
&lt;/blockquote&gt;
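
&lt;p&gt;You can even make that pin policy checkable. The snippet below assumes a simplified layout in which marketplace entries carry a &lt;code&gt;ref&lt;/code&gt; field; adjust the keys to whatever your generated &lt;code&gt;settings.json&lt;/code&gt; actually contains:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
from pathlib import Path

def find_unpinned(settings_path):
    """List marketplace entries whose ref looks like a branch, not a tag.

    The layout is an assumption: a "marketplaces" mapping whose entries
    carry an optional "ref" field. Adjust the keys to match the file
    Claude Code actually generates for you.
    """
    data = json.loads(Path(settings_path).read_text())
    unpinned = []
    for name, entry in data.get("marketplaces", {}).items():
        ref = entry.get("ref", "main")
        # Crude heuristic: release tags in this setup look like v1.0.0.
        if not ref.startswith("v"):
            unpinned.append(name)
    return unpinned
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;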




&lt;h2&gt;
  
  
  The Reality Check: Testing Skills
&lt;/h2&gt;

&lt;p&gt;As I learned with my "RAG in a box", the magic is in the plumbing. Testing a skill isn't about string matching; it's about &lt;strong&gt;behavioral validation&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;You should build a &lt;code&gt;tests/&lt;/code&gt; folder in your plugin with JSON files defining your expectations. If a skill doesn't catch a "vague" input or misses a quality criterion, your &lt;code&gt;SKILL.md&lt;/code&gt; instructions aren't sharp enough. Fix the instructions, don't just "hope" the AI gets it right next time.&lt;/p&gt;
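
&lt;p&gt;To make that concrete, here is one way such a &lt;code&gt;tests/&lt;/code&gt; folder could be driven. Everything here is my own convention — the file layout, the field names, the injected &lt;code&gt;run_skill&lt;/code&gt; function — there is no official harness. The point is asserting on behavior, not exact strings:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
from pathlib import Path

def run_behavioral_tests(tests_dir, run_skill):
    """Run every JSON case in tests_dir against a skill.

    Each case file follows my own convention, e.g.:
      {"input": "users can log in", "must_contain": ["As a", "Given"],
       "expect_clarify": false}
    run_skill is whatever function actually invokes the skill.
    """
    failures = []
    for case_file in sorted(Path(tests_dir).glob("*.json")):
        case = json.loads(case_file.read_text())
        output = run_skill(case["input"])
        for needle in case.get("must_contain", []):
            if needle not in output:
                failures.append(f"{case_file.name}: missing '{needle}'")
        if case.get("expect_clarify") and "[CLARIFY" not in output:
            failures.append(f"{case_file.name}: vague input was not flagged")
    return failures
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;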




&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Building a marketplace turns your accumulated prompting knowledge into a reusable, versioned, and shareable library. It moves you from being a "prompt user" to an &lt;strong&gt;AI Systems Engineer&lt;/strong&gt;. It takes about 20 minutes to set up, but the compounding value across every new project is massive.&lt;/p&gt;

&lt;p&gt;Cheers,&lt;br&gt;
Pat&lt;/p&gt;

</description>
      <category>learning</category>
      <category>ai</category>
      <category>career</category>
      <category>coding</category>
    </item>
    <item>
      <title>Log Entry 004 - Certified (The Hard Way)</title>
      <dc:creator>Paderich</dc:creator>
      <pubDate>Mon, 02 Mar 2026 12:13:40 +0000</pubDate>
      <link>https://forem.com/paderich87/log-entry-004--14cd</link>
      <guid>https://forem.com/paderich87/log-entry-004--14cd</guid>
      <description>&lt;p&gt;The original plan was February 26th. Today is March 2nd, and I just closed the exam window with a "Pass" on the screen. I had to take an extra week because I realized I had underestimated the fundamentals. &lt;br&gt;
I thought I could breeze through them, but the deeper I dug, the more &lt;/p&gt;

&lt;p&gt;I realized that "knowing of" a service is not the same as understanding its architectural purpose.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Reality Check on Study Material
&lt;/h2&gt;

&lt;p&gt;In my first log, I mentioned using Whizlabs. To be honest: I wasn't happy with it this time. The questions felt slightly off-target for the AI-900. &lt;/p&gt;

&lt;p&gt;I had to pivot my strategy: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Microsoft Practice Assessment: This was the real game-changer. It was much more aligned with the actual exam logic and helped me identify the gaps I had previously overlooked.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;NotebookLM: I used this heavily to "interrogate" specific sections of the documentation. It’s one thing to read a paragraph; it’s another to have a focused tool help you deconstruct complex concepts through targeted Q&amp;amp;A.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Bigger Picture: Breadth vs. Depth
&lt;/h2&gt;

&lt;p&gt;The AI-102 remains my North Star because it aligns my private interests with my current professional environment. However, passing the AI-900 made me realize that I want to be more than just an "Azure implementer". I want to be a well-rounded AI Engineer. To achieve that, I’m considering broadening my horizons by working through DataCamp’s AI Fundamentals. While Microsoft gives me the tools, I want to ensure my foundation is platform-agnostic and theoretically sound.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s next?
&lt;/h2&gt;

&lt;p&gt;The "Fundamentals" badge is on the digital shelf. For now, the "RAG-in-a-box" project is going on the back burner. I have some pressing job-related tasks that require my full attention, which will take some time. Once the dust settles, I’ll dive into the DataCamp curriculum. From there, I’ll reassess my path and decide on the next concrete steps toward becoming an AI Engineer.&lt;/p&gt;

&lt;p&gt;Cheers,&lt;br&gt;
Pat&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>learning</category>
      <category>career</category>
    </item>
    <item>
      <title>Log Entry 003 - My Lag in RAG-in-a-box</title>
      <dc:creator>Paderich</dc:creator>
      <pubDate>Sun, 22 Feb 2026 14:03:59 +0000</pubDate>
      <link>https://forem.com/paderich87/log-entry-003-my-lag-in-rag-in-a-box-2ooj</link>
      <guid>https://forem.com/paderich87/log-entry-003-my-lag-in-rag-in-a-box-2ooj</guid>
      <description>&lt;p&gt;In my recent blog post, I was pretty happy about setting up a simple RAG system. Well, it turns out it wasn't that big of a deal. After reading more on the topic, I realized my approach was just scratching the surface. That is why I added a quick P.S. to the original post with a reality check.&lt;/p&gt;

&lt;p&gt;Reflecting on my "RAG in a box", I realized the following issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My AI factory pattern is essentially a hardcoded if/else block, lacking dynamic registration.&lt;/li&gt;
&lt;li&gt;My Docker configuration is perfectly functional but very basic; there is nothing highly app-specific about it.&lt;/li&gt;
&lt;li&gt;My approach to chunking documents relied purely on LlamaIndex's built-in functionality, which I now know is a black box that can lead to retrieval failures and hallucinations when data gets messy.&lt;/li&gt;
&lt;li&gt;My response payload is blind: The API currently just spits out a text string. Because I am not returning any source nodes, citations, or similarity scores, I have no way to prove if the system actually retrieved the answer from my PDF, or if the LLM is just confidently hallucinating.&lt;/li&gt;
&lt;li&gt;In general, I orchestrated the tools well, but I really only hit the tip of the iceberg.&lt;/li&gt;
&lt;li&gt;My Python skills could also use some polishing.&lt;/li&gt;
&lt;/ul&gt;
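
&lt;p&gt;On the first point, the direction I want to move in is a registry instead of an if/else chain: providers register themselves, and the factory just looks them up. A minimal sketch with placeholder constructors (none of this is LlamaIndex API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;_PROVIDERS = {}

def register(name):
    """Decorator that registers a provider constructor under a name."""
    def wrap(factory):
        _PROVIDERS[name] = factory
        return factory
    return wrap

@register("local")
def _local():
    return "local-llm"   # placeholder for the LM Studio client from Log 002

@register("openai")
def _openai():
    return "openai-llm"  # placeholder for a hosted client

def get_llm(provider):
    """Look the provider up instead of walking an if/elif chain."""
    try:
        return _PROVIDERS[provider]()
    except KeyError:
        raise ValueError(f"Unknown provider: {provider}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Adding an Azure backend then means writing one decorated function, not touching the factory at all.&lt;/p&gt;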

&lt;p&gt;What is next? I think I will stay with this project for now so I can do it right. I am not quite there yet on a theoretical level, and I need to play around with the different parts of the problem scope. However, for now, my priority is getting ready for my AI-900 exam this coming Thursday. I am using NotebookLM heavily to study properly. I added some videos, documentation, and the MS Learn path into the system, and I am getting a lot of fantastic learning tools out of it.&lt;/p&gt;

&lt;p&gt;Well... that’s my update. Not much, but something honest to share.&lt;/p&gt;

&lt;p&gt;Cheers&lt;br&gt;
Pat&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>career</category>
      <category>learning</category>
    </item>
    <item>
      <title>Log Entry 002 - RAG in a box</title>
      <dc:creator>Paderich</dc:creator>
      <pubDate>Mon, 16 Feb 2026 18:13:56 +0000</pubDate>
      <link>https://forem.com/paderich87/log-entry-002-rag-in-a-box-5e7f</link>
      <guid>https://forem.com/paderich87/log-entry-002-rag-in-a-box-5e7f</guid>
      <description>&lt;p&gt;Today I decided to refresh my Python skills. Nothing fancy, just the basics that carried me through the past years. I installed VS Code, added some helpful extensions, set up a venv and wrote a few test lines. Simple steps, but it felt good to get back into it.&lt;/p&gt;

&lt;p&gt;My main goal was to build a small RAG application. I wanted a simple retrieval augmented generation setup that I could take with me and run anywhere, as long as Docker is installed. That is why I started calling it RAG in a box.&lt;/p&gt;

&lt;p&gt;Before writing any code, I watched some YouTube videos to get an idea of where things stand today. It was a bit underwhelming. Most videos focused on clicking things together with n8n or Zapier. Nice tools, but not what I wanted right now. I wanted to get my hands dirty :D&lt;br&gt;
Spoiler: nothing got dirty here. My expectations were probably too high.&lt;/p&gt;

&lt;p&gt;So I approached the problem like I would approach a new feature at work. I broke it down into smaller parts. Things that, together, would make this tiny project work.&lt;/p&gt;

&lt;p&gt;Here is what I noted down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;must be portable, so a Docker container is needed&lt;/li&gt;
&lt;li&gt;must be exchangeable, so instead of a container I could plug in an Azure service&lt;/li&gt;
&lt;li&gt;must read files from a directory and load them into a vector database&lt;/li&gt;
&lt;li&gt;must ingest this data properly&lt;/li&gt;
&lt;li&gt;must retrieve data properly&lt;/li&gt;
&lt;li&gt;must work fully locally, for example with LM Studio but also offer the option to switch to another LLM provider if needed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With that list in mind, I looked around for tools that could make this doable. I found several posts and examples using Qdrant. It is open source, runs locally and seems perfect for a project like this. It also pulls an LLM model at startup, which is fine for now.&lt;/p&gt;

&lt;p&gt;To orchestrate the entire workflow, I picked LlamaIndex. I wanted as much flexibility as possible and LlamaIndex makes it easy to connect my data to any LLM backend. But the real heavy lifting happens before that: to turn my documents into searchable numbers (embeddings), I am running a local Hugging Face model. This brings torch and transformers into the picture, allowing me to process data privately without sending it to an external API.&lt;/p&gt;

&lt;p&gt;To ensure the "exchangeability" I listed in my requirements, I implemented a simple &lt;strong&gt;Factory Pattern&lt;/strong&gt;. I didn't want to hardcode the connection to LM Studio. If I want to switch to GPT-4 or Azure tomorrow, I want to do it by changing a single environment variable, not by rewriting code.&lt;/p&gt;

&lt;p&gt;Since LM Studio mimics the OpenAI API, I can actually use the standard OpenAI driver, just pointing to my local machine instead of their servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_llm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Factory to switch between Local, OpenAI, or Azure drivers&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;local&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# LM Studio mimics the OpenAI API, so we just point the base_url locally.
&lt;/span&gt;        &lt;span class="c1"&gt;# No real API key is needed, but the library expects a string.
&lt;/span&gt;        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;api_base&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://host.docker.internal:1234/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;not-needed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;local-model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; 
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;elif&lt;/span&gt; &lt;span class="n"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;azure&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;AzureOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;engine&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;my-deployment&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
            &lt;span class="n"&gt;api_version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;2023-05-15&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;ValueError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Unknown provider: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next steps were straightforward. I wrote a docker compose file and defined the containers: one for the database, one for ingestion and one for retrieval via API. The rest was mostly following the documentation of each module. Preparation took me around two to three hours; the actual implementation maybe thirty to forty minutes. It is not an enterprise-grade RAG system, but it worked.&lt;/p&gt;
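
&lt;p&gt;For orientation, such a compose file can look roughly like this. The service names, ports and paths are illustrative, not a copy of my exact setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"
    volumes:
      - ./qdrant_storage:/qdrant/storage

  ingest:
    build: ./ingest        # reads ./data and pushes embeddings into Qdrant
    volumes:
      - ./data:/app/data
    depends_on:
      - qdrant

  api:
    build: ./api           # serves the /query endpoint
    ports:
      - "8000:8000"
    depends_on:
      - qdrant
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;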

&lt;p&gt;I added a PDF file to the designated folder, ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and then used curl to ask my system about the document.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="s2"&gt;"http://localhost:8000/query"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
     &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"query": "gibe me 5 key facts of the document. shouldnt ne longer than a normal sentence"}'&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And it answered:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"response"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1. The document discusses an algorithm for generating test cases from sequence diagrams.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;2. It presents a method to transform sequence diagrams into tree representations for analysis.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;3. The system was evaluated using a login feature's sequence diagram and test cases.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;4. The results showed that the generated test cases matched the sequence of messages in the diagram.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;5. Sequence Dependency Tables (SDT) are used to analyze message dependencies in sequence diagrams."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That moment made me happy.&lt;/p&gt;

&lt;p&gt;But then I thought, this cannot be everything that AI engineering is about.&lt;/p&gt;

&lt;p&gt;So I started reading about chunking and clustering. It is impressive how the tools split the work: &lt;strong&gt;LlamaIndex handles the chunking automatically, while Qdrant takes care of the vector indexing.&lt;/strong&gt; All this logic is hidden from me. I am just a handyman with a set of tools. I know how to replace a light bulb, but I do not need to understand how energy travels through the entire grid to make it shine. I know where the fuses are and that is enough for now.&lt;/p&gt;
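
&lt;p&gt;To get a feel for what a splitter does under the hood, here is a toy version of fixed-size chunking with overlap. This is only an illustration of the sliding-window idea, not LlamaIndex’s actual implementation, which is token- and sentence-aware:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def chunk_text(text, chunk_size=200, overlap=40):
    """Naive character-based chunking with overlap.

    Real splitters work on tokens and try to respect sentence
    boundaries; this only demonstrates the sliding window.
    """
    if overlap &gt;= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size &gt;= len(text):
            break
    return chunks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The overlap is what keeps a sentence that straddles a boundary retrievable from at least one chunk.&lt;/p&gt;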

&lt;p&gt;This was just the first tiny dip into the water with a very tiny toe.&lt;br&gt;
But it is a start, and I like where this is going.&lt;/p&gt;

&lt;p&gt;Cheers&lt;br&gt;
Pat&lt;/p&gt;

&lt;p&gt;P.S.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Reality Check&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After the initial high of getting a response, I’ve quickly realized that building the "box" was just the tip of the iceberg. It’s one thing to get a clean summary from a simple PDF, but it’s another thing entirely to ensure the system doesn't confidently hallucinate when the data gets messy. I’ve learned that the "magic" is actually in the plumbing, specifically how you chunk the data and how you verify the source of the answer.&lt;/p&gt;

&lt;p&gt;By letting LlamaIndex handle everything "automatically," I’ve essentially handed over the keys to a black box. If the chunking slices a paragraph in the wrong place, the retrieval fails; if the retrieval fails, the LLM just starts guessing. Moving forward, my focus has to shift from just "orchestrating" tools to actually validating the data pipeline. The infrastructure is solid, but the real engineering challenge is turning "it looks right" into "I can prove this is right."&lt;/p&gt;
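
&lt;p&gt;Concretely, the payload I want to move toward looks something like this. The field names are my own sketch, but LlamaIndex’s query responses do expose source nodes with text, scores and metadata that could populate it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def build_response(answer, source_nodes):
    """Shape a query response that carries its evidence.

    source_nodes is assumed to be an iterable of objects with
    .text, .score and .metadata (LlamaIndex exposes similar fields);
    the output field names are my own convention.
    """
    return {
        "response": answer,
        "sources": [
            {
                "snippet": node.text[:200],
                "score": round(node.score, 3),
                "file": node.metadata.get("file_name", "unknown"),
            }
            for node in source_nodes
        ],
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With that in place, an answer without a plausible source or with low scores is an immediate red flag instead of an invisible hallucination.&lt;/p&gt;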

</description>
      <category>ai</category>
      <category>career</category>
      <category>learning</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Log Entry 001 - My first steps toward AI‑102</title>
      <dc:creator>Paderich</dc:creator>
      <pubDate>Mon, 16 Feb 2026 16:05:32 +0000</pubDate>
      <link>https://forem.com/paderich87/log-entry-001-my-first-steps-toward-ai-102-1kp3</link>
      <guid>https://forem.com/paderich87/log-entry-001-my-first-steps-toward-ai-102-1kp3</guid>
      <description>&lt;p&gt;I like the idea of treating this series as a personal log. A bit nerdy, a bit structured, something that marks progress. So here it is: Log Entry 001.&lt;/p&gt;

&lt;p&gt;The year is still young, and so is my path toward becoming an AI engineer. Time to take the first real step.&lt;/p&gt;

&lt;p&gt;Recently I decided to go for the AI‑900 certification. It is the most basic AI certification on Azure, but that is exactly why it feels right. I want to build a solid foundation instead of jumping straight into the deep end. Understanding the principles behind AI services, machine learning basics and how Microsoft structures its ecosystem will help me later when things get more complex.&lt;/p&gt;

&lt;p&gt;Learning for AI‑900 also forces me to zoom out. I get to revisit ideas that I have seen many times over the years, but with a fresh perspective. Things like supervised learning, responsible AI, cognitive services, classification and prediction. Nothing too complex, but enough to get my brain into AI mode again.&lt;/p&gt;

&lt;p&gt;My plan for the next few days is simple. Read, try things out and write down what I discover. As with everything in tech, theory alone does not stick. So I want to get hands‑on as early as possible. Azure has plenty of small playgrounds for this exam, and I want to use them.&lt;/p&gt;

&lt;p&gt;In addition to the official Microsoft learning path, I bought a book called Machine Learning Q and AI by Sebastian Raschka. This book is completely out of my comfort zone, but I like it. It opens up things I never thought about. Sure, I learned linear algebra back in my math classes, but I haven’t thought about it in the past 15 years. I am at least happy that I remembered parts of it, which reminds me that you don’t just learn useless stuff when studying.&lt;/p&gt;

&lt;p&gt;So yes, this book will keep me busy for a while. It is packed with information, way too dense to read in one go.&lt;/p&gt;

&lt;p&gt;To prepare myself better for the exam I bought an exam prep course on Whizlabs, just to study with as much focus as possible. This exam prep consists of a lot of mock questions that are pretty close to the actual exam. I used a similar prep for my AZ‑900, and it helped a lot. The built‑in Microsoft assessment is just a bunch of checkboxes, but the real exam had drag and drop, sentence completion and many other formats.&lt;/p&gt;

&lt;p&gt;Anyway, I just scheduled the exam: 26 February.&lt;/p&gt;

&lt;p&gt;Yes, right, next Thursday...&lt;/p&gt;

&lt;p&gt;So ... this is Log Entry 001.&lt;/p&gt;

&lt;p&gt;This log will follow my progress. Not perfect, not structured for a textbook, but honest and curious. If I learn something useful, I will bring it here. If I hit a wall, I will write about that too.&lt;/p&gt;

&lt;p&gt;Cheers&lt;br&gt;
Pat&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>learning</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Starting My Journey Into AI</title>
      <dc:creator>Paderich</dc:creator>
      <pubDate>Mon, 16 Feb 2026 15:34:57 +0000</pubDate>
      <link>https://forem.com/paderich87/starting-my-journey-into-ai-4e7l</link>
      <guid>https://forem.com/paderich87/starting-my-journey-into-ai-4e7l</guid>
      <description>&lt;p&gt;I’ve been working in the industry for quite some time. I worked as a sys admin, test engineer, software developer and later as an agile coach and product owner. I’ve seen a lot and I’ve learned a lot. During all these years, something kept growing in the background and slowly turned into something big: AI Engineering.&lt;/p&gt;

&lt;p&gt;I want to use this space as a learning log where I share my thoughts and ideas around this domain. I want to become an AI Engineer, not today, not tomorrow, but maybe the day after. I’m eager to learn new things, just like I have for the past 15 years.&lt;/p&gt;

&lt;p&gt;So, what’s next?&lt;/p&gt;

&lt;p&gt;I don’t want to pressure myself with a lot of structure right from the beginning. I just want to use this blog as a companion. Right now I’m experimenting with new technologies, mostly connected to AI engineering, obviously.&lt;/p&gt;

&lt;p&gt;I also want to strengthen my path by getting the AI‑102 certificate from Microsoft.&lt;/p&gt;

&lt;p&gt;Why AI‑102, you ask?&lt;/p&gt;

&lt;p&gt;Because it’s the common tech stack at the place where I’m working.&lt;br&gt;
So yes, this is the plan.&lt;/p&gt;

&lt;p&gt;Let’s see how far I get.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is my starting point.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Cheers&lt;br&gt;
Pat&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>learning</category>
      <category>career</category>
    </item>
  </channel>
</rss>
