<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mahinsha Nazeer</title>
    <description>The latest articles on Forem by Mahinsha Nazeer (@mahinshanazeer).</description>
    <link>https://forem.com/mahinshanazeer</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3048938%2F24e92f7a-bcda-4167-a1fd-52dbc7eab1cd.jpg</url>
      <title>Forem: Mahinsha Nazeer</title>
      <link>https://forem.com/mahinshanazeer</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mahinshanazeer"/>
    <language>en</language>
    <item>
      <title>Building Friday: A Multi-Provider AI Agent That Lives in Your Terminal</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Sun, 12 Apr 2026 01:38:39 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/building-friday-a-multi-provider-ai-agent-that-lives-in-your-terminal-mkb</link>
      <guid>https://forem.com/mahinshanazeer/building-friday-a-multi-provider-ai-agent-that-lives-in-your-terminal-mkb</guid>
      <description>&lt;p&gt;&lt;em&gt;“Just a rather very intelligent system.”&lt;/em&gt; — Tony Stark 😀&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqhrfqdnm6ho40rc4ufx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqhrfqdnm6ho40rc4ufx.png" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every DevOps Engineer has a terminal open. What if that terminal could think?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Friday&lt;/strong&gt; is an open-source, multi-provider AI agent that runs entirely in your terminal. No browser tabs, no Electron apps, no subscriptions bundling features you don’t need. Just a clean, minimal CLI that connects to Gemini, ChatGPT, Claude, GitHub Copilot, or your own local Ollama server — and lets you switch between them with a single command.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk through why I built it, the architecture behind it, and how you can install it in one command.&lt;/p&gt;


&lt;h3&gt;
  
  
  The Problem
&lt;/h3&gt;

&lt;p&gt;Modern AI workflows are fragmented:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Multiple browser tabs (ChatGPT, Claude, Gemini)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Constant copy-paste between the terminal and AI tools&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;No direct execution — only suggestions&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Even a simple query, like checking disk usage, still ends with you copying the suggested command back into the terminal and running it yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The requirement was clear:&lt;/strong&gt;&lt;br&gt;
 A single interface that supports multiple models and can perform real actions on the machine.&lt;/p&gt;
&lt;h3&gt;
  
  
  What Friday Delivers
&lt;/h3&gt;
&lt;h4&gt;
  
  
  1. Unified Multi-Provider Interface
&lt;/h4&gt;

&lt;p&gt;Friday integrates major AI providers into one CLI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1vhq7qb8w89gioookze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1vhq7qb8w89gioookze.png" width="800" height="210"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;available providers&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Model availability is dynamically fetched via APIs — ensuring accuracy without manual updates.&lt;/p&gt;
&lt;h4&gt;
  
  
  2. Agentic Capabilities for DevOps
&lt;/h4&gt;

&lt;p&gt;Friday is built for execution, not just conversation.&lt;/p&gt;

&lt;p&gt;Core capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Shell command execution (with confirmation)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;File operations (read, write, search)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Web search integration&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;System diagnostics (CPU, memory, disk)&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Python execution&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example: instead of only suggesting a command, Friday can run the file search or system diagnostic itself.&lt;/p&gt;

&lt;p&gt;All critical operations require explicit confirmation, ensuring safety.&lt;/p&gt;
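
&lt;p&gt;A confirmation gate like this can be sketched in a few lines of Python (the function name and prompt below are illustrative, not Friday’s actual API):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import subprocess

def run_with_confirmation(command, ask=input):
    """Ask before executing a shell command; do nothing if the user declines."""
    answer = ask(f"Run '{command}'? [y/N] ").strip().lower()
    if answer != "y":
        return None  # declined: the command is never executed
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Anything destructive goes through the prompt first; declining simply returns without touching the system.&lt;/p&gt;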
&lt;h4&gt;
  
  
  3. Voice Interaction
&lt;/h4&gt;

&lt;p&gt;Voice support enables hands-free interaction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Toggle via /voice&lt;/li&gt;
&lt;li&gt;Speak queries with empty input&lt;/li&gt;
&lt;li&gt;Configure pitch, speed, and voice profile&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is particularly useful during troubleshooting or multitasking scenarios.&lt;/p&gt;
&lt;h3&gt;
  
  
  Architecture Overview
&lt;/h3&gt;

&lt;p&gt;Friday follows a modular, provider-agnostic architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0olq5a6akcfkppq6ll0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0olq5a6akcfkppq6ll0r.png" width="800" height="437"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Application Layer (CLI + Commands)
        ↓
Friday Client (Abstraction Layer)
        ↓
Providers (Gemini, OpenAI, Claude, Copilot, Ollama)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Provider Abstraction
&lt;/h4&gt;

&lt;p&gt;Each provider implements a common interface:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BaseProvider&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ABC&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;
    &lt;span class="nd"&gt;@property&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="bp"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enables seamless extensibility with minimal integration effort.&lt;/p&gt;
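
&lt;p&gt;As a rough illustration of that extensibility, a new provider only needs to fill in the interface. The EchoProvider below is a hypothetical stand-in, not one of Friday’s real providers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from abc import ABC, abstractmethod

# Restating the interface from above so this sketch runs on its own.
class BaseProvider(ABC):
    @abstractmethod
    def initialize(self): ...
    @abstractmethod
    def chat(self, message, history, tools): ...
    @property
    @abstractmethod
    def model(self): ...

class EchoProvider(BaseProvider):
    """Toy provider: echoes the prompt back, handy for offline testing."""
    def initialize(self):
        self._model = "echo-1"
    def chat(self, message, history, tools):
        return f"[{self._model}] {message}"
    @property
    def model(self):
        return self._model
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The rest of the application talks only to BaseProvider, so swapping Gemini for Ollama is a configuration change, not a rewrite.&lt;/p&gt;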

&lt;h3&gt;
  
  
  Tool System
&lt;/h3&gt;

&lt;p&gt;Tools are defined once and adapted across providers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@register_tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_system_info&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{...}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A universal schema ensures compatibility across different APIs.&lt;/p&gt;
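
&lt;p&gt;One way such a layer can work is to keep a single registry and reshape each entry per provider. This sketch (names assumed, not Friday’s internals) adapts a registered tool to an OpenAI-style function spec:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import inspect

TOOL_REGISTRY = {}

def register_tool(func):
    """Record the tool once; adapters reshape it for each provider API."""
    TOOL_REGISTRY[func.__name__] = func
    return func

def to_openai_schema(name):
    """Reshape a registered tool into an OpenAI-style function spec."""
    func = TOOL_REGISTRY[name]
    return {
        "type": "function",
        "function": {"name": name, "description": inspect.getdoc(func) or ""},
    }

@register_tool
def get_system_info():
    """Report basic host facts."""
    import platform
    return {"system": platform.system()}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;A second adapter function per provider (Gemini, Claude, and so on) can reuse the same registry entry, which is what keeps the tool definitions single-sourced.&lt;/p&gt;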

&lt;h3&gt;
  
  
  Dynamic Model Discovery
&lt;/h3&gt;

&lt;p&gt;Friday queries provider APIs at runtime:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Eliminates outdated configurations&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Supports local and remote Ollama instances&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Ensures real-time model availability&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
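
&lt;p&gt;For instance, Ollama lists its installed models at GET /api/tags, so discovery reduces to parsing that response. A minimal sketch (the sample payload below is shortened):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json

def parse_ollama_models(payload):
    """Pull model names out of an Ollama /api/tags response body."""
    return [entry["name"] for entry in payload.get("models", [])]

# Shortened sample of what GET http://localhost:11434/api/tags returns:
sample = json.loads('{"models": [{"name": "llama3:latest"}, {"name": "mistral:latest"}]}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;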

&lt;h3&gt;
  
  
  Command System
&lt;/h3&gt;

&lt;p&gt;A structured command interface simplifies interaction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/login Connect provider
/switch Change provider
/model Select model
/logout Remove credentials
/voice Toggle voice
/voice config Configure voice
/tools List tools
/help Show commands
bye Exit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Credentials are securely stored with restricted permissions.&lt;/p&gt;
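
&lt;p&gt;On POSIX systems, “restricted permissions” usually means owner-only access (mode 0600). A sketch of that pattern (not Friday’s exact storage code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
import os
import stat

def save_credentials(path, creds):
    """Write credentials readable and writable by the owner only (0600)."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as fh:
        json.dump(creds, fh)

def credential_file_mode(path):
    """Return just the permission bits of the file."""
    return stat.S_IMODE(os.stat(path).st_mode)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Passing the mode to os.open at creation time avoids the brief window where a chmod-after-write approach would leave the file world-readable.&lt;/p&gt;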

&lt;h3&gt;
  
  
  User Interface
&lt;/h3&gt;

&lt;p&gt;Built using the &lt;strong&gt;Rich&lt;/strong&gt; library, the UI is clean and minimal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Color-coded provider panels&lt;/li&gt;
&lt;li&gt;Markdown rendering with syntax highlighting&lt;/li&gt;
&lt;li&gt;Interactive menus&lt;/li&gt;
&lt;li&gt;Lightweight terminal-first design&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The focus remains on usability rather than visual complexity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Installation
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/mahinshanazeer/friday.git &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;friday &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; bash install.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  First Run
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;friday
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Authenticate using /login, and the session persists for future use.&lt;/p&gt;

&lt;h4&gt;
  
  
  Uninstall
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;friday &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; bash uninstall.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Key Learnings
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Provider APIs Differ Significantly
&lt;/h4&gt;

&lt;p&gt;Function-calling implementations vary widely across providers, requiring a unified abstraction layer.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Terminal UX Matters
&lt;/h4&gt;

&lt;p&gt;Design improvements significantly impact usability and adoption.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Reliability Is Critical
&lt;/h4&gt;

&lt;p&gt;Graceful handling of missing dependencies or failures ensures stability.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Persistence Improves Experience
&lt;/h4&gt;

&lt;p&gt;Credential storage transformed the tool from experimental to practical.&lt;/p&gt;

&lt;h3&gt;
  
  
  Built Using AI
&lt;/h3&gt;

&lt;p&gt;Friday itself was built using AI-assisted development:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Antigravity&lt;/strong&gt;  — architecture and scaffolding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Copilot&lt;/strong&gt;  — inline coding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude&lt;/strong&gt;  — design validation and reviews&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Define architecture via prompts&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Iteratively build features&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Debug and refine using AI&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Result: A production-ready system built in hours rather than weeks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Roadmap
&lt;/h3&gt;

&lt;p&gt;Planned improvements include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Streaming responses&lt;/li&gt;
&lt;li&gt;Chat export functionality&lt;/li&gt;
&lt;li&gt;Plugin architecture&lt;/li&gt;
&lt;li&gt;Multi-modal input (image support)&lt;/li&gt;
&lt;li&gt;Git integration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Repository
&lt;/h3&gt;

&lt;p&gt;🔗 &lt;a href="https://github.com/mahinshanazeer/friday" rel="noopener noreferrer"&gt;https://github.com/mahinshanazeer/friday&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/mahinshanazeer/friday.git &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;friday &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; bash install.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Closing Note
&lt;/h3&gt;

&lt;p&gt;Friday is built on a simple principle:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your terminal should not just execute commands — it should assist, decide, and act.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>agenticai</category>
      <category>ai</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Installing and Configuring n8n on a Raspberry Pi (Private Home Server)</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Mon, 23 Mar 2026 14:49:52 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/installing-and-configuring-n8n-on-a-raspberry-pi-private-home-server-1npe</link>
      <guid>https://forem.com/mahinshanazeer/installing-and-configuring-n8n-on-a-raspberry-pi-private-home-server-1npe</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/mahinshanazeer/n8n-home-server" rel="noopener noreferrer"&gt;GitHub - mahinshanazeer/n8n-home-server: Installing and Configuring n8n on a Raspberry Pi (Private Home Server)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdqk3eddivg0k5qn100r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdqk3eddivg0k5qn100r.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Automation is becoming a core part of modern workflows, whether for DevOps, personal productivity, or integrations. &lt;strong&gt;n8n&lt;/strong&gt; is a powerful, open-source workflow automation platform that allows you to create custom integrations with full control over your data.&lt;/p&gt;

&lt;p&gt;In this guide, we will set up n8n on a Raspberry Pi using Docker, configured for &lt;strong&gt;private network access (no domain, no SSL)&lt;/strong&gt; — ideal for home labs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Self-Host n8n on Raspberry Pi?
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Low-cost, always-on device&lt;/li&gt;
&lt;li&gt;Full data privacy (no third-party cloud dependency)&lt;/li&gt;
&lt;li&gt;Perfect for home lab and DevOps experimentation&lt;/li&gt;
&lt;li&gt;Lightweight yet powerful automation engine&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;Ensure the following are ready:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Raspberry Pi (Pi 4/5 recommended, 4GB+ RAM)&lt;/li&gt;
&lt;li&gt;Linux OS (Ubuntu Server / Raspberry Pi OS Lite)&lt;/li&gt;
&lt;li&gt;Docker &amp;amp; Docker Compose installed&lt;/li&gt;
&lt;li&gt;Static private IP (recommended)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;

&lt;p&gt;We will keep everything inside a single directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/n8n/
├── .env
├── docker-compose.yml
├── n8n-data/
└── local-files/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8g3yso38x1jnir6f0gxd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8g3yso38x1jnir6f0gxd.png" width="591" height="136"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;directory structure&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This approach ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Easy backup&lt;/li&gt;
&lt;li&gt;Clean portability between systems&lt;/li&gt;
&lt;li&gt;Full control over persistent data&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Step 1: Create Project Directory
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; ~/n8n &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd&lt;/span&gt; ~/n8n
&lt;span class="nb"&gt;mkdir &lt;/span&gt;n8n-data local-files
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Step 2: Create Environment Configuration
&lt;/h3&gt;

&lt;p&gt;Create a .env file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;nano .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;N8N_HOST&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;192.168.1.200&lt;/span&gt;
&lt;span class="py"&gt;N8N_PORT&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;5678&lt;/span&gt;
&lt;span class="py"&gt;N8N_PROTOCOL&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;http&lt;/span&gt;

&lt;span class="py"&gt;WEBHOOK_URL&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;http://192.168.1.200:5678/&lt;/span&gt;

&lt;span class="py"&gt;GENERIC_TIMEZONE&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;Asia/Kolkata&lt;/span&gt;

&lt;span class="py"&gt;N8N_SECURE_COOKIE&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Key Notes:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;N8N_HOST → Your Raspberry Pi private IP&lt;/li&gt;
&lt;li&gt;WEBHOOK_URL → Required for workflows&lt;/li&gt;
&lt;li&gt;N8N_SECURE_COOKIE=false → Mandatory for HTTP access&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 3: Create Docker Compose File
&lt;/h3&gt;

&lt;p&gt;Create docker-compose.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;n8n&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker.n8n.io/n8nio/n8n&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;n8n&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;always&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5678:5678"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;N8N_HOST=${N8N_HOST}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;N8N_PORT=${N8N_PORT}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;N8N_PROTOCOL=${N8N_PROTOCOL}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=${N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;N8N_SECURE_COOKIE=${N8N_SECURE_COOKIE}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;NODE_ENV=${NODE_ENV}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;WEBHOOK_URL=${WEBHOOK_URL}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;GENERIC_TIMEZONE=${GENERIC_TIMEZONE}&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;TZ=${GENERIC_TIMEZONE}&lt;/span&gt;
    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="c1"&gt;# n8n data: SQLite database and encryption key&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./n8n-data:/home/node/.n8n&lt;/span&gt;
      &lt;span class="c1"&gt;# Shared files between n8n and host (use /files path inside n8n)&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;./local-files:/files&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Start n8n
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Access n8n
&lt;/h3&gt;

&lt;p&gt;Open in browser:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://192.168.1.200:5678
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bzh4ccffo5wid753jgr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bzh4ccffo5wid753jgr.png" width="800" height="397"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;home page&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zvtns9r1hen59a4ltb8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zvtns9r1hen59a4ltb8.png" width="716" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create your admin account&lt;/li&gt;
&lt;li&gt;Start building workflows (a follow-up post on building workflows is planned)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Common Issue (Important)
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Secure Cookie Error
&lt;/h4&gt;

&lt;p&gt;You may see:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“Your n8n server is configured to use a secure cookie…”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h4&gt;
  
  
  Fix:
&lt;/h4&gt;

&lt;p&gt;Set in .env:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight properties"&gt;&lt;code&gt;&lt;span class="py"&gt;N8N_SECURE_COOKIE&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose down
docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Data Persistence
&lt;/h3&gt;

&lt;p&gt;All important data is stored in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/n8n/n8n-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Workflows&lt;/li&gt;
&lt;li&gt;Credentials&lt;/li&gt;
&lt;li&gt;SQLite database&lt;/li&gt;
&lt;/ul&gt;
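
&lt;p&gt;Because everything persists under this single directory, a backup is just an archive of it. An illustrative Python snippet (paths assumed from the layout above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import tarfile
import time
from pathlib import Path

def backup_n8n(data_dir, dest_dir):
    """Archive the n8n data directory into a timestamped tar.gz file."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(dest_dir) / f"n8n-backup-{stamp}.tar.gz"
    with tarfile.open(dest, "w:gz") as tar:
        tar.add(data_dir, arcname="n8n-data")
    return dest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Stop the container first (docker compose down) so the SQLite database is captured in a consistent state.&lt;/p&gt;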

&lt;p&gt;GitHub: &lt;a href="https://github.com/mahinshanazeer/n8n-home-server" rel="noopener noreferrer"&gt;https://github.com/mahinshanazeer/n8n-home-server&lt;/a&gt;&lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>n8n</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>From a Confused Graduate to a DevOps Engineer</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Mon, 09 Mar 2026 06:50:36 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/from-a-confused-graduate-to-a-devops-engineer-4dfh</link>
      <guid>https://forem.com/mahinshanazeer/from-a-confused-graduate-to-a-devops-engineer-4dfh</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/wecoded-2026"&gt;2026 WeCoded Challenge&lt;/a&gt;: Echoes of Experience&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After graduating with a B.Tech in Electronics and Communication Engineering, I was confused about which profession to choose. There were no mentors around me, and no one I knew worked in this field. Choosing a career path was difficult. I attended several aptitude tests and managed to clear some of them, but I often struggled during interviews because of my poor communication skills. Even today, I still find it challenging to express what is in my mind clearly. Many times, when I tried to explain something, people misunderstood me, which made me even more disappointed.&lt;/p&gt;

&lt;p&gt;After about two months, I received an offer from a company based in Bangalore. Because of the COVID situation, the job was remote. However, the role turned out to be very different from what was described in the job description. It was not a technical role, and the work environment was quite toxic. Since there was nothing meaningful to learn and it did not help in building my career, I decided to resign after just one month. Looking back today, it was one of the wisest decisions I have made in my life.&lt;/p&gt;

&lt;p&gt;Soon after that, I received a call from another company located in Cochin, close to my hometown. The role was related to Linux, so I decided to give it a try.&lt;/p&gt;

&lt;p&gt;I joined Poornam Infovision in January 2022. Initially, it was very tough because I had no background in Linux. After completing my training period, I moved to a team that provided shared hosting support. We handled more than 60 clients and mainly provided ticket-based support. Even though there were some Linux-related tasks, most of the work involved cPanel and Plesk.&lt;/p&gt;

&lt;p&gt;I struggled with multitasking. The ticket flow in the team was very high, and handling multiple issues simultaneously was difficult for me. When I worked on several things at the same time, I sometimes missed important details in tickets, which started affecting my performance. After a few months, HR noticed this, and they initially decided to let me go. I was mentally prepared for it because I felt this role was not the right fit for me.&lt;/p&gt;

&lt;p&gt;However, instead of terminating my role, the HR lead decided to give me another chance and moved me to a different team. That decision completely changed my life.&lt;/p&gt;

&lt;p&gt;In my second team, the most interesting person was my team lead. He became my first real mentor and guided me well. When I joined the team, we had only two clients. Suddenly, we received another client — a data centre where we managed operations. The work involved pure Linux and networking. That was the moment things started to become interesting for me.&lt;/p&gt;

&lt;p&gt;My lead handed over full responsibility for that client to me. It became my first client. In my previous team, my lead was afraid to even let me handle a single ticket, but here someone trusted me with an entire client. I gave my 100% effort, and things slowly started to change. I became more interested in learning new technologies.&lt;/p&gt;

&lt;p&gt;During the same time, one of my college friends joined an MNC and told me about the tasks he was doing during training. That motivated me to start learning on my own. I created an AWS account and launched my first EC2 instance. I experimented with setting up reverse proxies, configuring firewalls, and deploying websites on my server. Slowly, I began exploring other AWS services as well.&lt;/p&gt;

&lt;p&gt;During this period, the client I was handling was moved to another team because it required deeper networking expertise. However, the company decided to merge our team with another team that was focused on AWS. This gave me the opportunity to apply what I had been learning on my own.&lt;/p&gt;

&lt;p&gt;By this time, I had developed a basic understanding of Linux and networking through data centre operations. Through self-learning, I also gained knowledge of core AWS services. From my hosting support experience, I already knew the basics of DNS and Linux systems. This was only the seventh month of my career.&lt;/p&gt;

&lt;p&gt;After the teams merged, we started working on multiple projects together. The environment was very open and collaborative. There were no strict restrictions — we explored problems from different perspectives and discussed ideas as a team. This helped us gain a deeper technical understanding.&lt;/p&gt;

&lt;p&gt;I never stopped learning. I dedicated at least one hour every day to learning something new. I started learning Docker and CI/CD tools. In one of our projects, Docker was required, but the microservices were running as traditional Linux services. I proposed the idea of containerising them to my lead, and he showed interest.&lt;/p&gt;

&lt;p&gt;We attempted to containerise the microservices. I spent more than two months working on it, but eventually failed to achieve the final result. However, during that process, I learned a lot. I gained a deep understanding of file descriptor tables, Docker, Docker Compose, Docker Swarm, Docker networking, volumes, resource management, and secrets.&lt;/p&gt;

&lt;p&gt;Because my early performance in the first team had affected my evaluation, my salary remained very low — around ₹12,000 per month. Meanwhile, some new employees in the team were earning more than ₹25,000. I never complained about it. I focused on learning and improving myself. From a management perspective, they were evaluating based on output, so technically it was understandable.&lt;/p&gt;

&lt;p&gt;After spending two years there, I decided it was time to explore new opportunities. I started applying to other companies and attending interviews. Once again, my communication challenges became a barrier. Even though I had the technical knowledge, explaining it clearly was difficult. But I kept trying.&lt;/p&gt;

&lt;p&gt;Eventually, I received an offer from Infosys.&lt;/p&gt;

&lt;p&gt;At Infosys, I joined a legacy project that was around 15 years old. The team was working on modernising the infrastructure. The project was related to aircraft maintenance systems, and the infrastructure was fully based on AWS.&lt;/p&gt;

&lt;p&gt;At Infosys, I learned a lot about professional processes and enterprise environments. I also worked on improving my communication skills. I learned Bash scripting and started building automation using AWS CLI. I also designed and set up a self-managed Kubernetes cluster as a testing environment. Through this, I gained a better understanding of system architecture and infrastructure design.&lt;/p&gt;

&lt;p&gt;However, after spending almost a year there, I felt that my learning opportunities were becoming limited. Most of the work involved routine operational tasks. I applied internally for other opportunities and even got selected for an Azure-based project. Unfortunately, my manager did not release me from the current project because my performance was strong.&lt;/p&gt;

&lt;p&gt;At that point, I realized that resignation was the only option. Exactly one year after joining, I decided to resign.&lt;/p&gt;

&lt;p&gt;Currently, I am working at Accenture Song. It has been about seven months now. I received the offer and joined immediately after my last working day at Infosys.&lt;/p&gt;

&lt;p&gt;The work here is more challenging. I have to communicate directly with clients and handle complex tasks. There is definitely pressure, but I actually enjoy it. Every day is different, with new challenges and new learning opportunities.&lt;/p&gt;

&lt;p&gt;When I look back at my journey, I realise that the best decisions I made were choosing the right opportunities at the right time. Instead of staying in my comfort zone, I focused on improving myself.&lt;/p&gt;

&lt;p&gt;I am naturally a very introverted person, and my current role requires constant communication with clients. I am still working on improving this skill. At one point in my life, I never imagined that I would work at a company like Accenture. But life has its own way of surprising us.&lt;/p&gt;

&lt;p&gt;I do not know where I will be in the next 5 or 10 years. But I am certain about one thing — I will continue to take opportunities, keep learning, and keep improving myself.&lt;/p&gt;

&lt;p&gt;I still remember that, in the early stages of my career, I often thought about quitting. Looking back today, not quitting was one of the best decisions I have ever made. I may not be working at the biggest company or earning the highest salary. But for someone like me — a person who started with nothing and had no background or guidance in this field — this journey itself feels like an achievement.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>wecoded</category>
      <category>dei</category>
      <category>career</category>
    </item>
    <item>
      <title>Automating Daily Tasks with systemd Timers: A Practical Guide Using Python</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Sat, 29 Nov 2025 06:32:45 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/automating-daily-tasks-with-systemd-timers-a-practical-guide-using-python-1bhm</link>
      <guid>https://forem.com/mahinshanazeer/automating-daily-tasks-with-systemd-timers-a-practical-guide-using-python-1bhm</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7vrcegh6szg1qracnzm.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx7vrcegh6szg1qracnzm.jpeg" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this guide, I will demonstrate how to automate a daily Python task using &lt;strong&gt;systemd timers&lt;/strong&gt; on Linux. We plan to automate a scheduled script execution on our home-lab Raspberry Pi, which operates 24×7 and also functions as the in-house DNS server for our home network.&lt;/p&gt;

&lt;h3&gt;
  
  
  1) Short overview
&lt;/h3&gt;

&lt;p&gt;The setup uses a Python virtual environment located at /home/admin/devops_env and runs a script placed at /home/admin/devops_digest/daily_linkedin_debug.py every day at &lt;strong&gt;7:00 AM&lt;/strong&gt;. This walkthrough provides a clean and reliable approach suitable for production systems, ensuring your scheduled jobs run consistently without relying on external cron utilities.&lt;/p&gt;

&lt;p&gt;Below is a concise, production-ready guide. It explains each step, includes copy-pasteable commands and example service/timer files, and covers testing, logging, security and common troubleshooting notes.&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Linux system with systemd (most modern distributions).&lt;/li&gt;
&lt;li&gt;Script located at /home/admin/devops_digest/daily_linkedin_debug.py.&lt;/li&gt;
&lt;li&gt;A Python virtual environment at /home/admin/devops_env with required packages installed.&lt;/li&gt;
&lt;li&gt;User who will run the job (admin in examples). Adjust paths/user if different.&lt;/li&gt;
&lt;li&gt;(Optional) bsd-mailx configured if your script sends mail.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3) Why use systemd timers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;More reliable than cron (handles missed runs with Persistent=true).&lt;/li&gt;
&lt;li&gt;Centralised logs via journalctl.&lt;/li&gt;
&lt;li&gt;Fine-grained control over the runtime environment and restart/timeout policies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4) Create the systemd service unit
&lt;/h3&gt;

&lt;p&gt;Create /etc/systemd/system/devops_digest.service with the exact content below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Daily LinkedIn DevOps Digest Generator
Documentation=man:systemd.service(5)
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
User=admin
WorkingDirectory=/home/admin/devops_digest
# Use the venv python interpreter to avoid environment issues
ExecStart=/home/admin/devops_env/bin/python /home/admin/devops_digest/daily_linkedin_debug.py
# Set a sensible timeout
TimeoutStartSec=600
# Restrict capabilities for safety (optional)
PrivateTmp=yes
ProtectSystem=full
NoNewPrivileges=yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Notes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User=admin runs the job as admin. Change it if needed.&lt;/li&gt;
&lt;li&gt;ExecStart points to the venv Python so pip-managed packages inside the venv are used.&lt;/li&gt;
&lt;/ul&gt;
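For context, the unit simply executes whatever the script's entry point does. The actual daily_linkedin_debug.py is not shown in this post, so the following is a purely hypothetical skeleton of what such a digest script might look like:

```python
#!/usr/bin/env python3
"""Hypothetical skeleton of a daily digest script run by the systemd service."""
from datetime import date


def build_digest(items):
    """Join headline items into a dated, plain-text digest."""
    header = f"DevOps digest for {date.today().isoformat()}"
    body = "\n".join(f"- {item}" for item in items)
    return f"{header}\n{body}"


if __name__ == "__main__":
    # In the real script these items would come from feeds, APIs, etc.
    print(build_digest(["systemd timers", "journalctl tips"]))
```

Because ExecStart uses the venv interpreter, any third-party packages the real script imports must be installed into that venv, not system-wide.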

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F796tnbxi5owi5df53zs8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F796tnbxi5owi5df53zs8.png" width="800" height="435"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;systemd service&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  5) Create the systemd timer unit
&lt;/h3&gt;

&lt;p&gt;Create /etc/systemd/system/devops_digest.timer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Run DevOps Digest every morning at 07:00

[Timer]
# Run daily at 07:00 local time
OnCalendar=*-*-* 07:00:00
# If the machine was off at the scheduled time, run the missed job at the next boot
Persistent=true
# Randomized delay (optional) to avoid thundering herd if many timers exist
RandomizedDelaySec=1m

[Install]
WantedBy=timers.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9vj21l9rwyfukadgkp6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9vj21l9rwyfukadgkp6.png" width="800" height="306"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;systemd timer&lt;/em&gt;&lt;/p&gt;
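The OnCalendar expression above fires once a day at 07:00 local time. As a rough mental model (this is only an illustration, not how systemd itself evaluates calendar expressions), the next trigger time can be computed like this:

```python
from datetime import datetime, timedelta


def next_seven_am(now):
    """Return the next 07:00 local time strictly after `now`.

    Rough sketch of the daily OnCalendar=*-*-* 07:00:00 schedule;
    systemd's own calendar evaluation is more involved.
    """
    candidate = now.replace(hour=7, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate
```

You can compare this intuition against reality with `systemd-analyze calendar "*-*-* 07:00:00"`, which prints the next elapse time systemd actually computes.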
&lt;h3&gt;
  
  
  6) Enable and start the timer
&lt;/h3&gt;

&lt;p&gt;Reload systemd, enable and start the timer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl enable --now devops_digest.timer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify timer is active and next run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl list-timers devops_digest.timer --all
# or
systemctl status devops_digest.timer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pcycecvja0v025lky67.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pcycecvja0v025lky67.png" width="800" height="273"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;enabling timer&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  7) Test the service manually
&lt;/h3&gt;

&lt;p&gt;Run the service once to test behaviour and logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl start devops_digest.service
sudo systemctl status devops_digest.service --no-pager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8libvovn3s5zgw0idset.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8libvovn3s5zgw0idset.png" width="800" height="137"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Testing&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;View logs from the last run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;journalctl -u devops_digest.service -n 200 --no-pager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the job runs as a non-root user and writes files in the user's home, check file ownership and that the User= is correct.&lt;/p&gt;
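One way to catch this class of permission problem early is to have the script verify, at startup, that it can write where it needs to. This check is an illustrative sketch, not part of the original setup:

```python
import os


def writable_by_service_user(path):
    """Return True if the current process can write to `path`'s directory.

    A quick startup sanity check for User=/ownership mismatches: run it
    as the same user configured in the service unit.
    """
    directory = os.path.dirname(os.path.abspath(path))
    return os.access(directory, os.W_OK)
```

If this returns False under the service user but True in your interactive shell, the fix is usually `chown`-ing the target directory or correcting User= in the unit.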

&lt;h3&gt;
  
  
  8) Inspect timer runs and history
&lt;/h3&gt;

&lt;p&gt;To see when the timer last ran and when it will run next:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl list-timers --all | grep devops_digest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For recent timer and service logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;journalctl -u devops_digest.timer -u devops_digest.service --since "2 days ago" --no-pager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws3r0ai98how5wdtbtfy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fws3r0ai98how5wdtbtfy.png" width="800" height="88"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;next schedule&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  9) Handling environment/secrets
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Do &lt;strong&gt;not&lt;/strong&gt; put secrets in unit files. Use environment files with strict permissions:&lt;/li&gt;
&lt;li&gt;Create /etc/devops_digest/env with KEY=value lines.&lt;/li&gt;
&lt;li&gt;Protect it: sudo chown root:root /etc/devops_digest/env &amp;amp;&amp;amp; sudo chmod 600 /etc/devops_digest/env&lt;/li&gt;
&lt;li&gt;In the service unit: add EnvironmentFile=/etc/devops_digest/env.&lt;/li&gt;
&lt;li&gt;Alternatively, source venv and have the script read secrets from ~/.config/devops_digest/ with 600 perms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example addition to [Service]:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EnvironmentFile=/etc/devops_digest/env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
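Once EnvironmentFile= is in place, the script can read its secrets from the environment and fail fast when one is missing. A minimal sketch, assuming a hypothetical variable name API_TOKEN:

```python
import os


def require_env(name):
    """Read a required secret from the environment (populated by
    EnvironmentFile=) and fail loudly if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value


# Hypothetical usage: the real variable names depend on your script.
# token = require_env("API_TOKEN")
```

Failing fast here means a misconfigured env file shows up as a clear error in journalctl instead of a confusing downstream failure.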



&lt;h3&gt;
  
  
  10) Recovering from failures
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Add retry behaviour with a small wrapper script that retries transient failures; Restart=on-failure is of limited use for a Type=oneshot service.&lt;/li&gt;
&lt;li&gt;Check logs: journalctl -u devops_digest.service -b&lt;/li&gt;
&lt;li&gt;If your script requires the network, ensure After=network-online.target and Wants=network-online.target are present in [Unit].&lt;/li&gt;
&lt;/ul&gt;
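The retry wrapper mentioned above can live inside the Python script itself rather than in systemd. A minimal retry helper with exponential backoff, as a sketch:

```python
import time


def retry(func, attempts=3, delay=1.0, backoff=2.0):
    """Call `func`, retrying on any exception with exponential backoff.

    Re-raises the last error once attempts are exhausted, so the systemd
    service still ends up in a failed state that journalctl will show.
    """
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)
            delay *= backoff
```

Wrapping only the genuinely transient parts (network fetches, mail delivery) keeps permanent errors failing fast.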

&lt;h3&gt;
  
  
  11) Optional: systemd user timer (runs as user)
&lt;/h3&gt;

&lt;p&gt;If you prefer not to create root-owned units, you can use user-level systemd timers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# as admin user (no sudo)
mkdir -p ~/.config/systemd/user
# copy the .service and .timer into ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now devops_digest.timer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Advantages: no root required. Disadvantages: user systemd must be running (e.g., login session or lingering enabled with loginctl enable-linger admin).&lt;/p&gt;

&lt;h3&gt;
  
  
  12) Security considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Run service as non-root user (User=).&lt;/li&gt;
&lt;li&gt;Use PrivateTmp=yes, NoNewPrivileges=yes, ProtectSystem=full where applicable.&lt;/li&gt;
&lt;li&gt;Keep venv and script permissions restricted to the running user.&lt;/li&gt;
&lt;li&gt;Store secrets in protected files (chmod 600) or in a secrets manager.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  13) Sample files recap
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;/etc/systemd/system/devops_digest.service — service unit (see step 4).&lt;/li&gt;
&lt;li&gt;/etc/systemd/system/devops_digest.timer — timer unit (see step 5).&lt;/li&gt;
&lt;li&gt;(Optional) /etc/devops_digest/env — env file for secrets (owner root, mode 600).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  14) Troubleshooting checklist
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;systemctl status devops_digest.service → immediate errors.&lt;/li&gt;
&lt;li&gt;journalctl -u devops_digest.service -e → full logs.&lt;/li&gt;
&lt;li&gt;Check the script shebang and that ExecStart uses the venv Python.&lt;/li&gt;
&lt;li&gt;Ensure User has permissions to read/write files referenced by the script.&lt;/li&gt;
&lt;li&gt;If feed access fails, test curl from the server to verify network/DNS.&lt;/li&gt;
&lt;/ul&gt;
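The network check in the last bullet can also be done from inside Python, which is handy when debugging the script itself. A quick sketch; substitute whatever host your script actually fetches:

```python
import socket


def can_resolve(host):
    """Return True if `host` resolves via DNS.

    A stand-in for the curl-based check: if this fails under the service
    user but curl works in your shell, suspect DNS or network-online ordering.
    """
    try:
        socket.getaddrinfo(host, None)
        return True
    except socket.gaierror:
        return False
```

Note this only tests name resolution; a full check would also attempt an HTTP request to the feed URL.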

&lt;h3&gt;
  
  
  15) Quick blog-friendly conclusion
&lt;/h3&gt;

&lt;p&gt;Systemd timers are the correct choice for reliable scheduled automation on modern Linux servers. They provide robust scheduling, built-in logging, easy management, and options for secure, user-scoped execution. For a daily digest job that runs Python from a virtualenv and emails results, use a one-shot service + timer, set Persistent=true, and test with manual runs and journalctl before relying on the automation.&lt;/p&gt;

</description>
      <category>systemd</category>
      <category>systemdservice</category>
      <category>automation</category>
      <category>crontab</category>
    </item>
    <item>
      <title>Just Upgraded My Raspberry Pi Kubernetes Cluster to v1.34!

I’ve documented the entire process of upgrading my Raspberry Pi–based Kubernetes cluster from v1.29 to v1.34, including ETCD backup, incremental version upgrades, and some practical troubleshooting</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Thu, 16 Oct 2025 14:44:13 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/just-upgraded-my-raspberry-pi-kubernetes-cluster-to-v134-ive-documented-the-entire-process-of-hm</link>
      <guid>https://forem.com/mahinshanazeer/just-upgraded-my-raspberry-pi-kubernetes-cluster-to-v134-ive-documented-the-entire-process-of-hm</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/mahinshanazeer" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3048938%2F24e92f7a-bcda-4167-a1fd-52dbc7eab1cd.jpg" alt="mahinshanazeer"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/mahinshanazeer/upgrading-our-raspberry-pi-kubernetes-cluster-from-v129-to-v134-etcd-backup-guide-28" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Upgrading Our Raspberry Pi Kubernetes Cluster: From v1.29 to v1.34 &amp;amp; ETCD Backup Guide&lt;/h2&gt;
      &lt;h3&gt;Mahinsha Nazeer ・ Oct 16 '25&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#raspberrypi&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#etcd&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetescluster&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>raspberrypi</category>
      <category>kubernetes</category>
      <category>etcd</category>
      <category>kubernetescluster</category>
    </item>
    <item>
      <title>Upgrading Our Raspberry Pi Kubernetes Cluster: From v1.29 to v1.34 &amp; ETCD Backup Guide</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Thu, 16 Oct 2025 14:35:59 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/upgrading-our-raspberry-pi-kubernetes-cluster-from-v129-to-v134-etcd-backup-guide-28</link>
      <guid>https://forem.com/mahinshanazeer/upgrading-our-raspberry-pi-kubernetes-cluster-from-v129-to-v134-etcd-backup-guide-28</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwu5p2naukvs3i2n7de0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwu5p2naukvs3i2n7de0.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keeping your cluster up-to-date is crucial for security, stability, and access to new features. In this blog, we’ll take a closer look at our home lab Kubernetes cluster running on Raspberry Pi devices. The setup includes 1 master node and 3 worker nodes, and it has been running smoothly for over four months without any major issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw08kevqkmj35sxnlaqmi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw08kevqkmj35sxnlaqmi.png" width="702" height="171"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;k8s home lab configuration — nodes&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;master2 — master node/ control plane&lt;br&gt;&lt;br&gt;
master1 — worker node 3&lt;br&gt;&lt;br&gt;
worker 1 — worker node 1&lt;br&gt;&lt;br&gt;
worker 2 — worker node 2&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71l4j326hhw4q63xe3xh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71l4j326hhw4q63xe3xh.png" width="800" height="388"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;k8s home lab configuration — system components&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As part of regular maintenance and to stay up-to-date with the latest features and security patches, we decided to upgrade the cluster from Kubernetes v1.29 to v1.34. Alongside the upgrade, we’ll also walk through the process of backing up the ETCD datastore, which is crucial for fault tolerance and disaster recovery.&lt;/p&gt;

&lt;p&gt;Here’s what we’ll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;⬆️ Step-by-step guide to upgrading Kubernetes components.&lt;/li&gt;
&lt;li&gt;🛡️ Safely backing up ETCD in the master node.&lt;/li&gt;
&lt;li&gt;✅ Post-upgrade verification.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  A. Backing Up ETCD
&lt;/h3&gt;

&lt;p&gt;Before diving into the upgrade process, it’s essential to create a backup of ETCD, the key-value store that holds all cluster state data. This step is crucial because if anything goes wrong during the upgrade — whether it’s a misconfiguration or a failed component — we need a reliable way to restore the cluster to its current stable state.&lt;/p&gt;

&lt;p&gt;Think of the ETCD backup as your safety net. It ensures that even in the worst-case scenario, you won’t lose your cluster’s configuration, workloads, or networking setup.&lt;/p&gt;

&lt;p&gt;Run the following command to confirm the ETCD pod name running on your control plane:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n kube-system | grep etcd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rap63eiwe1sooahn0zj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rap63eiwe1sooahn0zj.png" width="800" height="66"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;k8s configuration — ETCD setup&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If your control plane (master node) runs ETCD as a static pod (the default in kubeadm-based clusters, including Raspberry Pi setups), you can run etcdctl directly on the master node host, because the ETCD data directory and certificates are mounted locally at &lt;em&gt;/etc/kubernetes/pki/etcd/&lt;/em&gt; and &lt;em&gt;/var/lib/etcd/&lt;/em&gt;. In that case, use the following command to take the backup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd/backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If ETCD runs as a container (inside the pod) or you don’t have the etcdctl binary available on the host, then you’ll need to exec into the pod to run the backup from inside the container.&lt;/p&gt;

&lt;p&gt;In my setup, where &lt;strong&gt;etcd&lt;/strong&gt; runs as a container, I need to access the pod directly to execute the backup. There is no etcdctl binary available on the host.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -n kube-system &amp;lt;node-label&amp;gt;-- \
  etcdctl snapshot save /var/lib/etcd/backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# replace &amp;lt;etcd-pod-name&amp;gt; with your etcd pod name, e.g. etcd-master2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0vxw69c8reo1hmz19nz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0vxw69c8reo1hmz19nz.png" width="800" height="290"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Creating backup&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now, verify the snapshot using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -n kube-system &amp;lt;node-label&amp;gt;-- \
  etcdctl snapshot status /var/lib/etcd/backup.db -w table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow92f5m98koxinvw3dql.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fow92f5m98koxinvw3dql.png" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;In the above screenshot, you can see a deprecation warning: etcd v3.6+ deprecates &lt;em&gt;etcdctl snapshot status&lt;/em&gt; and recommends &lt;em&gt;etcdutl snapshot status&lt;/em&gt; instead. However, in my kubeadm static pod setup, the &lt;em&gt;etcdutl&lt;/em&gt; binary is not included in the etcd container; only &lt;em&gt;etcdctl&lt;/em&gt; exists inside the pod. The warning is simply a deprecation notice, not a failure, so you can safely continue using &lt;em&gt;etcdctl snapshot status&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Since my cluster is kubeadm-managed, even though etcd v3.6+ recommends using etcdutl for snapshot management, etcdctl inside the etcd pod still works perfectly for backup and restore. etcdutl is a newer utility introduced in etcd v3.6 as a replacement for certain snapshot operations, but in my case, continuing with etcdctl is fully sufficient. To confirm the version in use, I ran the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl exec -n kube-system etcd-master2 -- etcdctl version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2twmjlvobru6n5v615x9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2twmjlvobru6n5v615x9.png" width="800" height="92"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;etcd version checking&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In some scenarios, the etcd backup file may reside only inside the etcd pod and not in a host-accessible directory.&lt;/p&gt;

&lt;p&gt;This can happen if the etcd container is minimal and lacks tools like tar to move files around, or if the host does not have etcdctl installed to create the snapshot directly. In such cases, you can use kubectl cp to copy the backup file to a host-accessible directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl cp kube-system/etcd-master2:/tmp/backup.tar.gz ./backup.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In my setup, I created the etcd snapshot directly on the host in a host-accessible directory (for example, /var/lib/etcd/backup.db). Since the backup is already on the host, it can be copied or archived without using kubectl cp.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo cp /var/lib/etcd/backup.db ~/backup.db
sudo chown $USER:$USER ~/backup.db
ls -l ~/backup.db
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsotg0vm7mawwk7w4kpvi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsotg0vm7mawwk7w4kpvi.png" width="800" height="82"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;backup.db file status&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Additionally, it’s a good practice to create a copy of the backup on a remote server. This ensures an extra layer of safety and helps protect against data loss in case of hardware failure or accidental deletion.&lt;/p&gt;
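Before shipping the snapshot to a remote server, it is also worth recording a checksum so the remote copy can be verified later. A small sketch in Python:

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 of a snapshot file in 1 MiB chunks.

    Compare this against the hash of the remote copy to confirm the
    backup survived the transfer intact.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The same result can be obtained on the command line with `sha256sum ~/backup.db` on both machines.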
&lt;h3&gt;
  
  
  B. Upgrading the cluster
&lt;/h3&gt;

&lt;p&gt;Since we have backed up etcd, we can safely proceed with the kubeadm upgrade for our Raspberry Pi Kubernetes cluster. Let’s go through a step-by-step process to upgrade the cluster from v1.29 → v1.34.&lt;/p&gt;

&lt;p&gt;Kubernetes officially supports upgrading one minor version at a time, which means that to move from 1.29 to 1.34, you need to perform incremental upgrades:&lt;/p&gt;

&lt;p&gt;Step 1: v1.29 → v1.30 &lt;br&gt;
Step 2: v1.30 → v1.31&lt;br&gt;&lt;br&gt;
Step 3: v1.31 → v1.32&lt;br&gt;&lt;br&gt;
Step 4: v1.32 → v1.33&lt;br&gt;&lt;br&gt;
Step 5: v1.33 → v1.34&lt;/p&gt;

&lt;p&gt;Please refer to the following URL for k8s official documentation on skew policy:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://v1-30.docs.kubernetes.io/releases/version-skew-policy/" rel="noopener noreferrer"&gt;Version Skew Policy&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 1: Pre-Checks for upgrading the cluster
&lt;/h4&gt;

&lt;p&gt;Before upgrading the cluster, it’s essential to ensure that the cluster is healthy and stable. This helps prevent issues during the upgrade and ensures a smooth transition between versions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl version
kubectl get nodes
kubectl get cs
kubectl get pods -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pynghdxs65c8xhkoa0y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pynghdxs65c8xhkoa0y.png" width="800" height="817"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Current status of the cluster&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 2: Upgrade kubeadm on the Master Node
&lt;/h4&gt;

&lt;p&gt;You can also refer to the following documentation for more details regarding kubeadm upgrade:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://v1-30.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="noopener noreferrer"&gt;Upgrading kubeadm clusters&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To upgrade to v1.30, ensure that the directory /etc/apt/keyrings exists. If it doesn’t, create it before running the curl command (see the note below). Afterwards, update the package repository to target v1.30.&lt;/p&gt;

&lt;p&gt;Each Kubernetes minor release repository (v1.29, v1.30, v1.31, etc.) may use a different GPG key. Using the key from an older version, such as v1.29, can cause package verification failures when installing packages from the v1.30 repository.&lt;/p&gt;

&lt;p&gt;To prevent such errors, always use the GPG key and repository provided specifically for the minor version you are upgrading to.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Before upgrading kubeadm, ensure your package lists are up to date. Then install the required version using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-mark unhold kubelet kubeadm kubectl &amp;amp;&amp;amp; \
sudo apt-get update
sudo apt-get install -y kubeadm=1.30.14-1.1 kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo apt-mark hold kubelet kubeadm kubectl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want to upgrade to a specific Kubernetes version, you can use the following command to list all package versions available in the configured repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apt-cache madison kubeadm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8b9p9j4vgpfgu8lr2bly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8b9p9j4vgpfgu8lr2bly.png" width="800" height="376"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;available packages in the repo&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For comprehensive guidance on upgrading your Kubernetes cluster to version &lt;strong&gt;v1.30&lt;/strong&gt;, refer to the official Kubernetes documentation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="noopener noreferrer"&gt;Installing kubeadm&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This resource provides detailed, step-by-step instructions tailored for clusters managed with &lt;strong&gt;kubeadm&lt;/strong&gt;, ensuring a smooth and efficient upgrade process.&lt;/p&gt;

&lt;p&gt;Once the upgrade is complete, verify the &lt;strong&gt;kubeadm&lt;/strong&gt; version using the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubeadm version
kubectl version --client
kubelet --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6wrh1jaxir1eqpzu54g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6wrh1jaxir1eqpzu54g.png" width="800" height="154"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;current kubeadm version&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 3: Plan the Upgrade
&lt;/h4&gt;

&lt;p&gt;After upgrading kubeadm, it’s important to check the upgrade path and verify the versions of all cluster components. This ensures that all parts of your Kubernetes cluster are compatible and running the expected versions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo kubeadm upgrade plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will display:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;current cluster version&lt;/strong&gt; and control plane version.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;available target versions&lt;/strong&gt; you can upgrade to.&lt;/li&gt;
&lt;li&gt;The versions of &lt;strong&gt;kubelet&lt;/strong&gt; and &lt;strong&gt;kubectl&lt;/strong&gt; on your nodes.&lt;/li&gt;
&lt;li&gt;Any &lt;strong&gt;required steps or notes&lt;/strong&gt; for a smooth upgrade.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdpa8khrtcasuyttn82s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdpa8khrtcasuyttn82s.png" width="800" height="528"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;kubeadm upgrade plan output — 1&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pnju1qqyahtdwr9tubw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pnju1qqyahtdwr9tubw.png" width="800" height="205"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;kubeadm upgrade plan output — 2&lt;/em&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Step 4: Upgrade the Control Plane
&lt;/h4&gt;

&lt;p&gt;Once the upgrade plan is reviewed and all changes are understood, you can proceed with upgrading the cluster. Use the kubeadm upgrade apply command, which is provided in the output of kubeadm upgrade plan.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo kubeadm upgrade apply v1.30.14
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The upgrade process may take some time, depending on the size and complexity of your cluster. Once it completes successfully, you should see a message similar to the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8d40tvchwgkydiy0v5ky.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8d40tvchwgkydiy0v5ky.png" width="800" height="207"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;upgraded to “v1.30.14”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now, verify the control plane using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#rebooting the master node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
kubectl get pods -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Step 5: Upgrade Worker Nodes
&lt;/h4&gt;

&lt;p&gt;Now repeat the same steps from ‘Step 2: Upgrade kubeadm on the Master Node’ on each worker node (master1, worker1, and worker2 in this cluster). Workers do not manage the cluster state, so running kubeadm upgrade plan on them provides no meaningful information. First, drain the worker node from the control plane so that its workloads are evicted and rescheduled elsewhere:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl drain &amp;lt;worker-node&amp;gt; --ignore-daemonsets --delete-emptydir-data

***
kubectl drain master1 --ignore-daemonsets --delete-emptydir-data
***
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98al0b98iy9pgq2tt2re.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F98al0b98iy9pgq2tt2re.png" width="800" height="160"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;draining resources from worker node (master1)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now you can repeat the steps we followed for control plane for upgrading the version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh to master1 node
-------------------------------
#step 1:
sudo rm -rf /etc/apt/sources.list.d/kubernetes.list

#step 2:
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
#step 3:
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

#step 4:
sudo apt-mark unhold kubelet kubeadm kubectl &amp;amp;&amp;amp; \
sudo apt-get update
sudo apt-get install -y kubeadm=1.30.14-1.1 kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo apt-mark hold kubelet kubeadm kubectl

#step 5:
kubeadm version

#step 6:
#rebooting the node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
kubectl get pods -n kube-system

#step 7:
kubeadm upgrade node

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkm8w1ta7jb5c415svhg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkm8w1ta7jb5c415svhg.png" width="800" height="164"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;upgrading worker node — host ‘master1’ is a worker node here (master2 — control plane; master1 — worker node 3; worker1 — worker node 1; worker2 — worker node 2)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Also verify the nodes from control plane:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr4xepgvt9acayq5l05i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftr4xepgvt9acayq5l05i.png" width="800" height="155"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;upgrade completed&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now we can run the following command to make the node schedulable again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl uncordon master1
#run this command in the control plane.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1eipop8briafak1jw5tv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1eipop8briafak1jw5tv.png" width="671" height="187"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Marking node as schedulable&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;kubectl uncordon command is used to &lt;strong&gt;mark a node as schedulable again&lt;/strong&gt; after it has been drained.&lt;/p&gt;

&lt;p&gt;Similarly, we can perform the upgrade on the other two worker nodes using the commands below:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;worker1:&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl drain &amp;lt;worker-node&amp;gt; --ignore-daemonsets --delete-emptydir-data

***
kubectl drain worker1 --ignore-daemonsets --delete-emptydir-data
***

ssh to worker1 node

#step 1:
sudo rm -rf /etc/apt/sources.list.d/kubernetes.list

#step 2:
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
#step 3:
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

#step 4:
sudo apt-mark unhold kubelet kubeadm kubectl &amp;amp;&amp;amp; \
sudo apt-get update
sudo apt-get install -y kubeadm=1.30.14-1.1 kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo apt-mark hold kubelet kubeadm kubectl

#step 5:
kubeadm version

#step 6:
#rebooting the node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet

#step 7:
sudo kubeadm upgrade node

ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl uncordon worker1
kubectl get nodes
kubectl get pods -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;worker2:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl drain &amp;lt;worker-node&amp;gt; --ignore-daemonsets --delete-emptydir-data

***
kubectl drain worker2 --ignore-daemonsets --delete-emptydir-data
***

ssh to worker2 node
-------------------------------
#step 1:
sudo rm -rf /etc/apt/sources.list.d/kubernetes.list

#step 2:
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
#step 3:
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

#step 4:
sudo apt-mark unhold kubelet kubeadm kubectl &amp;amp;&amp;amp; \
sudo apt-get update
sudo apt-get install -y kubeadm=1.30.14-1.1 kubelet=1.30.14-1.1 kubectl=1.30.14-1.1
sudo apt-mark hold kubelet kubeadm kubectl

#step 5:
kubeadm version

#step 6:
#rebooting the node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
kubectl get pods -n kube-system

#step 7:
sudo kubeadm upgrade node

ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl uncordon worker2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All nodes have now been successfully upgraded to &lt;strong&gt;v1.30.14&lt;/strong&gt;, as shown in the following screenshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1u1kw1kw3zc9jnbkarp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1u1kw1kw3zc9jnbkarp.png" width="617" height="170"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;kubernetes version v1.30.14&lt;/em&gt;&lt;/p&gt;
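&lt;p&gt;The drain → upgrade → uncordon cycle repeated on each worker above can be condensed into a small helper. This is an illustrative sketch only: &lt;em&gt;upgrade_worker&lt;/em&gt; is a hypothetical function that merely prints the commands instead of running them; the node names and package version are the ones used in this post:&lt;/p&gt;

```shell
# Illustrative sketch: print the per-worker upgrade sequence used above.
# upgrade_worker is a hypothetical helper; it echoes the commands rather than
# executing them, so this sketch is safe to run anywhere.
upgrade_worker() {
  local node="$1" ver="$2"
  echo "kubectl drain $node --ignore-daemonsets --delete-emptydir-data"
  echo "ssh $node 'sudo apt-get install -y kubeadm=$ver kubelet=$ver kubectl=$ver && sudo kubeadm upgrade node && sudo systemctl restart kubelet'"
  echo "kubectl uncordon $node"
}

for n in master1 worker1 worker2; do
  upgrade_worker "$n" "1.30.14-1.1"
done
```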
&lt;h4&gt;
  
  
  Step 6: Upgrading to v1.34
&lt;/h4&gt;

&lt;p&gt;Following the same process, we’ll now upgrade the cluster from &lt;strong&gt;v1.30.14&lt;/strong&gt; to &lt;strong&gt;v1.31&lt;/strong&gt;, and then continue through the remaining versions:&lt;/p&gt;

&lt;p&gt;Step 1: v1.29.15 → v1.30.14-1.1&lt;br&gt;
Step 2: v1.30.14-1.1 → v1.31.13-1.1&lt;br&gt;
Step 3: v1.31.13-1.1 → v1.32.9&lt;br&gt;
Step 4: v1.32.9 → v1.33.5&lt;br&gt;
Step 5: v1.33.5 → v1.34.1&lt;/p&gt;

&lt;p&gt;On the control plane (master2), use the following steps to upgrade to version 1.31:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh to controlplane (master2)

#step 1:
sudo rm -rf /etc/apt/sources.list.d/kubernetes.list

# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

#step 2:
sudo apt update
apt-cache madison kubeadm

#step 3:
sudo apt-mark unhold kubelet kubeadm kubectl &amp;amp;&amp;amp; \
sudo apt-get update
sudo apt-get install -y kubeadm kubelet kubectl #installs the latest version available in the configured repo
sudo apt-mark hold kubelet kubeadm kubectl

#step 4:
kubeadm version
kubectl version --client
kubelet --version

#step 5:
sudo kubeadm upgrade plan

#step 6:
sudo kubeadm upgrade apply &amp;lt;version&amp;gt;
#I used the following command
sudo kubeadm upgrade apply v1.31.13

#step 7:
#rebooting the master node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
kubectl get pods -n kube-system
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let’s upgrade the worker nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl drain &amp;lt;worker-node&amp;gt; --ignore-daemonsets --delete-emptydir-data

ssh to &amp;lt;worker-node&amp;gt; node
-------------------------------
#step 1:
sudo rm -rf /etc/apt/sources.list.d/kubernetes.list

#step 2:
# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
#step 3:
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

#step 4:
sudo apt update
apt-cache madison kubeadm
sudo apt-mark unhold kubelet kubeadm kubectl &amp;amp;&amp;amp; \
sudo apt-get install -y kubeadm kubelet kubectl #installs the latest version available in the configured repo
sudo apt-mark hold kubelet kubeadm kubectl

#step 5:
kubeadm version

#step 6:
sudo kubeadm upgrade node

#step 7:
#rebooting the node or restarting the kubelet is highly recommended
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubectl get nodes
kubectl get pods -n kube-system

ssh to controlplane (master2)
-------------------------------
#step 1:
kubectl uncordon &amp;lt;worker-node&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;a href="https://v1-31.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="noopener noreferrer"&gt;Upgrading kubeadm clusters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://v1-32.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="noopener noreferrer"&gt;Upgrading kubeadm clusters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://v1-33.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/" rel="noopener noreferrer"&gt;Upgrading kubeadm clusters&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkgbd2ezpmytizepmttn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkgbd2ezpmytizepmttn.png" width="671" height="171"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;upgraded to v1.31.13&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Follow the same steps to upgrade sequentially through the remaining versions. The process remains identical; the only change is the Kubernetes repository you configure in &lt;strong&gt;Step 2&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Simply replace it with the corresponding repository for each target version before proceeding with the upgrade.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;For v1.32:
------------
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

For v1.33:
------------
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

For v1.34:
------------
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So far, so good. If you ever need to revert the changes, you can restore etcd from the backup file using the steps below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#in a kubeadm cluster the API server and etcd run as static pods, not systemd
#services; stop them by moving their manifests out of the manifests directory
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/manifests/etcd.yaml /tmp/

#restore the snapshot, mentioning the exact path - /var/lib/etcd/backup.db;
#restore into a new data directory, since snapshot restore refuses to
#overwrite an existing one
sudo ETCDCTL_API=3 etcdctl snapshot restore /var/lib/etcd/backup.db \
  --data-dir /var/lib/etcd-restored

#point the etcd manifest (the hostPath for /var/lib/etcd) at the restored
#directory, then move both manifests back to restart the static pods
sudo mv /tmp/etcd.yaml /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/

#now verify the installation using following command
etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

kubectl get nodes
kubectl get pods --all-namespaces
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpmnieilmtvmx07y4xe4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpmnieilmtvmx07y4xe4.png" width="617" height="171"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;version v1.34.1&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We have now successfully completed the cluster upgrade. 🎉&lt;/p&gt;

&lt;p&gt;If you have any questions, feel free to reach out — I’m not a specialist, but I’ll be happy to research and share the most accurate updates I can find.&lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>kubernetes</category>
      <category>etcd</category>
      <category>kubernetescluster</category>
    </item>
    <item>
      <title>A Hands-On Kubernetes Project for Starters</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Sun, 07 Sep 2025 12:21:58 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/a-hands-on-kubernetes-project-for-starters-2g4</link>
      <guid>https://forem.com/mahinshanazeer/a-hands-on-kubernetes-project-for-starters-2g4</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/mahinshanazeer" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3048938%2F24e92f7a-bcda-4167-a1fd-52dbc7eab1cd.jpg" alt="mahinshanazeer"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/mahinshanazeer/deploying-a-simple-app-on-k3s-in-aws-ec2-with-github-actions-ecr-520j" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Deploying a Simple App on K3S in AWS EC2 with GitHub Actions &amp;amp; ECR&lt;/h2&gt;
      &lt;h3&gt;Mahinsha Nazeer ・ Sep 7 '25&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#github&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetes&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kubernetescluster&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#githubactions&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>github</category>
      <category>kubernetes</category>
      <category>kubernetescluster</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Deploying a Simple App on K3S in AWS EC2 with GitHub Actions &amp; ECR</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Sun, 07 Sep 2025 12:15:31 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/deploying-a-simple-app-on-k3s-in-aws-ec2-with-github-actions-ecr-520j</link>
      <guid>https://forem.com/mahinshanazeer/deploying-a-simple-app-on-k3s-in-aws-ec2-with-github-actions-ecr-520j</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcdq4zhq3d93sh0etvku.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcdq4zhq3d93sh0etvku.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this session, we’ll walk through the configuration of K3S on an EC2 instance and deploy a multi-container application with a frontend, backend, and database. The application will run inside a Kubernetes cluster using Deployments and StatefulSets in headless mode. For the setup, we’ll use EC2 to host the cluster, GitHub as our code repository, and GitHub Actions to implement CI/CD.&lt;/p&gt;

&lt;p&gt;If you’re an absolute beginner and not familiar with configuring EC2, I recommend checking out my blog here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/mahinshanazeer/step-by-step-guide-to-launching-an-ec2-instance-on-aws-for-beginners-1ak8"&gt;Step-by-Step Guide to Launching an EC2 Instance on AWS : For Beginners&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will be an end-to-end project deployment designed for those learning K3S, CI/CD, and Docker. You’ll gain hands-on experience in setting up CI/CD pipelines, writing Dockerfiles, and using Docker Compose. We’ll then move on to deploying the application in K3S, working with Kubernetes manifests, and exploring key components such as Deployments, Services (NodePort and ClusterIP), ConfigMaps, Persistent Volumes (PV), Persistent Volume Claims (PVC), and StatefulSets.&lt;/p&gt;

&lt;p&gt;K3S is a lightweight Kubernetes distribution developed by Rancher (now SUSE). It’s designed to be:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Lightweight — small binary, minimal dependencies.&lt;/p&gt;

&lt;p&gt;Easy to install — single command installation.&lt;/p&gt;

&lt;p&gt;Optimized for edge, IoT, and small clusters — runs well on low-resource machines like Raspberry Pi or small EC2 instances.&lt;/p&gt;

&lt;p&gt;Fully compliant — supports all standard Kubernetes APIs and workloads.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In short, K3S simplifies Kubernetes and makes it resource-efficient, making it ideal for single-node clusters, test environments, and learning purposes.&lt;/p&gt;

&lt;p&gt;Log in to the EC2 machine and install K3S first:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;K3s&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can install K3S on your machine using the following single command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt update -y &amp;amp;&amp;amp; sudo apt upgrade -y
curl -sfL https://get.k3s.io | sh - 
# Check for Ready node, takes ~30 seconds 
sudo k3s kubectl get node 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7byvfgaf6ywdbhcp0yqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7byvfgaf6ywdbhcp0yqr.png" width="800" height="350"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Installation of k3s&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once the installation is completed, the output should be similar to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nhx8en59s4xevsut36m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nhx8en59s4xevsut36m.png" width="800" height="87"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Kubectl node status&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once the cluster is up and running, we can move on to the application. You can refer to the following repository for the demo To-Do List app. Before cloning the repository, make sure Docker is installed on the machine to build and test the application. For installing Docker, refer to the following URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.docker.com/engine/install/ubuntu/" rel="noopener noreferrer"&gt;Ubuntu&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#run the following command first to remove conficting packages

for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release &amp;amp;&amp;amp; echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list &amp;gt; /dev/null

sudo apt-get update

# Install Docker
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Now verify the installation.
sudo docker run hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
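&lt;p&gt;One detail worth noting in the repository line above is the shell parameter expansion ${UBUNTU_CODENAME:-$VERSION_CODENAME}: it falls back to VERSION_CODENAME when UBUNTU_CODENAME is unset or empty, which is what lets the same command work on both stock Ubuntu and its derivatives. A minimal sketch of the behaviour:&lt;/p&gt;

```shell
#!/bin/sh
# ${VAR:-fallback} substitutes the fallback when VAR is unset OR empty.
VERSION_CODENAME="jammy"

UBUNTU_CODENAME=""                            # empty, as on some derivatives
echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}"  # prints: jammy

UBUNTU_CODENAME="noble"                       # set, as on stock Ubuntu
echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}"  # prints: noble
```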



&lt;p&gt;Now, let’s dive into the demo application. The Application Stack:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Frontend: React.js
&lt;/li&gt;
&lt;li&gt;Backend API: Node.js + Express
&lt;/li&gt;
&lt;li&gt;Database: MongoDB
&lt;/li&gt;
&lt;li&gt;Containerization &amp;amp; Registry: Docker + AWS ECR
&lt;/li&gt;
&lt;li&gt;Orchestration &amp;amp; Service Management: Kubernetes (K3s)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, let’s clone the application repository to your local machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/mahinshanazeer/docker-frontend-backend-db-to_do_app.git" rel="noopener noreferrer"&gt;GitHub - mahinshanazeer/docker-frontend-backend-db-to_do_app: Simple Application with Frontend + Backened + DB&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/mahinshanazeer/docker-frontend-backend-db-to_do_app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj11yqu3pclcjjhj8szkf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj11yqu3pclcjjhj8szkf.png" width="800" height="180"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Cloning the GitHub repository&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once the repository is cloned, switch to the application directory and check for the Docker Compose file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32ic1wexe1z5w1xquslu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F32ic1wexe1z5w1xquslu.png" width="800" height="427"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Directory structure&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: "3.8"
services:
  web:
    build:
      context: ./frontend
      args:
        REACT_APP_API_URL: ${REACT_APP_API_URL}
    depends_on:
      - api
    ports:
      - "3000:80"
    networks:
      - network-backend
    env_file:
      - ./frontend/.env

  api:
    build: ./backend
    depends_on:
      - mongo
    ports:
      - "3001:3001"
    networks: 
      - network-backend

  mongo:
    build: ./backend-mongo  
    image: docker-frontend-backend-db-mongo
    restart: always
    volumes: 
      - ./backend-mongo/data:/data/db
    environment: 
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: adminhackp2025
    networks: 
      - network-backend

networks:
  network-backend:

volumes: 
  mongodb_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
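&lt;p&gt;Note that in the Compose file above, depends_on only controls start order; it does not wait for MongoDB to actually accept connections, so the API can race the database on first boot. A healthcheck-gated variant closes that gap. This is a sketch, not part of the repository's Compose file, and it assumes the mongo image ships mongosh (true for mongo:6.0):&lt;/p&gt;

```yaml
services:
  api:
    build: ./backend
    depends_on:
      mongo:
        condition: service_healthy   # wait for the healthcheck, not just container start

  mongo:
    build: ./backend-mongo
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
```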



&lt;p&gt;In the Docker Compose file, you’ll see sections for web, api, and mongo. Let’s dive into each directory and review the Dockerfiles. The Docker Compose file builds the Docker images using the Dockerfiles located in their respective directories.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
root@ip-172-31-22-24:/home/ubuntu/docker-frontend-backend-db-to_do_app/frontend# cd /home/ubuntu/docker-frontend-backend-db-to_do_app/frontend
root@ip-172-31-22-24:/home/ubuntu/docker-frontend-backend-db-to_do_app/frontend# cat Dockerfile 
# ---------- Build Stage ----------
FROM node:16-alpine AS build

WORKDIR /app

# Copy dependency files first
COPY package*.json ./

# Install dependencies
RUN npm install --legacy-peer-deps

# Copy rest of the app
COPY . .

# Build the React app
RUN npm run build

# ---------- Production Stage ----------
FROM nginx:alpine

# Copy custom nginx config if you have one
# COPY nginx.conf /etc/nginx/conf.d/default.conf

# Copy build output from build stage
COPY --from=build /app/build /usr/share/nginx/html

# Expose port 80
EXPOSE 80

# Start nginx
CMD ["nginx", "-g", "daemon off;"]

***

root@ip-172-31-22-24:/home/ubuntu/docker-frontend-backend-db-to_do_app/backend# cd /home/ubuntu/docker-frontend-backend-db-to_do_app/backend
root@ip-172-31-22-24:/home/ubuntu/docker-frontend-backend-db-to_do_app/backend# cat Dockerfile 
FROM node:10-alpine

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3001

***

root@ip-172-31-22-24:/home/ubuntu/docker-frontend-backend-db-to_do_app/backend# cd /home/ubuntu/docker-frontend-backend-db-to_do_app/backend-mongo/
root@ip-172-31-22-24:/home/ubuntu/docker-frontend-backend-db-to_do_app/backend-mongo# cat Dockerfile 
FROM mongo:6.0
EXPOSE 27017
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the .env file in the frontend directory and update the IP address to your EC2 public IP. This environment variable is used by the frontend to connect to the backend, which runs on port 3001.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vi /home/ubuntu/docker-frontend-backend-db-to_do_app/frontend/.env

#edit the IP address, I have updated my EC2 public IP
~~~
REACT_APP_API_URL=http://54.90.185.176:3001/
~~~
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
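&lt;p&gt;Editing the file by hand works, but the update can also be scripted, which is handy because the EC2 public IP changes whenever the instance is stopped and started. A small sketch (the demo file stands in for frontend/.env; the IP is the example used in this article):&lt;/p&gt;

```shell
#!/bin/sh
# Rewrite REACT_APP_API_URL in a .env file to point at a new host IP.
ENV_FILE="frontend.env.demo"
NEW_IP="54.90.185.176"

# Create a sample .env to operate on (stands in for frontend/.env).
printf 'REACT_APP_API_URL=http://1.2.3.4:3001/\n' > "$ENV_FILE"

# Replace the host part, keeping the scheme, port and trailing slash.
sed -i "s|^REACT_APP_API_URL=http://[^:]*:3001/|REACT_APP_API_URL=http://${NEW_IP}:3001/|" "$ENV_FILE"

cat "$ENV_FILE"   # REACT_APP_API_URL=http://54.90.185.176:3001/
```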



&lt;p&gt;We can also cross-check the number of API endpoints defined in the backend using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;grep -R "router." backend/ | grep "("
grep -R "app." backend/ | grep "("
grep -R "app." backend/ | grep "(" | wc -l
grep -R "router." backend/ | wc -l
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
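&lt;p&gt;One caveat with the patterns above: the unescaped dot in "router." is a regex wildcard, so it matches any character and can over-count. Escaping the dot and naming the HTTP verbs gives a stricter count. A sketch using a sample file (the route names are hypothetical, not from the repository):&lt;/p&gt;

```shell
#!/bin/sh
# Sample file standing in for the backend source.
printf '%s\n' \
  'router.get("/todos", listTodos);' \
  'router.post("/todos", addTodo);' \
  'router.delete("/todos/:id", removeTodo);' \
  '// routerless helper, not an endpoint' > routes.demo.js

# Escaped dot + explicit verbs: counts only real route registrations.
grep -cE 'router\.(get|post|put|delete|patch)\(' routes.demo.js   # prints: 3
```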



&lt;p&gt;Let’s test the application by spinning up the containers. Navigate back to the project’s root directory and run the Docker Compose command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd /home/ubuntu/docker-frontend-backend-db-to_do_app
docker compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you run the command, Docker will start building the images and spin up the containers as soon as the images are ready.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04rzpk8ox7r0qz9f9sol.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04rzpk8ox7r0qz9f9sol.png" width="800" height="463"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;building docker containers&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Wait until you see the ‘built’ and ‘created’ messages. Once the containers are up and running, use docker ps -a to verify the status.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu69ulkztce7fg297y15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu69ulkztce7fg297y15.png" width="800" height="142"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;build completed and containers started.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker ps -a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnixfome8650bjobiodvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnixfome8650bjobiodvw.png" width="800" height="142"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;docker processes&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once the Docker containers are up and running, verify that the application is working as expected. Open the server’s IP address on port 3000. You can confirm the mapped ports in the Docker Compose file or by checking the docker ps -a output. Here, port 3000 is for the frontend web app, port 3001 is for the backend, and MongoDB runs internally on port 27017 without public access. In this example, load the website by entering 54.90.185.176:3000 in your browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbtsbzgjvajcvjy9e16i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbtsbzgjvajcvjy9e16i.png" width="" height=""&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Application interface&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you’re using Chrome, right-click anywhere on the page and open Inspect &amp;gt; Network. Then click on Add Todo to verify that the list updates correctly and the network console shows a 200 status response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fch5o3futrft0bh0tujre.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fch5o3futrft0bh0tujre.png" width="800" height="417"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;checking the network&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxtcs6yazugvfvemr34y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdxtcs6yazugvfvemr34y.png" width="800" height="607"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Application testing&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Click the buttons, try adding a new to-do item, and verify the status codes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1zlkfbs6ncvg1nta94p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1zlkfbs6ncvg1nta94p.png" width="800" height="412"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Testing&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So far, everything looks good. Now, let’s proceed with the Kubernetes deployment. To configure resources in Kubernetes, we’ll need to create manifest files in YAML format. You can create these files as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir /home/ubuntu/manifest
touch api-deployment.yaml api-service.yaml image_tag.txt mongo-secret.yaml mongo-service.yaml mongo-statefulset-pv-pvc.yaml web-deployment.yaml web-env-configmap.yaml web-service.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now edit each file and add the following contents:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;api-deployment.yaml&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Defines how the backend API should run inside the cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Creates 2 replicas of the API for reliability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uses environment variables from secrets for MongoDB authentication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensures the API pods always restart if they fail.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Importance: Provides scalability and fault tolerance for the backend service.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Rolling Update: Gradually replaces old pods with new ones. It uses fewer resources and keeps downtime minimal when tuned, but users may hit bad pods if the new version is faulty. In this manifest, maxSurge: 3 with maxUnavailable: 0 means new pods are created first and old pods are removed only once their replacements are Ready, so serving capacity never drops during a rollout.&lt;/p&gt;

&lt;p&gt;👉 Rolling = efficient and native.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: 495549341534.dkr.ecr.us-east-1.amazonaws.com/hackp2025:api-20250907111542
          ports:
            - containerPort: 3001
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: password
      restartPolicy: Always
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
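&lt;p&gt;With maxUnavailable: 0, Kubernetes takes old pods out of service only as new ones report Ready, which is only meaningful if the pod actually reports readiness. The manifest above defines no probes, so a new pod counts as Ready the moment its container starts. A readiness probe like the following could be added under the container spec; this is a sketch, and the /todos path is an assumption about the API, so adjust it to a real endpoint:&lt;/p&gt;

```yaml
# Sketch: readiness gate for the api container (the path is hypothetical).
readinessProbe:
  httpGet:
    path: /todos
    port: 3001
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
```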



&lt;ol start="2"&gt;
&lt;li&gt;api-service.yaml&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Exposes the API deployment to the outside world.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Type NodePort makes the service reachable via &amp;lt;NodeIP&amp;gt;:31001.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensures frontend or external clients can communicate with the backend.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Importance: Acts as a bridge between users/frontend and the backend API.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: api
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
    - port: 3001 # internal cluster port
      targetPort: 3001 # container port
      nodePort: 31001 # external port on the node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;mongo-secret.yaml&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Stores sensitive information (username &amp;amp; password) in base64-encoded format.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Used by both the API and MongoDB.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Keeps credentials out of plain-text manifests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Importance: Secure way to handle database credentials.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  # Base64 encoded values
  username: YWRtaW4= # "admin"
  password: YWRtaW5oYWNrcDIwMjU= # "adminhackp2025"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
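&lt;p&gt;The base64 values above can be generated (and verified) on any Linux box. Keep in mind that base64 is encoding, not encryption: anyone with read access to the Secret can decode it. A quick sketch:&lt;/p&gt;

```shell
#!/bin/sh
# Encode credentials for a Kubernetes Secret manifest.
# printf (not echo) avoids a trailing newline sneaking into the value.
printf '%s' 'admin' | base64                  # prints: YWRtaW4=
printf '%s' 'adminhackp2025' | base64         # prints: YWRtaW5oYWNrcDIwMjU=

# Decoding works the same way in reverse:
printf '%s' 'YWRtaW4=' | base64 -d            # prints: admin
```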



&lt;ol start="4"&gt;
&lt;li&gt;mongo-service.yaml&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Defines the MongoDB service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- ClusterIP: None&lt;/strong&gt; makes it a &lt;strong&gt;headless service&lt;/strong&gt;, required for StatefulSets.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Allows pods to connect to MongoDB by DNS (e.g., mongo-0.mongo).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Importance: Provides stable networking for MongoDB StatefulSet pods.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
  clusterIP: None # headless service for StatefulSet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
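&lt;p&gt;With a headless service, each StatefulSet pod gets a stable DNS name of the form &amp;lt;pod-name&amp;gt;.&amp;lt;service-name&amp;gt;, which is what clients use instead of a cluster IP. As a sketch of what the API's connection string could look like under this naming scheme (the URI is illustrative, built from the credentials and names in these manifests, not a value from the repository):&lt;/p&gt;

```yaml
# Sketch: reaching a StatefulSet pod through the headless "mongo" service.
# Pod DNS (full form): mongo-0.mongo.default.svc.cluster.local
MONGO_URI: mongodb://admin:adminhackp2025@mongo-0.mongo:27017
```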



&lt;ol start="5"&gt;
&lt;li&gt;mongo-statefulset-pv-pvc.yaml&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Handles the &lt;strong&gt;database persistence and StatefulSet definition&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- PersistentVolume (PV):&lt;/strong&gt; Reserves storage (5Gi).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- PersistentVolumeClaim (PVC):&lt;/strong&gt; Ensures pods can claim storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- StatefulSet:&lt;/strong&gt; Guarantees stable network identity and persistent storage for MongoDB.&lt;/p&gt;

&lt;p&gt;👉 Importance: Ensures &lt;strong&gt;MongoDB data is preserved&lt;/strong&gt; even if the pod restarts.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Blue/Green Deployment: Runs two environments (Blue = live, Green = new). Traffic is switched instantly once Green is ready. Near-zero downtime and easy rollback, but requires double resources and is more complex for stateful apps.&lt;/p&gt;

&lt;p&gt;👉 Blue/Green = safer cutover, higher cost.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# PersistentVolume for Green MongoDB
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-green-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /root/hackpproject/data-green # separate path for green
  persistentVolumeReclaimPolicy: Retain
  storageClassName: "" # Must match PVC in StatefulSet

---

# StatefulSet for Green MongoDB
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo-green
  labels:
    app: mongo
    version: green
spec:
  serviceName: mongo # existing headless service
  replicas: 1
  selector:
    matchLabels:
      app: mongo
      version: green
  template:
    metadata:
      labels:
        app: mongo
        version: green
    spec:
      containers:
        - name: mongo
          image: 495549341534.dkr.ecr.us-east-1.amazonaws.com/hackp2025:db-20250907111542
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: password
          volumeMounts:
            - name: mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
        storageClassName: "" # binds to the pre-created PV
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
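&lt;p&gt;The Blue/Green cutover described above happens at the Service level: the headless mongo Service selects pods by label, so switching traffic means adding (or flipping) the version label in its selector once the green StatefulSet is healthy. A sketch of the green-selecting Service (not one of this article's manifest files):&lt;/p&gt;

```yaml
# Sketch: point the mongo Service at the green StatefulSet only.
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None
  selector:
    app: mongo
    version: green   # flip this label to cut traffic over from blue
  ports:
    - port: 27017
      targetPort: 27017
```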



&lt;ol start="6"&gt;
&lt;li&gt;web-deployment.yaml&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Defines how the &lt;strong&gt;frontend (React.js app)&lt;/strong&gt; should run.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Runs &lt;strong&gt;2 replicas&lt;/strong&gt; for high availability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pulls &lt;strong&gt;API endpoint from ConfigMap&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resource requests/limits ensure fair scheduling.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Importance: Deploys the &lt;strong&gt;UI&lt;/strong&gt; and links it to the backend API via config.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3         
      maxUnavailable: 0   
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: 495549341534.dkr.ecr.us-east-1.amazonaws.com/hackp2025:web-20250907111542
          ports:
            - containerPort: 3000
              protocol: TCP
          env:
            - name: REACT_APP_API_URL
              valueFrom:
                configMapKeyRef:
                  name: web-env
                  key: REACT_APP_API_URL
          resources:
            requests:
              cpu: "200m"
              memory: "1024Mi"
            limits:
              cpu: "2"
              memory: "2Gi"
      restartPolicy: Always
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="7"&gt;
&lt;li&gt;web-env-configmap.yaml&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Stores non-sensitive environment variables.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Defines the &lt;strong&gt;API endpoint&lt;/strong&gt; for the frontend (REACT_APP_API_URL).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Can be updated easily without rebuilding Docker images.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Importance: Provides flexibility to change configuration without redeploying code.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: ConfigMap
metadata:
  name: web-env
  labels:
    app: web
data:
  REACT_APP_API_URL: http://98.86.216.31:31001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="8"&gt;
&lt;li&gt;web-service.yaml&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Exposes the frontend to users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Type NodePort&lt;/strong&gt; makes it available externally at &amp;lt;NodeIP&amp;gt;:32000.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maps port 3000 (service) → port 80 (container).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 Importance: Allows &lt;strong&gt;end-users to access the web app&lt;/strong&gt; from their browser.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: web
spec:
  type: NodePort
  selector:
    app: web # Must match Deployment labels
  ports:
    - name: http
      port: 3000 # Service port inside cluster
      targetPort: 80 # Container port
      nodePort: 32000 # External port accessible from outside
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have now moved all the manifest files to /root/hackpproject/manifestfiles.&lt;/p&gt;

&lt;p&gt;Once the manifests are finalised, the next step is to create a repository in ECR to push the build artefact images.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Steps to create an ECR repository:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the AWS Console and go to the ECR service.
&lt;/li&gt;
&lt;li&gt;Click Create repository.
&lt;/li&gt;
&lt;li&gt;Select Private repository.
&lt;/li&gt;
&lt;li&gt;Enter the repository name (prodimage). In this case, we are creating a single repository for all three images.
&lt;/li&gt;
&lt;li&gt;Leave the other settings as default and click Create repository.
&lt;/li&gt;
&lt;li&gt;Authenticate Docker with ECR.&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsgrnfp2h9p1sccz1me2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsgrnfp2h9p1sccz1me2.png" width="800" height="396"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Step 1: Finding the ECR&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntib5020m31hc0g7o29t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntib5020m31hc0g7o29t.png" width="800" height="299"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Step 2: Creating Repository&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg581l0728qamyfuw0bxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg581l0728qamyfuw0bxm.png" width="800" height="424"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Step 3: Configuring Repository&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once the registry is created, you can proceed with the CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu67h33kqqlx81594yqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxu67h33kqqlx81594yqg.png" width="800" height="100"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Repository endpoint&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now, let’s create a GitHub Actions pipeline to deploy the code to the EC2 K3S cluster. The first step is to configure GitHub Actions with access to the repository, ECR, and the EC2 instance via SSH.&lt;/p&gt;

&lt;p&gt;Navigate to the project directory and create the folder ‘.github/workflows’; GitHub Actions only discovers workflow files placed under this path. Inside it, create a file named ‘ci-cd.yml’.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir .github
cd .github
touch ci-cd.yml
vi ci-cd.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ci-cd.yml file is the core configuration file for GitHub Actions that defines your CI/CD pipeline. Now use the following script in that ci-cd.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: CI/CD Pipeline

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    env:
      ECR_REGISTRY: ${{ secrets.ECR_REGISTRY }}

    steps:
      - name: Pulling the repository
        uses: actions/checkout@v3

      - name: Pre-Build Checks (optional)
        run: |
          echo "Checking Docker installation..."
          docker --version
          echo "Checking Docker Compose installation..."
          docker compose version

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to AWS ECR
        uses: aws-actions/amazon-ecr-login@v2
        env:
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Set Image Tag (Timestamp)
        id: set_image_tag
        run: |
          IMAGE_TAG=$(date +'%Y%m%d%H%M%S')
          echo "image_tag=$IMAGE_TAG" &amp;gt;&amp;gt; $GITHUB_OUTPUT
          echo "$IMAGE_TAG" &amp;gt; image_tag.txt
          echo "IMAGE_TAG=$(cat image_tag.txt)" &amp;gt;&amp;gt; $GITHUB_ENV
          cat image_tag.txt

      - name: Upload image_tag.txt to K3s server
        uses: appleboy/scp-action@v0.1.7
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          port: 22
          source: image_tag.txt
          target: /root/hackpproject/manifestfiles



      - name: Clean up old ECR images
        continue-on-error: true
        env:
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
            echo "Listing all images in ECR..."
            aws ecr list-images --repository-name hackp2025 --region $AWS_REGION \
              --query 'imageIds[*]' --output json &amp;gt; image_ids.json || echo "No images to delete"

            if [ -s image_ids.json ]; then
              echo "Deleting old images from ECR..."
              aws ecr batch-delete-image --repository-name hackp2025 --region $AWS_REGION \
                --image-ids file://image_ids.json
            else
              echo "No images found, skipping deletion."
            fi

      - name: Install Trivy
        run: |
          wget https://github.com/aquasecurity/trivy/releases/download/v0.18.3/trivy_0.18.3_Linux-64bit.deb
          sudo dpkg -i trivy_0.18.3_Linux-64bit.deb


      - name: Build and Push Docker Images
        run: |
          echo "Building Docker images..."
          docker compose -f docker-compose.yml build
          docker images

          echo "Tagging images with timestamp: $IMAGE_TAG"
          docker tag hackkptask1-api:latest $ECR_REGISTRY/hackp2025:api-$IMAGE_TAG
          trivy image $ECR_REGISTRY/hackp2025:api-$IMAGE_TAG || echo "⚠️ Vulnerabilities found in API image. Proceeding anyway."
          docker push $ECR_REGISTRY/hackp2025:api-$IMAGE_TAG

          docker tag hackkptask1-web:latest $ECR_REGISTRY/hackp2025:web-$IMAGE_TAG
          trivy image $ECR_REGISTRY/hackp2025:web-$IMAGE_TAG || echo "⚠️ Vulnerabilities found in Web image. Proceeding anyway."
          docker push $ECR_REGISTRY/hackp2025:web-$IMAGE_TAG

          docker tag docker-frontend-backend-db-mongo:latest $ECR_REGISTRY/hackp2025:db-$IMAGE_TAG
          trivy image $ECR_REGISTRY/hackp2025:db-$IMAGE_TAG || echo "⚠️ Vulnerabilities found in Mongo image. Proceeding anyway."
          docker push $ECR_REGISTRY/hackp2025:db-$IMAGE_TAG


      - name: Deploy to K3s via SSH
        uses: appleboy/ssh-action@v0.1.7
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          port: 22
          script: |
            IMAGE_TAG=$(cat /root/hackpproject/manifestfiles/image_tag.txt)
            ECR_REGISTRY="495549341534.dkr.ecr.us-east-1.amazonaws.com"
            MANIFEST_DIR="/root/hackpproject/manifestfiles"

            echo "IMAGE_TAG=$IMAGE_TAG"
            echo "ECR_REGISTRY=$ECR_REGISTRY"

            # Replace only the part after "image: "
            sudo sed -i "s|image: .*hackp2025:web.*|image: ${ECR_REGISTRY}/hackp2025:web-${IMAGE_TAG}|g" $MANIFEST_DIR/web-deployment.yaml
            sudo sed -i "s|image: .*hackp2025:api.*|image: ${ECR_REGISTRY}/hackp2025:api-${IMAGE_TAG}|g" $MANIFEST_DIR/api-deployment.yaml
            sudo sed -i "s|image: .*hackp2025:db.*|image: ${ECR_REGISTRY}/hackp2025:db-${IMAGE_TAG}|g" $MANIFEST_DIR/mongo-statefulset-pv-pvc.yaml

            aws ecr get-login-password --region us-east-1 \
              | sudo docker login --username AWS --password-stdin 495549341534.dkr.ecr.us-east-1.amazonaws.com

            # Apply manifests
            sudo kubectl delete all --all -n hackpproject
            sudo kubectl apply -f $MANIFEST_DIR
            sudo kubectl rollout status deployment/web -n hackpproject
            sudo kubectl rollout status deployment/api -n hackpproject

            # Push the updated manifests to the manifest repo
            cd $MANIFEST_DIR
            # Configure git identity for commits
            git config user.name "hackp25project"
            git config user.email "github-push@hackp25"

            # Ensure we’re on main
            git checkout main

            # Fetch latest changes
            git pull --rebase origin main

            # Commit and push if changes exist
            if [ -n "$(git status --porcelain)" ]; then
                git add .
                git commit -m "Update manifests for image tag: ${ECR_REGISTRY}/hackp2025:web-${IMAGE_TAG}"
                git push origin main
            else
                echo "No changes to commit."
            fi

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
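&lt;p&gt;To make the sed substitution in the deploy step concrete, here is a small standalone demo you can run anywhere. The sample file name and both tag values are made up for illustration; only the sed pattern itself is taken from the workflow:&lt;/p&gt;

```shell
# Standalone demo of the sed pattern the deploy step uses.
# The file name and tag values below are illustrative only.
printf '        image: old-registry/hackp2025:web-20240101000000\n' > web-deployment-sample.yaml

ECR_REGISTRY="495549341534.dkr.ecr.us-east-1.amazonaws.com"
IMAGE_TAG="20250102113000"

# Everything from "image: " to the end of the line is replaced with the new reference
sed -i "s|image: .*hackp2025:web.*|image: ${ECR_REGISTRY}/hackp2025:web-${IMAGE_TAG}|g" web-deployment-sample.yaml

cat web-deployment-sample.yaml
```

&lt;p&gt;Because the match starts at &lt;em&gt;image:&lt;/em&gt; rather than at the start of the line, the YAML indentation in front of it is left untouched, which is what keeps the manifest valid after the rewrite.&lt;/p&gt;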



&lt;p&gt;Now we need to configure the secrets. Follow the screenshots below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkudlysvd0t71rwi9b6zu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkudlysvd0t71rwi9b6zu.png" width="800" height="246"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Step 1: Configuring secrets&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw5xyyplbqtujwe5d4vp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw5xyyplbqtujwe5d4vp.png" width="800" height="493"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Step 2: Configuring secrets&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzqc6c52kzl48m9ab0lc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzqc6c52kzl48m9ab0lc.png" width="800" height="607"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Step 3: Adding Repository secret&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note: You can ignore the KUBECONFIG and GH_SSH_KEY secrets, as they are not required for this specific use case. For reference, please see the screenshot I created using ChatGPT, which outlines each secret, its purpose, and how to obtain it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7w8ruu57cgmkr5wsql6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7w8ruu57cgmkr5wsql6z.png" width="800" height="772"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more details on configuring and using the AWS CLI, you can refer to my previous blog. I’ll include the link below for your reference:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/mahinshanazeer/configuring-aws-cli-for-terraform-script-automation-44em"&gt;Configuring AWS CLI for Terraform / Script Automation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have not included the SSH key generation and key management steps, as they are common knowledge and would make the guide unnecessarily long. Readers are expected to be familiar with configuring SSH keys; if not, I recommend reviewing Linux fundamentals before starting with Kubernetes.&lt;/p&gt;

&lt;p&gt;Next, we need to authorise the EC2 instance to access ECR using the AWS CLI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region us-east-1 &amp;gt; ecr_pass
aws ecr get-login-password --region us-east-1 \
  | sudo docker login --username AWS --password-stdin 495549341534.dkr.ecr.us-east-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, create a secret for the MongoDB credentials. The command below creates a Kubernetes Secret named &lt;strong&gt;mongo-secret&lt;/strong&gt; that stores the MongoDB username and password. Instead of hardcoding credentials in manifests, the backend API and MongoDB deployment can reference this secret for authentication, giving you better security and centralised management of sensitive data. We use this secret in the MongoDB deployment YAML; refer to the manifest file shown earlier.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create secret generic mongo-secret \
  --from-literal=username=admin \
  --from-literal=password=adminhackp2025
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
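&lt;p&gt;Keep in mind that Kubernetes stores Secret values base64-encoded, not encrypted. That matters when you later inspect the secret: what you get back must be decoded first. The round trip looks like this (runs locally, no cluster needed; the kubectl line in the comment is the on-cluster equivalent):&lt;/p&gt;

```shell
# Secret values are stored base64-encoded (not encrypted). The decode step
# used when inspecting a Secret looks like this; runs locally, no cluster needed.
encoded=$(printf 'admin' | base64)
echo "$encoded"                       # YWRtaW4=
printf '%s' "$encoded" | base64 -d    # admin
# On the cluster the same decode applies, e.g.:
#   kubectl get secret mongo-secret -o jsonpath='{.data.username}' | base64 -d
```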



&lt;p&gt;The configuration is now complete. Once you commit the changes and push them to GitHub, the pipeline will start running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
git add .github/
git commit -m "CICD configuration updated"
git push origin main

#my remote is configured as origin and branch is main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once pushed to the remote, return to the GitHub repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtlbk0aqnyfhqmyc69g8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbtlbk0aqnyfhqmyc69g8.png" width="800" height="385"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;GitHub Actions&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here, you can see some jobs marked with a red cross and others with a green tick. The red cross indicates failed jobs, while the green tick indicates successful ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vvqnj34iligyii9xpv0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vvqnj34iligyii9xpv0.png" width="800" height="465"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;GitHub Actions failure&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Click on the job, then select option (4) to view the complete details. Troubleshoot based on the errors identified in the logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcslxdff4k48rlvflmnq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcslxdff4k48rlvflmnq.png" width="800" height="387"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Debugging&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If you need to make changes to the ci-cd.yml file, follow the steps below and rerun all jobs. By default, the pipeline will automatically restart whenever you commit changes to the ci-cd.yml from the GitHub repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2c03qti4i6ys8m6knjy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2c03qti4i6ys8m6knjy9.png" width="800" height="397"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Editing CICD yml file&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once the CI/CD jobs execute successfully, you will see a green tick mark on the left.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxhno7r3dz1ijh4bg8v4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxhno7r3dz1ijh4bg8v4.png" width="800" height="439"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Job successful&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Now, return to the server and run kubectl get all to verify the list of all deployed components.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw0gyti9prrva2qh712r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw0gyti9prrva2qh712r.png" width="800" height="493"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;kubectl get all&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Instead of port 3000, the application is exposed on port 32000. Open the application URL &lt;a href="http://98.86.216.31:32000/" rel="noopener noreferrer"&gt;http://98.86.216.31:32000/&lt;/a&gt; and verify that it is working correctly, just as we did earlier with Docker Compose.&lt;/p&gt;
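&lt;p&gt;The reason the port changes from 3000 to 32000 is that a NodePort Service exposes the application on a port drawn from the cluster's NodePort range, which is 30000-32767 by default. A tiny illustrative check of that range:&lt;/p&gt;

```shell
# NodePort Services are assigned a port from the cluster's NodePort range,
# 30000-32767 by default, which is why the app answers on 32000 rather than 3000.
PORT=32000
if [ "$PORT" -ge 30000 ] && [ "$PORT" -le 32767 ]; then
  echo "$PORT is within the default NodePort range"
fi
```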

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9anaykmk1qj3vh7tg26w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9anaykmk1qj3vh7tg26w.png" width="800" height="433"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Demo Application loading fine&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We successfully set up a CI/CD pipeline using GitHub Actions to deploy the application on an EC2-hosted K3s cluster. The workflow automated the build, scan, and push of Docker images to AWS ECR, followed by deployment to the cluster.&lt;/p&gt;

&lt;p&gt;After verifying the deployments with kubectl, we confirmed the application is accessible via &lt;a href="http://98.86.216.31:32000/" rel="noopener noreferrer"&gt;http://98.86.216.31:32000/&lt;/a&gt; and works as expected.&lt;/p&gt;

</description>
      <category>github</category>
      <category>kubernetes</category>
      <category>kubernetescluster</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Sun, 07 Sep 2025 02:55:58 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/-9gl</link>
      <guid>https://forem.com/mahinshanazeer/-9gl</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/mahinshanazeer/run-ai-locally-creating-a-local-ai-chat-assistant-for-targeted-workflows-19o9" class="crayons-story__hidden-navigation-link"&gt;Run AI Locally: Creating a Local AI Chat Assistant for Targeted Workflows&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/mahinshanazeer" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3048938%2F24e92f7a-bcda-4167-a1fd-52dbc7eab1cd.jpg" alt="mahinshanazeer profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/mahinshanazeer" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Mahinsha Nazeer
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Mahinsha Nazeer
                
              
              &lt;div id="story-author-preview-content-2622468" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/mahinshanazeer" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3048938%2F24e92f7a-bcda-4167-a1fd-52dbc7eab1cd.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Mahinsha Nazeer&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/mahinshanazeer/run-ai-locally-creating-a-local-ai-chat-assistant-for-targeted-workflows-19o9" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Jun 25 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/mahinshanazeer/run-ai-locally-creating-a-local-ai-chat-assistant-for-targeted-workflows-19o9" id="article-link-2622468"&gt;
          Run AI Locally: Creating a Local AI Chat Assistant for Targeted Workflows
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/llm"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;llm&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ollama"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ollama&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/chatbotdevelopment"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;chatbotdevelopment&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/ai"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;ai&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
            &lt;a href="https://dev.to/mahinshanazeer/run-ai-locally-creating-a-local-ai-chat-assistant-for-targeted-workflows-19o9#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            10 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>llm</category>
      <category>ollama</category>
      <category>chatbotdevelopment</category>
      <category>ai</category>
    </item>
    <item>
      <title>Step-by-Step Guide to Launching an EC2 Instance on AWS : For Beginners</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Sun, 31 Aug 2025 06:31:48 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/step-by-step-guide-to-launching-an-ec2-instance-on-aws-for-beginners-1ak8</link>
      <guid>https://forem.com/mahinshanazeer/step-by-step-guide-to-launching-an-ec2-instance-on-aws-for-beginners-1ak8</guid>
      <description>&lt;h3&gt;
  
  
  Step-by-Step Guide to Launching an EC2 Instance on AWS: For Beginners
&lt;/h3&gt;

&lt;p&gt;This guide is designed for beginners stepping into AWS. Amazon EC2 (Elastic Compute Cloud) is a virtual machine in the cloud, enabling you to launch servers and run your applications with ease. To set up an EC2 instance, you’ll need to configure a few essential parameters, which we’ll walk through step by step.&lt;/p&gt;

&lt;p&gt;Getting started with cloud computing is easier than ever with &lt;strong&gt;AWS Free Tier&lt;/strong&gt;. AWS offers a &lt;strong&gt;free tier program&lt;/strong&gt; that allows beginners to explore many AWS services, including EC2, without incurring charges for up to 12 months. This is ideal for learning, experimenting, or running small-scale projects.&lt;/p&gt;

&lt;p&gt;In this guide, we’ll walk you through &lt;strong&gt;launching your first EC2 instance&lt;/strong&gt;  — a virtual server in the cloud — using parameters and settings suitable for beginners. By the end of this tutorial, you’ll have a fully functional EC2 instance ready for deploying applications, all while staying within the free tier limits.&lt;/p&gt;

&lt;p&gt;You can check out AWS Free Tier here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/free/" rel="noopener noreferrer"&gt;Free Cloud Computing Services - AWS Free Tier&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start by logging in to your AWS Management Console. In the search bar at the top, type &lt;em&gt;EC2&lt;/em&gt; and select it from the results. This will take you to the EC2 Dashboard, where you can manage and launch your virtual servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwzoadiv93pyyzc9a8je.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwzoadiv93pyyzc9a8je.png" width="800" height="407"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;AWS Console search bar&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once you are on the EC2 Dashboard, click on the &lt;strong&gt;‘Launch Instance’&lt;/strong&gt; button. This will initiate the process of creating a new virtual server, guiding you through configuring instance details, selecting an Amazon Machine Image (AMI), choosing an instance type, and setting up security options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5eqn6xh6g0g0586xxyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5eqn6xh6g0g0586xxyk.png" width="800" height="390"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Launching new EC2 instance&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Names and tags:
&lt;/h4&gt;

&lt;p&gt;Give your EC2 instance a suitable name.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Choose an Amazon Machine Image (AMI)
&lt;/h4&gt;

&lt;p&gt;An AMI is a pre-configured template for your instance. Options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Linux&lt;/strong&gt;  — Lightweight and optimised for AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ubuntu, Red Hat, Windows Server&lt;/strong&gt;  — Depending on your application needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom AMIs&lt;/strong&gt;  — If you have a pre-configured image.&lt;/li&gt;
&lt;li&gt;Choose the architecture (For beginners, 64-bit x86 is fine, which is there by default)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2kjnwvgsf0netx2e3k3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2kjnwvgsf0netx2e3k3.png" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Configuration&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Choose an Instance Type
&lt;/h4&gt;

&lt;p&gt;This defines the compute resources for your server:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CPU, memory, storage, and network capacity&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Common types: t2.micro (free tier eligible), t3.medium, etc.&lt;/li&gt;
&lt;li&gt;Choose based on your workload requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  4. Configure Instance Details
&lt;/h4&gt;

&lt;p&gt;Key settings in this section:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Number of Instances&lt;/strong&gt;  — How many identical servers to launch at once. Default is 1.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network &amp;amp; Subnet&lt;/strong&gt;  — Select your VPC and subnet.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-assign Public IP&lt;/strong&gt;  — Useful if you need internet access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM Role&lt;/strong&gt;  — Assign permissions for AWS services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring &amp;amp; Shutdown Behaviour&lt;/strong&gt;  — Optional settings for CloudWatch and automatic stop/terminate actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  5. &lt;strong&gt;Create or Select a Key Pair&lt;/strong&gt;
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;strong&gt;Review and Launch&lt;/strong&gt; step, you can &lt;strong&gt;select an existing key pair&lt;/strong&gt; or &lt;strong&gt;create a new one&lt;/strong&gt; (the easiest method for beginners).&lt;/li&gt;
&lt;li&gt;If creating a new key pair:
&lt;ul&gt;
&lt;li&gt;Give it a descriptive name (e.g., WebServerKey).&lt;/li&gt;
&lt;li&gt;Download the .pem file immediately; AWS will not allow downloading it later.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  6. Add Storage
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Define &lt;strong&gt;EBS volumes&lt;/strong&gt; for your instance.&lt;/li&gt;
&lt;li&gt;Configure &lt;strong&gt;size&lt;/strong&gt; , &lt;strong&gt;volume type&lt;/strong&gt; , and &lt;strong&gt;encryption&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;You can add additional volumes if required.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  7. Add Tags
&lt;/h4&gt;

&lt;p&gt;Tags are &lt;strong&gt;key-value pairs&lt;/strong&gt; to identify your instance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Example: Key = Name, Value = WebServer1.&lt;/li&gt;
&lt;li&gt;Tags make managing multiple instances easier.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  8. Configure Security Group
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Security groups act as a &lt;strong&gt;firewall&lt;/strong&gt; for your instance.&lt;/li&gt;
&lt;li&gt;Add rules for inbound traffic, e.g.:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SSH (22)&lt;/strong&gt; — for Linux access.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;HTTP (80)&lt;/strong&gt; and &lt;strong&gt;HTTPS (443)&lt;/strong&gt; — for web servers.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;You can create a new security group or select an existing one.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  9. Review and Launch
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Verify all settings before launching.&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Launch&lt;/strong&gt; , and select or create a &lt;strong&gt;key pair&lt;/strong&gt; for SSH access.&lt;/li&gt;
&lt;li&gt;Download the key pair and store it securely — it’s required to connect to your instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F271ld1jq4zh3bpt7s5hi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F271ld1jq4zh3bpt7s5hi.png" width="800" height="759"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Network and SSH configuration&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkogspp9spj67ev8sjavi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkogspp9spj67ev8sjavi.png" width="800" height="346"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Storage configuration&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Step: Launch Your EC2 Instance
&lt;/h3&gt;

&lt;p&gt;After reviewing all settings, configuring the &lt;strong&gt;SSH key&lt;/strong&gt; , and ensuring your &lt;strong&gt;security groups&lt;/strong&gt; are properly set:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the &lt;strong&gt;“Launch”&lt;/strong&gt;  button.&lt;/li&gt;
&lt;li&gt;Your instance will start initialising. You can monitor its &lt;strong&gt;status&lt;/strong&gt; in the EC2 Dashboard under &lt;strong&gt;Instances&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Once the instance state changes to &lt;strong&gt;“running”&lt;/strong&gt; , note the &lt;strong&gt;public IP&lt;/strong&gt; or &lt;strong&gt;DNS name&lt;/strong&gt;  — you will use this to connect via SSH.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Security Reminder:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Never share your .pem key file publicly.&lt;/li&gt;
&lt;li&gt;Only allow SSH access from trusted IPs in your security group.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now your EC2 instance is ready, and you can deploy applications or configure it as required.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2x6vaar7t2f8o2js0m8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2x6vaar7t2f8o2js0m8k.png" width="800" height="512"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Launching instance&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Once your EC2 instance is in the &lt;strong&gt;“running”&lt;/strong&gt; state, you can connect to it from your local machine using the SSH key you created:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Locate the Public IP&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;In the EC2 Dashboard, select your instance and copy its &lt;strong&gt;public IP address&lt;/strong&gt; or &lt;strong&gt;public DNS name&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Set Permissions for Your Key&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On Linux/macOS, run &lt;code&gt;chmod 400 your-key.pem&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;This ensures the SSH key file is secure and usable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Connect via SSH&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;ssh -i /path/to/your-key.pem ec2-user@&amp;lt;public-ip&amp;gt;&lt;/code&gt; from your terminal.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Ubuntu AMIs&lt;/strong&gt;, replace ec2-user with ubuntu.&lt;/li&gt;
&lt;/ul&gt;
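&lt;p&gt;The key-handling steps above can be sketched as a short script. The key filename here is an example stand-in (and &lt;em&gt;touch&lt;/em&gt; merely stands in for the file you actually downloaded from AWS); substitute your own key path and your instance's public IP:&lt;/p&gt;

```shell
# Sketch of the connection steps; the key name is an example stand-in.
KEY=./WebServerKey.pem
touch "$KEY"          # stands in for the .pem you downloaded from AWS
chmod 400 "$KEY"      # owner read-only; ssh refuses world-readable keys
stat -c '%a' "$KEY"   # prints 400
# Then connect (replace PUBLIC_IP with your instance's address; use 'ubuntu'
# instead of 'ec2-user' for Ubuntu AMIs):
#   ssh -i "$KEY" ec2-user@PUBLIC_IP
```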

&lt;p&gt;&lt;strong&gt;Successful Connection&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After connecting, you’ll see the command prompt of your EC2 instance, indicating you can now manage it remotely and deploy applications.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>awsec2</category>
      <category>awsforbeginner</category>
    </item>
    <item>
      <title>Raspberry Pi Homelab setup with Kubernetes</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Sun, 31 Aug 2025 02:38:46 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/raspberry-pi-homelab-setup-with-kubernetes-1pdn</link>
      <guid>https://forem.com/mahinshanazeer/raspberry-pi-homelab-setup-with-kubernetes-1pdn</guid>
      <description>&lt;p&gt;Full guide on dev.to: &lt;a href="https://dev.to/mahinshanazeer/raspberry-pi-k8s-cluster-setup-for-home-lab-with-cilium-3kd8"&gt;Raspberry Pi K8S Cluster Setup for Home Lab with Cilium&lt;/a&gt; (published Jun 6 '25, 14 min read).&lt;/p&gt;


</description>
      <category>kubernetes</category>
      <category>raspberrypi</category>
      <category>docker</category>
      <category>kubernetescluster</category>
    </item>
    <item>
      <title>Sending Email Notifications on SSH Login Events</title>
      <dc:creator>Mahinsha Nazeer</dc:creator>
      <pubDate>Mon, 07 Jul 2025 12:10:48 +0000</pubDate>
      <link>https://forem.com/mahinshanazeer/sending-email-notifications-on-ssh-login-events-283a</link>
      <guid>https://forem.com/mahinshanazeer/sending-email-notifications-on-ssh-login-events-283a</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuw6596oftww7yl44zbj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuw6596oftww7yl44zbj.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Monitoring SSH logins is a simple yet powerful way to stay informed about access to your Linux server. Whether you’re managing production infrastructure or personal devices, receiving real-time email alerts for SSH logins can help you detect unauthorised access and maintain better operational visibility. In this guide, we’ll walk through configuring your server to automatically send email notifications whenever a user logs in via SSH, using native tools like Postfix and PAM.&lt;/p&gt;

&lt;p&gt;To configure Postfix to use Gmail as an SMTP relay, see the blog linked below. This article focuses solely on the script that monitors SSH logins and sends the email alerts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/mahinshanazeer/configuring-postfix-notification-using-gmail-smtp-server-5bea"&gt;Configuring Postfix notification using Gmail SMTP server.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s create a Bash script that automatically sends an email notification whenever a user accesses the server via SSH.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vi /usr/local/bin/ssh-login-alert.sh

#now add the following content to the file:
~~~
#!/bin/bash

HOSTNAME=$(hostname)
IP=${PAM_RHOST:-$(who | awk '/pts/{print $5}' | tr -d '()' | head -n1)}
USER=${PAM_USER:-$(whoami)}
TIME=$(date '+%Y-%m-%d %H:%M:%S')
echo -e "SSH Login Alert:\nUser: $USER\nIP: $IP\nTime: $TIME\nHost: $HOSTNAME" | mail -s "WARNING: SSH Login on $HOSTNAME from $IP" test@gmail.com
~~~
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
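&lt;p&gt;To preview the alert body without sending any mail, you can run just the variable-assembly portion of the script on its own. In this dry run, &lt;code&gt;USER&lt;/code&gt; is renamed to &lt;code&gt;LOGIN_USER&lt;/code&gt; so it doesn't clobber your interactive shell's own variable:&lt;/p&gt;

```shell
# Dry run: build the alert text exactly as the script does, but only print it.
HOSTNAME=$(hostname)
IP=${PAM_RHOST:-unknown}                 # pam_exec sets PAM_RHOST on real SSH logins
LOGIN_USER=${PAM_USER:-$(whoami)}
TIME=$(date '+%Y-%m-%d %H:%M:%S')
echo -e "SSH Login Alert:\nUser: $LOGIN_USER\nIP: $IP\nTime: $TIME\nHost: $HOSTNAME"
```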



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrdwqfsgbbd82zhlo77t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrdwqfsgbbd82zhlo77t.png" width="800" height="197"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;script&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Before deploying the script, make sure the email subject line and recipient address are set correctly in the designated sections. We also need to make the script executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x /usr/local/bin/ssh-login-alert.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the script is prepared and tested, the next step is to integrate it into the PAM configuration to trigger on SSH logins.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vim /etc/pam.d/sshd
#now add the following content on the top of sshd configuration inside pam.d

~~~
session optional pam_exec.so /usr/local/bin/ssh-login-alert.sh
~~~
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1cj6pncq7fulc6m2l1k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1cj6pncq7fulc6m2l1k.png" width="676" height="77"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;pam.d configuration&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With the script in place, simply log out and log back in via SSH to verify that the email notification is triggered as expected.&lt;/p&gt;

</description>
      <category>postfix</category>
      <category>linux</category>
      <category>security</category>
      <category>cybersecurity</category>
    </item>
  </channel>
</rss>
