<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jayanth Dasari</title>
    <description>The latest articles on Forem by Jayanth Dasari (@jayanth_dasari_7).</description>
    <link>https://forem.com/jayanth_dasari_7</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3652303%2F8c47e813-3dbe-4fe8-b79e-47f744e37ea1.png</url>
      <title>Forem: Jayanth Dasari</title>
      <link>https://forem.com/jayanth_dasari_7</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jayanth_dasari_7"/>
    <language>en</language>
    <item>
      <title>Day-41 Back to the Keyboard: Re-rooting My Career in the Clouds</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Wed, 18 Feb 2026 17:06:57 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-41-back-to-the-keyboard-re-rooting-my-career-in-the-clouds-4ldk</link>
      <guid>https://forem.com/jayanth_dasari_7/day-41-back-to-the-keyboard-re-rooting-my-career-in-the-clouds-4ldk</guid>
      <description>&lt;p&gt;Lessons learned from a long break and my first day back with AWS Cloud Essentials.&lt;br&gt;
There’s a specific kind of silence that happens when you step away from the tech world for a while. You start to wonder: Has everything moved too fast? Will I still "get" it?&lt;/p&gt;

&lt;p&gt;Today, I officially ended that silence.&lt;/p&gt;

&lt;p&gt;After a much-needed long break, I decided that the best way to dust off the cobwebs wasn’t just to code, but to look at the "where" and "how" of modern software: The Cloud. I spent my first day back diving into the AWS Cloud Essentials path, and honestly, it felt like coming home to a house that had been renovated while I was gone.&lt;/p&gt;

&lt;p&gt;Why Cloud Essentials?&lt;br&gt;
Coming back, I didn't want to get bogged down in the syntax of a single language. I wanted to understand the global infrastructure. Learning about things like the Shared Responsibility Model and the difference between Region vs. Availability Zone gave me a fresh perspective on how we build resilient systems.&lt;/p&gt;

&lt;p&gt;Key Takeaways from Day 1:&lt;/p&gt;

&lt;p&gt;Scalability vs. Agility: It’s not just about having a big server; it’s about having the right amount of server at the right time.&lt;/p&gt;

&lt;p&gt;The "Pay-as-you-go" Mindset: A great reminder that in the cloud, efficiency is literally cost-saving.&lt;/p&gt;

&lt;p&gt;The Power of Managed Services: Realizing how much heavy lifting AWS does (like RDS or S3) so we can focus on building features.&lt;/p&gt;

&lt;p&gt;If you’ve been on a break, my advice is this: don’t start with the hardest problem. Start with the foundation. It feels good to be back.&lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>learning</category>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Day-40 Bridging the Gap: My Day Modernizing CI/CD and Traffic Management</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Mon, 26 Jan 2026 16:16:54 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-40-bridging-the-gap-my-day-modernizing-cicd-and-traffic-management-11f5</link>
      <guid>https://forem.com/jayanth_dasari_7/day-40-bridging-the-gap-my-day-modernizing-cicd-and-traffic-management-11f5</guid>
      <description>&lt;p&gt;It’s easy to get comfortable with the tools we know. For the longest time, Kubernetes Ingress and standard Jenkins freestyle jobs were my bread and butter. But today was about breaking out of that comfort zone and looking at the next generation of infrastructure management.&lt;/p&gt;

&lt;p&gt;My day was split between two distinct but critical areas of DevOps: exploring the future of Kubernetes networking and getting my hands dirty with Jenkins pipeline-as-code.&lt;/p&gt;

&lt;p&gt;The Gateway to Better Networking&lt;br&gt;
I finally dove into the Kubernetes Gateway API. If you’ve been using standard Ingress resources, you know the pain points: annotation sprawl, non-standard implementations, and the difficulty of splitting responsibilities between cluster operators and application developers.&lt;br&gt;
The Gateway API feels like the mature successor we’ve been waiting for. What stood out to me today was the role-oriented design. The separation of GatewayClass, Gateway, and HTTPRoute resources means:&lt;/p&gt;

&lt;p&gt;Cluster Ops manage the infrastructure (Load Balancers).&lt;br&gt;
Developers manage the routing logic.&lt;br&gt;
It’s cleaner, more expressive, and honestly, a relief to see standard portable features that used to require vendor-specific annotations.&lt;/p&gt;
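
&lt;p&gt;To make the split concrete, here is a minimal, hypothetical HTTPRoute sketch: the piece an application developer would own, attached to an operator-managed Gateway (the names web-gateway and portfolio-svc are illustrative):&lt;/p&gt;

```yaml
# Application-team resource: routing logic only.
# The Gateway (web-gateway) and its GatewayClass belong to cluster operators.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: portfolio-route
spec:
  parentRefs:
    - name: web-gateway        # operator-owned Gateway this route attaches to
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: portfolio-svc  # backing Service
          port: 80
```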

&lt;p&gt;Wrangling Jenkins Pipelines&lt;br&gt;
The second half of my day was spent in the trenches of CI/CD. I moved away from UI-configured jobs to building robust Jenkins Pipelines (Jenkinsfiles).&lt;/p&gt;

&lt;p&gt;There is something satisfying about committing your build logic alongside your application code. I focused on a declarative pipeline structure today, setting up stages for Build, Test, and a conditional Deploy. Debugging Groovy syntax can be tricky, but the visibility you get from the stage view makes it worth it.&lt;/p&gt;
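
&lt;p&gt;As a rough sketch of that declarative structure (the stage commands and the main-branch condition here are placeholders, not the exact pipeline from this project):&lt;/p&gt;

```groovy
// Declarative Jenkinsfile sketch: Build, Test, and a conditional Deploy.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'npm ci' }
        }
        stage('Test') {
            steps { sh 'npm test' }
        }
        stage('Deploy') {
            // Only runs on the main branch
            when { branch 'main' }
            steps { sh './deploy.sh' }
        }
    }
}
```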

&lt;p&gt;Takeaway&lt;br&gt;
Today reinforced that our ecosystem is constantly moving toward “Configuration as Data.” Whether it’s routing traffic in K8s or defining build steps in Jenkins, explicit, version-controlled configuration is the only way forward.&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>jenkins</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Day-39 From Bloated to Lightweight: How I Dockerized a React Portfolio using Multi-Stage Builds</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Sun, 25 Jan 2026 11:21:25 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-39-from-bloated-to-lightweight-how-i-dockerized-a-react-portfolio-using-multi-stage-builds-4bb</link>
      <guid>https://forem.com/jayanth_dasari_7/day-39-from-bloated-to-lightweight-how-i-dockerized-a-react-portfolio-using-multi-stage-builds-4bb</guid>
      <description>&lt;p&gt;A journey of turning a friend's local project into a production-ready container.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9onw6x85cltgr0l5us9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl9onw6x85cltgr0l5us9.png" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1yp5bxx8zyiut5e4rgt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm1yp5bxx8zyiut5e4rgt.png" alt=" " width="800" height="355"&gt;&lt;/a&gt;&lt;br&gt;
A friend of mine recently finished building an awesome portfolio website using the MERN stack (specifically React). It looked great on their local machine, but they were struggling with how to deploy it efficiently.&lt;/p&gt;

&lt;p&gt;As someone diving deep into DevOps and Cloud Engineering, I saw this as the perfect opportunity to get my hands dirty. I offered to containerize the application for them.&lt;/p&gt;

&lt;p&gt;My goal? Create a Docker image that was secure, fast, and incredibly small. Here is how I went from a bloated, nearly 1 GB image to a roughly 200 MB production-ready container using Multi-Stage Builds.&lt;br&gt;
The "Naive" Approach&lt;br&gt;
When I first started, my instinct was to just wrap the application in a standard Node.js environment.&lt;/p&gt;

&lt;p&gt;Dockerfile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:22-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "run", "dev"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Problem: While this worked, it was terrible for production.&lt;br&gt;
Size: It included the entire node_modules folder, the source code, and development tools. The image size was huge (nearly 1 GB!).&lt;br&gt;
Security: The source code was sitting right there in the container.&lt;br&gt;
Performance: We were using the development server (npm run dev) to serve the site, which isn't optimized for traffic.&lt;/p&gt;

&lt;p&gt;The Solution: Multi-Stage Builds&lt;br&gt;
I decided to refactor the Dockerfile using a multi-stage approach. The concept is simple: use one heavy image to build the app, and a second, lighter image to serve it.&lt;/p&gt;

&lt;p&gt;Stage 1: The Builder (Node.js)&lt;br&gt;
I used the Node image strictly to install dependencies and run the build script. This compiles the React code into a static dist folder.&lt;/p&gt;

&lt;p&gt;Stage 2: The Runner (Nginx)&lt;br&gt;
For the final image, I ditched Node.js entirely and used Nginx. Nginx is an industry-standard web server that is incredibly lightweight and faster at serving static HTML/CSS/JS files than Node.&lt;br&gt;
The Final Dockerfile&lt;br&gt;
Here is the optimized code I ended up with:&lt;/p&gt;

&lt;p&gt;Dockerfile&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stage 1: Build the React Application
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Serve with Nginx
FROM nginx:alpine
# Copy only the build output to replace the default Nginx contents
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
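
&lt;p&gt;With that Dockerfile in the project root, building and serving the container looks roughly like this (the portfolio tag and the 8080 port mapping are just examples):&lt;/p&gt;

```shell
# Build the image and serve it locally, mapping container port 80 to 8080
docker build -t portfolio:prod .
docker run -d -p 8080:80 portfolio:prod

# Compare image sizes after the refactor
docker image ls portfolio
```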



&lt;p&gt;The Results&lt;br&gt;
The difference was night and day.&lt;br&gt;
Before: ~900 MB (Node image + Source Code + Node Modules)&lt;br&gt;
After: ~200 MB (Alpine Nginx + Static Assets)&lt;/p&gt;

&lt;p&gt;By using multi-stage builds, I stripped away everything that wasn't strictly necessary for the user to see the website. We didn't ship the tools used to build the house; we just shipped the house.&lt;br&gt;
Key Takeaways&lt;br&gt;
If you are learning Docker, don't stop at "it works." Always ask "is this efficient?" Moving from single-stage to multi-stage builds is one of the easiest wins you can get in terms of performance and security.&lt;br&gt;
Now, my friend's portfolio is ready for the cloud, and I've got another tool in my DevOps arsenal.&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>cloud</category>
      <category>learning</category>
    </item>
    <item>
      <title>Day-38 Automating the Cloud: My Deep Dive into AWS CLI, CloudFormation, and Jenkins</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Sat, 24 Jan 2026 17:19:48 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-38-automating-the-cloud-my-deep-dive-into-aws-cli-cloudformation-and-jenkins-5c42</link>
      <guid>https://forem.com/jayanth_dasari_7/day-38-automating-the-cloud-my-deep-dive-into-aws-cli-cloudformation-and-jenkins-5c42</guid>
      <description>&lt;p&gt;A look at my latest progress in mastering the DevOps toolchain.&lt;br&gt;
As I continue my journey into Cloud Computing and DevOps, I've realized that clicking buttons in a console isn't enough. Real scalability comes from automation. Today, I took a major step away from the GUI and into the terminal and pipelines.&lt;/p&gt;

&lt;p&gt;Here is a breakdown of what I learned and implemented today.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Back to Basics: Refreshed on AWS CLI&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While the AWS Management Console is great for beginners, the Command Line Interface (CLI) is where speed happens. I spent some time refreshing my memory on configuring the AWS CLI and managing resources directly from the terminal.&lt;br&gt;
It's empowering to spin up EC2 instances or list S3 buckets without ever opening a browser. It reminded me that to be a great Cloud Engineer, you need to be comfortable in the shell.&lt;/p&gt;
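
&lt;p&gt;For anyone following along, the kind of terminal workflow I mean looks like this (assuming credentials have already been set up with aws configure):&lt;/p&gt;

```shell
# List S3 buckets without opening a browser
aws s3 ls

# List EC2 instance IDs using a JMESPath query
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId'
```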

&lt;ol start="2"&gt;
&lt;li&gt;Infrastructure as Code with CloudFormation (CFT)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After brushing up on the CLI, I moved on to AWS CloudFormation. This was a game-changer for me. I learned how to model and provision all my cloud infrastructure resources through code (JSON/YAML).&lt;br&gt;
Understanding the concept of "Stacks" and how to automate the creation and deletion of environments gave me a much clearer picture of how enterprise-grade infrastructure is managed. No more manual configuration drift!&lt;/p&gt;
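
&lt;p&gt;A minimal template sketch just to show the shape of a stack (the bucket name is a placeholder, not one of the templates from today's session):&lt;/p&gt;

```yaml
# Smallest useful CloudFormation template: one S3 bucket as a stack resource.
AWSTemplateFormatVersion: '2010-09-09'
Description: Demo stack with a single S3 bucket
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-demo-bucket-example   # bucket names must be globally unique
```

&lt;p&gt;Deploying it is one command (aws cloudformation deploy --template-file template.yaml --stack-name demo), and the whole environment can later be updated or deleted as a single stack.&lt;/p&gt;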

&lt;ol start="3"&gt;
&lt;li&gt;The Heart of Automation: Jenkins CI/CD&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The highlight of the day was getting hands-on with Jenkins. I didn't just read about it; I set up a Continuous Integration/Continuous Deployment (CI/CD) pipeline.&lt;br&gt;
I learned how to:&lt;br&gt;
Install and configure Jenkins.&lt;br&gt;
Create a build job.&lt;br&gt;
Automate the process of testing and deploying code changes.&lt;/p&gt;

&lt;p&gt;Seeing the "Build Success" green status after an automated run was incredibly satisfying. It bridged the gap between writing code and actually deploying it.&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>jenkins</category>
      <category>cicd</category>
      <category>learning</category>
    </item>
    <item>
      <title>Day-37 Why I Spent Today Deep-Diving into Amazon S3</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Fri, 23 Jan 2026 16:56:41 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-37-why-i-spent-today-deep-diving-into-amazon-s3-3f9o</link>
      <guid>https://forem.com/jayanth_dasari_7/day-37-why-i-spent-today-deep-diving-into-amazon-s3-3f9o</guid>
      <description>&lt;p&gt;Even in a world of Kubernetes and Serverless, the simple storage bucket remains the backbone of the cloud.&lt;/p&gt;

&lt;p&gt;As I continue my journey in Cloud Computing and DevOps, it’s easy to get distracted by the shiny new tools — Docker containers, Kubernetes clusters, or complex CI/CD pipelines. But today, I decided to hit the brakes and go back to the absolute foundation of AWS: Simple Storage Service (S3).&lt;/p&gt;

&lt;p&gt;With my sights set on the AWS Solutions Architect certification, I realized that “knowing” S3 isn’t enough. You have to understand the nuances. Here is a breakdown of what I refreshed today and why it matters.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;It’s More Than Just a Folder&lt;br&gt;
When I first started, I treated S3 like Google Drive. Today, I looked deeper into Storage Classes. Understanding the difference between S3 Standard, Intelligent-Tiering, and Glacier is crucial — not just for passing exams, but for actual cost optimization in the real world. As a future Cloud Engineer, saving a company money on storage bills is a superpower.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Versioning is a Lifesaver&lt;br&gt;
I spent some time playing around with Bucket Versioning. In my previous Terraform projects, I used S3 to store state files (terraform.tfstate). Enabling versioning there isn't just a "nice to have"—it's a safety net. If a state file gets corrupted or deleted, versioning allows you to roll back. Today solidified why this should be a default setting for critical data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Security at the Bucket Level&lt;br&gt;
Finally, I reviewed Bucket Policies and ACLs (and why AWS recommends disabling ACLs nowadays!). Writing JSON policies to strictly control who can PutObject or GetObject is excellent practice for understanding IAM concepts.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
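
&lt;p&gt;As an example of the kind of JSON involved, here is a hypothetical bucket policy granting one IAM role read-only access (the account ID, role, and bucket names are placeholders):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAppRoleReadOnly",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/app-role" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```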

&lt;p&gt;The Verdict&lt;br&gt;
Refreshing S3 might not sound as “cool” as deploying a microservice, but strong foundations build stable architectures. Tomorrow, I plan to put this into practice by hosting a static website or managing a lifecycle policy via the CLI.&lt;/p&gt;

&lt;p&gt;Follow my journey as I build my way to the AWS Solutions Architect certification!&lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>aws</category>
      <category>learning</category>
    </item>
    <item>
      <title>Day-36 Docker, Dependencies, and the AI Dilemma</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Thu, 22 Jan 2026 17:11:33 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-36-docker-dependencies-and-the-ai-dilemma-3m0i</link>
      <guid>https://forem.com/jayanth_dasari_7/day-36-docker-dependencies-and-the-ai-dilemma-3m0i</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7hrnct4ka1278kckgdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7hrnct4ka1278kckgdu.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Far87g3byvjguljgb1n5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Far87g3byvjguljgb1n5z.png" alt=" " width="800" height="407"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem9ssglmdi08smemw931.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem9ssglmdi08smemw931.png" alt=" " width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8v10qwo3n3xcjb2vkif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv8v10qwo3n3xcjb2vkif.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtgzwuq8gjaa4n16ziul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtgzwuq8gjaa4n16ziul.png" alt=" " width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Why spending hours on a filename error was actually worth it.&lt;/p&gt;

&lt;p&gt;Today was one of those days where the learning curve felt more like a brick wall, but breaking through it was incredibly satisfying. As I dive deeper into DevOps, I’m realizing that theory and practice are two very different beasts.&lt;/p&gt;

&lt;p&gt;Back to Basics: The OSI Model&lt;br&gt;
I started the day revisiting the fundamentals. I spent time really digging into the OSI Model. It’s easy to overlook these theoretical concepts when you just want to build things, but understanding how data moves from the physical layer up to the application layer is crucial when things break (which they inevitably did later in the day).&lt;/p&gt;

&lt;p&gt;The Docker Struggle&lt;br&gt;
After the theory, I switched gears to containerization. My goal was simple: write a Dockerfile, create a .dockerignore file, and get the app running.&lt;/p&gt;

&lt;p&gt;It sounds straightforward, but the execution was messy. I spent a good chunk of time troubleshooting build errors. I learned the hard way that a well-structured .dockerignore isn't just a "nice to have"—it's essential for keeping your context clean and your builds fast.&lt;/p&gt;
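
&lt;p&gt;For reference, a typical .dockerignore for a Node project looks something like this (adjust the entries to your own repo layout):&lt;/p&gt;

```plaintext
# Keep the build context small: exclude dependencies, VCS data, and local secrets
node_modules
.git
dist
.env
*.log
```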

&lt;p&gt;The “It Was a Typo” Moment&lt;br&gt;
The biggest headache of the day wasn’t a complex architectural issue. It was a simple naming mismatch.&lt;/p&gt;

&lt;p&gt;I couldn’t get my code to run inside the container. I tried everything. Eventually, I realized it was a simple filename mismatch in the actual code imports versus the file system. It’s humbling how a single capitalization or spelling error can halt an entire project.&lt;/p&gt;

&lt;p&gt;The AI Crutch?&lt;br&gt;
Here is the honest part: I solved these bugs, but I didn’t do it alone. I leaned heavily on AI to help me debug the specific error messages.&lt;/p&gt;

&lt;p&gt;While I feel confident that I understand the solution now, I can’t help but feel a little “imposter syndrome.” Would I have found that solution without the AI pointing it out? Maybe, but it would have taken three times as long. I’m trying to shift my mindset: AI is a tool like any other. The goal isn’t to memorize every error code, but to understand the logic behind the fix.&lt;/p&gt;

&lt;p&gt;Today was a win, even if I had a little help crossing the finish line.&lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>docker</category>
      <category>cloudcomputing</category>
      <category>troubleshooting</category>
    </item>
    <item>
      <title>Day-35 A Deep Dive into Linux System Administration and Storage Management</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Wed, 21 Jan 2026 16:52:32 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-35-a-deep-dive-into-linux-system-administration-and-storage-management-1b8k</link>
      <guid>https://forem.com/jayanth_dasari_7/day-35-a-deep-dive-into-linux-system-administration-and-storage-management-1b8k</guid>
      <description>&lt;p&gt;How I learned to manage processes and mount persistent storage in Linux.&lt;br&gt;
By Dasari Jayanth | 2nd Year B.Sc. Computer Science &amp;amp; Cloud Computing&lt;br&gt;
As I continue my journey toward becoming a Cloud and DevOps Engineer, I've realized that understanding the underlying operating system is non-negotiable. Today, I took a deep dive into Linux system administration, focusing on three critical areas: Process Management, System Monitoring, and Disk Management.&lt;br&gt;
Here is a breakdown of what I learned and the commands I used.&lt;/p&gt;



&lt;ol&gt;
&lt;li&gt;Process Management ⚙️&lt;br&gt;
Understanding how software runs on a server is crucial. In Linux, everything is a process. I learned how to view, manage, and terminate processes directly from the terminal.&lt;br&gt;
Viewing Processes&lt;br&gt;
The most basic command I used was ps (Process Status).&lt;br&gt;
ps: Shows processes running in the current shell.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;ps aux: The holy grail command. It shows all running processes from all users.&lt;/p&gt;

&lt;p&gt;Managing Processes&lt;br&gt;
Sometimes a process hangs or consumes too much memory. I learned how to stop them gracefully or forcefully.&lt;br&gt;
kill PID: Sends a signal (SIGTERM by default) asking the process with that Process ID (PID) to terminate.&lt;/p&gt;

&lt;p&gt;kill -9 PID: Forcefully kills the process with SIGKILL, which cannot be caught or ignored (use with caution!).&lt;/p&gt;
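
&lt;p&gt;A safe way to practice the full cycle without touching a real service: start a throwaway process, confirm it with ps, then terminate it. A minimal bash sketch:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Start a throwaway long-running process (coproc backgrounds it for us)
coproc DUMMY { sleep 300; }
echo "started PID $DUMMY_PID"

# Confirm it is alive, the way ps aux would show it
ps -p "$DUMMY_PID" -o pid=,comm=

# Ask it to exit gracefully with SIGTERM; kill -9 is the last resort
kill "$DUMMY_PID"
wait "$DUMMY_PID" 2>/dev/null || true
echo "terminated"
```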



&lt;ol start="2"&gt;
&lt;li&gt;System Monitoring 📊&lt;br&gt;
As a future Cloud Engineer, knowing the health of your server is key. I explored tools to monitor CPU, memory, and disk usage in real-time.&lt;br&gt;
top: Displays real-time system processes, CPU load, and memory usage.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;htop: A more user-friendly, colorful version of top that allows scrolling and easier process management.&lt;/p&gt;

&lt;p&gt;df -h: "Disk Free" - shows available disk space in a human-readable format (GB/MB).&lt;/p&gt;

&lt;p&gt;free -h: Shows used and free RAM.&lt;/p&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Disk Management: Adding &amp;amp; Mounting Storage 💾&lt;br&gt;
This was the most hands-on part of my day. In the cloud (like AWS EC2), you often need to add extra storage volumes (EBS) to your instances. I learned how to attach, format, and mount a raw disk to a Linux system manually.&lt;br&gt;
Here is the step-by-step workflow I followed:&lt;br&gt;
Step 1: List Block Devices&lt;br&gt;
First, I checked the disks attached to the system with lsblk and identified the new device (e.g., /dev/xvdf or /dev/sdb) that was attached but not yet mounted.&lt;br&gt;
Step 2: Partitioning the Disk&lt;br&gt;
I used fdisk to create a partition on the raw disk.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bash&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo fdisk /dev/xvdf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Typed n for a new partition.&lt;/p&gt;

&lt;p&gt;Selected p for primary.&lt;/p&gt;

&lt;p&gt;Pressed Enter for defaults (using the full disk).&lt;/p&gt;

&lt;p&gt;Typed w to write the changes.&lt;/p&gt;

&lt;p&gt;Step 3: Formatting the Filesystem&lt;br&gt;
Before Linux can store files, the partition needs a filesystem (like NTFS in Windows, but usually ext4 or xfs in Linux).&lt;br&gt;
Bash&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkfs.ext4 /dev/xvdf1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 4: Mounting the Disk&lt;br&gt;
I created a directory to serve as the mount point and attached the partition to it.&lt;/p&gt;

&lt;p&gt;Bash&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir /mnt/mydrive
sudo mount /dev/xvdf1 /mnt/mydrive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, any file I save to /mnt/mydrive is physically stored on the new disk!&lt;br&gt;
Conclusion 🚀&lt;br&gt;
Mastering these Linux fundamentals gives me the confidence to handle cloud infrastructure more effectively. Knowing how to mount volumes and manage runaway processes is essential when debugging servers in a production environment.&lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
    </item>
    <item>
      <title>Day-34 Why I Paused My DevOps Projects to Play “Bandit”</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Tue, 20 Jan 2026 16:51:50 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-34-why-i-paused-my-devops-projects-to-play-bandit-3nk7</link>
      <guid>https://forem.com/jayanth_dasari_7/day-34-why-i-paused-my-devops-projects-to-play-bandit-3nk7</guid>
      <description>&lt;p&gt;Sometimes, the best way to move forward in Cloud Engineering is to go back to the command line basics.&lt;/p&gt;

&lt;p&gt;As a second-year B.Sc. student deep in the trenches of learning DevOps, my days are usually filled with complex tools. Lately, I’ve been wrestling with Ansible playbooks, debugging Terraform states, and preparing for my AWS Solutions Architect certification.&lt;/p&gt;

&lt;p&gt;But today, I decided to hit pause on the “big tools” and go back to the roots. I spent the day refreshing my Linux knowledge by playing the OverTheWire Bandit wargames.&lt;/p&gt;

&lt;p&gt;Why Linux Basics Matter for DevOps&lt;br&gt;
It’s easy to get caught up in high-level abstractions like Docker and Kubernetes. But at the end of the day, everything runs on Linux. If you can’t comfortably manipulate text streams, manage permissions, or SSH into a remote server without breaking a sweat, the fancy tools become much harder to manage.&lt;/p&gt;

&lt;p&gt;The Bandit Experience&lt;br&gt;
For those who haven’t tried it, Bandit is a “CTF-style” game where you solve levels to get the password for the next level. It starts easy — basic SSH access — but quickly forces you to think like a sysadmin.&lt;/p&gt;

&lt;p&gt;Here are a few highlights from my session today:&lt;/p&gt;

&lt;p&gt;Handling “Weird” Files: I brushed up on handling files with tricky names (like those starting with dashes - or containing spaces). It sounds simple, but knowing when to use ./- instead of just - is a lifesaver when you encounter it in a real script.&lt;br&gt;
The Power of Pipes: Levels 8 and 9 were a great reminder of how powerful piping commands like sort, uniq, and strings can be. Finding unique lines in a massive text file is a one-line job in Linux if you know your tools.&lt;br&gt;
Hidden in Plain Sight: Several levels required finding “human-readable” files hidden among binary data. It was a good exercise in using the file command and understanding data types.&lt;br&gt;
Takeaway&lt;br&gt;
Revisiting these labs didn’t just refresh my memory; it boosted my confidence. It reminded me that the command line isn’t just a place to type git push—it's a powerful environment where you can solve complex data problems with simple, chained commands.&lt;/p&gt;
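
&lt;p&gt;The pattern from those levels is easy to recreate locally. A tiny sketch: among many repeated lines, find the one that appears exactly once:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Build a sample file: several repeated lines plus one unique line
printf 'alpha\nbeta\nalpha\nbeta\ngamma\nbeta\n' > /tmp/bandit-demo.txt

# sort groups duplicates together; uniq -u keeps lines that appear exactly once
sort /tmp/bandit-demo.txt | uniq -u   # prints: gamma
```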

&lt;p&gt;If you’re a student like me, or even a seasoned engineer, I highly recommend taking a weekend to run through the first 10–15 levels of Bandit. It’s a fun way to keep your blade sharp.&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>linux</category>
      <category>learning</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Day-33 Linux File Systems &amp; Advanced User Management 🐧</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Mon, 19 Jan 2026 17:09:37 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-33-linux-file-systems-advanced-user-management-3dp</link>
      <guid>https://forem.com/jayanth_dasari_7/day-33-linux-file-systems-advanced-user-management-3dp</guid>
      <description>&lt;p&gt;Today was a heavy Linux focus day. I spent time exploring the internal directory structure and leveling up my user management skills. Here is a breakdown of what I covered and the commands I explored.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Linux Internal Folder Structure&lt;br&gt;
I explored the root directory (/) to better understand the Filesystem Hierarchy Standard (FHS).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;/boot: Static files of the boot loader.&lt;/p&gt;

&lt;p&gt;/etc: Host-specific system-wide configuration files.&lt;/p&gt;

&lt;p&gt;/home: User home directories.&lt;/p&gt;

&lt;p&gt;/proc: Virtual filesystem providing process and kernel information as files.&lt;/p&gt;

&lt;p&gt;Tip: Using the tree -L 1 / command is a great way to visualize this.&lt;/p&gt;
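&lt;p&gt;A small sketch of that tip, with a fallback in case tree isn’t installed:&lt;/p&gt;

```shell
# Show just the first level of the hierarchy; fall back to ls if tree is absent
tree -L 1 / 2>/dev/null || ls /
```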

&lt;ol start="2"&gt;
&lt;li&gt;User Management: The Essentials&lt;br&gt;
I refreshed my memory on the daily-driver commands for managing access:&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Adding a new user
sudo useradd -m newuser

# Assigning a password
sudo passwd newuser

# Deleting a user
sudo userdel -r newuser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;What's New: Advanced User Management&lt;br&gt;
I dug deeper into some specific flags and concepts I hadn't used much before:&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Locking an account: Instead of deleting a user, you can lock them out.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -L username
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Checking Password Status: Using passwd -S to see if an account is locked, has no password, or is active.&lt;/p&gt;

&lt;p&gt;Understanding /etc/shadow: I learned how to read the fields in the shadow file to understand password aging and encryption methods.&lt;/p&gt;
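&lt;p&gt;To make the shadow fields concrete, here is a made-up entry pulled apart with awk (the hash and dates are fabricated for illustration; reading the real file needs root):&lt;/p&gt;

```shell
# /etc/shadow fields: name:hash:lastchg:min:max:warn:inactive:expire:reserved
entry='alice:$6$salt$hash:19700:0:99999:7:::'
echo "$entry" | awk -F: '{
  print "user:                          " $1
  print "last change (days since 1970): " $3
  print "max password age (days):       " $5
}'
```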

&lt;p&gt;Conclusion&lt;br&gt;
Solidifying these basics is crucial for any Cloud or DevOps role. Understanding the difference between /bin and /sbin or knowing how to manually expire a user account gives you much more control over the system.&lt;/p&gt;

&lt;p&gt;What are some obscure Linux user management commands you use? Let me know in the comments!&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>linux</category>
      <category>cloudnative</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Day-32 Hosting a Static Website Manually on AWS EC2</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Fri, 16 Jan 2026 08:06:02 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-32-hosting-a-static-website-manually-on-aws-ec2-4kc8</link>
      <guid>https://forem.com/jayanth_dasari_7/day-32-hosting-a-static-website-manually-on-aws-ec2-4kc8</guid>
      <description>&lt;p&gt;A step-by-step walkthrough of deploying a web app using Git, Apache, and Linux commands.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxavjkuvjz4e3ygk7jczn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxavjkuvjz4e3ygk7jczn.png" alt=" " width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Introduction&lt;br&gt;
As I continue my journey into Cloud Computing and DevOps, I believe it is crucial to understand the manual processes before automating them. Today, I decided to test my core skills by taking a raw static website and hosting it on the cloud from scratch.&lt;/p&gt;

&lt;p&gt;In this post, I’ll walk you through how I set up an AWS EC2 instance, configured the Apache web server, and deployed my code using Git.&lt;/p&gt;

&lt;p&gt;The Tech Stack&lt;br&gt;
AWS EC2: For our virtual server.&lt;br&gt;
Git/GitHub: For version control.&lt;br&gt;
Apache (httpd): To serve our web files.&lt;br&gt;
Linux: The operating system where the magic happens.&lt;br&gt;
Step 1: Version Control Setup&lt;br&gt;
First, I needed a repository to hold my source code. I created a simple static HTML web page locally.&lt;/p&gt;

&lt;p&gt;Created a new empty repository on GitHub.&lt;br&gt;
Initialized Git locally and pushed my code.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git init
git add .
git commit -m "Initial commit for static site"
git branch -M main
git remote add origin &amp;lt;your-repo-link&amp;gt;
git push -u origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Step 2: Provisioning the Infrastructure&lt;br&gt;
Next, I headed to the AWS Console to launch a server.&lt;/p&gt;

&lt;p&gt;Service: EC2 (Elastic Compute Cloud)&lt;br&gt;
AMI: RHEL (You can also use Ubuntu)&lt;br&gt;
Instance Type: t2.micro (Free tier eligible!)&lt;br&gt;
Security Group: Opened port 22 (SSH) for access and port 80 (HTTP) so the world can see the website.&lt;br&gt;
Step 3: Setting Up the Environment&lt;br&gt;
Once the instance was running, I SSH’d into it. The first order of business was to install the web server.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Update the package repository
sudo yum update -y
# Install the Apache web server
sudo yum install httpd -y
After installation, I had to make sure the service was running and would restart automatically if the server rebooted.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start the service
sudo systemctl start httpd
# Enable it to start on boot
sudo systemctl enable httpd
Step 4: Deploying the Application
With the server ready, I needed to get my code onto the machine. I installed Git on the instance and cloned my repository.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install git -y
git clone &amp;lt;your-repo-link&amp;gt;
The Apache web server looks for files in the /var/www/html directory by default. I moved my web app files from the cloned repository folder into this directory.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mv my-web-app/* /var/www/html/
Step 5: Verification
The final test! I copied the Public IP address of my EC2 instance and pasted it into my browser.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🎉 Success: My static web page loaded up perfectly.&lt;/p&gt;

&lt;p&gt;What I Learned&lt;br&gt;
This mini-project was a great refresher on the fundamental relationship between source code, servers, and deployment directories.&lt;/p&gt;

&lt;p&gt;Linux Skills: Navigating file systems and managing services with systemctl.&lt;br&gt;
Networking: Understanding Security Groups and opening Port 80.&lt;br&gt;
Troubleshooting: Verifying file permissions in /var/www/html.&lt;br&gt;
While tools like Terraform and Docker are powerful, there is something satisfying about doing it the “hard way” to truly understand what is happening under the hood.&lt;/p&gt;
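&lt;p&gt;For the permissions check, 644 on files (and 755 on directories) is the usual target for Apache. A quick sketch, run against a scratch file standing in for /var/www/html/index.html:&lt;/p&gt;

```shell
# World-readable file permissions, verified with stat (GNU coreutils)
touch /tmp/index.html
chmod 644 /tmp/index.html
stat -c '%a' /tmp/index.html   # prints: 644
```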

&lt;p&gt;Thanks for reading! Have you tried hosting your own site on EC2? Let me know in the comments.&lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloudcomputing</category>
      <category>linux</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Day-31 Kubernetes Hit Me Hard Today: RBAC, CRDs, and Imposter Syndrome</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Fri, 09 Jan 2026 17:05:50 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-31-kubernetes-hit-me-hard-today-rbac-crds-and-imposter-syndrome-59h8</link>
      <guid>https://forem.com/jayanth_dasari_7/day-31-kubernetes-hit-me-hard-today-rbac-crds-and-imposter-syndrome-59h8</guid>
      <description>&lt;p&gt;Today was tough, and I’m questioning if I’m ready.&lt;br&gt;
The Technical Recap: Moving Beyond the Basics&lt;br&gt;
Up until yesterday, my Kubernetes journey felt manageable. Pods, ReplicaSets, Services—I could visualize them easily. But today, as I dove into the configuration and security layers of K8s, the complexity spiked.&lt;/p&gt;

&lt;p&gt;Here is what I covered today:&lt;/p&gt;

&lt;p&gt;ConfigMaps: I learned how to decouple configuration artifacts from image content to keep containerized applications portable. It’s essentially injecting configuration data (like database URLs) into Pods as environment variables or files.&lt;/p&gt;
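&lt;p&gt;A minimal sketch of that idea (all names and the URL are hypothetical):&lt;/p&gt;

```yaml
# A ConfigMap holding a database URL...
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: "postgres://db.internal:5432/app"
# ...which the Pod spec can inject wholesale as environment variables:
# containers:
# - name: app
#   envFrom:
#   - configMapRef:
#       name: app-config
```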

&lt;p&gt;Secrets: Similar to ConfigMaps but intended for sensitive information like passwords and OAuth tokens. Key takeaway: Kubernetes Secrets are, by default, stored as unencrypted base64-encoded strings. They aren't magical vaults unless configured properly!&lt;/p&gt;
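&lt;p&gt;It’s easy to verify that base64 is encoding, not encryption (the value here is made up):&lt;/p&gt;

```shell
# Anyone who can read the Secret object can decode its value
printf '%s' 's3cr3t' | base64        # prints: czNjcjN0
printf '%s' 'czNjcjN0' | base64 -d   # prints: s3cr3t
```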

&lt;p&gt;RBAC (Role-Based Access Control): This is where it got heavy. Managing who can access the K8s API and what permissions they have. Understanding the relationship between Roles (permissions), ServiceAccounts (identity for processes), and RoleBindings (connecting them) is a mental workout.&lt;/p&gt;
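&lt;p&gt;A minimal Role + RoleBinding sketch of that relationship (the namespace, role, and ServiceAccount names are hypothetical):&lt;/p&gt;

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role                     # the permissions
metadata:
  namespace: demo
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding              # connects the identity to the permissions
metadata:
  namespace: demo
  name: read-pods
subjects:
- kind: ServiceAccount         # the identity
  name: app-sa
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
```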

&lt;p&gt;CRDs (Custom Resource Definitions): Extending Kubernetes with our own API objects. This allows us to make Kubernetes more modular.&lt;/p&gt;

&lt;p&gt;Hitting the "Wall"&lt;br&gt;
I have to be honest: I am struggling.&lt;/p&gt;

&lt;p&gt;In the beginning, I was flying through concepts. Today, I found myself re-reading the same documentation three times and still feeling unsure. The speed at which I was digesting concepts has slowed down significantly.&lt;/p&gt;

&lt;p&gt;It feels like the more I learn, the more I realize I don't know.&lt;/p&gt;

&lt;p&gt;The Big Doubts: Am I Ready?&lt;br&gt;
This complexity has triggered a wave of imposter syndrome.&lt;/p&gt;

&lt;p&gt;Am I actually learning, or just memorizing commands?&lt;/p&gt;

&lt;p&gt;Am I ready to build real projects?&lt;/p&gt;

&lt;p&gt;Is it too early to apply for internships?&lt;/p&gt;

&lt;p&gt;When you look at a YAML file for a complex RBAC setup, it’s easy to feel like you aren’t "smart enough" for this engineering path. The logic isn't just about running a container anymore; it's about securing it, configuring it, and managing its lifecycle.&lt;/p&gt;

&lt;p&gt;Why I’m Not Quitting&lt;br&gt;
I realized something while staring at a failed kubectl apply error today: This feeling is part of the job.&lt;/p&gt;

&lt;p&gt;If it were easy, everyone would be a Cloud Engineer in a week. The fact that it is getting tougher means I am finally stepping out of the "tutorial hell" and into the real engineering concepts.&lt;/p&gt;

&lt;p&gt;My plan is to slow down. I don't need to master CRDs in an hour.&lt;br&gt;
To anyone else learning DevOps who feels like their brain is full: You are not alone. We digest, we rest, and we go again.&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Day-30 Kubernetes Networking Decoded: Why We Need Ingress and Ingress Controllers</title>
      <dc:creator>Jayanth Dasari</dc:creator>
      <pubDate>Thu, 08 Jan 2026 16:21:17 +0000</pubDate>
      <link>https://forem.com/jayanth_dasari_7/day-30-kubernetes-networking-decoded-why-we-need-ingress-and-ingress-controllers-4kep</link>
      <guid>https://forem.com/jayanth_dasari_7/day-30-kubernetes-networking-decoded-why-we-need-ingress-and-ingress-controllers-4kep</guid>
      <description>&lt;p&gt;Moving beyond NodePort and LoadBalancers to smarter traffic routing.&lt;br&gt;
Today’s exploration into Kubernetes networking took me a step deeper than the standard Service types. After getting comfortable with ClusterIP, NodePort, and LoadBalancer, I ran into a logical question: What happens when I have 50 microservices? Do I really want to pay for 50 Cloud Load Balancers or open 50 random ports on my nodes?&lt;/p&gt;

&lt;p&gt;The answer, thankfully, is no. That is where Ingress and Ingress Controllers come in. Here is what I learned today about how they work and why they are essential for production clusters.&lt;/p&gt;

&lt;p&gt;The Problem: Why Standard Services Aren't Enough&lt;br&gt;
Before understanding Ingress, I had to understand the limitations of the other methods:&lt;/p&gt;

&lt;p&gt;NodePort: It opens a specific port on every Node in the cluster. It's messy to manage, has security implications, and you are limited to a specific port range (30000-32767). It’s fine for testing, but bad for production.&lt;/p&gt;

&lt;p&gt;LoadBalancer: This creates a distinct external IP address (usually a cloud load balancer from AWS, GCP, or Azure) for each service. If you have 20 microservices, that’s 20 separate bills for 20 load balancers. It’s expensive and inefficient.&lt;/p&gt;

&lt;p&gt;The Solution: Ingress&lt;br&gt;
I learned that Ingress is essentially a smart router for your cluster. Instead of exposing every service directly to the internet, you expose one entry point, and that entry point decides where the traffic goes based on rules you define.&lt;/p&gt;

&lt;p&gt;Think of it like an office building:&lt;/p&gt;

&lt;p&gt;NodePort is like giving everyone their own key to a side door.&lt;/p&gt;

&lt;p&gt;LoadBalancer is like building a separate main entrance for every single employee.&lt;/p&gt;

&lt;p&gt;Ingress is having one main reception desk. You walk in, tell the receptionist who you are looking for ("I need the Billing Department"), and they direct you to the right room.&lt;/p&gt;

&lt;p&gt;In technical terms, Ingress allows you to do Path-Based Routing or Host-Based Routing.&lt;/p&gt;

&lt;p&gt;example.com/api -&amp;gt; Routes to the Backend Service&lt;/p&gt;

&lt;p&gt;example.com/shop -&amp;gt; Routes to the Frontend Service&lt;/p&gt;
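&lt;p&gt;Those two rules look roughly like this as an Ingress resource (the host and service names are hypothetical):&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 80
      - path: /shop
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
```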

&lt;p&gt;The Missing Piece: The Ingress Controller&lt;br&gt;
Here was the "aha!" moment for me today: Ingress by itself does nothing.&lt;/p&gt;

&lt;p&gt;If you create an Ingress resource (the YAML file), it’s just a piece of paper with rules on it. It’s a configuration request. For those rules to actually work, you need an implementation. This is called the Ingress Controller.&lt;/p&gt;

&lt;p&gt;The Ingress Controller is the actual software (a Pod) running in your cluster that reads your Ingress rules and processes the traffic.&lt;/p&gt;

&lt;p&gt;Ingress = The Rules (The Map)&lt;/p&gt;

&lt;p&gt;Ingress Controller = The Enforcer (The Traffic Cop)&lt;/p&gt;

&lt;p&gt;The most popular controller is NGINX, but there are others like Traefik, HAProxy, and Istio.&lt;/p&gt;

&lt;p&gt;Why They Are Needed (Summary)&lt;br&gt;
Cost Efficiency: You only pay for one Cloud Load Balancer (which sits in front of the Ingress Controller) regardless of how many services you have inside.&lt;/p&gt;

&lt;p&gt;Clean URLs: You can route traffic based on domains (app.com, api.app.com) or paths (/app, /login) rather than weird port numbers like 192.168.1.5:32044.&lt;/p&gt;

&lt;p&gt;SSL/TLS Termination: You can manage your security certificates in one place (the Ingress) rather than configuring SSL on every single microservice application.&lt;/p&gt;

&lt;p&gt;Learning about Ingress feels like graduating from "making things work" to "making things scalable." It separates the routing logic from the application logic and saves massive amounts of cloud resources.&lt;/p&gt;

&lt;p&gt;LinkedIn: &lt;a href="https://www.linkedin.com/in/dasari-jayanth-b32ab9367/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/dasari-jayanth-b32ab9367/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>learning</category>
    </item>
  </channel>
</rss>
