<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kene Ojiteli</title>
    <description>The latest articles on Forem by Kene Ojiteli (@keneojiteli).</description>
    <link>https://forem.com/keneojiteli</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1065819%2F50838af1-ccc2-45cc-86b1-5817a73f9802.jpg</url>
      <title>Forem: Kene Ojiteli</title>
      <link>https://forem.com/keneojiteli</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/keneojiteli"/>
    <language>en</language>
    <item>
      <title>From Ingress to Gateway API: A Hands-On Walkthrough with NGINX Gateway Fabric</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Sat, 24 Jan 2026 01:37:39 +0000</pubDate>
      <link>https://forem.com/keneojiteli/from-ingress-to-gateway-api-a-hands-on-walkthrough-with-nginx-gateway-fabric-5dn7</link>
      <guid>https://forem.com/keneojiteli/from-ingress-to-gateway-api-a-hands-on-walkthrough-with-nginx-gateway-fabric-5dn7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Kubernetes&lt;/strong&gt; is a container orchestration platform designed to run distributed applications reliably and efficiently. At its core, it schedules workloads (Pods), keeps them running, and gives primitives to scale and heal systems automatically.&lt;/p&gt;

&lt;p&gt;But Kubernetes does not give you a complete application architecture for free, especially when it comes to networking.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Pod&lt;/strong&gt; is the smallest deployable unit in Kubernetes, and it is intentionally ephemeral. Pods can be recreated at any time, which means their IP addresses change frequently. This design is great for resilience, but terrible if you try to talk to Pods directly.&lt;/p&gt;

&lt;p&gt;To solve this, Kubernetes introduced &lt;strong&gt;Services&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Services: Stable Networking, Limited Scope&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A service provides a stable virtual IP and DNS name that routes traffic to a group of Pods. This solves the Pod IP problem cleanly.&lt;/p&gt;
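
&lt;p&gt;As a minimal sketch (names and ports here are illustrative, not taken from the walkthrough), a ClusterIP Service that selects a group of Pods by label looks like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: demo-app          # illustrative name
spec:
  selector:
    app: demo-app         # matches the labels on the target Pods
  ports:
    - port: 80            # stable port exposed by the Service
      targetPort: 8080    # port the containers actually listen on
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Pods matching the selector can come and go; the Service's virtual IP and DNS name stay stable.&lt;/p&gt;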

&lt;p&gt;However, Services are fundamentally cluster-internal abstractions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ClusterIP works only inside the cluster.&lt;/li&gt;
&lt;li&gt;NodePort exposes ports, but with poor ergonomics and security concerns.&lt;/li&gt;
&lt;li&gt;LoadBalancer depends heavily on cloud provider integrations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most importantly, Services do not provide expressive HTTP routing, TLS control, or multi-team ownership boundaries.&lt;/p&gt;

&lt;p&gt;This is where &lt;strong&gt;Ingress&lt;/strong&gt; came into play.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ingress: A Necessary Step, but a Compromise&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ingress introduced HTTP concepts, such as hosts and paths, into Kubernetes. With an Ingress controller (NGINX, Traefik, HAProxy), applications could be exposed externally in a structured way.&lt;/p&gt;

&lt;p&gt;Ingress solved real problems, but over time, its limitations became obvious:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The API is underspecified and relies on controller-specific annotations.&lt;/li&gt;
&lt;li&gt;One Ingress object often becomes a shared choke point for many teams.&lt;/li&gt;
&lt;li&gt;Infrastructure and application concerns are tightly coupled.&lt;/li&gt;
&lt;li&gt;Advanced use cases (TCP, gRPC, multi-protocol) feel bolted on.&lt;/li&gt;
&lt;/ul&gt;
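
&lt;p&gt;The annotation problem is easy to see in a typical NGINX-flavoured Ingress. In this sketch (host and Service names are hypothetical), the rewrite behaviour lives in a controller-specific annotation rather than in the API itself:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    # Only the NGINX Ingress controller understands this annotation
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /app
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 80
&lt;/code&gt;&lt;/pre&gt;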

&lt;p&gt;Ingress works, but it does not scale well organizationally. This is the context in which the &lt;strong&gt;Gateway API&lt;/strong&gt; was created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gateway API: A Cleaner Separation of Concerns&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Gateway API is an evolution of Ingress rather than a drop-in replacement. Its central idea is ownership separation: platform teams manage Gateways, while application teams define Routes.&lt;/p&gt;

&lt;p&gt;The resource model makes this role separation explicit:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GatewayClass - defined by platform teams; declares who implements Gateways.&lt;/li&gt;
&lt;li&gt;Gateway - an infrastructure-level entry point; declares where traffic enters the cluster.&lt;/li&gt;
&lt;li&gt;Routes (HTTPRoute, GRPCRoute, etc.) - owned by application teams; declare how traffic is routed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instead of one overloaded resource doing everything, responsibilities are explicit. A Gateway does not route traffic by itself. It only defines entry points (listeners). All routing logic lives in Route resources.&lt;/p&gt;

&lt;p&gt;To understand whether this actually improves things in practice, I built a small project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project goal&lt;/strong&gt;: This project demonstrates how Kubernetes Gateway API improves traffic management compared to Ingress by deploying a multi-service application and exposing it externally using NGINX Gateway Fabric.&lt;/p&gt;

&lt;p&gt;NGINX Gateway Fabric is an implementation of the Kubernetes Gateway API built and maintained by NGINX. It plays the same role that an Ingress controller plays for Ingress, but for Gateway API.&lt;/p&gt;

&lt;p&gt;Physically, NGINX Gateway Fabric runs inside the cluster as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A controller that watches Gateway API resources.&lt;/li&gt;
&lt;li&gt;One or more NGINX data-plane pods that act as the actual traffic proxy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The controller translates Gateway API objects into live NGINX configuration and keeps it in sync.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A running Kubernetes cluster (kind, minikube, or managed).&lt;/li&gt;
&lt;li&gt;kubectl.&lt;/li&gt;
&lt;li&gt;helm.&lt;/li&gt;
&lt;li&gt;Basic understanding of Kubernetes YAML.&lt;/li&gt;
&lt;li&gt;Willingness to debug errors (permission issues, version mismatches).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Walkthrough&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I carried out this walkthrough on an EC2 instance to simulate a realistic cloud environment. I launched an instance with sufficient memory and storage, then connected to it remotely using VS Code over SSH.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87semjgivvwu4r9xzp2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87semjgivvwu4r9xzp2x.png" alt="ec2" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before installing any tools, I updated the system’s package index to ensure I was working with the latest available versions.&lt;/li&gt;
&lt;/ul&gt;
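
&lt;p&gt;On an Ubuntu-based instance this is a one-liner (other distributions use their own package manager):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get upgrade -y
&lt;/code&gt;&lt;/pre&gt;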

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd26x352do9p1cx11hpqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd26x352do9p1cx11hpqn.png" alt="update-packages" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;I installed the core tools needed for this walkthrough:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker – to run containers.&lt;/li&gt;
&lt;li&gt;Kind – to run Kubernetes locally inside Docker.&lt;/li&gt;
&lt;li&gt;kubectl – to interact with the Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;Helm – to install the NGINX Gateway Fabric controller.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
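
&lt;p&gt;For reference, the tools can be installed roughly as follows on Ubuntu; the versions are illustrative, so check each project's releases page before copying:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Docker
sudo apt-get install -y docker.io

# kind
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-amd64
chmod +x ./kind &amp;amp;&amp;amp; sudo mv ./kind /usr/local/bin/kind

# kubectl (latest stable)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl &amp;amp;&amp;amp; sudo mv kubectl /usr/local/bin/

# Helm (official install script)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
&lt;/code&gt;&lt;/pre&gt;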

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjt98ucaguuoxn3gsrh0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjt98ucaguuoxn3gsrh0m.png" alt="kind" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hdlpos28i77e9qo9kkm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1hdlpos28i77e9qo9kkm.png" alt="docker" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjptv7hffrl25qiv6j5q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjptv7hffrl25qiv6j5q.png" alt="docker-version" width="800" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kb6m5jcug1o8xx1ljyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kb6m5jcug1o8xx1ljyz.png" alt="helm" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof8e435pwjxov3xqf2ac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fof8e435pwjxov3xqf2ac.png" alt="kubectl" width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I created a Kubernetes cluster named &lt;code&gt;gateway-api-demo&lt;/code&gt; using Kind and a configuration file. This cluster will host all Gateway API resources and workloads.&lt;/li&gt;
&lt;/ul&gt;
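
&lt;p&gt;A kind configuration of roughly this shape works here; the &lt;code&gt;extraPortMappings&lt;/code&gt; entry (port numbers are illustrative) is what later lets a NodePort on the kind node be reached from outside the EC2 host:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # NodePort inside the cluster
        hostPort: 30080        # port exposed on the EC2 host
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then: &lt;code&gt;kind create cluster --name gateway-api-demo --config kind-config.yaml&lt;/code&gt;.&lt;/p&gt;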

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qgg364rsi783no308dm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qgg364rsi783no308dm.png" alt="k8s-cluster" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I installed the Gateway API CRDs, which introduced new Kubernetes resource types such as &lt;code&gt;GatewayClass&lt;/code&gt;, &lt;code&gt;Gateway&lt;/code&gt;, and &lt;code&gt;HTTPRoute&lt;/code&gt;. These are definitions only; they do not route traffic by themselves. They simply tell Kubernetes what kinds of objects are allowed to exist.&lt;/li&gt;
&lt;/ul&gt;
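
&lt;p&gt;Installing the standard-channel CRDs is a single &lt;code&gt;kubectl apply&lt;/code&gt;; the version below is illustrative and should match what your controller supports:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
&lt;/code&gt;&lt;/pre&gt;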

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fray524g7u394oh9vkomr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fray524g7u394oh9vkomr.png" alt="gateway-api-crd" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With the CRDs installed, I installed NGINX Gateway Fabric using Helm. This is the controller that watches Gateway API resources and converts them into live NGINX configuration.&lt;/li&gt;
&lt;/ul&gt;
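
&lt;p&gt;The install is a single Helm command of roughly this shape (the chart location can differ between NGINX Gateway Fabric versions, so confirm it against the official docs; the namespace matches the one used in this walkthrough):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric \
  --create-namespace -n ngf-gatewayapi-ns
&lt;/code&gt;&lt;/pre&gt;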

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56t5mlz9pn00z05n0fdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56t5mlz9pn00z05n0fdp.png" alt="ngf-install" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l3zln4plh1wq0p8y3hj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0l3zln4plh1wq0p8y3hj.png" alt="ngf-resources" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As part of its startup process, NGINX Gateway Fabric automatically created a GatewayClass named nginx. This GatewayClass declares that NGINX Gateway Fabric is responsible for implementing any Gateway that references it. Installing Gateway API CRDs only defines the resource types. The GatewayClass is created by the controller (NGINX Gateway Fabric), not by Kubernetes itself.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To demonstrate routing behaviour, I deployed three simple Python-based HTTP servers, each representing a different device-specific frontend. All applications were deployed into the same namespace.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwc6sgscq3wmajvfnk0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwc6sgscq3wmajvfnk0u.png" alt="deploy-apps" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;At this stage, pods are running, but no external traffic can reach them yet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I then created a Gateway resource. The Gateway defines where traffic enters the cluster by specifying listeners (ports and protocols) and referencing the nginx GatewayClass.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
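
&lt;p&gt;A Gateway of this shape is enough for the demo (the resource name is illustrative). Note that it references the &lt;code&gt;nginx&lt;/code&gt; GatewayClass and defines a single HTTP listener, but says nothing about routing:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: demo-gateway
  namespace: ngf-gatewayapi-ns
spec:
  gatewayClassName: nginx    # handled by NGINX Gateway Fabric
  listeners:
    - name: http
      protocol: HTTP
      port: 80               # where traffic enters the cluster
&lt;/code&gt;&lt;/pre&gt;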

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5welofhvw61efch277p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5welofhvw61efch277p.png" alt="gateway" width="800" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Describing the Gateway shows that NGINX Gateway Fabric accepted it, the Gateway was successfully programmed, a Service was created and exposed, and no routes are attached yet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This implies that the Gateway is live, listening for traffic, but has no routing rules.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmudq1co0jlwtpvamb4c7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmudq1co0jlwtpvamb4c7.png" alt="describe-gateway" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw3dz430xqxrl1yt7lis.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdw3dz430xqxrl1yt7lis.png" alt="describe-gateway1" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The GatewayClass confirms that NGINX Gateway Fabric is the active controller responsible for handling Gateways that reference it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes6fmlyskx4vd7x8f1wf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fes6fmlyskx4vd7x8f1wf.png" alt="gatewayclass" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At this point, the &lt;code&gt;ngf-gatewayapi-ns&lt;/code&gt; namespace contains NGINX Gateway Fabric controller pods and the Gateway and its supporting resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61540fzty4t7q7gzotq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F61540fzty4t7q7gzotq6.png" alt="ngf-gatewayapi-ns-ns" width="800" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlo95p6h7fpc8n1u2g5f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqlo95p6h7fpc8n1u2g5f.png" alt="more-details" width="800" height="52"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I attempted to access the application via: &lt;code&gt;&amp;lt;NodeIP&amp;gt;:&amp;lt;NodePort&amp;gt;&lt;/code&gt; (NodePort is used here for simplicity in a demo environment to make the Gateway reachable from outside the EC2 instance. In production, this would be replaced by a cloud LoadBalancer or external traffic manager). I also updated the EC2 security group to allow inbound traffic on the NodePort.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr39i8e9czu2oelunpc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr39i8e9czu2oelunpc2.png" alt="inbound-rule" width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The request failed. This behaviour is expected: the Gateway accepts traffic, but no route exists yet to forward it to any backend service. This is where the HTTPRoute comes in.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0psynqfpf5adu7zy5ao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq0psynqfpf5adu7zy5ao.png" alt="access-app" width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The HTTPRoute defines how requests are matched, specifies which backend service should receive traffic and attaches itself to a gateway using &lt;code&gt;parentRefs&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I created three services and a single HTTPRoute that forwards traffic to the appropriate backend based on request rules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After applying the HTTPRoute, traffic could flow freely. Traffic follows a clear path: Client → Gateway → NGINX Gateway Fabric → HTTPRoute → Service → Pod.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
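
&lt;p&gt;An HTTPRoute of roughly this shape expresses one of those rules (names are illustrative; the real manifests are in the linked repo). The &lt;code&gt;parentRefs&lt;/code&gt; block is what attaches the route to the Gateway:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: device-routes
spec:
  parentRefs:
    - name: demo-gateway           # the Gateway created earlier
      namespace: ngf-gatewayapi-ns
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /desktop        # requests under /desktop ...
      backendRefs:
        - name: desktop-svc        # ... go to this Service
          port: 80
&lt;/code&gt;&lt;/pre&gt;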

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6q7ckq04zarjs153pgt9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6q7ckq04zarjs153pgt9.png" alt="HTTPRoute" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I was able to access the applications successfully as traffic was routed through the gateway and its proxy pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fek7ablf89ojh6v4htm37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fek7ablf89ojh6v4htm37.png" alt="desktop-route" width="800" height="206"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhplcd1x05l2cqxrgbt35.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhplcd1x05l2cqxrgbt35.png" alt="android-route" width="800" height="188"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksvzvfofrm6qfk37n6uz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fksvzvfofrm6qfk37n6uz.png" alt="iphone-route" width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges Encountered and Fixes&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Error 1&lt;/strong&gt;: I encountered a permission denied error while trying to create a kind cluster. This was caused by insufficient user privileges to interact with the Docker daemon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3c2fc9kawsl6rym24wk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3c2fc9kawsl6rym24wk.png" alt="cluster-error" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix&lt;/strong&gt;: A temporary fix would be adding &lt;code&gt;sudo&lt;/code&gt; to the command for creating the cluster (this is not recommended), but I permanently resolved this error by adding my current user to the Docker group, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbj7insq144s9loe3yoao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbj7insq144s9loe3yoao.png" alt="cluster-error-fix" width="800" height="24"&gt;&lt;/a&gt;&lt;/p&gt;
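
&lt;p&gt;The permanent fix boils down to:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;sudo usermod -aG docker $USER
newgrp docker   # or log out and back in so the group change takes effect
&lt;/code&gt;&lt;/pre&gt;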

&lt;p&gt;&lt;strong&gt;Error 2&lt;/strong&gt;: I encountered a &lt;code&gt;CrashLoopBackOff&lt;/code&gt; error in the NGINX Gateway Fabric controller pod. The pod failed immediately on startup, and traffic handling never initialised.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtiuv6j48arnvwb65cvf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtiuv6j48arnvwb65cvf.png" alt="crd-version-mismatch-error" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Actions taken&lt;/strong&gt;: I inspected the controller pod logs and observed repeated startup failures referencing &lt;code&gt;BackendTLSPolicy&lt;/code&gt;, with errors indicating that the API server could not find the resource kind. &lt;/p&gt;

&lt;p&gt;This indicated that the controller was attempting to register an informer for BackendTLSPolicy during startup. I verified that my installed Gateway API CRDs did not include the BackendTLSPolicy definition, even though I was not explicitly creating or using this resource in my manifests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu06gwcsfkognu0razmqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu06gwcsfkognu0razmqw.png" alt="pod-logs" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix&lt;/strong&gt;: The issue was caused by a version mismatch between the Gateway API CRDs and the NGINX Gateway Fabric controller.&lt;/p&gt;

&lt;p&gt;The installed controller version expected the BackendTLSPolicy CRD to exist as part of the Gateway API it was built against. Although I did not intend to use BackendTLSPolicy, the controller still attempted to register an informer for it during startup. Since the CRD was missing, the Kubernetes API server rejected the informer registration, causing the controller to crash.&lt;/p&gt;

&lt;p&gt;I resolved the issue by upgrading the Gateway API CRDs to a release that includes the BackendTLSPolicy resource, ensuring compatibility with the installed NGINX Gateway Fabric controller. Once the CRD existed in the cluster, the controller started successfully even without any BackendTLSPolicy objects being created. &lt;/p&gt;
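
&lt;p&gt;The upgrade itself is another &lt;code&gt;kubectl apply&lt;/code&gt; against a newer release (the version below is illustrative). Note that in some Gateway API releases &lt;code&gt;BackendTLSPolicy&lt;/code&gt; ships in the experimental channel (&lt;code&gt;experimental-install.yaml&lt;/code&gt;) rather than the standard one, so the right file depends on the versions involved:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml
&lt;/code&gt;&lt;/pre&gt;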

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ncrq794vzcyvpoueuom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ncrq794vzcyvpoueuom.png" alt="v2.1.0" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g9kyynszkrziceeybv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g9kyynszkrziceeybv8.png" alt="v2.3.0" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wrapping It All Up&lt;/strong&gt;&lt;br&gt;
This walkthrough moves from Kubernetes fundamentals to a modern, production-grade traffic management model using Gateway API and NGINX Gateway Fabric.&lt;/p&gt;

&lt;p&gt;It shows how:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pods are ephemeral and unreliable entry points.&lt;/li&gt;
&lt;li&gt;Services provide stable in-cluster access, but don’t solve external routing.&lt;/li&gt;
&lt;li&gt;Ingress improved the situation, but centralised too much responsibility.&lt;/li&gt;
&lt;li&gt;Gateway API splits concerns cleanly:

&lt;ul&gt;
&lt;li&gt;GatewayClass defines who implements networking.&lt;/li&gt;
&lt;li&gt;Gateway defines where traffic enters.&lt;/li&gt;
&lt;li&gt;HTTPRoute defines how traffic is routed.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Most importantly, this walkthrough shows that Gateway API is not just “Ingress v2”. It is a deliberate redesign that establishes clear ownership boundaries, enhances extensibility, and provides improved operational visibility for Kubernetes networking.&lt;/p&gt;

&lt;p&gt;If you are building multi-team platforms, managing multiple routes and protocols, or preparing for service mesh and mTLS-heavy environments, Gateway API is no longer optional knowledge; it is the direction Kubernetes is heading.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What This Project Solves&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This project demonstrates how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expose applications externally without coupling routing logic to workloads.&lt;/li&gt;
&lt;li&gt;Safely evolve routing rules without redeploying Gateways.&lt;/li&gt;
&lt;li&gt;Use a standards-based API instead of vendor-specific annotations.&lt;/li&gt;
&lt;li&gt;Understand why traffic flows the way it does, not just that it works.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If Ingress ever felt magical or fragile to you, Gateway API replaces that magic with explicit contracts.&lt;/p&gt;

&lt;p&gt;All manifests, configurations, and steps used in this walkthrough are available &lt;a href="https://github.com/keneojiteli/gateway-api-implementation" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Did this help you understand how Gateway API works? Drop a comment!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>kubernetes</category>
      <category>gatewayapi</category>
      <category>aws</category>
    </item>
    <item>
      <title>Automating Infrastructure Provisioning with Terraform, AWS S3 Remote Backend, and GitHub Actions</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Sun, 26 Oct 2025 01:01:11 +0000</pubDate>
      <link>https://forem.com/keneojiteli/automating-infrastructure-provisioning-with-terraform-aws-s3-remote-backend-and-github-actions-2k5b</link>
      <guid>https://forem.com/keneojiteli/automating-infrastructure-provisioning-with-terraform-aws-s3-remote-backend-and-github-actions-2k5b</guid>
      <description>&lt;p&gt;Infrastructure automation is at the heart of modern DevOps. In this project, I moved beyond just running &lt;code&gt;terraform apply&lt;/code&gt; locally and created a fully automated, modular, and version-controlled infrastructure workflow using Terraform, AWS, and GitHub Actions.&lt;/p&gt;

&lt;p&gt;This project provisions AWS infrastructure through custom Terraform modules, manages Terraform state securely with S3 as a remote backend, leverages S3’s native state locking mechanism, and automates the provisioning and destruction process through GitHub Actions.&lt;/p&gt;

&lt;p&gt;This project simulates a production-ready Infrastructure as Code (IaC) workflow that teams can use for scalable, consistent, and automated deployments.&lt;/p&gt;

&lt;p&gt;It also prepares the foundation for the next phase: Automated Multi-Environment Deployment with Terraform &amp;amp; CI/CD, which I am currently building.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic understanding of Terraform (providers, modules, state, variables).&lt;/li&gt;
&lt;li&gt;AWS account with:

&lt;ul&gt;
&lt;li&gt;An S3 bucket for remote backend storage.&lt;/li&gt;
&lt;li&gt;Programmatic access via IAM user or role.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;GitHub repository for automation.&lt;/li&gt;

&lt;li&gt;GitHub Actions runner permissions to deploy to AWS.&lt;/li&gt;

&lt;li&gt;Terraform installed locally.&lt;/li&gt;

&lt;li&gt;IDE (VS Code recommended).&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Terms to Know&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;: The practice of managing and provisioning cloud infrastructure through code instead of manual processes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Terraform Module&lt;/strong&gt;: A reusable, version-controlled block of Terraform configurations that defines one piece of infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Remote Backend&lt;/strong&gt;: A centralised storage for Terraform state files (in this case, AWS S3).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;State Locking&lt;/strong&gt;: Prevents concurrent updates to your infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GitHub Actions&lt;/strong&gt;: A CI/CD tool that automates workflows such as &lt;code&gt;terraform plan&lt;/code&gt; and &lt;code&gt;terraform apply&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
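&lt;p&gt;As a hedged sketch, a remote backend with S3-native state locking is typically configured like this (the bucket name and key here are hypothetical, and &lt;code&gt;use_lockfile&lt;/code&gt; requires a recent Terraform version that supports S3-native locking):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket       = "my-tf-state-bucket"       # hypothetical bucket name
    key          = "infra/terraform.tfstate"  # path to the state file in the bucket
    region       = "us-east-1"
    use_lockfile = true  # S3-native state locking, no DynamoDB table needed
  }
}
&lt;/code&gt;&lt;/pre&gt;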

&lt;p&gt;&lt;strong&gt;Project Overview&lt;/strong&gt;&lt;br&gt;
The project is built to demonstrate how to structure Terraform code into custom modules, manage state remotely, and automate the provisioning/destruction process through CI/CD.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7k1vp1wzrz0slfktp9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7k1vp1wzrz0slfktp9m.png" alt="arch-diagram" width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Modules Communicate (Data Exchange Between Modules)&lt;/strong&gt;&lt;br&gt;
Terraform does not allow one child module to reference another child module’s resources directly, because each child module is meant to be self-contained and reusable. Instead, data is exchanged through outputs and inputs, with the root module acting as the &lt;strong&gt;connector&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In a child module, variables are declared. In the root module, when the child module is called, you must provide values for those variables (unless the child module defines defaults).&lt;/p&gt;

&lt;p&gt;For example, consider a VPC child module and an EC2 child module; the EC2 child module will require a subnet ID, which is generated by the VPC module. The best way to do this is to: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expose the subnet ID as an output in the VPC module's &lt;code&gt;output.tf&lt;/code&gt; file.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk28pi0rmdyawnmr6jp5x.png" alt="vpc-module-output.tf" width="800" height="100"&gt; &lt;/li&gt;
&lt;li&gt;Create the EC2 child module and declare the arguments and attribute references it expects, including those it requires from other child modules (e.g., subnet_id); don't forget to parameterise them as variables.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1elsniv5elon12gdo7eq.png" alt="variables.tf-ec2-module" width="800" height="101"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1h4cjrkepf3aqr78m5p.png" alt="main.tf-ec2-module" width="800" height="268"&gt;
&lt;/li&gt;
&lt;li&gt;Since the root module is the connector, create the EC2 module in the &lt;code&gt;main.tf&lt;/code&gt; file of the root module.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupb02gm5zvsplcguj797.png" alt="ec2-module-main.tf-root-module" width="711" height="62"&gt;
&lt;/li&gt;
&lt;li&gt;Re-export the VPC module's output in the root module's &lt;code&gt;output.tf&lt;/code&gt; file (in the image below, &lt;strong&gt;pub_subnet_id is the name of the output from the VPC child module&lt;/strong&gt;).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ict16wyjr7cfhl64ew9.png" alt="output.tf-root-module" width="747" height="83"&gt;
&lt;/li&gt;
&lt;li&gt;Reference the output as an input to the EC2 module.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ts6407ooyfj75vjwb7z.png" alt="ec2-module-main.tf-root-module" width="606" height="225"&gt;
&lt;/li&gt;
&lt;li&gt;When Terraform creates the EC2 resource, it reads the module block in the root module, notes that some arguments depend on outputs from other modules, uses the &lt;code&gt;source&lt;/code&gt; path to locate the EC2 child module, and then creates the EC2 instance.&lt;/li&gt;
&lt;/ul&gt;
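&lt;p&gt;The wiring above can be sketched as follows (the &lt;code&gt;pub_subnet_id&lt;/code&gt; output name follows the walkthrough; the module paths and the &lt;code&gt;aws_subnet.public&lt;/code&gt; resource name are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# modules/vpc/output.tf -- expose the subnet ID from the VPC child module
output "pub_subnet_id" {
  value = aws_subnet.public.id  # hypothetical resource name
}

# modules/ec2/variables.tf -- the EC2 child module declares what it needs
variable "subnet_id" {
  type = string
}

# main.tf (root module) -- the root module connects the two
module "vpc" {
  source = "./modules/vpc"
}

module "ec2" {
  source    = "./modules/ec2"
  subnet_id = module.vpc.pub_subnet_id  # VPC output fed in as EC2 input
}
&lt;/code&gt;&lt;/pre&gt;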

&lt;p&gt;&lt;strong&gt;Problems This Project Solves&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Terraform state conflicts or lost files&lt;/strong&gt;: I used an S3 remote backend for centralised state management and leveraged S3's native state locking, which lets S3 manage state locks directly and eliminates the need for a separate DynamoDB table. This simplifies the backend configuration, reduces cost and infrastructure complexity, and ensures only one &lt;code&gt;terraform plan&lt;/code&gt; or &lt;code&gt;apply&lt;/code&gt; can run at a time.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda45maa0krhzzgakt2ps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda45maa0krhzzgakt2ps.png" alt="tf-state-locking-illustration" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code duplication&lt;/strong&gt;: I modularised my Terraform resources into reusable custom modules.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco8y1opdaj1hwkvdfowc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco8y1opdaj1hwkvdfowc.png" alt="file-structure" width="497" height="588"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Manual provisioning of resources&lt;/strong&gt;: I automated resource provisioning with GitHub Actions workflow. To make the workflow seamless, I added the needed credentials as repository secrets.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3tk1v6xvo4489l3l707.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs3tk1v6xvo4489l3l707.png" alt="gha-secrets" width="651" height="202"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lack of CI/CD integration&lt;/strong&gt;: I initialised Terraform, validated my code, then implemented Terraform &lt;code&gt;plan&lt;/code&gt; and &lt;code&gt;apply&lt;/code&gt; pipelines in GitHub Actions. I also added a manual trigger to destroy the resources.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodnlp92eimbudy5m0kz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodnlp92eimbudy5m0kz8.png" alt="tf-create-workflow" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvuw5iecejnnaa7mh9rc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvuw5iecejnnaa7mh9rc.png" alt="tf-destroy-workflow" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges and Fixes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Error&lt;/strong&gt;: I encountered a permission error while trying to use an S3 bucket as a remote backend.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi60ibmv3bmfmxoa8vygm.png" alt="s3-permission-issue" width="800" height="248"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fix&lt;/strong&gt;: I added an inline policy following the principle of least privilege (granting the IAM user only the permissions necessary for the task). 
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kadk31oeq75torwabq5.png" alt="fix-add-inline-policy" width="800" height="240"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Lessons Learned&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Modularisation makes scaling Terraform projects easier.&lt;/li&gt;
&lt;li&gt;Remote backends are crucial for team collaboration.&lt;/li&gt;
&lt;li&gt;Outputs and variables are the lifeblood of inter-module communication.&lt;/li&gt;
&lt;li&gt;Automation does not equal speed alone; it also adds consistency and traceability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
This project laid the foundation for a fully automated Infrastructure as Code workflow, from custom modules to remote state management and CI/CD automation.&lt;/p&gt;

&lt;p&gt;In my next article, I will automate multi-environment deployment with Terraform &amp;amp; CI/CD, where environments like Dev, Staging, and Production are automatically provisioned using the same pipeline.&lt;/p&gt;

&lt;p&gt;Need the Terraform code? &lt;a href="https://github.com/keneojiteli/aws-infrastructure-with-terraform-modules" rel="noopener noreferrer"&gt;Check it here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Did this help you understand how Terraform modules communicate? Drop a comment!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>terraform</category>
      <category>githubactions</category>
      <category>aws</category>
    </item>
    <item>
      <title>Building and Deploying a Cloud-Native FastAPI Student Tracker App with MongoDB, Kubernetes, and GitOps</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Wed, 24 Sep 2025 06:13:16 +0000</pubDate>
      <link>https://forem.com/keneojiteli/building-and-deploying-a-cloud-native-fastapi-student-tracker-app-with-mongodb-kubernetes-and-1m8</link>
      <guid>https://forem.com/keneojiteli/building-and-deploying-a-cloud-native-fastapi-student-tracker-app-with-mongodb-kubernetes-and-1m8</guid>
      <description>&lt;p&gt;I once carried out a skills gap analysis on myself as a DevOps engineer, looking for my next challenging opportunity. I realised that although I had built smaller projects, I hadn’t yet executed a production-grade, full-blown cloud-native project that combined all the essential DevOps practices.&lt;/p&gt;

&lt;p&gt;That became my mission: to design, deploy, and manage a FastAPI-based Student Tracker application with a MongoDB backend, but with a strong focus on DevOps functionalities rather than frontend appearance.&lt;/p&gt;

&lt;p&gt;This project allowed me to bring together containerization, Kubernetes, Helm, CI/CD, GitOps, monitoring, and observability into one workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outline.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites.&lt;/li&gt;
&lt;li&gt;Key Terms &amp;amp; Components.&lt;/li&gt;
&lt;li&gt;Step-by-Step Process.&lt;/li&gt;
&lt;li&gt;Challenges and Fixes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A code editor (I used VS Code).&lt;/li&gt;
&lt;li&gt;A terminal.&lt;/li&gt;
&lt;li&gt;Optionally, a cloud provider to provision an instance.&lt;/li&gt;
&lt;li&gt;Knowledge of Docker and Kubernetes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Terms &amp;amp; Components.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt;: a platform for building, packaging, and running applications in containers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes&lt;/strong&gt;: used for automating the deployment, scaling, and management of containerised applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ingress&lt;/strong&gt;: a Kubernetes resource that routes HTTP and HTTPS traffic entering the cluster through a single entry point to different services inside the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helm &amp;amp; Helm charts&lt;/strong&gt;: Helm is a package manager for Kubernetes applications; Helm charts are used to define, install, and upgrade even the most complex Kubernetes applications.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitOps&lt;/strong&gt;: a framework for managing cloud-native infrastructure and applications by using Git as the single source of truth for the desired state of your system. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ArgoCD&lt;/strong&gt; (a deployment platform for Kubernetes): a Kubernetes controller that continuously monitors running applications and compares the current, live state against the desired target state specified in the Git repo.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring &amp;amp; Observability&lt;/strong&gt;: Monitoring involves tracking known system metrics to detect when something is wrong, while observability is a deeper, more advanced capability that allows you to understand the internal state of a system by correlating logs, metrics, and traces to diagnose the why and how behind an issue. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vault server&lt;/strong&gt;: A tool that allows you to manage secrets safely. Secrets mean sensitive information, such as digital certificates, database credentials, passwords, and API encryption keys.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step-by-Step Process.&lt;/strong&gt;&lt;br&gt;
This is a full-blown project where I update my progress in each stage, and the process includes the following phases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Testing the application locally&lt;/strong&gt;: This stage can be carried out locally, or you can leverage the use of a VM on any cloud provider (I explored both methods, but I will use an AWS EC2 instance throughout the project). &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;While provisioning the instance, I added a script to install every tool I need for the project (git, docker, kubectl, helm, kind, etc.). I verified the installations by checking their versions.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4v3o0art1rc7b2s8riam.png" alt="verify version" width="800" height="104"&gt;
&lt;/li&gt;
&lt;li&gt;I cloned the application's repository and navigated to the folder.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuo2ye7v5tmrdof5nwwfi.png" alt="clone repo" width="800" height="90"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn00azac7yj9qebh953o0.png" alt="navigate to folder" width="800" height="112"&gt;
&lt;/li&gt;
&lt;li&gt;To test locally, I installed Python and created a virtual environment (an isolated environment on my computer, to run and test my Python app). After creating a virtual environment, I activated it and installed the necessary dependencies for my application from the &lt;code&gt;requirements.txt&lt;/code&gt; file.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcxinmmp75lf8ec1usjr.png" alt="install python" width="800" height="290"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2am1u5xbxwnzptygek0n.png" alt="activate_virtual_env" width="800" height="312"&gt;
&lt;/li&gt;
&lt;li&gt;I exported my vault credentials via the CLI and ran the app with &lt;code&gt;uvicorn&lt;/code&gt;.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4max2df2wkow4fkqnw6e.png" alt="vault_credentials" width="800" height="55"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqfw0dteih4rvcthiyqr.png" alt="run_app" width="800" height="112"&gt;
&lt;/li&gt;
&lt;li&gt;I accessed the application on my browser with &lt;code&gt;http://&amp;lt;EC2_public_ip&amp;gt;:8000&lt;/code&gt; and registered.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7o5f9owfosxyeb95pf1i.png" alt="app" width="800" height="218"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcvcfzf3g8b5kt8qzdrm.png" alt="register-on-app" width="800" height="51"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Containerising the application and pushing to Dockerhub&lt;/strong&gt;: This stage involves building a Docker image through a Dockerfile and pushing to a repository (Dockerhub).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I created a Dockerfile, which serves as a set of instructions to build a Docker image.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6axt7rh5wura6f2014h.png" alt="Dockerfile" width="800" height="233"&gt;
&lt;/li&gt;
&lt;li&gt;I built an image from the Dockerfile and created a container (a running instance of the built image) from the image with the required credentials.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbqt0olxz6semxukxsjh.png" alt="build_image" width="800" height="303"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8livnr670mabzphit0v.png" alt="create_container" width="800" height="44"&gt;
&lt;/li&gt;
&lt;li&gt;I accessed my app and successfully updated my progress.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bonprhpu44d1ly5bpfp.png" alt="access_app" width="800" height="90"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx76dh74zih2aml6ee8zh.png" alt="update_progress" width="800" height="135"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5d2a0yluyz095h94ok0.png" alt="success_message" width="800" height="49"&gt;
&lt;/li&gt;
&lt;li&gt;Pushing to Dockerhub requires a repository on my Dockerhub account and a successful login from my CLI to my Dockerhub account before pushing the image to the repository.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmvbbhqetplrdq61xeaeu.png" alt="docker_login" width="800" height="109"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn2swe4syk5qbwedh3bij.png" alt="push_image" width="800" height="89"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2woj5tzgo3effooklpbi.png" alt="dockerhub_repo" width="800" height="616"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzyqlmfca4y3n7utzmb2.png" alt="dockerhub_repo" width="800" height="81"&gt;
&lt;/li&gt;
&lt;/ul&gt;
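&lt;p&gt;As an illustrative sketch only (the base image, the &lt;code&gt;app.main:app&lt;/code&gt; module path, and the port are assumptions, not this project's exact Dockerfile), a typical Dockerfile for a FastAPI app served by uvicorn looks like:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;FROM python:3.11-slim            # hypothetical base image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
&lt;/code&gt;&lt;/pre&gt;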


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Setting up a Kubernetes cluster (using Kind)&lt;/strong&gt;: Here, I used Kind (short for Kubernetes in Docker, a tool that runs local Kubernetes clusters using Docker containers as nodes) to create a cluster. Working with Kind requires Docker to be installed and a Kind configuration file. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjryhunno22bqqu46b1d4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjryhunno22bqqu46b1d4.png" alt="config-file" width="800" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I created a cluster named &lt;code&gt;kene-demo-cluster&lt;/code&gt; that has a control plane and a worker node.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1m1f10glbsq1flwwr9a.png" alt="k8s-cluster" width="800" height="112"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2jh8nccvz3kn9mj54el.png" alt="show-nodes" width="800" height="31"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvkm414nd75ykqeqowwj.png" alt="show-context" width="800" height="46"&gt;
&lt;/li&gt;
&lt;/ul&gt;
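&lt;p&gt;A minimal Kind config matching the cluster above (one control plane, one worker) might look like this; the &lt;code&gt;extraPortMappings&lt;/code&gt; are a common addition so an ingress is reachable from the host, and the exact values here are illustrative:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:        # forward host ports 80/443 into the node
      - containerPort: 80
        hostPort: 80
      - containerPort: 443
        hostPort: 443
  - role: worker
&lt;/code&gt;&lt;/pre&gt;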


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Deploying the application to the Kubernetes cluster&lt;/strong&gt;: I exposed my application via an ingress and created an ingress controller for my kind cluster.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, I created my manifest files (namespace, secret, deployment, service and ingress files).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fni0u4xx3qpef1p65tsaa.png" alt="secret" width="800" height="132"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqzknlma37fipqz15p91.png" alt="deployment" width="800" height="302"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5j1spmuccz1vvke3f7de.png" alt="ingress" width="800" height="168"&gt;
&lt;/li&gt;
&lt;li&gt;Then, I applied the manifests to create the resources and created an &lt;code&gt;nginx ingress controller&lt;/code&gt;.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzxxsgtxv6ep86wilcek.png" alt="kubectl-apply" width="800" height="51"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7z00qydeb3opzqb19fud.png" alt="nginx-controller" width="800" height="169"&gt;
&lt;/li&gt;
&lt;li&gt;I retrieved all the resources I created in each namespace using the &lt;code&gt;kubectl get all -n &amp;lt;namespace&amp;gt;&lt;/code&gt; command.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F52ubmu8dyzf3lk1vvs5c.png" alt="my-app-resources" width="800" height="142"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foeb10oexx43j0u79br9y.png" alt="ingress-nginx-resources" width="800" height="101"&gt;
&lt;/li&gt;
&lt;li&gt;I accessed the application with the ingress host and updated my progress as usual.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1et6abdjyxri88af9rsh.png" alt="app-via-ingress" width="800" height="155"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r2p4twh9af1wncr2wac.png" alt="update-progress" width="800" height="129"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzn8drzckqscpcfbret6i.png" alt="success-message" width="800" height="46"&gt;
&lt;/li&gt;
&lt;/ul&gt;
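&lt;p&gt;A hedged sketch of what such an ingress manifest typically looks like (the resource names, namespace, host, and port here are illustrative, not this project's exact values):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: student-tracker-ingress    # hypothetical names throughout
  namespace: student-tracker
spec:
  ingressClassName: nginx          # handled by the nginx ingress controller
  rules:
    - host: student-tracker.local  # the ingress host used to access the app
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: student-tracker-svc
                port:
                  number: 8000
&lt;/code&gt;&lt;/pre&gt;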


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Deploying the application with Helm&lt;/strong&gt;: This stage focuses on deploying the student tracker application with Helm charts. I created the Helm chart from scratch.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I installed Helm, and I verified the installation by checking Helm's version with the &lt;code&gt;helm version&lt;/code&gt; command. 
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjhzj5hhcxzlzvjq8aih.png" alt="check-helm's-version" width="800" height="22"&gt;
&lt;/li&gt;
&lt;li&gt;I created my Helm chart and navigated to the chart's directory. Note that the chart has the default structure of a typical Helm chart.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fazlxh7n83nz9n4t064pu.png" alt="create-chart" width="800" height="47"&gt;
&lt;/li&gt;
&lt;li&gt;I deleted all the default template files and created new files to customise my chart.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6fo36w334ijxs7ykrnk.png" alt="template-files" width="800" height="32"&gt;
&lt;/li&gt;
&lt;li&gt;I added the Nginx ingress controller repository, updated the Helm repository and installed the Nginx ingress controller with Helm.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16mp5n2o1fqqqllmnqdy.png" alt="add-update-repo" width="800" height="45"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xvf71775zepllk6zy74.png" alt="install-ingress-controller" width="800" height="322"&gt;
&lt;/li&gt;
&lt;li&gt;I then updated the &lt;code&gt;Chart.yaml&lt;/code&gt; file with my app details and filled in my template files using the values I specified in the &lt;code&gt;my-values.yaml&lt;/code&gt; file.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcfk68omi08v6kk5yyqs.png" alt="Chart.yaml" width="800" height="222"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffddxuu8tmoifc4qmezce.png" alt="my-values.yaml" width="800" height="286"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6wv8m15eqfxg26xp2p9.png" alt="secret.yaml" width="800" height="124"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fskdoi5c4bur618q42v0t.png" alt="template-files" width="800" height="139"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wp9stu8tn2viz18r62w.png" alt="ingress.yaml" width="800" height="181"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtyeo66vpta0vhp4cpo1.png" alt="deploy.yaml" width="800" height="260"&gt;
&lt;/li&gt;
&lt;li&gt;Then, I installed the Helm chart from outside the chart's directory, specifying the chart's path, the namespace to create the release in, and the values file to use.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ivgn5mnrradiaz1b6nq.png" alt="install-chart" width="800" height="79"&gt;
&lt;/li&gt;
&lt;li&gt;I accessed the app and updated my progress.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgh4zsjbwm3ukessb210.png" alt="update-app" width="800" height="169"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kez1ye1o1z758t9ryn1.png" alt="success-message" width="800" height="45"&gt;
&lt;/li&gt;
&lt;/ul&gt;
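&lt;p&gt;The Helm steps above can be sketched as commands; the release name, chart path, namespace, and values file below are illustrative placeholders, not the exact names used in this project:&lt;/p&gt;

```bash
# Add and refresh the NGINX ingress controller repository, then install it.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Install the custom chart from outside its directory, pointing at the
# chart path, a target namespace, and the custom values file.
helm install my-app ./my-chart \
  --namespace my-namespace --create-namespace \
  -f ./my-chart/my-values.yaml
```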


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Implement CI/CD with GitHub Actions&lt;/strong&gt;: In this stage, I roleplay as a DevOps engineer (implementing CI/CD with GitHub Actions to deploy my application to an EC2 instance) and a developer (adding an admin feature to the application to view the progress of all registered students).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To implement a CI/CD pipeline with GitHub Actions, I created my workflow (a YAML file) in the &lt;code&gt;.github/workflows&lt;/code&gt; folder, then added an event to trigger the pipeline, a deploy job, and steps to deploy the application.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5n5ta6o6u6sjbl0hv7w.png" alt="pipeline" width="800" height="398"&gt;
&lt;/li&gt;
&lt;li&gt;I also added the required credentials as secrets and referenced them where necessary in my workflow, as shown above. The credentials are the details of the instance to which I will deploy my application.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6noe0dgg57whi2ippupv.png" alt="secrets" width="771" height="348"&gt;
&lt;/li&gt;
&lt;li&gt;Based on my workflow, a push event to the main branch triggers the workflow to deploy the application to an EC2 instance (in my case, it is deployed to an EC2 instance on a different AWS account).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4p0mrgo5e1b05de6p92.png" alt="successful-run" width="800" height="301"&gt;
&lt;/li&gt;
&lt;li&gt;I logged into the account my app was deployed to and verified the application was successfully deployed.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvz31twj72jqk7moj4mk.png" alt="successful-deployment" width="800" height="29"&gt;
&lt;/li&gt;
&lt;li&gt;I used the public IP and port of the account my app was deployed to in order to access the app and update my progress. I ensured the port was allowed as an inbound rule in my instance's security group.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fen4xatcnhfzxqr5d3q0t.png" alt="access-app" width="800" height="154"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuyese73f32mgz9n4yw4p.png" alt="update-progress" width="800" height="160"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fglerinmrqg8w0k5p90f8.png" alt="success-message" width="800" height="59"&gt;
&lt;/li&gt;
&lt;li&gt;As a developer, I added an &lt;code&gt;admin.html&lt;/code&gt; file and updated the &lt;code&gt;app/crud.py&lt;/code&gt; and &lt;code&gt;app/main.py&lt;/code&gt; files.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frihui6e01ph1a6x23plu.png" alt="admin.html" width="800" height="258"&gt;
&lt;/li&gt;
&lt;li&gt;I committed and pushed the changes, which triggered the workflow, and I accessed the application with &lt;code&gt;http://&amp;lt;instance-ip&amp;gt;:&amp;lt;port&amp;gt;/admin&lt;/code&gt;.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bqd8x1mp4t42q6kbldx.png" alt="admin-path" width="800" height="106"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs69b0nnju5eemwmge6z3.png" alt="kene-progress" width="800" height="94"&gt;
&lt;/li&gt;
&lt;/ul&gt;
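&lt;p&gt;A minimal sketch of such a deploy workflow is shown below. The secret names (&lt;code&gt;EC2_HOST&lt;/code&gt;, &lt;code&gt;EC2_USER&lt;/code&gt;, &lt;code&gt;EC2_SSH_KEY&lt;/code&gt;) and the remote script are assumptions for illustration, not the exact workflow used here:&lt;/p&gt;

```yaml
# Hypothetical sketch: a push to main deploys the app to an EC2 instance over SSH.
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to EC2 over SSH
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd ~/app
            git pull
            docker compose up -d --build
```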


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Implement GitOps with ArgoCD&lt;/strong&gt;: I implemented GitOps with ArgoCD using Git as the single source of truth. Following best practices, this will be done in an entirely new repository (at this point, I will have an application repository and a GitOps repository).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I installed ArgoCD with Helm. First, I added the ArgoCD Helm repository, updated my local repositories, and configured the server to run in insecure mode (disabling TLS/SSL and potentially other security measures) before installing it in a namespace called argocd.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvcgxrs9jcf36u1xpxajm.png" alt="add-argo-to-repo" width="800" height="59"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtqnwnmbvikyzay6a174.png" alt="argocd-insecure-mode" width="800" height="54"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54b28fa27808jy09m2tm.png" alt="install-argocd-with-helm" width="800" height="192"&gt;
&lt;/li&gt;
&lt;li&gt;I verified that all resources in the argocd namespace were up and running.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ci08bb8eezqfc2sgrg8.png" alt="argocd-resources" width="800" height="246"&gt;
&lt;/li&gt;
&lt;li&gt;I used the ArgoCD UI. To log in, I obtained the initial admin password and port-forwarded (specifying &lt;code&gt;--address 0.0.0.0&lt;/code&gt;, which listens on all network interfaces of the machine). You can change your password from the UI.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feo3ztb623stigjfalv3t.png" alt="argocd-password" width="800" height="43"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo3p0wyk70lryer6h4yvl.png" alt="argocd-ui" width="800" height="438"&gt;
&lt;/li&gt;
&lt;li&gt;Currently, ArgoCD has no record or knowledge of my app; to add my app, I will use an application YAML file (I could also add it from the UI or use the CLI).
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9ydgb7y5rcd2uqx4che.png" alt="add-app-with-yaml" width="800" height="243"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2jfwc1ipw0oi7sjm3uk.png" alt="apply-to-add-app" width="800" height="35"&gt;
&lt;/li&gt;
&lt;li&gt;In addition to using ArgoCD as a Kubernetes controller that will monitor my Git repository for changes to application and infrastructure configurations, I will also create a workflow to build and push a Docker image to Dockerhub, such that the push will update the Helm values.yaml file with a new image repository and tag, and ArgoCD auto-syncs the commit.&lt;/li&gt;
&lt;li&gt;I created a repository and a personal access token in my Docker Hub account, then added the PAT and my username as secrets in my project's repo.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97wdlwdlvzg2ifa995qv.png" alt="dockerhub-repo" width="800" height="482"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0gt7i3qv2po3ay8btzj.png" alt="repo-secrets" width="740" height="217"&gt;
&lt;/li&gt;
&lt;li&gt;I triggered my workflow by pushing changes to my main branch. The workflow checks out my repo, logs in to Docker Hub, builds the image, scans it with Trivy (a vulnerability scanner), pushes the scanned image to my Docker Hub account, updates the image tag, and pushes the new update to GitHub.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpogil2trs1erldmmp2x5.png" alt="cd-workflow" width="800" height="359"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhvj5pqvc3o8gphiy508.png" alt="new-image" width="800" height="75"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i8gg671vlgkm7d4ql5k.png" alt="old-tag" width="800" height="114"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmoqrum5toh0gflv1kp0.png" alt="new-tag" width="800" height="65"&gt;
&lt;/li&gt;
&lt;li&gt;The commit step from the pipeline above causes ArgoCD to auto-sync with the Git repo. 
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1zq0byojcvgotxvdfpu.png" alt="port-forward" width="800" height="148"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0zb8p898g02zbxltmva.png" alt="argo-cd-auto-sync" width="800" height="407"&gt;
&lt;/li&gt;
&lt;/ul&gt;
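&lt;p&gt;For reference, an ArgoCD Application manifest of the kind described above might look like the following sketch; the repo URL, chart path, and namespaces are placeholders, not the exact values used in this project:&lt;/p&gt;

```yaml
# Hypothetical sketch of an ArgoCD Application pointing at a Helm chart in a GitOps repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: student-tracker
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/USER/GITOPS-REPO.git
    targetRevision: main
    path: charts/student-tracker
    helm:
      valueFiles:
        - my-values.yaml   # the custom values file, instead of the default values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: student-tracker
  syncPolicy:
    automated:
      prune: true
      selfHeal: true       # auto-sync keeps the cluster matching the Git repo
```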


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Implement Monitoring with the LGTP (Loki, Grafana, Tempo and Prometheus) stack&lt;/strong&gt;: in this stage, I modified my source code to expose metrics for Prometheus to scrape, and deployed the monitoring tools as ArgoCD-managed applications in a separate namespace. I defined ArgoCD Applications that point to the Helm charts.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I used the &lt;code&gt;app of apps&lt;/code&gt; pattern where a single, parent ArgoCD application resource manages other child application resources (instead of manually deploying each application), which then manages the actual Kubernetes workloads.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnwcg3fu7wsepk1r373o.png" alt="app-of-apps" width="800" height="45"&gt;
&lt;/li&gt;
&lt;li&gt;I port-forwarded to ArgoCD to view the applications created by the single parent application with &lt;code&gt;kubectl -n &amp;lt;namespace&amp;gt; port-forward &amp;lt;argocd-service-name&amp;gt; 8000:80 --address 0.0.0.0&lt;/code&gt;.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3p3t2okhmflot1e4irxq.png" alt="monitoring-stack" width="800" height="404"&gt;
&lt;/li&gt;
&lt;li&gt;Here are my complete applications, which consist of: kube-prometheus-stack (which includes Prometheus, node exporter and Grafana), Tempo, Loki and my student-tracker application.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yh065qfcbr2snkzl4zd.png" alt="argocd-deployment" width="800" height="406"&gt;
&lt;/li&gt;
&lt;li&gt;I port-forwarded to access Grafana using the &lt;code&gt;kubectl -n &amp;lt;namespace&amp;gt; port-forward &amp;lt;grafana-service-name&amp;gt; 3000:80 --address 0.0.0.0&lt;/code&gt; command.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9nuoacghuuhxmaypme8p.png" alt="Grafana" width="800" height="408"&gt;
&lt;/li&gt;
&lt;li&gt;I added an extra configuration to add Tempo, Loki and Prometheus as my datasources automatically.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2mq7m0caqr15etsbw4hj.png" alt="data-sources" width="800" height="396"&gt;
&lt;/li&gt;
&lt;li&gt;I went ahead to test the data sources and create dashboards.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqajqilme1aucxx42hfx.png" alt="api-test" width="800" height="88"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7jle6vb3tjd9jxpl5wbv.png" alt="grafana-dashboard" width="800" height="366"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft09qdajejtudft1q2007.png" alt="metrics" width="800" height="201"&gt;
&lt;/li&gt;
&lt;/ul&gt;
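&lt;p&gt;The "extra configuration" for automatic data sources typically takes the form of a Grafana provisioning file like the sketch below; the service names, namespaces, and ports are assumptions based on common defaults, not the exact values used here:&lt;/p&gt;

```yaml
# Hypothetical Grafana datasource provisioning sketch for Prometheus, Loki and Tempo.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://kube-prometheus-stack-prometheus.monitoring.svc:9090
    isDefault: true
  - name: Loki
    type: loki
    url: http://loki.monitoring.svc:3100
  - name: Tempo
    type: tempo
    url: http://tempo.monitoring.svc:3200
```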


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;NB&lt;/strong&gt;: I made sure to allow inbound traffic for all the ports used in my EC2 instance security group.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Challenges and Fixes&lt;/strong&gt;: working on this project exposed me to plenty of errors, most of which I was able to resolve after considerable research.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Error 1&lt;/strong&gt;: I had issues accessing my application via the browser, and I debugged by doing an nmap scan to see my open and closed ports; it turned out my ports were closed except port 22.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkqj38dmgjmrib9eqckhx.png" alt="nmap-scan" width="800" height="96"&gt;
&lt;/li&gt;
&lt;li&gt;I patched my deployment with &lt;code&gt;kubectl patch deployment ingress-nginx-controller -n ingress-nginx -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'&lt;/code&gt;; this is because the kind cluster runs in a container, so exposing ports to the EC2 host and beyond won't work unless the pod is directly attached to the host's network.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3j6sba8aihar4yg8rd1j.png" alt="patch-service-type" width="800" height="19"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdoi8pe35qr6a5qzsnjt.png" alt="open-ports" width="800" height="85"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error 2&lt;/strong&gt;: I had issues accessing my application on the ArgoCD UI. This happened because, by default, Helm fetches values from the &lt;code&gt;values.yaml&lt;/code&gt; file, but I used a custom file named &lt;code&gt;my-values.yaml&lt;/code&gt;.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzawhedsejp6rdvj4peb7.png" alt="nil-pointer-error" width="800" height="304"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwcn7phxwat87svwbzl0k.png" alt="nil-pointer-error" width="800" height="199"&gt;
&lt;/li&gt;
&lt;li&gt;I fixed this error by adding default values to the default &lt;code&gt;values.yaml&lt;/code&gt; file and specifying the exact values file ArgoCD should use to deploy the application.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls46g2g5o3q7imsilfc2.png" alt="helm-value-file" width="800" height="50"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error 3&lt;/strong&gt;: I got a "module not found" error in my student-tracker logs on ArgoCD. It was caused by a wrong file path in my Dockerfile: I had rearranged my application's directory structure but failed to update the path.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2For9ntg2yzjouhqr0oeuw.png" alt="module-not-found" width="800" height="33"&gt;
&lt;/li&gt;
&lt;li&gt;I fixed this error by modifying the Dockerfile so Python could locate the module.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fibv8zxnldolnxhstiywh.png" alt="change-path-on-dockerfile" width="800" height="43"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;: This project was a turning point in my DevOps journey. By building a cloud-native FastAPI Student Tracker and deploying it with Docker, Kubernetes, Helm, CI/CD, GitOps, and monitoring, I gained hands-on experience with the full DevOps lifecycle. It taught me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How to design end-to-end workflows from local testing to production-ready deployments.&lt;/li&gt;
&lt;li&gt;How GitOps principles simplify cluster management with ArgoCD.&lt;/li&gt;
&lt;li&gt;How monitoring (logs, metrics, and traces) ties DevOps and observability together.&lt;/li&gt;
&lt;li&gt;That errors are part of the process; debugging taught me more than smooth deployments ever could.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;GitHub Repositories: &lt;a href="https://github.com/keneojiteli/student-tracker-devops-project" rel="noopener noreferrer"&gt;Application repo&lt;/a&gt;, &lt;a href="https://github.com/keneojiteli/student-tracker-app-with-gitops-and-monitoring" rel="noopener noreferrer"&gt;GitOps repo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudnative</category>
      <category>aws</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Cloud Networking in Practice: Building a Highly Available VPC on AWS with Terraform</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Wed, 25 Jun 2025 02:24:10 +0000</pubDate>
      <link>https://forem.com/keneojiteli/cloud-networking-in-practice-building-a-highly-available-vpc-on-aws-with-terraform-2ae8</link>
      <guid>https://forem.com/keneojiteli/cloud-networking-in-practice-building-a-highly-available-vpc-on-aws-with-terraform-2ae8</guid>
      <description>&lt;p&gt;After grasping the concepts of Networking, Subnetting, and IP addressing in &lt;a href="https://dev.to/keneojiteli/networking-basics-understanding-subnets-ip-addresses-subnet-masks-1e8i"&gt;Part 1 of my series&lt;/a&gt;, I was eager to move beyond theory. In this second part, I’ll guide you through the process of building a highly available VPC on AWS using Terraform, featuring public and private subnets, NAT gateways, and proper routing. Whether you're building your first cloud infrastructure or refining your core networking practices, this hands-on guide has something for you.&lt;/p&gt;

&lt;p&gt;This article assumes you are familiar with &lt;strong&gt;Terraform&lt;/strong&gt;, &lt;strong&gt;AWS CLI&lt;/strong&gt; and &lt;strong&gt;networking basics&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outline.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites.&lt;/li&gt;
&lt;li&gt;Key Terms &amp;amp; Components.&lt;/li&gt;
&lt;li&gt;Step-by-Step Terraform Setup.&lt;/li&gt;
&lt;li&gt;Provisioning the resources on AWS.&lt;/li&gt;
&lt;li&gt;Testing the setup.&lt;/li&gt;
&lt;li&gt;Cleaning up resources.&lt;/li&gt;
&lt;li&gt;Challenges and Fixes.&lt;/li&gt;
&lt;li&gt;Suggested Improvements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS account with programmatic access (access key + secret key)&lt;/strong&gt; - This is where the network resources will be created.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS CLI&lt;/strong&gt; - to interact with AWS on your local machine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; - installed on your local machine to provision a highly available network infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Terms &amp;amp; Components&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Private Cloud (VPC)&lt;/strong&gt; - a logically isolated section of a public cloud service where resources can be launched in a virtual network.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subnet&lt;/strong&gt; - a logical subdivision of an IP network, effectively creating a network within a network. Subnets are used to break down a larger network into smaller, more manageable segments. Subnetting helps improve network efficiency, traffic management, and security.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Public Subnet&lt;/strong&gt; - a network segment within a Virtual Private Cloud (VPC) that has a direct route to the internet through an Internet Gateway. This means that resources within the public subnet can communicate directly with the internet using public IP addresses.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Private Subnet&lt;/strong&gt; - a network segment without a direct route to the internet gateway. Resources in a private subnet do not receive public IP addresses and need a NAT gateway for outbound internet access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internet Gateway&lt;/strong&gt; - a horizontally scaled, highly available AWS-managed service that connects your VPC to the internet. An internet gateway allows instances in the public subnet to send traffic to the internet and receive incoming traffic. Without the internet gateway, internet access into the VPC will not be possible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Elastic IP&lt;/strong&gt; - a static IPv4 address; in this setup, one is attached to each NAT gateway.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network Address Translation (NAT) Gateway&lt;/strong&gt; - allows private resources in a private subnet to access the internet (for example, to download packages, perform an OS update), without being publicly accessible. For this project, I will be using a public NAT gateway (this is an AWS-managed service launched in the public subnet with access to the internet via the internet gateway that routes traffic from the private subnet to the internet, and also with an elastic IP attached to it).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Route Table&lt;/strong&gt; - contains a set of rules called routes that determine where network traffic from a subnet or gateway is directed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Route Table Association&lt;/strong&gt; - This is the link between a route table and a subnet. This association determines which routes (rules) in the route table are used to direct network traffic from that particular subnet or gateway.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Elastic Compute Cloud (EC2) Instances&lt;/strong&gt; - virtual servers in the AWS cloud that allow users to run applications and workloads. Here, they test the VPC setup: one instance in the public subnet (called a bastion host), and another in the private subnet (to verify internet/NAT access).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Group&lt;/strong&gt; - acts as a virtual firewall that controls inbound and outbound traffic for your EC2 instances. It's a set of rules that specify which traffic is allowed to reach or leave an instance. Each EC2 instance will have a security group attached to it, allowing SSH access on port 22.&lt;/li&gt;
&lt;/ul&gt;
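&lt;p&gt;To make the subnetting idea concrete, here is a small Python sketch using the standard &lt;code&gt;ipaddress&lt;/code&gt; module. The CIDR ranges are illustrative, not the exact ones used in this project:&lt;/p&gt;

```python
import ipaddress

# An illustrative VPC range (not necessarily the one used in this project).
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve the /16 into /24 subnets, as you might for public/private subnets per AZ.
subnets = list(vpc.subnets(new_prefix=24))

print(subnets[0])               # -> 10.0.0.0/24
print(subnets[1])               # -> 10.0.1.0/24
print(len(subnets))             # -> 256 (/24s inside a /16)
print(subnets[0].num_addresses) # -> 256 addresses per /24
```

&lt;p&gt;This is exactly the arithmetic the subnet CIDR variables in the Terraform setup below encode by hand.&lt;/p&gt;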

&lt;p&gt;&lt;strong&gt;Step-by-Step Terraform Setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I created a &lt;strong&gt;provider.tf&lt;/strong&gt; file where I configured the provider (AWS in this case) that my terraform configuration would interact with.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rebvheofd17p363gnuj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rebvheofd17p363gnuj.png" alt="provider.tf" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;
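&lt;p&gt;A minimal provider configuration along these lines might look like the following sketch; the variable name &lt;code&gt;var.region&lt;/code&gt; is an assumption standing in for whatever is declared in &lt;strong&gt;variable.tf&lt;/strong&gt;:&lt;/p&gt;

```hcl
# Sketch of provider.tf: pin the AWS provider and point it at a region variable.
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = var.region
}
```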

&lt;ul&gt;
&lt;li&gt;Then a &lt;strong&gt;variable.tf&lt;/strong&gt; file, where I declared input variables (such as region, availability zones, subnet names, CIDR ranges) used throughout the project.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fythdelp5jtjvq8z38ol6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fythdelp5jtjvq8z38ol6.png" alt="variables.tf" width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At this point, it is important to note that this is a &lt;strong&gt;highly available VPC architecture&lt;/strong&gt; designed to maintain uptime and service continuity even if part of the infrastructure fails.&lt;/li&gt;
&lt;li&gt;I then created each Terraform resource in a separate file (alternatively, you could define all resources in a single file, but I separated them for clarity) for better understanding and easy readability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vpc.tf&lt;/strong&gt; is the resource block for the VPC, which includes a CIDR block.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzt7dj5toay94yq66aqju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzt7dj5toay94yq66aqju.png" alt="vpc.tf" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;subnets.tf&lt;/strong&gt; creates 2 subnets (a public and private subnet) in each availability zone (considering high availability); this can be done easily using Terraform's &lt;code&gt;count&lt;/code&gt; argument and &lt;code&gt;element&lt;/code&gt; function.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5snoissrj1sum8put6ka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5snoissrj1sum8put6ka.png" alt="subnets.tf" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;
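&lt;p&gt;A sketch of how &lt;code&gt;count&lt;/code&gt; and &lt;code&gt;element&lt;/code&gt; achieve this is shown below; the variable and resource names are illustrative, not the exact ones in this project:&lt;/p&gt;

```hcl
# Sketch of subnets.tf: one public and one private subnet per availability zone.
resource "aws_subnet" "public" {
  count                   = length(var.public_subnet_cidrs)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = element(var.public_subnet_cidrs, count.index)
  availability_zone       = element(var.availability_zones, count.index)
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = length(var.private_subnet_cidrs)
  vpc_id            = aws_vpc.main.id
  cidr_block        = element(var.private_subnet_cidrs, count.index)
  availability_zone = element(var.availability_zones, count.index)
}
```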

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;igw.tf&lt;/strong&gt; creates an internet gateway in the specified VPC (referencing the vpc_id).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zdogsehx2b6p1rmz5kc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zdogsehx2b6p1rmz5kc.png" alt="igw.tf" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;nat.tf&lt;/strong&gt; launches a NAT gateway in each public subnet for each availability zone (This highly available architecture uses a public NAT Gateway in each AZ), but before the NAT gateway, an elastic IP is needed (the NAT gateway is dependent on the presence of an internet gateway and is attached to an elastic IP).&lt;/li&gt;
&lt;/ul&gt;
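&lt;p&gt;The Elastic IP plus per-AZ NAT gateway pattern looks roughly like this (a sketch with assumed resource names, not the exact code in the screenshot):&lt;/p&gt;

```hcl
# One Elastic IP per AZ for the NAT gateways
# ("domain = \"vpc\"" replaces the older deprecated "vpc = true").
resource "aws_eip" "nat" {
  count  = length(var.azs)
  domain = "vpc"
}

# One NAT gateway per AZ, placed in that AZ's public subnet;
# depends_on makes the internet-gateway dependency explicit.
resource "aws_nat_gateway" "this" {
  count         = length(var.azs)
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
  depends_on    = [aws_internet_gateway.igw]
}
```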

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswtg34j6sqw75oe95ija.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswtg34j6sqw75oe95ija.png" alt="nat.tf" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;rtb.tf&lt;/strong&gt; creates route tables for the public and private subnets: one route table for the public subnets, since they share a single internet gateway, and two route tables for the private subnets, one per NAT gateway.&lt;/li&gt;
&lt;/ul&gt;
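&lt;p&gt;A minimal sketch of that layout (assuming hypothetical resource names &lt;code&gt;aws_internet_gateway.igw&lt;/code&gt; and &lt;code&gt;aws_nat_gateway.this&lt;/code&gt;):&lt;/p&gt;

```hcl
# Single public route table: default route to the one internet gateway.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

# One private route table per AZ, each routing out through its own NAT gateway.
resource "aws_route_table" "private" {
  count  = length(var.azs)
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.this[count.index].id
  }
}
```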

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxd7bjiqe91zqur4qknv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkxd7bjiqe91zqur4qknv.png" alt="rtb.tf" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Next, I associated each route table with its subnets.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqynvk1wfo8ytcldo1dv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqynvk1wfo8ytcldo1dv.png" alt="rtb1.tf" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ec2.tf&lt;/strong&gt; creates two instances in a public and private subnet (I will be using one Availability Zone to test this part of the project).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
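&lt;p&gt;The association step can be sketched like this (assuming hypothetical resources &lt;code&gt;aws_subnet.public&lt;/code&gt;/&lt;code&gt;aws_subnet.private&lt;/code&gt; and &lt;code&gt;aws_route_table.public&lt;/code&gt;/&lt;code&gt;aws_route_table.private&lt;/code&gt;):&lt;/p&gt;

```hcl
# Every public subnet shares the single public route table.
resource "aws_route_table_association" "public" {
  count          = length(var.azs)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

# Each private subnet gets its AZ-local private route table (one per NAT gateway).
resource "aws_route_table_association" "private" {
  count          = length(var.azs)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}
```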

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap0vzp0yu3jptd9l7umb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap0vzp0yu3jptd9l7umb.png" alt="ec2.tf" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;
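&lt;p&gt;A sketch of the two instances, using index 0 since only one AZ is tested here (&lt;code&gt;var.ami_id&lt;/code&gt; and &lt;code&gt;var.key_name&lt;/code&gt; are assumed placeholders):&lt;/p&gt;

```hcl
# Bastion host in the public subnet, reachable from the internet.
resource "aws_instance" "bastion" {
  ami                         = var.ami_id
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.public[0].id
  vpc_security_group_ids      = [aws_security_group.bastion.id]
  key_name                    = var.key_name
  associate_public_ip_address = true
}

# Private host: no public IP, reachable only through the bastion.
resource "aws_instance" "private" {
  ami                    = var.ami_id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.private[0].id
  vpc_security_group_ids = [aws_security_group.private.id]
  key_name               = var.key_name
}
```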

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;sg.tf&lt;/strong&gt; creates two security groups attached to the instances above, allowing inbound traffic on port 22 and all outbound traffic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The bastion host's security group only allows SSH from my local machine, while the private instance's security group allows inbound SSH only from instances in the bastion's security group, hence the use of &lt;code&gt;security_groups&lt;/code&gt; instead of &lt;code&gt;cidr_blocks&lt;/code&gt; in its ingress rule.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
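&lt;p&gt;The SG-to-SG reference is the key idea; a minimal sketch (with a hypothetical &lt;code&gt;var.my_ip&lt;/code&gt; for my workstation's address):&lt;/p&gt;

```hcl
# Bastion SG: SSH only from my own machine's IP.
resource "aws_security_group" "bastion" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.my_ip}/32"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"          # all traffic
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Private SG: SSH allowed only from members of the bastion SG,
# referencing the security group rather than a CIDR range.
resource "aws_security_group" "private" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.bastion.id]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```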

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8iiu0f1euu5a7mh2uxby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8iiu0f1euu5a7mh2uxby.png" alt="sg.tf" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3flcfnbc5scrjmu7zbpc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3flcfnbc5scrjmu7zbpc.png" alt="sg1.tf" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provisioning the resources on AWS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I configured AWS CLI on my local machine with &lt;code&gt;aws configure&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fta5h8p0wpw94lutiar0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fta5h8p0wpw94lutiar0h.png" alt="aws configure" width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using &lt;code&gt;terraform init&lt;/code&gt;, I initialised the working directory containing my configuration files to download all the necessary provider plugins.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbj5xm61v88cxlpl9wyh6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbj5xm61v88cxlpl9wyh6.png" alt="terraform init" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With &lt;code&gt;terraform plan&lt;/code&gt;, I generated an execution plan to preview the changes Terraform intends to make to my infrastructure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfyge1yaresebv2h49p8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfyge1yaresebv2h49p8.png" alt="plan1" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6e9tbu2p3ppwlpbhxnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk6e9tbu2p3ppwlpbhxnz.png" alt="plan2" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn55mgrmyxfoy2qjyydah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn55mgrmyxfoy2qjyydah.png" alt="plan3" width="800" height="167"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwd38ysc79if1n5hflo8v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwd38ysc79if1n5hflo8v.png" alt="plan4" width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1brl5viopaqfjex4lzz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1brl5viopaqfjex4lzz.png" alt="plan5" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kbv6qi5qnyy964y1h0a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8kbv6qi5qnyy964y1h0a.png" alt="plan6" width="800" height="261"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foay3teq01p1o9if3w877.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foay3teq01p1o9if3w877.png" alt="plan7" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmt3rstsf3mbtsty6xpd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmt3rstsf3mbtsty6xpd.png" alt="plan8" width="800" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcs05cnxaev5xddewx2b6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcs05cnxaev5xddewx2b6.png" alt="plan9" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kjezvhnlllnsxqcd78o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5kjezvhnlllnsxqcd78o.png" alt="plan10" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnamapww3r3899e9knaso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnamapww3r3899e9knaso.png" alt="plan11" width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyu9iy08idkvy2g0u7p8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyu9iy08idkvy2g0u7p8y.png" alt="plan12" width="800" height="585"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3snpm9d66xx42cjyc5h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3snpm9d66xx42cjyc5h.png" alt="plan13" width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frebduh58gh660smtnjjn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frebduh58gh660smtnjjn.png" alt="plan14" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhz2k3yp6ukyo08idjhy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhz2k3yp6ukyo08idjhy.png" alt="plan15" width="800" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Then, I executed the actions proposed in the plan in dependency order using the &lt;code&gt;terraform apply&lt;/code&gt; command.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1is9msp9pudepgdhvp2n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1is9msp9pudepgdhvp2n.png" alt="apply1" width="800" height="84"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zar6aovoi9nywawevse.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zar6aovoi9nywawevse.png" alt="apply2" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8ewi5furjt8vag23yx9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8ewi5furjt8vag23yx9.png" alt="apply3" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After running &lt;code&gt;terraform apply&lt;/code&gt;, I verified the creation of the resources in my AWS account by checking the resource map (a visual representation of my VPC's architecture and resource relationships).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdbdi4njpkqrkfqi2g6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdbdi4njpkqrkfqi2g6u.png" alt="resource map" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I also checked the running instances (note the networking properties attached to each instance, and the absence of a public IP on the private host).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyn5c7f7pw4382ranodf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyn5c7f7pw4382ranodf.png" alt="instance1" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3kuwbkm63zefufjbh7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3kuwbkm63zefufjbh7r.png" alt="instance2" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing the Setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect via SSH: SSH into the bastion host from my machine, then hop from the bastion to the private EC2 instance (copying the key over with SCP or using SSH agent forwarding).&lt;/li&gt;
&lt;li&gt;Test internet access: run &lt;code&gt;curl&lt;/code&gt; on the private instance (this works via NAT), then remove the NAT gateway and repeat the &lt;code&gt;curl&lt;/code&gt; (it fails, confirming the NAT gateway's role).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To securely connect to an EC2 instance, a key pair is needed (I created one in the AWS console, downloaded it to my machine, and navigated to its directory via the CLI).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faewmc42c48lpl98k3jyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faewmc42c48lpl98k3jyr.png" alt="keypair" width="800" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After navigating to the key pair's location in a &lt;strong&gt;Git Bash&lt;/strong&gt; terminal, I connected to the bastion host with &lt;code&gt;ssh -i &amp;lt;your-key.pem&amp;gt; &amp;lt;default-user-based-on-machine&amp;gt;@&amp;lt;bastion_public_ip&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfsmv8ils6vjxhgrrg8c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfsmv8ils6vjxhgrrg8c.png" alt="ssh-bastion-host" width="722" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Having gained access to the bastion host, the next step is to SSH from it to the private host. This requires a key pair on the bastion host, so I copied my key from my local machine to the bastion using either &lt;strong&gt;secure copy&lt;/strong&gt; or &lt;strong&gt;SSH agent forwarding&lt;/strong&gt; (I used the former).&lt;/li&gt;
&lt;/ul&gt;
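&lt;p&gt;Using the same placeholder style as the SSH command above, the secure-copy step (run from the local machine, copying the key to the bastion user's home directory) looks like:&lt;/p&gt;

```
scp -i &amp;lt;your-key.pem&amp;gt; &amp;lt;your-key.pem&amp;gt; &amp;lt;default-user-based-on-machine&amp;gt;@&amp;lt;bastion_public_ip&amp;gt;:~
```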

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyke0sn6cdo1qmxkfsbv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyke0sn6cdo1qmxkfsbv6.png" alt="copy-keypair" width="726" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Then, I verified the key’s presence in the user’s home directory (which is the destination).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqwx7elvpf8vi34ia7nf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuqwx7elvpf8vi34ia7nf.png" alt="check-keypair-copy" width="722" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Then I accessed the private host via SSH using &lt;code&gt;ssh -i &amp;lt;your-key.pem&amp;gt; &amp;lt;default-user-based-on-machine&amp;gt;@&amp;lt;private_host_private_ip&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q6mnq8dtkkjz0zaakx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q6mnq8dtkkjz0zaakx8.png" alt="ssh-to-private-host" width="725" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Without restricting the key file's permissions first, the error below is encountered.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2tsag092eb8oonc3k147.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2tsag092eb8oonc3k147.png" alt="permission-error" width="718" height="251"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using &lt;code&gt;curl&lt;/code&gt;, I tested internet/NAT access on the private host; I got a successful response, meaning the NAT gateway works properly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
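&lt;p&gt;That error appears because OpenSSH refuses a private key whose file is readable by other users; a quick sketch of the fix, using a hypothetical key name:&lt;/p&gt;

```shell
# Stand-in for the downloaded key file (replace with your real .pem)
touch my-key.pem
# Owner read-only; ssh rejects keys with more permissive modes
chmod 400 my-key.pem
# Confirm the octal mode
stat -c '%a' my-key.pem
```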

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcylicix3s5i5u7ut7p6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcylicix3s5i5u7ut7p6.png" alt="verify-NAT-access" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I removed the NAT gateway and tried to update packages and ping an external host, but neither succeeded; without the NAT gateway, the private instance cannot initiate outbound connections to the internet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26ygarg2r9lc0qg09y0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26ygarg2r9lc0qg09y0u.png" alt="no-nat" width="800" height="21"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e24yzdir2khlpikd5r4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e24yzdir2khlpikd5r4.png" alt="no-nat1" width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx006yqzique1ka0whkgi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx006yqzique1ka0whkgi.png" alt="no-nat2" width="800" height="41"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cleaning up Resources&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I cleaned up by running &lt;code&gt;terraform destroy&lt;/code&gt; to avoid incurring costs; this command deprovisions all objects managed by the Terraform configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f9r85uj4pxswmls4s9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7f9r85uj4pxswmls4s9x.png" alt="destroy1" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10127yvyqidfqqmj1mzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10127yvyqidfqqmj1mzu.png" alt="destroy2" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges and Fixes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error 1- Failed to query available provider packages&lt;/strong&gt;: The timeout error was due to poor network connectivity. I switched to a better network, and it was resolved.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a3x352s8k4ns63cifc5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a3x352s8k4ns63cifc5.png" alt="error1" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error 2- No valid credential sources found&lt;/strong&gt;: this occurred because the AWS provider needs credentials. I fixed it by running &lt;code&gt;aws configure&lt;/code&gt; to set up my AWS credentials and region for the Terraform provider.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsppmj4nfw2h60quoztni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsppmj4nfw2h60quoztni.png" alt="error2" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error 3- Incorrect attribute value type&lt;/strong&gt;: a syntax error, resolved by switching from the deprecated &lt;code&gt;shared_credentials_file&lt;/code&gt; attribute (a single string, "~/.aws/credentials") to &lt;code&gt;shared_credentials_files&lt;/code&gt; (a list of strings, ["~/.aws/credentials"]).&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2kjsrcrmrkepywsbwwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2kjsrcrmrkepywsbwwg.png" alt="error3" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
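&lt;p&gt;The corrected provider block looks roughly like this (the region here is an assumption; use your own):&lt;/p&gt;

```hcl
# AWS provider v4+ expects a list of credential files;
# the singular shared_credentials_file attribute is deprecated.
provider "aws" {
  region                   = "us-east-1"
  shared_credentials_files = ["~/.aws/credentials"]
}
```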

&lt;p&gt;&lt;strong&gt;Suggested Improvements&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remote state backend using AWS S3 (for secure, shared state).&lt;/li&gt;
&lt;li&gt;Refactor using Terraform modules to make the VPC reusable and maintainable component by component.&lt;/li&gt;
&lt;/ul&gt;
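&lt;p&gt;For the first suggestion, a remote state backend is a small sketch like the one below (bucket and key names are hypothetical, and the bucket must exist before running &lt;code&gt;terraform init&lt;/code&gt;):&lt;/p&gt;

```hcl
# Store state in S3 so it is shared, durable, and encrypted at rest.
terraform {
  backend "s3" {
    bucket  = "my-terraform-state-bucket"
    key     = "vpc/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}
```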

&lt;p&gt;In Part 2, I turned theory into practice, building a fault-tolerant, multi-AZ VPC with secure networking. I explored key AWS components, tested routing logic, and tackled real-world errors. With this foundation, Part 3 will focus on production-grade deployment: adding IAM, storage, monitoring, and a containerized app on top of this VPC.&lt;/p&gt;

&lt;p&gt;Need Terraform code? &lt;a href="https://github.com/keneojiteli/vpc_architecture_with_terraform" rel="noopener noreferrer"&gt;Check it here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Did this help your cloud networking skills? Drop a comment!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpc</category>
      <category>terraform</category>
      <category>networking</category>
    </item>
    <item>
      <title>Networking Basics: Understanding Subnets, IP Addresses &amp; Subnet Masks.</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Wed, 04 Jun 2025 15:16:22 +0000</pubDate>
      <link>https://forem.com/keneojiteli/networking-basics-understanding-subnets-ip-addresses-subnet-masks-1e8i</link>
      <guid>https://forem.com/keneojiteli/networking-basics-understanding-subnets-ip-addresses-subnet-masks-1e8i</guid>
      <description>&lt;p&gt;&lt;strong&gt;Networking&lt;/strong&gt; was always that one topic I kept putting off until I realised it’s the backbone of almost everything I do as a &lt;strong&gt;DevOps Engineer&lt;/strong&gt;. Concepts like &lt;strong&gt;Subnetting&lt;/strong&gt;, &lt;strong&gt;CIDR notation&lt;/strong&gt;, &lt;strong&gt;IP classes&lt;/strong&gt;, and the &lt;strong&gt;OSI model&lt;/strong&gt; used to feel overwhelming and sounded like magic to me. But with some patience and practical examples, I finally cracked the code, and I want to help you do the same.&lt;/p&gt;

&lt;p&gt;In this article, I will break down everything I wish I understood from the basics of how computers communicate using IP addresses, what subnetting means, and how everything connects within the OSI model. Whether you are a complete beginner, someone brushing up, preparing for a cloud certification, or diving into DevOps, this article will simplify those tricky networking topics in a way that just makes sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outline&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How Computers Communicate Using IP.&lt;/li&gt;
&lt;li&gt;Understanding IP addressing.&lt;/li&gt;
&lt;li&gt;Subnets, CIDR notation and calculating IP Ranges.&lt;/li&gt;
&lt;li&gt;Calculating subnet info and IP ranges with examples.&lt;/li&gt;
&lt;li&gt;OSI Model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How Computers Communicate Using IP&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Computer networking&lt;/strong&gt; is how computers and devices talk to each other and share information. Just as humans communicate via texts and calls on phones, computers communicate over networks to send and receive data. This communication happens over connections such as cables (wired) or Wi-Fi (wireless), using rules called protocols (like IP, TCP, and HTTP).&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;IP (Internet Protocol) address&lt;/strong&gt; is a unique address that identifies a device on the Internet or a local network. Think of it like a house address: just as no two houses share the same address, no two devices on the same network share the same IP address.&lt;/p&gt;

&lt;p&gt;It’s important to note that computers only understand binary (0s and 1s), so they work with IP addresses (for example, 192.168.10.20 converted to binary), while humans use hostnames (for example, &lt;a href="http://www.google.com" rel="noopener noreferrer"&gt;www.google.com&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Private IP addresses usually range from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;10.0.0.0 – 10.255.255.255&lt;/li&gt;
&lt;li&gt;172.16.0.0 – 172.31.255.255&lt;/li&gt;
&lt;li&gt;192.168.0.0 – 192.168.255.255&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While Public IP addresses are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Routable on the internet.&lt;/li&gt;
&lt;li&gt;Assigned by your ISP or cloud provider.&lt;/li&gt;
&lt;/ul&gt;
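&lt;p&gt;Python's standard &lt;code&gt;ipaddress&lt;/code&gt; module can check these ranges directly. Here is a minimal sketch (the helper name is mine, for illustration):&lt;/p&gt;

```python
import ipaddress

# The three RFC 1918 private ranges listed above
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private(addr):
    """Return True if addr falls in any of the private ranges above."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_RANGES)

print(is_private("192.168.10.20"))  # True: private
print(is_private("8.8.8.8"))        # False: public, routable on the internet
```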

&lt;p&gt;&lt;strong&gt;Understanding IP addresses&lt;/strong&gt;&lt;br&gt;
There are currently two versions of IP in use: &lt;strong&gt;IPv4&lt;/strong&gt; (the older version, which is running out of addresses; it uses a 32-bit address space written in dot-decimal notation) and &lt;strong&gt;IPv6&lt;/strong&gt; (which uses a 128-bit address space written in hexadecimal notation, providing a virtually limitless number of addresses). This article focuses on IPv4.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding IPv4 Addressing&lt;/strong&gt;&lt;br&gt;
An IPv4 address is a &lt;strong&gt;32-bit number&lt;/strong&gt; divided into &lt;strong&gt;4 octets&lt;/strong&gt; separated by a dot. This means that each octet is made up of &lt;strong&gt;8 bits&lt;/strong&gt; (which is also equivalent to &lt;strong&gt;1 byte)&lt;/strong&gt; and each octet ranges from &lt;strong&gt;0 - 255&lt;/strong&gt; (where 0 is the minimum value and 255 is the maximum value). The image below shows a typical IPv4 address in both decimal and binary format.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60rlnyxyuj5h3zxs0jp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F60rlnyxyuj5h3zxs0jp3.png" alt="ipv4" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;
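&lt;p&gt;The decimal-to-binary conversion shown above can be sketched in a few lines of Python (the helper names are illustrative):&lt;/p&gt;

```python
def ip_to_binary(ip):
    # Format each decimal octet as an 8-bit binary string
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

def binary_to_ip(binary):
    # Convert each 8-bit group back to decimal
    return ".".join(str(int(octet, 2)) for octet in binary.split("."))

print(ip_to_binary("192.168.10.20"))  # 11000000.10101000.00001010.00010100
```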

&lt;p&gt;An IP has 2 parts, namely the &lt;strong&gt;host bit&lt;/strong&gt; (identifies devices within the network) and the &lt;strong&gt;network bit&lt;/strong&gt; (identifies the network, and is the reserved or fixed portion), but this is determined by the IP address class (IP addresses are divided into classes based on their first octet).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5yup9edtbtvca5ztdp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5yup9edtbtvca5ztdp4.png" alt="Ip address classes" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are 5 IP address classes. This article focuses on Classes A, B, and C, which are used by large organisations, medium-sized businesses, and small networks, respectively. Classes D and E are reserved for multicast and experimental use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;N.B.&lt;/strong&gt;: Class A ranges from 1 - 126 and Class B starts at 128, so one might wonder whether 127 was omitted. It was not; 127 is reserved for loopback, which is why Class A has only 126 usable networks.&lt;/p&gt;

&lt;p&gt;From the diagram above, I will break down how to calculate the &lt;strong&gt;number of available hosts&lt;/strong&gt; and the &lt;strong&gt;number of networks&lt;/strong&gt; for each class.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To calculate the number of networks&lt;/strong&gt; = 2 ^ (Number of network bits)&lt;br&gt;
The number of network bits depends on how many bits are reserved for identifying networks in that class.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Class&lt;/th&gt;
&lt;th&gt;Network Bits&lt;/th&gt;
&lt;th&gt;Number of Networks (2^network bits)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;7 bits&lt;/td&gt;
&lt;td&gt;2⁷ = 128 → 128 - 2 = &lt;strong&gt;126&lt;/strong&gt; usable&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B&lt;/td&gt;
&lt;td&gt;14 bits&lt;/td&gt;
&lt;td&gt;2¹⁴ = &lt;strong&gt;16,384&lt;/strong&gt; networks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;21 bits&lt;/td&gt;
&lt;td&gt;2²¹ = &lt;strong&gt;2,097,152&lt;/strong&gt; networks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This calculation is based on the default subnet mask (a 32-bit value used to distinguish the network portion of an IP address from the host portion) for each class (refer to the first diagram in this article).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Class A&lt;/strong&gt;;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Subnet mask in decimal = 255.0.0.0      
Subnet mask in binary = 11111111.00000000.00000000.00000000 (the computer only understands numbers in binary format). 
Network bit = 11111111 = 2 ^ (8 - 1) = 2 ^ 7 = 128 (the first bit in Class A is always 0, leaving 7 usable bits, so we subtract 1 from the 8 bits of the first octet)
Total Class A network = 128
Usable Class A network = 128 - 2 = 126 (0.x.x.x and 127.x.x.x are reserved for special routing and loopback, respectively).
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For Class B&lt;/strong&gt;;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Subnet mask in decimal = 255.255.0.0      
Subnet mask in binary = 11111111.11111111.00000000.00000000 
Network bit = 11111111.11111111 = 2 ^ (16 - 2) = 2 ^ 14 = 16,384 (the first 2 bits in Class B are always 10, so we subtract 2 from the 16 bits of the first 2 octets)
Total Class B network = 16,384
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For Class C&lt;/strong&gt;;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Subnet mask in decimal = 255.255.255.0      
Subnet mask in binary = 11111111.11111111.11111111.00000000 
Network bit = 11111111.11111111.11111111 = 2 ^ (24 - 3) = 2 ^ 21 = 2,097,152 (the first 3 bits in Class C are always 110, so we subtract 3 from the 24 bits of the first 3 octets)
Total Class C network = 2,097,152
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
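&lt;p&gt;The three calculations above can be condensed into a short Python sketch (the dictionary layout here is mine, for illustration):&lt;/p&gt;

```python
# Networks per class = 2 ** (network bits - fixed leading bits)
# Class A: 8-bit network portion, leading bit always 0 (1 fixed bit)
# Class B: 16-bit network portion, leading bits always 10 (2 fixed bits)
# Class C: 24-bit network portion, leading bits always 110 (3 fixed bits)
classes = {"A": (8, 1), "B": (16, 2), "C": (24, 3)}

network_counts = {
    name: 2 ** (bits - fixed) for name, (bits, fixed) in classes.items()
}

for name, count in network_counts.items():
    print(f"Class {name}: {count:,} networks")
# Class A additionally loses 0.x.x.x and 127.x.x.x, leaving 126 usable
```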



&lt;p&gt;Note that the network bits are the octets made up of 1s in the subnet mask.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To calculate the number of available hosts&lt;/strong&gt; = 2 ^ (Number of host bits) - 2, where the 2 subtracted accounts for the &lt;strong&gt;network ID&lt;/strong&gt; and the &lt;strong&gt;broadcast address&lt;/strong&gt;.&lt;br&gt;
The process is similar to calculating the network bits, except that the host bits are the 0s in the subnet mask, and we subtract 2 for each class.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Class A&lt;/strong&gt;;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Subnet mask in decimal = 255.0.0.0      
Subnet mask in binary = 11111111.00000000.00000000.00000000 
Host bit = 00000000.00000000.00000000 = 2 ^ 24 = 16,777,216
Total Class A hosts = 16,777,216
Usable Class A hosts = 16,777,216 - 2 = 16,777,214
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For Class B&lt;/strong&gt;;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Subnet mask in decimal = 255.255.0.0      
Subnet mask in binary = 11111111.11111111.00000000.00000000 
Host bit = 00000000.00000000 = 2 ^ 16 = 65,536
Total Class B hosts = 65,536
Usable Class B hosts = 65,536 - 2 = 65,534
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For Class C&lt;/strong&gt;;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Subnet mask in decimal = 255.255.255.0      
Subnet mask in binary = 11111111.11111111.11111111.00000000 
Host bit = 00000000 = 2 ^ 8 = 256
Total Class C hosts = 256
Usable Class C hosts = 256 - 2 = 254
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
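&lt;p&gt;The host calculations for all three classes follow the same pattern, so they can be sketched in a few lines of Python (the dictionary layout here is mine):&lt;/p&gt;

```python
# Usable hosts per class = 2 ** host_bits - 2
# (subtracting the network ID and the broadcast address)
host_bits = {"A": 24, "B": 16, "C": 8}

usable_hosts = {name: 2 ** bits - 2 for name, bits in host_bits.items()}

for name, count in usable_hosts.items():
    print(f"Class {name}: 2^{host_bits[name]} - 2 = {count:,} usable hosts")
```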



&lt;p&gt;&lt;strong&gt;Here is a quick recap&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Class&lt;/th&gt;
&lt;th&gt;First Octet Range&lt;/th&gt;
&lt;th&gt;Default Subnet Mask&lt;/th&gt;
&lt;th&gt;Number of Hosts per Network&lt;/th&gt;
&lt;th&gt;Number of Networks&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;A&lt;/td&gt;
&lt;td&gt;1 - 126&lt;/td&gt;
&lt;td&gt;255.0.0.0 (/8)&lt;/td&gt;
&lt;td&gt;16,777,214&lt;/td&gt;
&lt;td&gt;128 (2⁷) - 2 = 126&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;B&lt;/td&gt;
&lt;td&gt;128 - 191&lt;/td&gt;
&lt;td&gt;255.255.0.0 (/16)&lt;/td&gt;
&lt;td&gt;65,534&lt;/td&gt;
&lt;td&gt;16,384 (2¹⁴)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;192 - 223&lt;/td&gt;
&lt;td&gt;255.255.255.0 (/24)&lt;/td&gt;
&lt;td&gt;254&lt;/td&gt;
&lt;td&gt;2,097,152 (2²¹)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Subnets, CIDR notation and Calculating IP Ranges&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Subnet&lt;/strong&gt; stands for sub-network: a smaller part of a larger network. Subnets make networking more secure, organised, and efficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CIDR (Classless Inter-Domain Routing)&lt;/strong&gt; is a more flexible method for defining network size. CIDR removes the strict class rules (A, B, C) and allows more flexible subnetting.&lt;/p&gt;

&lt;p&gt;A typical example of CIDR is 192.168.1.0/24, where /24 indicates that 24 bits are reserved for the network part and the remaining 8 bits are allocated to the host part (host bits = 32 - CIDR, here 32 - 24 = 8).&lt;/p&gt;
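&lt;p&gt;Python's &lt;code&gt;ipaddress&lt;/code&gt; module exposes these numbers directly; a small sketch:&lt;/p&gt;

```python
import ipaddress

# /24 splits the 32 bits into 24 network bits and 8 host bits
net = ipaddress.ip_network("192.168.1.0/24")

network_bits = net.prefixlen         # 24
host_bits = 32 - net.prefixlen       # 8
total_addresses = net.num_addresses  # 2 ** 8 = 256
usable_hosts = total_addresses - 2   # network ID and broadcast removed

print(network_bits, host_bits, total_addresses, usable_hosts)
```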

&lt;p&gt;&lt;strong&gt;Calculating Subnet Info and IP Ranges using examples&lt;/strong&gt;&lt;br&gt;
Before delving into examples, here is a list of how to calculate subnet info and IP ranges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Calculate the network bit from the CIDR block (this is the number after the forward slash).&lt;/li&gt;
&lt;li&gt;Calculate the host bit: (32 - CIDR).&lt;/li&gt;
&lt;li&gt;Calculate the number of available hosts (usable IP addresses): 2 ^ (number of host bits) - 2.&lt;/li&gt;
&lt;li&gt;Calculate the network address, first assignable IP, last assignable IP, and the broadcast address.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;N.B.&lt;/strong&gt;: As the subnet mask gets larger, the IP ranges get smaller.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Example 1: 192.168.1.1/24
IP address class: C
CIDR: /24
Subnet mask(decimal): 255.255.255.0
Subnet mask(binary): 11111111.11111111.11111111.00000000
IP address(binary): 11000000.10101000.00000001.00000001
Network bit: 24
Host bit: 32 - 24 = 8 (a quick way to see this is to write the subnet mask in binary: the 1s mark the network portion, and the remaining 0s are the host bits)
Number of available hosts: 2 ^ 8 - 2 = 256 - 2 = 254
Network Address: 192.168.1.0
First Assignable IP: 192.168.1.1
Last Assignable IP: 192.168.1.254
Broadcast Address: 192.168.1.255
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Example 2: 205.150.65.0/26
IP address class: C
CIDR: /26
Subnet mask(decimal): 255.255.255.192 (the default /24 mask 255.255.255.0 is extended by 2 bits)
Subnet mask(binary): 11111111.11111111.11111111.11000000 (since the CIDR is /26, the last octet becomes 11000000)
IP address(binary): 11001101.10010110.01000001.00000000
Network bit: 26
Host bit: 32 - 26 = 6 
Number of available hosts: 2 ^ 6 - 2 = 64 - 2 = 62
Network Address: 205.150.65.0
First Assignable IP: 205.150.65.1
Last Assignable IP: 205.150.65.62
Broadcast Address: 205.150.65.63
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
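&lt;p&gt;Both worked examples can be verified with Python's &lt;code&gt;ipaddress&lt;/code&gt; module (note that &lt;code&gt;ip_network&lt;/code&gt; expects the network address, e.g. 192.168.1.0/24 rather than 192.168.1.1/24):&lt;/p&gt;

```python
import ipaddress

# ip_network computes the same values derived by hand above
for cidr in ("192.168.1.0/24", "205.150.65.0/26"):
    net = ipaddress.ip_network(cidr)
    hosts = list(net.hosts())  # assignable IPs, excluding network and broadcast
    print(cidr)
    print("  Subnet mask:      ", net.netmask)
    print("  Network address:  ", net.network_address)
    print("  First assignable: ", hosts[0])
    print("  Last assignable:  ", hosts[-1])
    print("  Broadcast address:", net.broadcast_address)
    print("  Usable hosts:     ", len(hosts))
```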



&lt;p&gt;&lt;strong&gt;OSI Model&lt;/strong&gt;&lt;br&gt;
The OSI (Open Systems Interconnection) model is a conceptual framework that describes how data travels through a network. It divides network communication into seven layers, each with specific responsibilities.&lt;/p&gt;

&lt;p&gt;A mnemonic to remember the 7 layers is &lt;strong&gt;All People Seem To Need Data Processing&lt;/strong&gt; or &lt;strong&gt;Please Do Not Throw Sausage Pizza Away&lt;/strong&gt; from top to bottom or bottom-up, respectively.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;OSI Layer&lt;/th&gt;
&lt;th&gt;What Happens Here&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;Application&lt;/td&gt;
&lt;td&gt;User initiates a request to the server with a specified protocol&lt;/td&gt;
&lt;td&gt;HTTP, HTTPS, FTP, DNS, SMTP&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Presentation&lt;/td&gt;
&lt;td&gt;Translates, encrypts, compresses data&lt;/td&gt;
&lt;td&gt;SSL/TLS, JPEG, MP4, ASCII&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Session&lt;/td&gt;
&lt;td&gt;Starts, manages, and ends sessions. For example, a browser maintains a session to avoid re-authenticating with the server on every request.&lt;/td&gt;
&lt;td&gt;API sessions, NetBIOS, RPC&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Transport&lt;/td&gt;
&lt;td&gt;Breaks data into chunks (segments); ensures delivery&lt;/td&gt;
&lt;td&gt;TCP, UDP, ports (e.g., port 80, 443)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Network&lt;/td&gt;
&lt;td&gt;Finds path to destination; adds IP addresses&lt;/td&gt;
&lt;td&gt;IP, ICMP, Routers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Data Link&lt;/td&gt;
&lt;td&gt;Switches convert packets into frames; a MAC address (which identifies a device on the local network) is resolved from the IP address received from the previous layer and added to each frame&lt;/td&gt;
&lt;td&gt;Ethernet, MAC address, ARP, Switches&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Physical&lt;/td&gt;
&lt;td&gt;Data is transmitted as electrical, optical, or radio signals over the physical medium&lt;/td&gt;
&lt;td&gt;Cables, Wi-Fi, Hubs, Network Interface&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is a beginner-friendly article that explains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How computers communicate using IP addressing.&lt;/li&gt;
&lt;li&gt;Subnets, CIDR notation and calculating IP ranges.&lt;/li&gt;
&lt;li&gt;How to calculate subnet info and IP ranges with real examples.&lt;/li&gt;
&lt;li&gt;The OSI model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, you should have a solid grasp of how IP addressing works, what subnets do, how to calculate hosts and ranges, and how the OSI model fits into the bigger networking picture. These aren’t just theories, they are skills needed in cloud computing, DevOps, and system design. Whether you’re debugging network issues or configuring infrastructure on AWS, understanding these fundamentals gives you confidence and clarity.&lt;/p&gt;

&lt;p&gt;In Part 2, I will put this knowledge into practice by building real subnets on AWS and testing traffic flow within a real network infrastructure, all using Terraform.&lt;/p&gt;

&lt;p&gt;Share this with someone who is learning DevOps or networking!&lt;/p&gt;

&lt;p&gt;Until then, stay curious and happy subnetting! ✌&lt;/p&gt;

</description>
      <category>networking</category>
      <category>ipaddress</category>
    </item>
    <item>
      <title>Dockerizing a FastAPI CRUD App: Automating Builds and Pushes with GitHub Actions</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Wed, 09 Apr 2025 19:33:49 +0000</pubDate>
      <link>https://forem.com/keneojiteli/dockerizing-a-fastapi-crud-app-automating-builds-and-pushes-with-github-actions-4o9n</link>
      <guid>https://forem.com/keneojiteli/dockerizing-a-fastapi-crud-app-automating-builds-and-pushes-with-github-actions-4o9n</guid>
<description>&lt;p&gt;FastAPI is a high-performance Python framework for building APIs, and Docker allows us to containerize applications to make them easier to deploy. In this guide, we'll containerize a FastAPI app with Docker and automate building and pushing the image to a private container registry using GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python and FastAPI dependencies&lt;/strong&gt; (including FastAPI, uvicorn, and pydantic for validation).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; - to build the FastAPI application into an image and push it to a private registry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub and GitHub Actions&lt;/strong&gt; - for version control and CI/CD to automate the image build and push.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terminal and a code editor / IDE&lt;/strong&gt; - for code development, image building, and CI/CD.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps taken&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Testing the application locally.&lt;/li&gt;
&lt;li&gt;Manually building the image.&lt;/li&gt;
&lt;li&gt;Automating with GitHub Actions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Testing the application locally&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install Python and FastAPI using pip, then verify the installation by checking the versions.&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;main.py&lt;/code&gt; and a &lt;code&gt;requirements.txt&lt;/code&gt; (I ran &lt;code&gt;pip freeze&lt;/code&gt; and saved the output to the requirements.txt file).&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;main.py&lt;/code&gt; contains the CRUD app built with fastAPI.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frku4nvat7hj474rho78o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frku4nvat7hj474rho78o.jpg" alt="crud app" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb33t3uh4ry39nfma4dms.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb33t3uh4ry39nfma4dms.jpg" alt="crud app-1" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I also installed an &lt;code&gt;email-validator&lt;/code&gt;, since my validation with &lt;strong&gt;pydantic&lt;/strong&gt; requires validating the email field.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5v82l48gl43nefwmmmi.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz5v82l48gl43nefwmmmi.jpg" alt="Install email validator" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I ran the application with &lt;code&gt;uvicorn main:app --reload&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3yfqhyoy71ka80a2f5u.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3yfqhyoy71ka80a2f5u.jpg" alt="run with uvicorn" width="800" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I accessed the application via &lt;code&gt;http://127.0.0.1:8000/users/&lt;/code&gt; or via &lt;code&gt;http://127.0.0.1:8000/docs#/&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn4jfl02b6ne4ci29a4o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn4jfl02b6ne4ci29a4o.jpg" alt="Access API" width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevyzkofjyv9io9je54vk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevyzkofjyv9io9je54vk.png" alt="Access via swagger UI" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Manually build the image&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I built the image with the Dockerfile in the project directory (as a best practice, I copied the requirements file and installed the dependencies before copying the rest of the application files, to leverage Docker's layer caching).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;Dockerfile&lt;/code&gt; contains the instructions to create the Docker image.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqox3w8opxrni9uanirmb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqox3w8opxrni9uanirmb.png" alt="Dockerfile" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;
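&lt;p&gt;The caching practice described above can be sketched roughly like this (a minimal example, not the article's exact file, which is shown in the screenshot above; the base image and port are assumptions):&lt;/p&gt;

```dockerfile
# Hypothetical base image and port; adjust to match your project
FROM python:3.11-slim

WORKDIR /app

# Copy the requirements first so the dependency-install layer is cached
# and only rebuilt when requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application files
COPY . .

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```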

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w0d6f6lkbqamq2w2xhr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w0d6f6lkbqamq2w2xhr.jpg" alt="build image" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I created a container with the new image to test that the application worked fine. I had to &lt;strong&gt;exec&lt;/strong&gt; into the container to test with &lt;strong&gt;curl&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda48evkjdt0sypsq5uqp.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fda48evkjdt0sypsq5uqp.jpg" alt="fastapi container" width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsi4ye9vtcxtxzf5tg6za.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsi4ye9vtcxtxzf5tg6za.jpg" alt="Testing post method" width="800" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F514muq2fsdz6nfnkzrbm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F514muq2fsdz6nfnkzrbm.jpg" alt="Testing get method" width="800" height="44"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automate the image build and push with GitHub Actions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Working with the GitHub Container Registry (a private registry) requires generating a Personal Access Token (PAT), which serves as the password to log in to the registry and push the image.&lt;/li&gt;
&lt;li&gt;To generate a PAT, navigate to &lt;strong&gt;settings&lt;/strong&gt; =&amp;gt; &lt;strong&gt;developer settings&lt;/strong&gt; =&amp;gt; &lt;strong&gt;personal access tokens (tokens classic)&lt;/strong&gt; and &lt;strong&gt;generate a new token (classic)&lt;/strong&gt; with &lt;strong&gt;read&lt;/strong&gt;, &lt;strong&gt;write&lt;/strong&gt;, and &lt;strong&gt;delete&lt;/strong&gt; packages permissions. Ensure you:

&lt;ul&gt;
&lt;li&gt;Give the token a name.&lt;/li&gt;
&lt;li&gt;Set an expiration date.&lt;/li&gt;
&lt;li&gt;Copy the token to a safe place, as it is viewable only once.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtvf551dwkr658ef4l8o.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtvf551dwkr658ef4l8o.jpg" alt="generate PAT" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before setting up my workflow, I added my generated PAT and my username as repository secrets, which were used for authentication to the GitHub Container Registry.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foiwuem0fbda445uhlujh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foiwuem0fbda445uhlujh.jpg" alt="how to add secrets to github" width="447" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6a1vjdwf16ftrx9bk6ii.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6a1vjdwf16ftrx9bk6ii.jpg" alt="Add PAT as secret" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymn5zmno68tk44iqpagv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymn5zmno68tk44iqpagv.jpg" alt="Add username as secret" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;After adding secrets, this is what it should look like.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdaqu5s5sa9fp5sw73wak.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdaqu5s5sa9fp5sw73wak.jpg" alt="Added secrets" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To automate with GitHub Actions, I created a &lt;code&gt;.github/workflows&lt;/code&gt; folder with a &lt;code&gt;build_push.yml&lt;/code&gt; file containing the workflow.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyck37tqkhge79h914fnm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyck37tqkhge79h914fnm.jpg" alt="build_push.yml file" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqjdqjkkxzos3mitdywh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqjdqjkkxzos3mitdywh.png" alt="build_push.yml file" width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;
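&lt;p&gt;A workflow of this kind can be sketched roughly as follows (the secret names and image name here are placeholders, not the article's exact values, which are shown in the screenshots above):&lt;/p&gt;

```yaml
# Hypothetical sketch: GH_USERNAME / GH_PAT and the image name are placeholders
name: Build and push image

on: [push, pull_request]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to GitHub Container Registry
        run: echo "${{ secrets.GH_PAT }}" | docker login ghcr.io -u "${{ secrets.GH_USERNAME }}" --password-stdin

      - name: Build the image
        run: docker build -t ghcr.io/${{ secrets.GH_USERNAME }}/fastapi-crud:latest .

      - name: Push the image
        run: docker push ghcr.io/${{ secrets.GH_USERNAME }}/fastapi-crud:latest
```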

&lt;ul&gt;
&lt;li&gt;After completing the workflow file, initialize the repository, add the files and the remote origin, then commit and push the code to GitHub.&lt;/li&gt;
&lt;li&gt;Pushing the code to GitHub triggers the workflow (it is configured to run on push and pull request events), which builds the image, logs in to the registry and pushes the image.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfq5gw56e3rq5hqllupo.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfq5gw56e3rq5hqllupo.jpg" alt="Build succeeded" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7utnextjpv0xh5xes0fl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7utnextjpv0xh5xes0fl.jpg" alt="Breakdown of build" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpglx9lkedl231bnqrqxg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpglx9lkedl231bnqrqxg.jpg" alt="Summary of successful build" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After pushing to the registry, the image can be viewed using this URL format: &lt;code&gt;https://github.com/users/YOUR_USERNAME/packages&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6zb9nl20an3wgpt86d2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6zb9nl20an3wgpt86d2.png" alt="Image in registry" width="800" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjd8cp11xqk5naaws5lgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjd8cp11xqk5naaws5lgb.png" alt="Image in registry1" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges encountered and fixes&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I had a couple of unsuccessful builds, and I realised there are some rules to follow when working with GitHub Container Registry. Below are some of the challenges I encountered and how I resolved them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut7t1m6rtn5psqnli6fw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut7t1m6rtn5psqnli6fw.jpg" alt="unsuccessful builds" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some of the challenges I encountered included&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I discovered that GitHub Container Registry (GHCR) requires all image names to be lowercase. A suggested fix was to change &lt;code&gt;tags: ghcr.io/${{ github.actor }}/fast-api-crud:latest&lt;/code&gt; to &lt;code&gt;tags: ghcr.io/${{ github.repository_owner }}/fast-api-crud:latest&lt;/code&gt; (the suggestion claimed this would yield a lowercase username, which turned out not to be the case).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8j4el2gj5glvj8opqsq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8j4el2gj5glvj8opqsq.jpg" alt="GHCR error: repo name" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The option above did not work, so I renamed my GitHub account to remove all uppercase letters (this is not advisable, but I took the risk) and updated my secret accordingly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qjzgzheenc5pqqhd1a2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qjzgzheenc5pqqhd1a2.jpg" alt="update username" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5e2wcjvc4suzdcxtn0y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5e2wcjvc4suzdcxtn0y.jpg" alt="GHCR error: repo name1" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The next error was related to organizations. I do not know much about organizations, so I deleted the organizations attached to my account (they were no longer useful to me) and renamed the secret for my token from &lt;code&gt;TOKEN&lt;/code&gt; to &lt;code&gt;GHCR_PAT&lt;/code&gt; (I also applied this change to the workflow).&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4ytc5cqbt87brrtr29x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4ytc5cqbt87brrtr29x.jpg" alt="GHCR error: organization package" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I also forgot to tell &lt;code&gt;pip&lt;/code&gt; to read the list of dependencies from the specified file (requirements.txt). I fixed it by adding the &lt;code&gt;-r&lt;/code&gt; flag.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdukivt0ckmcnsvujzkv5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdukivt0ckmcnsvujzkv5.jpg" alt="GHCR error: incomplete installation" width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This tutorial shows how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Structure a FastAPI application for seamless containerization.&lt;/li&gt;
&lt;li&gt;Write a Dockerfile to package the app into a container.&lt;/li&gt;
&lt;li&gt;Automate builds and pushes using GitHub Actions.&lt;/li&gt;
&lt;li&gt;Troubleshoot common issues with private container registries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Because this setup is automated, any code change triggers a new Docker image build and push to GitHub Container Registry, which streamlines the deployment process.&lt;/p&gt;

&lt;p&gt;Thank you for reading, kindly check out the &lt;a href="https://github.com/keneojiteli/automating-fastAPI-with-GHA/tree/main" rel="noopener noreferrer"&gt;repository&lt;/a&gt;. Till my next project, happy building! ✌🏽&lt;/p&gt;

</description>
      <category>docker</category>
      <category>fastapi</category>
      <category>githubactions</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Configuration Management With Ansible.</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Fri, 09 Jun 2023 08:42:12 +0000</pubDate>
      <link>https://forem.com/keneojiteli/configuration-management-with-ansible-35ch</link>
      <guid>https://forem.com/keneojiteli/configuration-management-with-ansible-35ch</guid>
<description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
In the world of DevOps, where automation is essential, Ansible is one of the tools used to automate tedious, manual tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Ansible?&lt;/strong&gt;&lt;br&gt;
Ansible is a configuration management tool used to automate repetitive tasks such as cloud provisioning and application deployment. &lt;/p&gt;

&lt;p&gt;Ansible uses the concept of control and managed nodes: it connects to the managed nodes and pushes modules to them from a centralized place. The modules are executed on the nodes and automatically removed when the action is complete.&lt;/p&gt;

&lt;p&gt;Ansible is agentless in the sense that no additional software needs to be installed on the target machines; it simply executes commands or actions over SSH or Windows Remote Management connections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Basic Ansible Terms&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hosts: the machines (physical or remote) that Ansible manages.&lt;/li&gt;
&lt;li&gt;Inventory: a collection of all the hosts and groups that Ansible manages.&lt;/li&gt;
&lt;li&gt;Group: Several hosts grouped together that share a common attribute.&lt;/li&gt;
&lt;li&gt;Module: Units of code that Ansible sends to the nodes for execution or actions run by tasks.&lt;/li&gt;
&lt;li&gt;Tasks: Units of action that combine a module and its arguments along with some other parameters.&lt;/li&gt;
&lt;li&gt;Playbooks: An ordered list of tasks along with their necessary parameters that define a recipe to configure a system.&lt;/li&gt;
&lt;li&gt;YAML: A popular and simple data format that is very clean and understandable by humans (Ansible playbooks are written in YAML format).&lt;/li&gt;
&lt;li&gt;Roles: Redistributable units of organization that make it easier for users to share automation code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Demo&lt;/strong&gt;: In this article, I will demonstrate how to use ad hoc commands (a quick way to run a task on one or more managed nodes) and how to use an Ansible playbook to install an application on a virtual machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A cloud provider (Azure with an active subscription) - to create virtual machines.&lt;/li&gt;
&lt;li&gt;An SSH client (I will be using Mobaxterm) to SSH into my VMs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your Azure account and create a virtual machine (for now I will create one VM, which will be my control node).&lt;/li&gt;
&lt;li&gt;Using an SSH client, I will SSH into my control node.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftaq3re6kvsg3yvk5res.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftaq3re6kvsg3yvk5res.png" alt="Mobaxterm"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczhthzza2oy4vd85fmi8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczhthzza2oy4vd85fmi8.png" alt="successful SSH"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To be able to SSH into the other machines, I will create an SSH key on my control node using the &lt;code&gt;ssh-keygen -t rsa&lt;/code&gt; command.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn64hfzgrvzasgvdjofyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn64hfzgrvzasgvdjofyv.png" alt="create ssh key"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next, I will navigate to the key directory and copy the public key.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1j51jo355znwmcuqto5.png" alt="navigate to public key"&gt;
&lt;/li&gt;
&lt;li&gt;Go to the Azure portal, search for SSH keys and create a key resource with the public key obtained in the step above.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6k4n7blexedhwkjmyv6y.png" alt="upload public key"&gt;
&lt;/li&gt;
&lt;li&gt;I will create 2 VMs that will serve as my managed nodes, provisioned with the public key to enable a smooth SSH connection between the control node and the managed nodes.&lt;/li&gt;
&lt;li&gt;SSH into VM1 using &lt;code&gt;ssh &amp;lt;ip address of vm&amp;gt;&lt;/code&gt; on the control node.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gjdcudyas64nzlvmqhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6gjdcudyas64nzlvmqhx.png" alt="ssh to vm 1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;And use &lt;code&gt;exit&lt;/code&gt; to log out of the machine (I repeated the same process on my second VM).
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxpstujifgn2z9egvh6i3.png" alt="exit ssh connection"&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I will install ansible on the control node (no installation will be done on the managed node because ansible is agentless).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Firstly, I will run &lt;code&gt;sudo apt-get update&lt;/code&gt; command to download information for all packages listed in the sources file.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7g1193of0bz5rjk0nby.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7g1193of0bz5rjk0nby.png" alt="update package"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then install ansible with &lt;code&gt;sudo apt install ansible&lt;/code&gt; command.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6d406yywk9tos8c53ag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6d406yywk9tos8c53ag.png" alt="install ansible"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffku5cx9ueedt1ukywswu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffku5cx9ueedt1ukywswu.png" alt="install ansible"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I verified that the installation was successful using the &lt;code&gt;ansible --version&lt;/code&gt; command.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fytm7crfh8tn97t6x0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fytm7crfh8tn97t6x0j.png" alt="ansible version"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I will create an inventory file to store the list of my hosts and groups (I placed my hosts in a &lt;code&gt;test&lt;/code&gt; group); note that the default inventory file is located at &lt;code&gt;/etc/ansible/hosts&lt;/code&gt;.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jft8nl28r5le8033b98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jft8nl28r5le8033b98.png" alt="inventory"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Using ad hoc command: &lt;code&gt;ansible test -i inventory -m ping&lt;/code&gt; to execute a ping command on all hosts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
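&lt;p&gt;As a concrete illustration of the inventory step above, an inventory with a &lt;code&gt;test&lt;/code&gt; group can be written in Ansible's YAML inventory format like this (the host names and IP addresses are placeholders, not the values from my setup):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# inventory.yml -- illustrative sketch; the article uses its own inventory file
all:
  children:
    test:
      hosts:
        vm1:
          ansible_host: 10.0.0.4   # placeholder IP of managed node 1
        vm2:
          ansible_host: 10.0.0.5   # placeholder IP of managed node 2
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With this file, &lt;code&gt;ansible test -i inventory.yml -m ping&lt;/code&gt; would target both hosts in the &lt;code&gt;test&lt;/code&gt; group, just like the ad hoc command shown below.&lt;/p&gt;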

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlpfphxonrtcqc5e1czc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvlpfphxonrtcqc5e1czc.png" alt="ad hoc command"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Using an Ansible playbook to deploy a website on virtual machines&lt;/strong&gt;&lt;br&gt;
Ansible playbooks are the simplest way to automate repetitive tasks in the form of reusable, consistent configuration files. They are written in YAML and contain an ordered set of steps to be executed on the managed nodes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My playbook, which deploys a website on my target machines, is in the &lt;a href="https://github.com/KeneOjiteli/getting-started-with-ansible/blob/main/main.yml" rel="noopener noreferrer"&gt;main.yml&lt;/a&gt; file.&lt;/li&gt;
&lt;/ul&gt;
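&lt;p&gt;The actual playbook is in the linked repository; as a rough sketch (not the author's exact file), a minimal playbook that deploys a static website with nginx on the &lt;code&gt;test&lt;/code&gt; group could look like this. The &lt;code&gt;index.html&lt;/code&gt; source file is a hypothetical example:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# main.yml -- illustrative sketch of a website-deployment playbook
- name: Deploy a website on the target machines
  hosts: test
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Copy the site files to the web root
      ansible.builtin.copy:
        src: index.html        # hypothetical local file with the site content
        dest: /var/www/html/index.html

    - name: Ensure nginx is running
      ansible.builtin.service:
        name: nginx
        state: started
&lt;/code&gt;&lt;/pre&gt;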

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fac9rcw7zraoiagsk5ed6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fac9rcw7zraoiagsk5ed6.png" alt="ls command"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before executing the playbook command, notice that the VMs serve nothing yet (the site is not accessible).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx182yoci03l2neq16sdm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx182yoci03l2neq16sdm.png" alt="empty vm"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the &lt;code&gt;ansible-playbook -i inventory main.yml&lt;/code&gt; command to run the playbook; notice that it runs each task on the 2 managed nodes/target machines specified in my inventory file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nvpiwr24by6igqckdiv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nvpiwr24by6igqckdiv.png" alt="run ansible playbook"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnm1dozw5f5cfs083nrug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnm1dozw5f5cfs083nrug.png" alt="run ansible playbook1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the playbook is run, I can now access the website on both target machines (notice the different IP addresses for both machines on the images below)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9qjgkqm0t9t73ii9lgn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm9qjgkqm0t9t73ii9lgn.png" alt="target 1"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45rmol6nmbd8yqnqwjay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F45rmol6nmbd8yqnqwjay.png" alt="target 2"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Things to note&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ansible is agentless, meaning no additional software installation is required on the managed nodes.&lt;/li&gt;
&lt;li&gt;Ansible simply executes commands over SSH.&lt;/li&gt;
&lt;li&gt;The default inventory file is located at &lt;code&gt;/etc/ansible/hosts&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Ansible modules are idempotent, which means changes are applied only if needed; the current state is checked and nothing is done unless it differs from the specified final state.&lt;/li&gt;
&lt;li&gt;Ansible playbooks are written in YAML format.&lt;/li&gt;
&lt;/ul&gt;
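&lt;p&gt;Idempotence can be seen directly with a single task (a generic example, not from the article's playbook): running it twice changes nothing the second time.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# The first run reports "changed" if nginx was absent; the second run
# reports "ok", because the desired state already holds.
- name: Ensure nginx is installed
  ansible.builtin.apt:
    name: nginx
    state: present
&lt;/code&gt;&lt;/pre&gt;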

&lt;p&gt;&lt;strong&gt;Challenge Encountered&lt;/strong&gt;: the main challenge I encountered was the inability to establish an SSH connection between my VMs; this was resolved by creating an SSH key pair and passing the public key to the managed nodes while the control node kept the private key.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;: through this tutorial, we have learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;About Ansible and its basic terms.&lt;/li&gt;
&lt;li&gt;How to use a simple ad hoc command to ping host machines.&lt;/li&gt;
&lt;li&gt;How to use a playbook to deploy a website on a virtual machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Kindly visit the project &lt;a href="https://github.com/KeneOjiteli/getting-started-with-ansible" rel="noopener noreferrer"&gt;repo&lt;/a&gt; and thank you for reading. &lt;/p&gt;

</description>
      <category>ansible</category>
      <category>devops</category>
      <category>automation</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>My SCA Cloud School Experience: Cohort 4.</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Fri, 26 May 2023 12:33:16 +0000</pubDate>
      <link>https://forem.com/keneojiteli/my-sca-cloud-school-experience-cohort-4-i6e</link>
      <guid>https://forem.com/keneojiteli/my-sca-cloud-school-experience-cohort-4-i6e</guid>
<description>&lt;p&gt;At some point, I became passionate about Cloud and DevOps Engineering and decided to make a career of it, after trying my hand at frontend, backend, and cybersecurity.&lt;/p&gt;

&lt;p&gt;She Code Africa (SCA) Cloud School is a cohort-style boot camp targeted at women across Africa who are looking to start or switch careers in the Site Reliability Engineering (SRE) field. It is organized by She Code Africa (a registered non-profit that empowers young girls and women across Africa with the technical and soft skills needed to start or scale their careers in STEM) in partnership with Deimos.&lt;/p&gt;

&lt;p&gt;I joined the SCA community while trying to gain experience in front-end development, and I learned about Cloud School through the community and LinkedIn. I applied for the boot camp with high hopes and attended a Twitter Space about the program (where we were advised to learn technologies like Docker and Kubernetes), and I also got to connect with ladies from past cohorts.&lt;/p&gt;

&lt;p&gt;After refreshing my email daily for weeks, I finally got a mail saying I had passed the first stage of the selection process, with instructions for the second stage: recording a video of myself talking about my journey, how the program would help in that journey, and why I was a good fit. I passed both stages and was overjoyed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gKRsAEMy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dm89c7vt9hoa5hf15xzb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gKRsAEMy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dm89c7vt9hoa5hf15xzb.png" alt="SCA success email" width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The 50 successful ladies had an onboarding call where we were briefed on how the whole program would run, and we were split into 2 classes (Ruby and Emerald) before it commenced.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start of Cloud School&lt;/strong&gt;: The program started with introductions to our facilitators and a general introduction to cloud computing (my facilitator was really awesome because she made the class interactive). As a big fan of AWS, I initially started losing interest in the program because the main cloud provider taught in Cloud School was Azure; before Cloud School I was never an Azure fan, but thanks to the program and my facilitator, I now have a special relationship with Microsoft Azure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Some of the projects I worked on in SCA cloud school are&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/keneojiteli/deploy-a-docker-app-to-app-services-on-azure-5d3h"&gt;Hosting a container app on Azure app services&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/keneojiteli/creating-a-database-with-azure-sql-database-4kha"&gt;Creating a DB on Azure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/keneojiteli/connecting-azure-app-service-to-azure-sql-database-and-storage-account-using-azure-cli-part-1-1ok7"&gt;Connecting an app service to a storage account and SQL db on Azure&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/keneojiteli/deploying-a-container-app-on-a-kubernetes-cluster-4bcp"&gt;Deploy a web application on a Kubernetes cluster&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;During the program, we also had evaluations to test our progress, and a final project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Project&lt;/strong&gt;: We had one week to work on our project, which was one of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Containerizing an application and deploying on Kubernetes.&lt;/li&gt;
&lt;li&gt;Deploying an application to Azure using a web app service and SQL.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I went for the Docker and Kubernetes &lt;a href="https://github.com/KeneOjiteli/sca-final-project"&gt;project&lt;/a&gt; and wrote a detailed &lt;a href="https://dev.to/keneojiteli/deploying-an-application-on-kubernetes-3c27"&gt;article&lt;/a&gt; because it was both more interesting and more challenging (I encountered some errors that halted my progress, and I almost didn't submit my project).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I gained from SCA cloud school&lt;/strong&gt;&lt;br&gt;
Through the SCA Cloud school, I gained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Knowledge of Azure; I became conversant with various Azure services by working on hands-on projects.&lt;/li&gt;
&lt;li&gt;Knowledge of microservices, containerization and container orchestration, which are must-knows for a DevOps engineer.&lt;/li&gt;
&lt;li&gt;How to share my learning by constantly writing articles.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
I would like to thank the founder of She Code Africa, &lt;strong&gt;Ada Nduka Oyom&lt;/strong&gt;, and the entire team for coming up with this initiative; my facilitators &lt;strong&gt;Oluwadamilola Aremu&lt;/strong&gt; and &lt;strong&gt;Chiamaka Obitube&lt;/strong&gt; for their effort and patience; and the Ruby class ladies.&lt;/p&gt;

&lt;p&gt;I highly recommend this program for cloud computing beginners, and also for intermediate learners who would like to learn with peers.&lt;/p&gt;

&lt;p&gt;I look forward to getting the internship opportunity with Deimos as this will be a great plus in my career.&lt;/p&gt;

&lt;p&gt;Thank you for reading. You can follow me on &lt;a href="https://twitter.com/kenealfayeed"&gt;Twitter&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/kenechukwuojiteli/"&gt;LinkedIn&lt;/a&gt;, and &lt;a href="https://github.com/KeneOjiteli"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>learning</category>
      <category>careerdevelopment</category>
    </item>
    <item>
      <title>Deploying an Application on Kubernetes.</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Sun, 21 May 2023 20:48:21 +0000</pubDate>
      <link>https://forem.com/keneojiteli/deploying-an-application-on-kubernetes-3c27</link>
      <guid>https://forem.com/keneojiteli/deploying-an-application-on-kubernetes-3c27</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Kubernetes is an open-source technology used to automatically deploy, scale, and manage containerized applications. It is a popular container orchestration solution because it enables control of several containers as a single entity, as an alternative to managing each container separately.&lt;/p&gt;

&lt;p&gt;This is a walkthrough of deploying an application to Azure Kubernetes Service, similar to &lt;a href="https://github.com/KeneOjiteli/deploy-container-app-on-k8s-cluster"&gt;my previous walkthrough&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What will be covered&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure Kubernetes Service &lt;/li&gt;
&lt;li&gt;Azure Container Registry&lt;/li&gt;
&lt;li&gt;Docker Container&lt;/li&gt;
&lt;li&gt;Azure CLI&lt;/li&gt;
&lt;li&gt;Kubectl&lt;/li&gt;
&lt;li&gt;Simple Node app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;br&gt;
A note-keeper app written in Node.js will be containerized and pushed to ACR, after which a Kubernetes cluster will be created on AKS, and a Deployment and a Service with 4 pod replicas will be created via a manifest file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terminal and Azure CLI installed on the local machine - some of the steps for this project (such as connecting to the Kubernetes cluster) will be done using the Azure CLI.&lt;/li&gt;
&lt;li&gt;Docker installed on the local machine - which will be used to build an image and push it to a registry (Azure Container Registry in this demo).&lt;/li&gt;
&lt;li&gt;Azure account with an active subscription.&lt;/li&gt;
&lt;li&gt;Code editor or IDE - which will be used to write the code that will be containerized.&lt;/li&gt;
&lt;li&gt;Terraform installed - an IaC tool that will be used to provision infrastructure on Azure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: The required services can be created using &lt;strong&gt;Terraform, the Azure portal, or the Azure CLI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting Started&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Testing sample app locally

&lt;ul&gt;
&lt;li&gt;Write the code, or fork and clone the app from &lt;a href="https://github.com/KeneOjiteli/sca-final-project"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to the application folder in the terminal using &lt;code&gt;cd &amp;lt;foldername&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Install dependencies using &lt;code&gt;npm install&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run the app to test it using &lt;code&gt;node &amp;lt;filename&amp;gt;&lt;/code&gt; or &lt;code&gt;npm start&lt;/code&gt; (if a start script is defined in package.json)
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
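&lt;p&gt;The local test steps above can be summarized as a short terminal session (the entry-point file name &lt;code&gt;app.js&lt;/code&gt; is an assumption; use your project's actual entry point):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# fork and clone the sample app (or use your own)
git clone https://github.com/KeneOjiteli/sca-final-project.git
cd sca-final-project

# install the dependencies declared in package.json
npm install

# start the app locally, e.g. if the entry point is app.js
node app.js
&lt;/code&gt;&lt;/pre&gt;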

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--I1eHj5PZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ov2s0gn0yqd78viiw31z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--I1eHj5PZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ov2s0gn0yqd78viiw31z.png" alt="Test on local" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to Azure using &lt;code&gt;az login&lt;/code&gt; on the terminal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EZf6RAwO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iirdko3rhy9oflr6d6tl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EZf6RAwO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iirdko3rhy9oflr6d6tl.png" alt="Login on azure CLI" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provision infrastructure on Azure with Terraform using the following commands: &lt;code&gt;terraform init&lt;/code&gt;, &lt;code&gt;terraform fmt&lt;/code&gt;, &lt;code&gt;terraform validate&lt;/code&gt;, &lt;code&gt;terraform plan&lt;/code&gt;, and &lt;code&gt;terraform apply&lt;/code&gt; (note that a Terraform file has a .tf extension).&lt;/li&gt;
&lt;/ul&gt;
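&lt;p&gt;As a rough sketch, a Terraform configuration for the resources used in this demo could look like the following (resource names, location, SKU, and node size are illustrative assumptions, not the exact code used here):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# main.tf - illustrative only
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "demo-rg"
  location = "westeurope"
}

resource "azurerm_container_registry" "acr" {
  name                = "demoacr12345"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  sku                 = "Basic"
  admin_enabled       = true # needed for username/password docker login
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "demo-aks"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  dns_prefix          = "demoaks"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_B2s"
  }

  identity {
    type = "SystemAssigned"
  }
}
&lt;/code&gt;&lt;/pre&gt;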

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dxXORx3t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hs5blyib7jnzu453098f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dxXORx3t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hs5blyib7jnzu453098f.png" alt="terraform init" width="768" height="411"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--QNcLevcY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2v2sg0p0ww7zs5iqc3wl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--QNcLevcY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2v2sg0p0ww7zs5iqc3wl.png" alt="format terraform file" width="773" height="67"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--wHIqEkHI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zfhrv1zv2j58j94hwswx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wHIqEkHI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zfhrv1zv2j58j94hwswx.png" alt="terraform plan" width="800" height="303"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c6Q4a9Jo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tjeqnffq9vsljvzh83b0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c6Q4a9Jo--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tjeqnffq9vsljvzh83b0.png" alt="terraform apply" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Head to the Azure portal to confirm that the infrastructure has been provisioned.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--E9E8_KuC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3q4209hhbj40v4y9c0kp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--E9E8_KuC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3q4209hhbj40v4y9c0kp.png" alt="ACR dashboard" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HyPBxYLN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1n9qz6hr1zqn7jsvajp7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HyPBxYLN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1n9qz6hr1zqn7jsvajp7.png" alt="AKS dashboard" width="800" height="393"&gt;&lt;/a&gt; &lt;br&gt;
&lt;strong&gt;Containerize the application&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;After resources have been provisioned on Azure, we need to log in to ACR using credentials (which include the login server, username, and password) from &lt;code&gt;container registry =&amp;gt; settings =&amp;gt; access keys&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to your project directory via the terminal and log in using the details from the step above: &lt;code&gt;docker login &amp;lt;login server&amp;gt; --username &amp;lt;username&amp;gt; --password &amp;lt;password&amp;gt;&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build the Docker image using the command &lt;code&gt;docker build -t &amp;lt;name&amp;gt;:&amp;lt;tag&amp;gt; .&lt;/code&gt;, where &lt;code&gt;.&lt;/code&gt; is the current directory.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
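&lt;p&gt;Concretely, the login and build steps might look like this (the registry name, image name, and tag are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# credentials are copied from container registry =&amp;gt; settings =&amp;gt; access keys
docker login demoacr12345.azurecr.io --username demoacr12345 --password &amp;lt;password&amp;gt;

# tag the image with the registry's login server so it can be pushed to ACR later
docker build -t demoacr12345.azurecr.io/note-keeper:v1 .
&lt;/code&gt;&lt;/pre&gt;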

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gauUApHX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkh3l6s27e6y9mh93q5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gauUApHX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kkh3l6s27e6y9mh93q5d.png" alt="build image" width="800" height="428"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check that the image exists using the &lt;code&gt;docker images&lt;/code&gt; command (this lists all available images; the first image is my newly created one).
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7mkHozlb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j3e2v8upig933r4fe1zi.png" alt="List of docker images" width="800" height="150"&gt;
&lt;/li&gt;
&lt;li&gt;Push the image to ACR using &lt;code&gt;docker push &amp;lt;name&amp;gt;:&amp;lt;tag&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
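&lt;p&gt;Note that for the push to reach ACR, the image name must be prefixed with the registry's login server; if it was built without that prefix, it can be retagged first (all names here are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# retag a locally built image for ACR, then push it
docker tag note-keeper:v1 demoacr12345.azurecr.io/note-keeper:v1
docker push demoacr12345.azurecr.io/note-keeper:v1
&lt;/code&gt;&lt;/pre&gt;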

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--S8dO0LXX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/csur46mee4uk5m3wzi98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--S8dO0LXX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/csur46mee4uk5m3wzi98.png" alt="docker push" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go over to &lt;code&gt;azure portal =&amp;gt; container registry&lt;/code&gt; to verify that the push was successful.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OQ2qJaxz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ptyl23xe8lrkk27q9zd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OQ2qJaxz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ptyl23xe8lrkk27q9zd.png" alt="image on azure portal" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I created a container from my image (practicing how to publish ports, run a container in detached mode, give a container a specific name, and access the app on my local machine via the published port).&lt;/li&gt;
&lt;/ul&gt;
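&lt;p&gt;A sketch of that &lt;code&gt;docker run&lt;/code&gt; invocation (the image name, container name, and ports are assumptions):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# run detached (-d), name the container, and publish container port 3000 on host port 8080
docker run -d --name note-keeper -p 8080:3000 demoacr12345.azurecr.io/note-keeper:v1

# the app is then reachable at http://localhost:8080
&lt;/code&gt;&lt;/pre&gt;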

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XpRbJBi7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/433cz8zodnbebhdnkgve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XpRbJBi7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/433cz8zodnbebhdnkgve.png" alt="docker run" width="800" height="222"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4VP4p8sY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qfue3vf6au1gtw3ncqcm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4VP4p8sY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qfue3vf6au1gtw3ncqcm.png" alt="localhost" width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connect to Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl&lt;/strong&gt; will be used to manage the Kubernetes cluster. Run the command &lt;code&gt;az aks get-credentials --resource-group &amp;lt;resourcegroupname&amp;gt; --name &amp;lt;clustername&amp;gt;&lt;/code&gt; to configure kubectl and connect to the cluster we previously created, then verify the connection using &lt;code&gt;kubectl get nodes&lt;/code&gt;, which returns a list of the cluster nodes.&lt;/li&gt;
&lt;/ul&gt;
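&lt;p&gt;Putting those two commands together (the resource group and cluster names are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# merge the cluster's credentials into ~/.kube/config
az aks get-credentials --resource-group demo-rg --name demo-aks

# verify the connection by listing the cluster nodes
kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;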

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BUAv-M1p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vzgrlcakkr8g1wkqs0lz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BUAv-M1p--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vzgrlcakkr8g1wkqs0lz.png" alt="configure kubectl" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes Deployment&lt;/strong&gt;: A Deployment is a Kubernetes object used to manage Pods via ReplicaSets in a declarative way. It provides rolling updates, control, and rollback functionality. This deployment file will be used to:&lt;/li&gt;
&lt;li&gt;Create a Deployment (which automatically creates a ReplicaSet), a Service, and Pods.&lt;/li&gt;
&lt;/ul&gt;
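&lt;p&gt;The manifest described above could look roughly like this (the labels, image name, and ports are illustrative assumptions):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: note-keeper
spec:
  replicas: 4
  selector:
    matchLabels:
      app: note-keeper
  template:
    metadata:
      labels:
        app: note-keeper
    spec:
      containers:
      - name: note-keeper
        image: demoacr12345.azurecr.io/note-keeper:v1
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: note-keeper
spec:
  type: LoadBalancer
  selector:
    app: note-keeper
  ports:
  - port: 80
    targetPort: 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Applying it with &lt;code&gt;kubectl apply -f deployment.yaml&lt;/code&gt; creates the Deployment (and its ReplicaSet), the Service, and the Pods in one step.&lt;/p&gt;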

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mN46LyQ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h6mkexzworpgl7hvz4qn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mN46LyQ1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h6mkexzworpgl7hvz4qn.png" alt="deployment" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;List all the objects created at once using the &lt;code&gt;kubectl get&lt;/code&gt; command.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dgIHEJ6N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ovpe33lsvrkvaodcij66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dgIHEJ6N--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ovpe33lsvrkvaodcij66.png" alt="All objects" width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To get more details about the deployment, use the &lt;code&gt;kubectl describe deployments &amp;lt;deploymentname&amp;gt;&lt;/code&gt; command.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qiJ5B-li--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wgkdm4qjur1d1true9p7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qiJ5B-li--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wgkdm4qjur1d1true9p7.png" alt="describe" width="800" height="623"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scale the Deployment up from 2 to 4 replicas.&lt;/li&gt;
&lt;/ul&gt;
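&lt;p&gt;Scaling can be done either by editing &lt;code&gt;replicas&lt;/code&gt; in the manifest and re-applying it, or imperatively (the deployment name is a placeholder):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl scale deployment note-keeper --replicas=4

# confirm the new pod count
kubectl get pods
&lt;/code&gt;&lt;/pre&gt;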

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Lr_K5-4K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5uotwznvcgr33dr05cui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Lr_K5-4K--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5uotwznvcgr33dr05cui.png" alt="scale down" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The change can also be seen in the events log.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GIaxFlOc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bbthmsj5fw97h42d6ykc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GIaxFlOc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bbthmsj5fw97h42d6ykc.png" alt="describe" width="800" height="551"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CUXPH7T7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x03d99w8fo1l8flqk8q1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CUXPH7T7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x03d99w8fo1l8flqk8q1.png" alt="event" width="800" height="121"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deleting a pod shows how a Deployment always ensures the desired number of pods is present.&lt;/li&gt;
&lt;/ul&gt;
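&lt;p&gt;A quick way to observe this self-healing behaviour (the pod name is a placeholder; use one from &lt;code&gt;kubectl get pods&lt;/code&gt;):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;kubectl delete pod note-keeper-6c7d9f4b8-abcde

# a replacement pod is created so the count matches spec.replicas
kubectl get pods
&lt;/code&gt;&lt;/pre&gt;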

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UFYTdC7---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x0fksgt323o997a5ynz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UFYTdC7---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x0fksgt323o997a5ynz2.png" alt="delete pod" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Head to &lt;code&gt;azure portal =&amp;gt; kubernetes cluster =&amp;gt; kubernetes resources =&amp;gt; services and ingresses&lt;/code&gt; to view the services, ports, and pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fbIZ594x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/led1cpmaz8x2gqsf78zx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fbIZ594x--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/led1cpmaz8x2gqsf78zx.png" alt="view service" width="800" height="389"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fwneODAk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzvb937ixji4ewqhlano.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fwneODAk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzvb937ixji4ewqhlano.png" alt="view ports" width="800" height="386"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--H_CRFGbq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2xhml9vrd18gkmhu6n2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--H_CRFGbq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2xhml9vrd18gkmhu6n2.png" alt="view pods" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The app can be accessed using the external IP of the service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PYJMRB73--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dmmra8iwz6syf3vv50ez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PYJMRB73--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dmmra8iwz6syf3vv50ez.png" alt="Note keeper app" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenges encountered during this project&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a new Azure account due to credit card issues.&lt;/li&gt;
&lt;li&gt;I encountered the &lt;code&gt;ImagePullBackOff&lt;/code&gt; and &lt;code&gt;ErrImagePull&lt;/code&gt; errors, which resulted from not properly connecting my Azure Container Registry to my Kubernetes cluster; this was detected while debugging.&lt;/li&gt;
&lt;/ul&gt;
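&lt;p&gt;For reference, the usual fix for this on AKS is to grant the cluster pull access to the registry (the names below are placeholders, and the command requires sufficient permissions on both resources):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;az aks update --resource-group demo-rg --name demo-aks --attach-acr demoacr12345
&lt;/code&gt;&lt;/pre&gt;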

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8r5fmokN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33mjdqi1vfn3fb757ten.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8r5fmokN--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33mjdqi1vfn3fb757ten.png" alt="debug" width="800" height="238"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Changing my demo app from a note-keeper app to a basic Node app and back to my note-keeper app due to the &lt;code&gt;ImagePullBackOff&lt;/code&gt; and &lt;code&gt;ErrImagePull&lt;/code&gt; errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
This tutorial shows how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Docker image for a Node.js application.&lt;/li&gt;
&lt;li&gt;Push the image to ACR.&lt;/li&gt;
&lt;li&gt;Deploy it on Kubernetes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The End.&lt;/p&gt;

&lt;p&gt;Thank you for reading.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>tutorial</category>
      <category>docker</category>
    </item>
    <item>
      <title>Deploying a Container App on a Kubernetes Cluster.</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Sun, 14 May 2023 18:16:32 +0000</pubDate>
      <link>https://forem.com/keneojiteli/deploying-a-container-app-on-a-kubernetes-cluster-4bcp</link>
      <guid>https://forem.com/keneojiteli/deploying-a-container-app-on-a-kubernetes-cluster-4bcp</guid>
      <description>&lt;p&gt;Delving into the world of Containers and Container Orchestration, one might wonder what these terms mean.&lt;/p&gt;

&lt;p&gt;Containers are a way to package software in a format that can run isolated on a shared operating system. Unlike Virtual Machines, containers do not bundle a full operating system - only libraries and settings required to make the software work.&lt;/p&gt;

&lt;p&gt;Container Orchestration automates the provisioning, deployment, networking, scaling, availability, and lifecycle management of containers.&lt;/p&gt;

&lt;p&gt;In this article, I will demo how to containerize an application and deploy it on a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker installed on your local machine - which will be used to build an image and push it to a registry.&lt;/li&gt;
&lt;li&gt;Azure account with an active subscription.&lt;/li&gt;
&lt;li&gt;Terraform installed - an IaC tool that will be used to provision infrastructure on a cloud provider (Azure in this demo).&lt;/li&gt;
&lt;li&gt;Code editor or IDE with a terminal - which will be used to write the code that will be containerized.&lt;/li&gt;
&lt;li&gt;Azure CLI - to connect to the Kubernetes cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;br&gt;
There are 4 main steps:&lt;br&gt;
&lt;strong&gt;Writing code for the application that will be containerized with Docker&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;This section involves writing the code in a preferred programming language (I will be using Node.js).  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Terraform to provision infrastructure on Azure&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Terraform is an infrastructure-as-code tool used to provision resources. Instead of using the Azure portal to create my resources, I will be using Terraform; &lt;a href="https://github.com/KeneOjiteli/deploy-container-app-on-k8s-cluster/tree/main/deploy-container-app-on-k8s-cluster/terraform"&gt;see the Terraform code&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To make this creation possible, the following Terraform commands will be used:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform init&lt;/strong&gt;: initializes a working directory that contains Terraform configuration files (note that a Terraform file has a &lt;code&gt;.tf&lt;/code&gt; extension)&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0_UJNCpU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3tfft7b5v2rylsaibqo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0_UJNCpU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3tfft7b5v2rylsaibqo.png" alt="Terraform init" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Formatting and validating my code using the commands below&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--IELvjlXY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/to1f9uevgc9e7cvszx93.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--IELvjlXY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/to1f9uevgc9e7cvszx93.png" alt="Validate terraform code" width="800" height="59"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform plan&lt;/strong&gt;: used to preview the actions Terraform would take to modify your infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---1jlaXih--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dxsf9f59720y6npax87s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---1jlaXih--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dxsf9f59720y6npax87s.png" alt="terraform plan" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3MllJhzP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yq46npwisr6zhiwpgmsl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3MllJhzP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yq46npwisr6zhiwpgmsl.png" alt="terraform plan" width="800" height="219"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;terraform apply&lt;/strong&gt;: similar to terraform plan, but actually carries out the planned changes to each resource using the relevant infrastructure provider's API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yDmscP-6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tb7k5c4ciy2u6iphmun5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yDmscP-6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tb7k5c4ciy2u6iphmun5.png" alt="terraform apply" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BRKGnouX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a87cg9fftyri48wtj5or.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BRKGnouX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a87cg9fftyri48wtj5or.png" alt="terraform apply" width="800" height="313"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--3PLxnbOc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/16c225vk8pwieap0ohpt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3PLxnbOc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/16c225vk8pwieap0ohpt.png" alt="terraform apply" width="720" height="27"&gt;&lt;/a&gt;&lt;br&gt;
I created a resource group, a container registry, and an Azure Kubernetes Service cluster on Azure.  &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using Docker to containerize the application and pushing to a registry&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Dockerizing an application requires a &lt;a href="https://github.com/KeneOjiteli/deploy-container-app-on-k8s-cluster/blob/main/deploy-container-app-on-k8s-cluster/Dockerfile"&gt;Dockerfile&lt;/a&gt;, a template that contains all the commands a user could call on the command line to build an image, after which the image is pushed to a container registry.&lt;/p&gt;
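&lt;p&gt;A minimal Dockerfile for a Node.js app might look like this (the base image, port, and entry point are assumptions, not necessarily the exact file linked above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# build on an official Node.js base image
FROM node:18-alpine
WORKDIR /app

# install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# copy the application source, document the listening port, and set the start command
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
&lt;/code&gt;&lt;/pre&gt;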

&lt;p&gt;To push to a container registry, a connection first needs to be established by logging in to the registry.&lt;/p&gt;

&lt;p&gt;I will head over to &lt;code&gt;azure portal =&amp;gt; container registry =&amp;gt; settings =&amp;gt; access keys&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dx4_szIG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n40xc02hiikt1va18qdm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dx4_szIG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n40xc02hiikt1va18qdm.png" alt="container registry" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UERbYfUg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n22dzzh1ricmh4n4a78v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UERbYfUg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n22dzzh1ricmh4n4a78v.png" alt="container registry access keys" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to your project directory via the terminal and log in using the details from the step above: &lt;code&gt;docker login &amp;lt;login server&amp;gt; --username &amp;lt;username&amp;gt; --password &amp;lt;password&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mxyrL7dG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9euoylu7mgxv9eauzyh7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mxyrL7dG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9euoylu7mgxv9eauzyh7.png" alt="ACR login" width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build the docker image using the command &lt;code&gt;docker build -t &amp;lt;name&amp;gt;:&amp;lt;tag&amp;gt; .&lt;/code&gt;, where &lt;code&gt;.&lt;/code&gt; is the build context (the current directory).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tueRWqRi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2dwmviajtfh7ne1ihf4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tueRWqRi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2dwmviajtfh7ne1ihf4x.png" alt="build docker image" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Confirm the image exists using the &lt;code&gt;docker images&lt;/code&gt; command (this lists all available images; the first entry is my newly created image).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ijwi0W3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i4iiu9j3g4a3cn5zl63f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ijwi0W3a--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i4iiu9j3g4a3cn5zl63f.png" alt="list docker images" width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Push the image to ACR using &lt;code&gt;docker push &amp;lt;name&amp;gt;:&amp;lt;tag&amp;gt;&lt;/code&gt; (to land in ACR, the image name must be prefixed with the registry's login server, e.g. &lt;code&gt;myregistry.azurecr.io/&amp;lt;name&amp;gt;:&amp;lt;tag&amp;gt;&lt;/code&gt;)
&lt;/li&gt;
&lt;/ul&gt;
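&lt;p&gt;Putting the steps above together, the whole build-and-push flow might look like the sketch below (the registry name, image name, and tag are hypothetical placeholders; an image destined for ACR is tagged with the registry's login server):&lt;/p&gt;

```shell
# Registry server, image name, and tag are hypothetical placeholders.
ACR_SERVER=myregistry.azurecr.io
IMAGE=$ACR_SERVER/demo-app:v1

# Log in with the access-key credentials from the Azure portal.
docker login "$ACR_SERVER" --username myregistry --password "$ACR_PASSWORD"

docker build -t "$IMAGE" .   # build from the Dockerfile in the current directory
docker images                # confirm the image exists locally
docker push "$IMAGE"         # upload the image to ACR
```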

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pH_cfmBO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r6ivs10or5479tj3m9tb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pH_cfmBO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r6ivs10or5479tj3m9tb.png" alt="push docker image to acr" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go over to the Azure portal =&amp;gt; Container registry to verify that the push was successful.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--PxKCYzb9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0wb8igwgbpwoj5jbl15m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PxKCYzb9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0wb8igwgbpwoj5jbl15m.png" alt="Image on ACR" width="800" height="393"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Deploying the web application to the Azure Kubernetes cluster&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Having created an Azure Kubernetes cluster using Terraform, go over to the Azure portal to verify it exists.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Kqi1XQKw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/91ov2efbmcd7myanm1j0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Kqi1XQKw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/91ov2efbmcd7myanm1j0.png" alt="AKS" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I will be using the Azure CLI to get credentials and connect to my cluster with the commands below
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;az login&lt;/code&gt;&lt;br&gt;
&lt;code&gt;az aks get-credentials --resource-group &amp;lt;resourcegroupname&amp;gt; --name &amp;lt;clustername&amp;gt;&lt;/code&gt; (this command downloads credentials and configures the Kubernetes CLI to use them)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8V-H1aet--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xsa7ic0crb4hn9q4zxti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8V-H1aet--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xsa7ic0crb4hn9q4zxti.png" alt="login with azure cli" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--roH5KESk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kdngtc45n0yd7exqdjt4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--roH5KESk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kdngtc45n0yd7exqdjt4.png" alt="credentials to manage cluster" width="800" height="57"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify the connection to the cluster using &lt;code&gt;kubectl get nodes&lt;/code&gt;, which returns a list of the cluster nodes (note that I declared one cluster node, and its status is ready).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R0vt9EhM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fs8svjuvs8up5mauhkr6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R0vt9EhM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fs8svjuvs8up5mauhkr6.png" alt="list nodes" width="800" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Deployment from a YAML manifest file, which rolls out a ReplicaSet to bring up the specified number of instances of a specified Pod, along with a Service (which enables network access to a set of Pods), using &lt;code&gt;kubectl apply -f &amp;lt;deploymentfile-name&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
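&lt;p&gt;The walkthrough's manifest itself isn't shown inline, but a minimal equivalent, applied straight from the shell, might look like the sketch below (names, image, and ports are hypothetical):&lt;/p&gt;

```shell
# Minimal Deployment + Service sketch; names, image, and ports are hypothetical.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2                      # two concurrent pod instances
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: myregistry.azurecr.io/demo-app:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app-svc
spec:
  type: LoadBalancer               # exposes an external IP for the app
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 80
EOF
```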

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--s9drwHlg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ryw2v0uqysma65us3nad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--s9drwHlg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ryw2v0uqysma65us3nad.png" alt="create deployment" width="800" height="69"&gt;&lt;/a&gt;&lt;br&gt;
A Kubernetes Deployment tells Kubernetes how to create or modify instances of the Pods that hold a containerized application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To get more details about the deployment, use the &lt;code&gt;kubectl describe deployments &amp;lt;deploymentname&amp;gt;&lt;/code&gt; command
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C7nPZrLs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/th3m0u76y1or16zb6fbj.png" alt="Describe deployments" width="800" height="592"&gt;
&lt;/li&gt;
&lt;li&gt;Use the commands below to get the deployments, ReplicaSets (a Deployment automatically creates a ReplicaSet), service, and Pods (a Pod's name usually starts with the Deployment name).&lt;/li&gt;
&lt;/ul&gt;
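&lt;p&gt;For reference, the commands used in the screenshots below are:&lt;/p&gt;

```shell
kubectl get deployments   # desired vs. ready replica counts
kubectl get rs            # the ReplicaSet the Deployment created
kubectl get service       # cluster IP, external IP, and ports
kubectl get pods          # pod names start with the deployment name
```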

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DKAhggXA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/11ita56p6qzwqxwjtp61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DKAhggXA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/11ita56p6qzwqxwjtp61.png" alt="get deployments" width="800" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Notice there are 2 Pods; this is because my deployment file specified &lt;strong&gt;2 replicas&lt;/strong&gt;
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Y58vH2Ey--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yep9vlqonsdi1znlp0ry.png" alt="get pods" width="800" height="72"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jJwujzVV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b6q4nnhnjmhd7w4ieyzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jJwujzVV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b6q4nnhnjmhd7w4ieyzl.png" alt="get replicasets" width="800" height="54"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--AwtUNaES--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpmrvjonqjnnawgarll8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--AwtUNaES--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qpmrvjonqjnnawgarll8.png" alt="get service" width="800" height="71"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the Azure portal =&amp;gt; Kubernetes cluster =&amp;gt; Kubernetes resources =&amp;gt; Services and ingresses to view the ports and Pods.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--74NAslsC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54epjxpbz30en4711miu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--74NAslsC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/54epjxpbz30en4711miu.png" alt="service" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iusK19DY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sba8rmp667oibarnjozn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iusK19DY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sba8rmp667oibarnjozn.png" alt="pod overview" width="800" height="386"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2z-30Aym--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vt637votfpdmuico8ck3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2z-30Aym--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vt637votfpdmuico8ck3.png" alt="pod details" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the external IP and port of the service to access the app
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6fZzgRIP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tq1k6hing16g5sq5c4ix.png" alt="service url" width="800" height="409"&gt;
&lt;/li&gt;
&lt;li&gt;A use case for replicas (which specify the number of Pods that run concurrently) appears when a Pod is deleted: a new Pod is automatically created to maintain the replica count stated in the manifest file, as shown below&lt;/li&gt;
&lt;/ul&gt;
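&lt;p&gt;This self-healing behaviour can be reproduced with two commands (the pod name is a hypothetical example taken from &lt;code&gt;kubectl get pods&lt;/code&gt; output):&lt;/p&gt;

```shell
# Pod name is a hypothetical example; copy a real one from `kubectl get pods`.
kubectl delete pod demo-app-5c9f7b6d4-abcde
kubectl get pods   # a replacement pod appears to restore the replica count
```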

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--246ifHyE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u2jytpk1c320dst0dprr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--246ifHyE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u2jytpk1c320dst0dprr.png" alt="delete pod" width="800" height="57"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oVQdqNRC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3gzmybjnnbtqemenj9mf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oVQdqNRC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3gzmybjnnbtqemenj9mf.png" alt="get new pods" width="800" height="72"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;kubectl get all&lt;/code&gt; lists all the created objects at once
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iWkpkINf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8c54bw26u4pwi71xo1wz.png" alt="All details" width="800" height="248"&gt;
&lt;/li&gt;
&lt;li&gt;A Deployment can also be updated, either by changing the replica value to increase or decrease the number of instances, or by changing the version of the container image; the former can be done using the command below:
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4LhrMWPa--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/prsm6wuht41h503ozlv9.png" alt="Edit deployment" width="800" height="125"&gt;
&lt;/li&gt;
&lt;li&gt;The updated number of pods on AKS.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Qxr9pxGF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6pzqv71x6dghpfd1r0c4.png" alt="updated deployment on cluster" width="800" height="367"&gt;
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZdNdcna0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxwpki3nosxf068ql10b.png" alt="details showing updated replica" width="800" height="583"&gt;
&lt;/li&gt;
&lt;li&gt;A list of updated pods
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wm3RZmLV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vz5vv3wn39o2k9bt70aj.png" alt="new running pods" width="800" height="110"&gt;
&lt;/li&gt;
&lt;li&gt;Clean up the resources using the &lt;code&gt;terraform destroy&lt;/code&gt; command.&lt;/li&gt;
&lt;/ul&gt;
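&lt;p&gt;The walkthrough applies the updates by editing the Deployment interactively; the same changes can also be made non-interactively, for example (deployment name, replica count, and image tag are hypothetical):&lt;/p&gt;

```shell
# Deployment name, replica count, and image tag are hypothetical.
kubectl scale deployment demo-app --replicas=4   # change the number of instances
kubectl set image deployment/demo-app demo-app=myregistry.azurecr.io/demo-app:v2  # roll out a new image version
kubectl get pods                                 # inspect the updated set of pods
```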

&lt;p&gt;Thank you for reading, and I hope you learned something new. The project code can be found in my &lt;a href="https://github.com/KeneOjiteli/deploy-container-app-on-k8s-cluster"&gt;repo&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>cloud</category>
      <category>learning</category>
    </item>
    <item>
      <title>Connecting Azure App Service to Azure SQL Database and Storage Account using Azure CLI part 2.</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Mon, 08 May 2023 13:46:34 +0000</pubDate>
      <link>https://forem.com/keneojiteli/connecting-azure-app-service-to-azure-sql-database-and-storage-account-using-azure-cli-part-2-4a1m</link>
      <guid>https://forem.com/keneojiteli/connecting-azure-app-service-to-azure-sql-database-and-storage-account-using-azure-cli-part-2-4a1m</guid>
      <description>&lt;p&gt;This is a continuation of &lt;a href="https://dev.to/keneojiteli/connecting-azure-app-service-to-azure-sql-database-and-storage-account-using-azure-cli-part-1-1ok7"&gt;previous article&lt;/a&gt; which showed how to connect azure app service to an azure sql database using azure cli, it is important to read &lt;strong&gt;part 1&lt;/strong&gt; before &lt;strong&gt;this article&lt;/strong&gt; so as to be aware of the prerequisites and some azure terms.&lt;/p&gt;

&lt;p&gt;In this article, I'll demonstrate how to use the Azure CLI to connect an Azure App Service to a storage account; this is similar to the &lt;a href="https://dev.to/keneojiteli/connecting-azure-app-service-to-azure-sql-database-and-storage-account-using-azure-cli-part-1-1ok7"&gt;previous article&lt;/a&gt; but with &lt;strong&gt;slight differences&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Azure account with an active subscription.&lt;/li&gt;
&lt;li&gt;Azure Cloud Shell or the Azure CLI installed on a local machine.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your Azure account using the &lt;code&gt;az login&lt;/code&gt; command.&lt;/li&gt;
&lt;li&gt;Create your script in an editor and save it with a &lt;strong&gt;.ps1&lt;/strong&gt; extension; then navigate to the directory where the script is saved and run it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BkR2oTnO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94ohr3r957bpd81l4mte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BkR2oTnO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94ohr3r957bpd81l4mte.png" alt="Demo script" width="800" height="403"&gt;&lt;/a&gt;&lt;br&gt;
Note: always ensure the storage account name is unique, lowercase, and free of special characters (as in the screenshot above, I had to change the name to something unique: &lt;code&gt;demoaccount901&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Feel free to change your location and variable names.&lt;/p&gt;

&lt;p&gt;Ensure each variable name in the script is preceded by &lt;strong&gt;$&lt;/strong&gt; to avoid PowerShell errors like the one in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--53OJqM8A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fs85leho0qfn1lzfxfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--53OJqM8A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0fs85leho0qfn1lzfxfk.png" alt="Demo script error" width="800" height="67"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As with the previous post, the following will be created: a resource group, an app service plan, and a web app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After that, a storage account with a unique name is created, which the app service connects to in order to store data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then the connection string is retrieved from the storage account so the app service can access the stored data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lastly, the connection string is assigned to an app setting, which is exposed as an environment variable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
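&lt;p&gt;The storage-specific steps above might look like the sketch below, rendered in Bash for brevity (the walkthrough itself uses a PowerShell &lt;strong&gt;.ps1&lt;/strong&gt; script, and all resource names are hypothetical):&lt;/p&gt;

```shell
# Resource names and location are hypothetical; run after `az login`.
az storage account create --name demoaccount901 --resource-group demo-rg \
  --location eastus --sku Standard_LRS

# Retrieve the connection string into a variable.
connstr=$(az storage account show-connection-string \
  --name demoaccount901 --resource-group demo-rg \
  --query connectionString --output tsv)

# Expose it to the web app as an app setting (environment variable).
az webapp config appsettings set --name demo-web-app --resource-group demo-rg \
  --settings AZURE_STORAGE_CONNECTION_STRING="$connstr"
```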

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VBb-EznQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ru5go82t30y32uo2hspq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VBb-EznQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ru5go82t30y32uo2hspq.png" alt="Connection string" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;View the created resources within the resource group on the Azure portal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--bEJjneYI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/975o2l4jd79dlacrzm20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--bEJjneYI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/975o2l4jd79dlacrzm20.png" alt="Resources" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clean up resources with &lt;code&gt;az group delete --name $resourceGroup&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thank you for reading, and I hope you learned something new. The full script is available in my &lt;a href="https://github.com/KeneOjiteli/connect-app-service-to-azure-db"&gt;repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The End!!!&lt;/p&gt;

</description>
      <category>azure</category>
      <category>storage</category>
      <category>cli</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Connecting Azure App Service to Azure SQL Database and Storage Account using Azure CLI part 1.</title>
      <dc:creator>Kene Ojiteli</dc:creator>
      <pubDate>Mon, 08 May 2023 11:56:27 +0000</pubDate>
      <link>https://forem.com/keneojiteli/connecting-azure-app-service-to-azure-sql-database-and-storage-account-using-azure-cli-part-1-1ok7</link>
      <guid>https://forem.com/keneojiteli/connecting-azure-app-service-to-azure-sql-database-and-storage-account-using-azure-cli-part-1-1ok7</guid>
      <description>&lt;p&gt;This article is a walkthrough on how to connect an azure app service to an azure SQL database and also an azure storage account using the command line interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recall the following terms:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Azure App Service is a fully managed service that enables you to build and host web apps, mobile back ends, and RESTful APIs in the programming language of your choice without managing infrastructure.&lt;/li&gt;
&lt;li&gt;Azure SQL Database is a fully managed platform as a service (PaaS) that handles management functions such as patching, upgrading, and backups, and offers an SLA of 99.99% availability.&lt;/li&gt;
&lt;li&gt;An Azure storage account contains all of your Azure Storage data objects, including blobs, file shares, queues, tables, and disks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites for this project include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An Azure account with an active subscription.&lt;/li&gt;
&lt;li&gt;Azure Cloud Shell or the Azure CLI installed on a local machine (I will be using PowerShell with the Azure CLI installed).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Things to Note&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I will be using PowerShell (an automation engine with an interactive command-line shell) to run a PowerShell script; this file contains all the configuration needed to connect the app service to the Azure SQL database, and runs in one go.&lt;/li&gt;
&lt;li&gt;I will break down the commands in the script via screenshots.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your Azure account using the &lt;code&gt;az login&lt;/code&gt; command.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hd9MsPv---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mx1zzltxj6uhl0o9yapv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hd9MsPv---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mx1zzltxj6uhl0o9yapv.png" alt="Azure login" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create your script in an editor and save it with a &lt;strong&gt;.ps1&lt;/strong&gt; extension (marking it as a PowerShell script); then navigate to the directory where the script is saved and run it as shown below&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eDk8u1g6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/un0cfgrse8t3yvib7k0u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eDk8u1g6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/un0cfgrse8t3yvib7k0u.png" alt="Run script" width="768" height="36"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This is a breakdown of my script with variables declared first.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2Y9a9hXv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwhj230o1w6sz6yucq51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2Y9a9hXv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwhj230o1w6sz6yucq51.png" alt="Variable declaration" width="536" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a resource group, which will house all resources used in this demo, with the command below:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NdcBxfC9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9rsh8iitvz04aq0fxic7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NdcBxfC9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9rsh8iitvz04aq0fxic7.png" alt="Resource group command" width="800" height="85"&gt;&lt;/a&gt;&lt;br&gt;
Creating a resource group requires a resource group name, a location, and a tag, which are referenced via the variables declared earlier. Upon successful creation, the output will be:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5_Mz94D2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ulyx1uk1nu548n1nblr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5_Mz94D2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ulyx1uk1nu548n1nblr.png" alt="RG-output" width="736" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an app service plan, which defines a set of compute resources for a web app to run on; this requires a name, a resource group, and a location.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rHlLO6qs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a1014vfwhzlne7c13pr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rHlLO6qs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a1014vfwhzlne7c13pr7.png" alt="App service plan" width="800" height="65"&gt;&lt;/a&gt;&lt;br&gt;
If all goes well, the output will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--VYVNtqd6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekp13a9tbg165cdg62l0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--VYVNtqd6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekp13a9tbg165cdg62l0.png" alt="App service plan output" width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a web app, which requires an app service plan and a resource group.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dg7QUQRG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i0wjsjb436a14oc651jc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dg7QUQRG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i0wjsjb436a14oc651jc.png" alt="Web app" width="800" height="81"&gt;&lt;/a&gt;&lt;br&gt;
With output as:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8bZV2f-A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v2e8e86r7hxpu5nprjro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8bZV2f-A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v2e8e86r7hxpu5nprjro.png" alt="Web app output" width="800" height="865"&gt;&lt;/a&gt;&lt;/p&gt;
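&lt;p&gt;The three creation steps above might look like the sketch below, rendered in Bash for brevity (the walkthrough itself uses a PowerShell script; resource names, location, and SKU are hypothetical):&lt;/p&gt;

```shell
# Resource names, location, and SKU are hypothetical; run after `az login`.
az group create --name demo-rg --location eastus --tags demo
az appservice plan create --name demo-plan --resource-group demo-rg \
  --location eastus --sku FREE
az webapp create --name demo-web-app-901 --resource-group demo-rg --plan demo-plan
```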

&lt;ul&gt;
&lt;li&gt;Since we need a database, we must first create a server for it, with parameters such as a server name, resource group, location, username, and password; this is done below, with the output shown after:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--pP5tVO3w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tcjmnj9an6kyhq7db1af.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--pP5tVO3w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tcjmnj9an6kyhq7db1af.png" alt="DB server" width="800" height="48"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Gk0Yagta--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/05rzsj0qez85kv6h4zvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Gk0Yagta--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/05rzsj0qez85kv6h4zvx.png" alt="DB server output" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To enable access to the server, a firewall rule is needed; this rule determines which traffic the firewall allows or blocks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ay29uuNw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wqhhe7fthtr0k3opbaw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ay29uuNw--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wqhhe7fthtr0k3opbaw6.png" alt="Firewall rule" width="800" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--84WwULEZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1okeevulk0qt2hlxv5ay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--84WwULEZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1okeevulk0qt2hlxv5ay.png" alt="Firewall output" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a database and attach it to the server, using the database name, server, and resource group variables along with a service objective.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EwbQgMdj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4tc9j69v5y47bzd22fn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EwbQgMdj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4tc9j69v5y47bzd22fn.png" alt="Create DB" width="800" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z_q9qi43--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0lsd9ioenkg3pmplqt1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z_q9qi43--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0lsd9ioenkg3pmplqt1s.png" alt="Create Db output" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get a connection string, which an application hosted on the App Service needs in order to connect to the database server. This requires the database name, server, client driver (ADO.NET in this demo), and output format, and the result is stored in a variable using the command below:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HYMjBCCR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hg4eee0axk9hw0vvs0wk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HYMjBCCR--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hg4eee0axk9hw0vvs0wk.png" alt="Get connection string" width="800" height="48"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add login credentials to the connection string variable using the command below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jNx98bDM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v6hiiudyxx2uotm5kowh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jNx98bDM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v6hiiudyxx2uotm5kowh.png" alt="Add credentials" width="526" height="72"&gt;&lt;/a&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Assign the connection string to an app setting in the Azure App Service; this helps to avoid exposing credentials.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jXHTKP6I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g0w78q5qa4k48u72yw2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jXHTKP6I--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g0w78q5qa4k48u72yw2y.png" alt="Assign connection string" width="800" height="32"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the script runs successfully, head over to the Azure portal to view the created resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--llRTQwqD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0n01ehgkgu86a18cffo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--llRTQwqD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0n01ehgkgu86a18cffo9.png" alt="Resources created" width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From my resource group, the resources created are visible.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;My web app is running, but without any content yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YiFraPhx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35otddeppwvd077t11n6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YiFraPhx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35otddeppwvd077t11n6.png" alt="Web app output" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To avoid incurring excess charges on your Azure account, clean up your resources using the command below:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;az group delete --name $resourceGroup&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command prompts you to confirm with yes or no; pass the &lt;code&gt;--yes&lt;/code&gt; flag to skip the prompt.&lt;/p&gt;

&lt;p&gt;Thank you for reading, and I hope you learned something new. The PowerShell script is available on my &lt;a href="https://github.com/KeneOjiteli/connect-app-service-to-azure-db"&gt;repo&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Kindly read &lt;a href="https://dev.to/keneojiteli/connecting-azure-app-service-to-azure-sql-database-and-storage-account-using-azure-cli-part-2-4a1m"&gt;part 2&lt;/a&gt; of this article.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>cli</category>
      <category>cloud</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
