<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Amarachi Iheanacho</title>
    <description>The latest articles on Forem by Amarachi Iheanacho (@amaraiheanacho).</description>
    <link>https://forem.com/amaraiheanacho</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F812884%2Fca938a38-2d0f-43bf-8043-7c5df4fee494.jpeg</url>
      <title>Forem: Amarachi Iheanacho</title>
      <link>https://forem.com/amaraiheanacho</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/amaraiheanacho"/>
    <language>en</language>
    <item>
      <title>Securing your AWS EKS cluster</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Sat, 09 Aug 2025 12:32:58 +0000</pubDate>
      <link>https://forem.com/amaraiheanacho/securing-your-aws-eks-cluster-4deb</link>
      <guid>https://forem.com/amaraiheanacho/securing-your-aws-eks-cluster-4deb</guid>
      <description>&lt;p&gt;When a small analytics firm's misconfigured Kubernetes cluster exposed the sensitive data of a &lt;a href="https://www.aquasec.com/blog/kubernetes-exposed-one-yaml-away-from-disaster/" rel="noopener noreferrer"&gt;Fortune 500 client worth billions&lt;/a&gt; in revenue, it wasn't just a technical oversight, it was a business catastrophe waiting to happen. This isn't an isolated incident. In 2024 alone, researchers discovered over &lt;a href="https://www.aquasec.com/blog/kubernetes-exposed-one-yaml-away-from-disaster/" rel="noopener noreferrer"&gt;350 organizations with publicly accessible&lt;/a&gt;, largely unprotected Kubernetes clusters, with 60% of them already breached and running active malware campaigns.&lt;/p&gt;

&lt;p&gt;The financial stakes couldn't be higher. &lt;a href="https://www.cncf.io/blog/2024/09/26/the-state-of-security-in-cloud-native-development-2024/" rel="noopener noreferrer"&gt;The average data breach now costs $4.88 million&lt;/a&gt;, while &lt;a href="https://www.armosec.io/blog/unraveling-the-state-of-kubernetes-security-2024/" rel="noopener noreferrer"&gt;software supply chain attacks cost businesses $45.8 billion globally in 2023&lt;/a&gt;. For organizations running AWS Elastic Kubernetes Service (EKS), these aren't abstract statistics; they're urgent realities that demand immediate attention.&lt;/p&gt;

&lt;p&gt;AWS EKS is indeed a powerful solution for running containerized applications at scale. It simplifies many operational challenges by providing a fully managed Kubernetes control plane, helping organizations reduce overhead and accelerate deployment. However, the AWS shared responsibility model makes one thing crystal clear: while AWS secures the underlying infrastructure, protecting your workloads, configurations, and data remains entirely your responsibility. Even the smallest oversight can cascade into breaches or compliance violations that cost millions.&lt;/p&gt;

&lt;p&gt;This is where this guide comes in. We'll show you how to build a robust security posture for your EKS environment, covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Control plane security&lt;/li&gt;
&lt;li&gt;Network and pod security layers&lt;/li&gt;
&lt;li&gt;Secrets management&lt;/li&gt;
&lt;li&gt;Monitoring and incident response planning&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Understanding the EKS security landscape
&lt;/h1&gt;

&lt;p&gt;With EKS clusters often hosting entire applications, saying that you need to secure them is by no means a throwaway statement. Security is absolutely critical and, unfortunately, also genuinely challenging.&lt;/p&gt;

&lt;p&gt;The challenge with EKS security isn’t just about implementing individual protective measures. It’s about understanding how all these components interact within a complex, distributed system.&lt;/p&gt;

&lt;p&gt;Unlike traditional monolithic applications, where security boundaries are clearly defined, containerized environments create dynamic attack surfaces that shift as pods scale, migrate, and communicate across your infrastructure. &lt;/p&gt;

&lt;p&gt;This complexity is compounded by the fact that many security decisions must happen simultaneously at multiple levels: at the cluster level through RBAC policies, at the network level through CNI configurations, and at the application level through service mesh implementations. A compromise at any of these layers can put your entire cluster, and therefore your entire application, at risk.&lt;/p&gt;

&lt;p&gt;In this article, we’re going to discuss security at different levels, covering how to secure the following components of your cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Control plane&lt;/li&gt;
&lt;li&gt;Network&lt;/li&gt;
&lt;li&gt;Pods&lt;/li&gt;
&lt;li&gt;Secrets &lt;/li&gt;
&lt;li&gt;Images&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And more importantly, we’ll look at why even after doing all this, you should never abandon your cluster and must continue monitoring it for anomalies and vulnerabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Control plane security
&lt;/h2&gt;

&lt;p&gt;The control plane is the heart, or more accurately, the brain, of any cluster, and this holds true for an EKS cluster as well. That makes it the most critical component to secure properly.&lt;/p&gt;

&lt;p&gt;The control plane decides and enforces everything: where and when pods should run, who has access to what, how API requests are handled, and the overall state of the cluster. If an attacker gains access to the control plane, they can see everything happening in your environment, change configurations, and much more.&lt;/p&gt;

&lt;p&gt;To put it simply, so you understand the severity: if you lose control of the control plane, you lose control of your entire cluster.&lt;/p&gt;

&lt;p&gt;Thankfully, there are ways to protect your control plane from malicious actors. Here’s how:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API server access control&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your API server is the gateway to your entire Kubernetes cluster. Every kubectl command, every deployment, every request to access secrets, it all flows through this single point. This centralization makes the API server incredibly powerful but also a potential security risk if you don’t secure it properly.&lt;/p&gt;

&lt;p&gt;To protect your API server, start by enabling private endpoint access. You can refer to the &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html" rel="noopener noreferrer"&gt;Cluster API server endpoint documentation&lt;/a&gt; for detailed steps on how to set this up.&lt;/p&gt;

&lt;p&gt;Private endpoints ensure that communication between your worker nodes and the control plane stays entirely within your VPC, eliminating exposure to internet-based attack vectors.&lt;/p&gt;

&lt;p&gt;While this approach is excellent for securing the control plane, using only private endpoints can make cluster management more challenging.&lt;/p&gt;

&lt;p&gt;That’s why I usually recommend a hybrid approach: enable both private and public endpoints but restrict public access to specific IP ranges using CIDR blocks. This setup allows you to manage your cluster securely from authorized locations without sacrificing flexibility.&lt;/p&gt;
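&lt;p&gt;As an illustration, this hybrid setup can be applied with the AWS CLI. The region, cluster name, and CIDR range below are placeholders for your own values:&lt;/p&gt;

```shell
# Keep the private endpoint on, and restrict the public endpoint
# to a trusted office/VPN range (placeholder CIDR shown).
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.0/24"
```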

&lt;p&gt;&lt;strong&gt;Authentication and authorization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another way to secure your control plane is by carefully managing who has access to your clusters and, once they do, what they’re allowed to do inside them.&lt;/p&gt;

&lt;p&gt;A powerful way to achieve this is by leveraging AWS IAM. Amazon EKS integrates seamlessly with IAM, allowing you to use your existing AWS identities and permissions to control access to your Kubernetes clusters.&lt;/p&gt;

&lt;p&gt;While this integration is convenient, you must be cautious when configuring IAM to avoid privilege escalation, in other words, accidentally granting people permissions they don’t actually need.&lt;/p&gt;

&lt;p&gt;When you create an EKS cluster, only the IAM entity (user or role) that created it has access by default. This design helps prevent unauthorized access right from the start. However, you’ll typically need to grant access to additional team members and service accounts in a controlled, systematic way, based on their roles and responsibilities.&lt;/p&gt;

&lt;p&gt;To grant access to individuals and teams efficiently and accurately, you use the &lt;code&gt;aws-auth&lt;/code&gt; ConfigMap, which maps IAM roles and users to Kubernetes groups. Be aware that this mapping is where many security misconfigurations originate.&lt;/p&gt;

&lt;p&gt;Always review and test these mappings carefully. Refer to &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/auth-configmap.html" rel="noopener noreferrer"&gt;Grant IAM users access to Kubernetes with a ConfigMap&lt;/a&gt; for more information on how to grant access using the ConfigMap.&lt;/p&gt;
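&lt;p&gt;A minimal &lt;code&gt;aws-auth&lt;/code&gt; mapping looks like the sketch below. The account ID, role name, and group name are hypothetical; the group still needs RBAC bindings to carry any permissions:&lt;/p&gt;

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Maps a (hypothetical) IAM role to a Kubernetes group.
    - rolearn: arn:aws:iam::111122223333:role/eks-frontend-dev
      username: frontend-dev
      groups:
        - frontend-developers
```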

&lt;p&gt;Finally, never grant cluster-admin privileges unless absolutely necessary. Instead, create fine-grained RBAC policies that follow the &lt;a href="https://www.cyberark.com/what-is/least-privilege/" rel="noopener noreferrer"&gt;principle of least privilege&lt;/a&gt;. For example, a developer working on frontend applications doesn’t need access to database secrets or infrastructure namespaces.&lt;/p&gt;
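&lt;p&gt;For instance, a least-privilege setup for that frontend developer might look like the following sketch; the namespace, role, and group names are illustrative:&lt;/p&gt;

```yaml
# Grants read-only access to workloads in one namespace; no Secrets access.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: frontend
  name: frontend-read-only
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]
---
# Binds the role to the group mapped in the aws-auth ConfigMap.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: frontend
  name: frontend-read-only-binding
subjects:
  - kind: Group
    name: frontend-developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: frontend-read-only
  apiGroup: rbac.authorization.k8s.io
```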

&lt;h2&gt;
  
  
  Network security
&lt;/h2&gt;

&lt;p&gt;Now that we’ve covered control plane security, let’s turn to another critical, and frankly quite complex, attack surface: the network.&lt;/p&gt;

&lt;p&gt;Everything in your cluster, pods, nodes, and services, communicates over the network. If you don’t secure this layer, anyone who gains access to your network could potentially read or tamper with sensitive data.&lt;/p&gt;

&lt;p&gt;In Amazon EKS, network security operates across multiple layers, each offering a different type of protection. Let’s take a closer look at how to secure each of these layers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VPC configuration and subnet strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your cluster's network foundation starts with proper VPC design. Place your worker nodes in private subnets whenever you can; this ensures that they can't be directly accessed from the internet.&lt;/p&gt;

&lt;p&gt;This single decision can eliminate entire categories of threats, such as &lt;a href="https://helpcenter.trendmicro.com/en-us/article/tmka-19689" rel="noopener noreferrer"&gt;SSH brute-force attacks&lt;/a&gt;, &lt;a href="https://purplesec.us/learn/internal-vs-external-vulnerability-scans/" rel="noopener noreferrer"&gt;external scanning&lt;/a&gt;, and &lt;a href="https://www.twingate.com/blog/glossary/remote-exploitation" rel="noopener noreferrer"&gt;remote exploitation&lt;/a&gt;, while still allowing your applications to function as needed.&lt;/p&gt;

&lt;p&gt;Use separate subnets for different node groups based on their security requirements. Your production workloads shouldn't share network space with development environments, and your database nodes need different access patterns than your web servers.&lt;/p&gt;

&lt;p&gt;The subnet strategy becomes particularly important when implementing network policies. Kubernetes network policies work at the pod level, but they're most effective when combined with VPC-level controls. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network policies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next are &lt;a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noopener noreferrer"&gt;network policies&lt;/a&gt;, which are now a standard part of the Kubernetes security toolkit.&lt;/p&gt;

&lt;p&gt;Network policies allow you to control traffic flow between pods with remarkable precision, creating microsegmentation that would be impossible with traditional network security tools.&lt;/p&gt;

&lt;p&gt;A well-designed network policy starts with a default-deny stance. This means that unless explicitly allowed, no traffic flows between pods. While this might seem restrictive, it's the foundation of a &lt;a href="https://www.paloaltonetworks.com/cyberpedia/what-is-a-zero-trust-architecture" rel="noopener noreferrer"&gt;zero-trust network architecture&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once you've established your default-deny policy, you can selectively allow traffic based on your application's requirements. For example, a frontend application might need to communicate with a backend API but should never have direct access to the database.&lt;/p&gt;
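&lt;p&gt;Concretely, the default-deny baseline and a selective allow rule can be sketched as follows. The namespace, labels, and port are assumptions about your application:&lt;/p&gt;

```yaml
# Deny all ingress and egress for every pod in the namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
---
# Then explicitly allow only frontend pods to reach the backend API.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend-api
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```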

&lt;p&gt;&lt;strong&gt;Service mesh&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For secure service-to-service encryption, a &lt;a href="https://aws.amazon.com/what-is/service-mesh/" rel="noopener noreferrer"&gt;service mesh&lt;/a&gt; is the right tool.&lt;/p&gt;

&lt;p&gt;Service meshes like &lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;Istio&lt;/a&gt; or &lt;a href="https://aws.amazon.com/app-mesh/" rel="noopener noreferrer"&gt;AWS App Mesh&lt;/a&gt; provide traffic encryption, authentication, and authorization at the service level, adding another layer of security beyond network policies.&lt;/p&gt;

&lt;p&gt;Service meshes excel in environments where you need to implement security policies based on service identity rather than network location. They're particularly valuable when dealing with compliance requirements that mandate encryption in transit for all inter-service communication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pod security
&lt;/h2&gt;

&lt;p&gt;Now that you have defined and restricted access to your pods and the spaces they run in, it’s time to control what pods are allowed to do and how they run. This is exactly what pod security aims to achieve.&lt;/p&gt;

&lt;p&gt;Pod security lets you establish rules that prevent pods from performing potentially risky actions inside your cluster, and this section will explore the components of pod security in detail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pod Security Standards implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Pod Security Standards are rules that define how strict Kubernetes should be about what pods are allowed to do. There are three main policy levels:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Privileged&lt;/strong&gt;: Almost no restrictions; pods can do virtually anything. This is generally a bad idea for production environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Baseline&lt;/strong&gt;: Applies some restrictions to block the most dangerous behaviors, such as running as root without restrictions, while still supporting common use cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restricted&lt;/strong&gt;: The most stringent level. Pods can’t run as root, can’t perform privileged actions, and must declare clear security settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Restricted policy is particularly effective at reducing your cluster’s exposure to container escape attacks. While you may need to adjust some applications to comply, these controls provide strong protection for your workloads.&lt;/p&gt;

&lt;p&gt;Refer to the &lt;a href="https://kubernetes.io/docs/concepts/security/pod-security-standards/" rel="noopener noreferrer"&gt;Pod Security Standards&lt;/a&gt; for more information. &lt;/p&gt;
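&lt;p&gt;These levels are enforced per namespace through Pod Security Admission labels. For example, the following namespace sketch enforces the Restricted profile (the namespace name is illustrative):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    # Reject pods that violate the Restricted profile...
    pod-security.kubernetes.io/enforce: restricted
    # ...and also warn and audit, for visibility.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```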

&lt;p&gt;&lt;strong&gt;Security contexts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unlike Pod Security Standards, which act as cluster-wide gatekeepers deciding what pods can even be scheduled, &lt;a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="noopener noreferrer"&gt;security contexts&lt;/a&gt; are the granular controls you apply to individual pods and containers. &lt;/p&gt;

&lt;p&gt;Security contexts give you precise control over how your containers operate at runtime. You can specify exactly which user ID a container runs as, whether it can escalate privileges, what Linux capabilities it has access to, and how it interacts with the file system. This granular approach means you can tailor security settings to each workload's specific needs rather than applying broad restrictions across your entire cluster.&lt;/p&gt;

&lt;p&gt;The real power of security contexts becomes apparent when you consider that they work in tandem with Pod Security Standards. Your Pod Security Standards might enforce that containers can't run as root, but security contexts let you specify that a particular container should run as user ID 1000 with group ID 3000. The standards provide the guardrails; the contexts provide the precise configuration.&lt;/p&gt;
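&lt;p&gt;Putting that together, a pod running as user ID 1000 and group ID 3000 with a locked-down container might be sketched like this (the image name is a placeholder):&lt;/p&gt;

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true   # refuse to start if the image expects root
    runAsUser: 1000
    runAsGroup: 3000
  containers:
    - name: app
      image: registry.example.com/app:1.4.2   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                       # drop all Linux capabilities
```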

&lt;p&gt;&lt;strong&gt;Runtime security&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;While Pod Security Standards and security contexts define what can run and how it is configured, &lt;a href="https://docs.aws.amazon.com/eks/latest/best-practices/runtime-security.html" rel="noopener noreferrer"&gt;runtime security&lt;/a&gt; focuses on what containers actually do once they are running.&lt;/p&gt;

&lt;p&gt;Runtime security is all about continuously monitoring your workloads for suspicious or unauthorized activity. Even if a container starts in a secure state, it could be exploited through vulnerabilities, misconfigurations, or malicious code. Runtime security tools help detect and stop these threats before they can escalate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets management
&lt;/h2&gt;

&lt;p&gt;In addition to securing your pods and network, you also need to protect your secrets. Kubernetes offers little secure secret storage by default, and the default base64 encoding is not encryption; it's merely obfuscation that provides no real security benefit.&lt;/p&gt;
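&lt;p&gt;You can see just how little protection base64 provides with a couple of lines of Python: anyone who can read the Secret object can reverse the encoding instantly, no key required. The password below is made up:&lt;/p&gt;

```python
import base64

# What a Kubernetes Secret actually stores for this (made-up) password:
encoded = base64.b64encode(b"s3cr3t-db-pass").decode()
print(encoded)   # czNjcjN0LWRiLXBhc3M=

# Reversing it takes one call and no secret key at all:
decoded = base64.b64decode(encoded).decode()
print(decoded)   # s3cr3t-db-pass
```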

&lt;p&gt;To actually secure your secrets, you need to use the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Secrets Manager integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS provides a solution for managing secrets, &lt;a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html" rel="noopener noreferrer"&gt;AWS Secrets Manager&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can use AWS Secrets Manager to securely store and retrieve sensitive data such as API keys, passwords, and certificates. You can store and retrieve secrets without exposing them in your manifests or ConfigMaps. &lt;/p&gt;

&lt;p&gt;To make the process even more seamless, tools like External Secrets Operator can automatically synchronize secrets from AWS Secrets Manager into Kubernetes in a controlled way.&lt;/p&gt;
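&lt;p&gt;With the External Secrets Operator installed, that synchronization is driven by a custom resource like the sketch below. The store name and secret path are assumptions about your setup:&lt;/p&gt;

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: aws-secrets-manager    # assumed store pointing at AWS Secrets Manager
  target:
    name: db-credentials         # the Kubernetes Secret the operator creates
  data:
    - secretKey: password
      remoteRef:
        key: prod/db/password    # secret name in AWS Secrets Manager
```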

&lt;p&gt;&lt;strong&gt;Encryption at Rest and in Transit&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, you need to enable encryption at rest for the etcd database in your EKS cluster. This ensures that even if someone gains physical access to the underlying storage, your secrets remain protected.&lt;/p&gt;
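&lt;p&gt;For an existing cluster, envelope encryption of secrets with a KMS key can be enabled roughly as follows; the cluster name and key ARN are placeholders:&lt;/p&gt;

```shell
# Encrypt Kubernetes Secrets in etcd with a customer-managed KMS key.
aws eks associate-encryption-config \
  --cluster-name my-cluster \
  --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"}}]'
```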

&lt;p&gt;In addition, configure TLS for all inter-service communication. Many applications default to unencrypted connections within the cluster, assuming that private networks are secure by default. This assumption is risky in cloud environments, where network boundaries are often more fluid and less predictable. Refer to the &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/logical-separation/encrypting-data-at-rest-and--in-transit.html" rel="noopener noreferrer"&gt;Encrypting Data-at-Rest and Data-in-Transit&lt;/a&gt; whitepaper for more information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Image security
&lt;/h2&gt;

&lt;p&gt;Another often-overlooked attack surface is container images. A single vulnerable base image or malicious dependency can compromise your entire application, which is why the importance of image security cannot be overstated.&lt;/p&gt;

&lt;p&gt;To secure your images, do the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image scanning and vulnerability management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Implement automated image scanning in your CI/CD pipeline using tools like Amazon ECR image scanning or third-party solutions like Twistlock or Aqua Security. These tools identify known vulnerabilities in your images before they reach production.&lt;/p&gt;
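&lt;p&gt;For example, ECR's basic scan-on-push behavior can be switched on per repository; the repository name below is a placeholder:&lt;/p&gt;

```shell
# Scan every image automatically as it is pushed to this repository.
aws ecr put-image-scanning-configuration \
  --repository-name quiz-app \
  --image-scanning-configuration scanOnPush=true
```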

&lt;p&gt;With these solutions, you can establish rules and policies that prevent deployment of images with high-severity vulnerabilities. &lt;/p&gt;

&lt;p&gt;While zero vulnerabilities is the ideal, it's often an impractical goal.&lt;/p&gt;

&lt;p&gt;Instead, focus on removing critical and high-severity vulnerabilities while managing medium and low-severity issues through regular patching cycles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Admission controllers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/ja_jp/whitepapers/latest/security-practices-multi-tenant-saas-applications-eks/use-admission-controllers-to-enforce-security-policies.html" rel="noopener noreferrer"&gt;Admission controllers&lt;/a&gt; are pieces of code that intercept requests to the Kubernetes API server after authentication and authorization, but before the object is persisted to etcd.&lt;/p&gt;

&lt;p&gt;They validate or mutate resource requests. Any time you create, update, or delete a Kubernetes resource, like a Pod, Deployment, or Secret, admission controllers get a chance to inspect or change the request. They can enforce policies, apply defaults, or reject requests that don’t meet certain criteria.&lt;/p&gt;
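&lt;p&gt;As a sketch of what this looks like in practice, a policy engine such as Kyverno (assuming it is installed as an admission controller in your cluster) can reject pods that use unpinned image tags:&lt;/p&gt;

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce   # reject, rather than just warn
  rules:
    - name: require-pinned-image-tag
      match:
        any:
          - resources:
              kinds: ["Pod"]
      validate:
        message: "Images must use a pinned tag, not ':latest'."
        pattern:
          spec:
            containers:
              - image: "!*:latest"
```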

&lt;h2&gt;
  
  
  Monitoring and logging
&lt;/h2&gt;

&lt;p&gt;Even after you’ve taken the necessary steps to secure your EKS cluster, that alone isn’t enough to guarantee complete peace of mind. You also need to continuously monitor your cluster and log events to ensure your systems remain healthy and secure over time.&lt;/p&gt;

&lt;p&gt;Effective monitoring and logging create the foundation for visibility, accountability, and rapid response, so that you can be ready even when things eventually go wrong.&lt;/p&gt;

&lt;p&gt;Here are the components of an effective monitoring and response strategy:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Control plane logging&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS EKS provides comprehensive control plane logging capabilities that capture critical events and activities within your cluster's management layer. By enabling control plane logs, you gain visibility into API server requests, authenticator decisions, audit trails, and scheduler operations. These logs are automatically delivered to &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html" rel="noopener noreferrer"&gt;Amazon CloudWatch Logs&lt;/a&gt;, where you can analyze patterns, set up alerts, and maintain compliance requirements.&lt;/p&gt;

&lt;p&gt;The five control plane log types are API server logs for tracking all API requests, audit logs for security compliance, authenticator logs for authentication debugging, controller manager logs for resource management oversight, and scheduler logs for pod placement decisions.&lt;/p&gt;
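&lt;p&gt;All five log types can be enabled in a single CLI call; the cluster name is a placeholder:&lt;/p&gt;

```shell
# Turn on every control plane log type and ship them to CloudWatch Logs.
aws eks update-cluster-config \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
```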

&lt;p&gt;&lt;strong&gt;Application and node monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Beyond the control plane, your monitoring strategy must extend to the applications running within your cluster and the underlying worker nodes. Container-level metrics such as CPU usage, memory consumption, and network traffic patterns help you understand application performance and resource utilization. Node-level monitoring tracks system health, disk usage, and overall infrastructure stability.&lt;/p&gt;

&lt;p&gt;Tools like &lt;a href="https://grafana.com/docs/grafana/latest/getting-started/get-started-grafana-prometheus/" rel="noopener noreferrer"&gt;Prometheus paired with Grafana&lt;/a&gt; provide powerful open-source monitoring capabilities, while AWS CloudWatch Container Insights offers native integration with EKS clusters. Implement custom metrics for your specific applications and establish baseline performance indicators so you can quickly identify when systems deviate from normal behavior. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Incident response planning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Having great monitoring is only valuable if you have a well-defined plan for what to do when an issue arises.&lt;/p&gt;

&lt;p&gt;A great example of effective incident response came in 2018, when Tesla addressed vulnerabilities within hours of RedLock discovering them, before any customer data had been stolen.&lt;/p&gt;

&lt;p&gt;Incident response planning involves developing clear, repeatable procedures for identifying, containing, and resolving security or operational incidents.&lt;/p&gt;

&lt;p&gt;Your response strategy should include playbooks that outline step-by-step actions for different scenarios, such as compromised workloads, suspicious API activity, or unexpected resource exhaustion. These playbooks should specify how to isolate affected resources, collect forensic evidence, escalate to the appropriate teams, and communicate with stakeholders.&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrapping up your EKS security journey
&lt;/h1&gt;

&lt;p&gt;Securing your AWS EKS cluster isn’t a one-time task you can check off a list. It’s an ongoing process that requires careful planning, continuous improvement, and vigilance. From protecting your control plane and locking down your network to hardening pods, securing secrets, scanning images, and building robust monitoring and response practices, every layer contributes to your overall security posture.&lt;/p&gt;

&lt;p&gt;While this guide has covered many of the most critical strategies and tools, remember that the most effective security programs are adaptive. New threats, vulnerabilities, and attack techniques will continue to emerge, and your defenses must evolve accordingly.&lt;/p&gt;

&lt;p&gt;The key is to approach EKS security as a shared responsibility that extends beyond infrastructure. It’s about cultivating a culture where security is considered at every stage: design, development, deployment, and operation. With the right processes, tooling, and mindset in place, you’ll be well-equipped to protect your workloads and maintain the trust of your users, no matter how your Kubernetes environment grows.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>devops</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Provision an AWS EC2 jumphost using Terraform and GitHub Actions</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Mon, 30 Jun 2025 08:00:00 +0000</pubDate>
      <link>https://forem.com/amaraiheanacho/provision-an-aws-ec2-jumphost-using-terraform-and-github-actions-2e6n</link>
      <guid>https://forem.com/amaraiheanacho/provision-an-aws-ec2-jumphost-using-terraform-and-github-actions-2e6n</guid>
      <description>&lt;p&gt;Modern application development demands a level of agility that traditional architectures simply can't support. Today's systems must enable teams to deploy changes quickly, safely, and efficiently. They must scale up or down in response to demand. And much of the work of DevOps involves figuring out how to achieve all of this in a repeatable and reliable way.&lt;/p&gt;

&lt;p&gt;This article kicks off a four-part series on building a secure and scalable DevSecOps pipeline for deploying a quiz application to an Amazon Elastic Kubernetes Service (EKS) cluster. Throughout the series, you’ll leverage Infrastructure as Code (IaC) with Terraform, implement CI/CD using GitHub Actions, adopt GitOps practices through ArgoCD, and harness the scalability of Amazon EKS.&lt;/p&gt;

&lt;p&gt;In this first part, we'll focus on setting up a secure EC2 jumphost. This jumphost will act as a controlled, auditable access point to the EKS cluster you’ll deploy later in the series. Rather than exposing your entire cluster to the internet, the jumphost provides a secure gateway for administrative access.&lt;/p&gt;

&lt;p&gt;You’ll use Terraform to define the jumphost infrastructure, including compute resources, networking, and security rules. To make the setup fully automated, you’ll integrate GitHub Actions so that any change to the Terraform configuration triggers a workflow. This ensures that your infrastructure remains consistent, version-controlled, and easily reproducible.&lt;/p&gt;

&lt;p&gt;By the end of this guide, you’ll have hands-on experience provisioning a hardened EC2 jumphost on AWS, entirely automated through Terraform and GitHub Actions, laying the foundation for a secure and scalable DevSecOps pipeline.&lt;/p&gt;

&lt;h1&gt;
  
  
  What this series will contain
&lt;/h1&gt;

&lt;p&gt;This four-part series will walk you through building a modern DevSecOps pipeline for a containerized quiz application. Here's what each part will cover:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provision a secure EC2 jumphost using Terraform and GitHub Actions (this article).&lt;/li&gt;
&lt;li&gt;Build a CI/CD pipeline that tests your application and pushes Docker images to Amazon ECR.&lt;/li&gt;
&lt;li&gt;Set up an Amazon EKS cluster and deploy the application with ArgoCD.&lt;/li&gt;
&lt;li&gt;Add monitoring and observability using Prometheus and Grafana.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Clone the quiz application
&lt;/h2&gt;

&lt;p&gt;This series builds on the quiz application. You can clone the repository here: &lt;a href="https://github.com/Iheanacho-ai/quiz" rel="noopener noreferrer"&gt;Quiz application GitHub&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Check out this GitHub repository to view the complete code for the entire series: &lt;a href="https://github.com/Iheanacho-ai/three-tier-devsecops-project" rel="noopener noreferrer"&gt;Three-tier DevSecOps Project GitHub&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;To get the most out of this article, you must have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A basic understanding of Git.&lt;/li&gt;
&lt;li&gt;A GitHub account. If you don’t have one, you can create one &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;An AWS account. If you don’t have one, you can sign up for a free account &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A basic understanding of GitHub Actions and Terraform.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Project structure
&lt;/h1&gt;

&lt;p&gt;In this article, you will create a jumphost, which you'll use to access an EKS cluster in later parts of the series.&lt;/p&gt;

&lt;p&gt;After cloning and pulling the quiz project from GitHub, add the following folders and files to the project structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;terraform folder&lt;/strong&gt;: Inside this folder, create three files, &lt;code&gt;main.tf&lt;/code&gt;, &lt;code&gt;outputs.tf&lt;/code&gt;, and &lt;code&gt;variables.tf&lt;/code&gt;, to store all the Terraform configurations for the project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scripts folder&lt;/strong&gt;: This folder should contain the &lt;code&gt;jumphost_init.sh&lt;/code&gt; file, which will include commands for installing the necessary packages on the jumphost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;.github/workflows folder&lt;/strong&gt;: Create a &lt;code&gt;terraform.yaml&lt;/code&gt; file in this folder to define the GitHub Actions configuration for the Terraform pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you've added these files, your project structure should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;backend&lt;/span&gt;
&lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
    &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="n"&gt;workflows&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
        &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="n"&gt;terraform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yml&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;docker&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;compose&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yaml&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;frontend&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;scripts&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="n"&gt;jumphost_init&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sh&lt;/span&gt;
&lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="n"&gt;terraform&lt;/span&gt;
    &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tf&lt;/span&gt;
    &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;outputs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tf&lt;/span&gt;
    &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="n"&gt;variables&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tf&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
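&lt;p&gt;If you prefer the terminal, the folders and files above can be scaffolded in one go (a minimal sketch; run it from the project root):&lt;/p&gt;

```shell
# Create the Terraform, scripts, and workflow folders described above,
# along with their (initially empty) files.
mkdir -p terraform scripts .github/workflows
touch terraform/main.tf terraform/outputs.tf terraform/variables.tf
touch scripts/jumphost_init.sh
touch .github/workflows/terraform.yaml
```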



&lt;h1&gt;
  
  
  Pre-project setup checklist
&lt;/h1&gt;

&lt;p&gt;You need to do the following before diving into this project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an AWS access key and secret access key.&lt;/li&gt;
&lt;li&gt;Set up an S3 bucket.&lt;/li&gt;
&lt;li&gt;Generate a public SSH key.&lt;/li&gt;
&lt;li&gt;Add your credentials (AWS access key, secret access key, and S3 bucket name) to your GitHub Actions secrets.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create an AWS access key and secret access key
&lt;/h2&gt;

&lt;p&gt;These keys give Terraform and GitHub Actions programmatic access to your AWS account, allowing them to create, read, and modify resources on your behalf.&lt;/p&gt;

&lt;p&gt;Follow these steps to create your AWS keys:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to your AWS account, either as an IAM user or a root user.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the AWS console, search for &lt;strong&gt;IAM&lt;/strong&gt; and select it.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcO-CYx12yo7p4pBW07gn_VsdbaA-mr7ttZISm506W_m0NE3_8kj9hwLfmvbgKU9DdKSOraPeM9NFlGBUmQSUN2F4UV0LVVrrPAHdiwjf0u5kmfbF0Rb1-oUFeW9ZsAc2F_q-ze_A%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcO-CYx12yo7p4pBW07gn_VsdbaA-mr7ttZISm506W_m0NE3_8kj9hwLfmvbgKU9DdKSOraPeM9NFlGBUmQSUN2F4UV0LVVrrPAHdiwjf0u5kmfbF0Rb1-oUFeW9ZsAc2F_q-ze_A%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="416"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Users&lt;/strong&gt; in the sidebar, and select the &lt;strong&gt;Create user&lt;/strong&gt; button.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter a username (e.g., &lt;code&gt;jumphost-terraform&lt;/code&gt;) and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfzzmKq2CaWP92nopUSMhYsvkKFj7qzGMBzrWRz54hbU-kQ-FEkZlFW8_ef6aBYYfYfTJ6iLJ9ZDlgJGnZ_nul5yUdfabbcz8iOrpIkw04yas8JozG6Z33zKXF1iQBktn3FB4AP%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfzzmKq2CaWP92nopUSMhYsvkKFj7qzGMBzrWRz54hbU-kQ-FEkZlFW8_ef6aBYYfYfTJ6iLJ9ZDlgJGnZ_nul5yUdfabbcz8iOrpIkw04yas8JozG6Z33zKXF1iQBktn3FB4AP%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="520"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the &lt;strong&gt;Attach policies directly&lt;/strong&gt; option.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdjDXYkwfFmCnQYcq2DqluBfO9ao-jImMrDQZERs7oK0KDThI5YqkAwGUHtTprC3U03Beyj_y6S1_5C10MMWPPBP9FT2qU1kr4aiP0al4YJfCNU7x_NfaQlfoKgLTDZmCtmeypV%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdjDXYkwfFmCnQYcq2DqluBfO9ao-jImMrDQZERs7oK0KDThI5YqkAwGUHtTprC3U03Beyj_y6S1_5C10MMWPPBP9FT2qU1kr4aiP0al4YJfCNU7x_NfaQlfoKgLTDZmCtmeypV%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="355"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Search for the &lt;strong&gt;AdministratorAccess&lt;/strong&gt; policy and select it. &lt;em&gt;(Note: This is just for the demo; in a real-world scenario, practice&lt;/em&gt; &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_permissions_least_privileges.html" rel="noopener noreferrer"&gt;&lt;em&gt;least privilege access&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.)&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdJT0xoKU5NXULg1mGDye-WPZ_Sg8dpxhHoOn3a5gIL8bMvIxyugkChZs4lLZSdOP9GcstCndiQIYgB8HZ52c33hOBr2HC3mJOqK5qhyzVULN1sBVzPKij6xsJzJbudLQbBWs6l6Q%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdJT0xoKU5NXULg1mGDye-WPZ_Sg8dpxhHoOn3a5gIL8bMvIxyugkChZs4lLZSdOP9GcstCndiQIYgB8HZ52c33hOBr2HC3mJOqK5qhyzVULN1sBVzPKij6xsJzJbudLQbBWs6l6Q%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="389"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Next&lt;/strong&gt;, review your user settings, and click &lt;strong&gt;Create user&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You should now see your user listed. Select the newly created user.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the &lt;strong&gt;Security credentials&lt;/strong&gt; tab.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXd47dQI0wmAzTM7yl0P6SxWOqwTXuyjJxSmrljBFLVxQ5wtECePuxGBgcr0PzH5EL3XSxZ1lDMVDxYwoMpOuOXJWW1v73GeYmLZOx8N_XAvMi8UAQ1_YxgYLmFgE6NF3df8DL-IUQ%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXd47dQI0wmAzTM7yl0P6SxWOqwTXuyjJxSmrljBFLVxQ5wtECePuxGBgcr0PzH5EL3XSxZ1lDMVDxYwoMpOuOXJWW1v73GeYmLZOx8N_XAvMi8UAQ1_YxgYLmFgE6NF3df8DL-IUQ%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="730"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Create access key&lt;/strong&gt; in the &lt;strong&gt;Access key&lt;/strong&gt; section.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfirab78wqTpQxr_9A0kbu8nbUQzL2bnCqcEoWVMPHpgJ1WkNaXzyQoDY3HXTGlnyaAoSkKBkVFI8-iJChKdlu8OSs2rkCzC_Zivm_U6MoQBVLc0_m3vXMQuhj5s99rJpE8-2Kz0Q%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfirab78wqTpQxr_9A0kbu8nbUQzL2bnCqcEoWVMPHpgJ1WkNaXzyQoDY3HXTGlnyaAoSkKBkVFI8-iJChKdlu8OSs2rkCzC_Zivm_U6MoQBVLc0_m3vXMQuhj5s99rJpE8-2Kz0Q%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="730"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;strong&gt;Third-party service&lt;/strong&gt;, check the confirmation box, and click &lt;strong&gt;Next&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXf-Sc55t-DSNwob1mxMvMlyNWk_Si6UwX6EaiPw8KMDalHqFKTNWBVBul-ejp846N6XLfa31B3pCoGOGsKAnkcJLHiVkdZZJhxQQw7lu8zYPSP1P4BpiPqbt0OMF6CF8DzjZ5X_LA%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXf-Sc55t-DSNwob1mxMvMlyNWk_Si6UwX6EaiPw8KMDalHqFKTNWBVBul-ejp846N6XLfa31B3pCoGOGsKAnkcJLHiVkdZZJhxQQw7lu8zYPSP1P4BpiPqbt0OMF6CF8DzjZ5X_LA%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="730"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optionally, add a description for the key.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Create access key&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy your &lt;strong&gt;Access key&lt;/strong&gt; and &lt;strong&gt;Secret access key&lt;/strong&gt;, and store them in a secure location (you won’t be able to view them again after leaving this page).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Done&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With that, you have created your Access and Secret access keys.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an S3 bucket
&lt;/h2&gt;

&lt;p&gt;Next, create an S3 bucket to store Terraform's state files, which represent the current state of your infrastructure. By default, Terraform stores these files locally, but using S3 ensures that the state is centralized and accessible. Follow these steps to create your S3 bucket:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In the AWS console, search for &lt;strong&gt;S3&lt;/strong&gt; and select it.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Create bucket&lt;/strong&gt; under the &lt;strong&gt;General purpose buckets&lt;/strong&gt; section.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcrZnCV1S0GxSQZ3W1kPQt4JPkUkpedZb3qPHm6ksutMAI9MST28BQNfUA8R_3wKLuwpGH0IfitUvb9FaOMlFtTLB-Rb2TznvNGer4JfC7wNzktZWBoAKI_qN9ENHy5usaqNn-5%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcrZnCV1S0GxSQZ3W1kPQt4JPkUkpedZb3qPHm6ksutMAI9MST28BQNfUA8R_3wKLuwpGH0IfitUvb9FaOMlFtTLB-Rb2TznvNGer4JfC7wNzktZWBoAKI_qN9ENHy5usaqNn-5%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="182"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose a globally unique name for your bucket.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfDSxQti2Mc0vei5O8wZ0BuB6bYwFre-Dzz5f_-OMk_99Uw-hPEQILizTe_YSoK7QQcWuazQvi5bpp6Q-x2DQR5fNhf1OqW-kcoqMveShUOQgR7LxKEq1fZ_oTi14T7319ZhQW_GA%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfDSxQti2Mc0vei5O8wZ0BuB6bYwFre-Dzz5f_-OMk_99Uw-hPEQILizTe_YSoK7QQcWuazQvi5bpp6Q-x2DQR5fNhf1OqW-kcoqMveShUOQgR7LxKEq1fZ_oTi14T7319ZhQW_GA%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="416"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;Create bucket&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
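&lt;p&gt;If you prefer the command line, the same bucket can be created with the AWS CLI (a sketch, assuming the CLI is installed and configured with the keys you created earlier; the bucket name below is a placeholder):&lt;/p&gt;

```shell
# Placeholder name: S3 bucket names are global, so pick your own unique one.
BUCKET_NAME="my-terraform-state-bucket-example"

# Bucket names must be 3-63 characters: lowercase letters, digits, dots, hyphens.
valid_bucket_name() {
  echo "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}

if valid_bucket_name "$BUCKET_NAME"; then
  echo "bucket name looks valid"
  # Uncomment to create the bucket (requires configured AWS credentials):
  # aws s3api create-bucket --bucket "$BUCKET_NAME" --region us-east-1
fi
```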

&lt;h2&gt;
  
  
  Create an SSH key
&lt;/h2&gt;

&lt;p&gt;You’ll need an SSH key to securely access your EC2 instance. Here's how to create one:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open your terminal, create the &lt;code&gt;~/.ssh&lt;/code&gt; directory if it doesn’t already exist, and navigate into it:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/.ssh  &lt;span class="c"&gt;# Navigate to the directory if it exists&lt;/span&gt;
&lt;span class="nb"&gt;mkdir&lt;/span&gt; ~/.ssh  &lt;span class="c"&gt;# Create the directory if it doesn't exist&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Run the following command to generate your SSH key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh-keygen &lt;span class="nt"&gt;-t&lt;/span&gt; ed25519
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. When prompted for the file location to save the key, enter a name for the key (e.g., &lt;code&gt;key&lt;/code&gt;) and press &lt;strong&gt;Enter&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;4. Press &lt;strong&gt;Enter&lt;/strong&gt; for the rest of the prompts to accept the defaults. Your SSH key is now generated.&lt;/p&gt;

&lt;p&gt;5. To view and copy your public key, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat &amp;lt;name of the key&amp;gt;.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, if you had entered &lt;code&gt;key&lt;/code&gt; when prompted, your command would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat key.pub
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy and save the contents of this file; it is your public key, and you will need it to provision and SSH into your instance.&lt;/p&gt;
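&lt;p&gt;For reference, the steps above can be collapsed into a single non-interactive command (a sketch; the file name &lt;code&gt;key&lt;/code&gt;, the empty passphrase, and the comment are illustrative choices, not requirements):&lt;/p&gt;

```shell
# Generate an ed25519 key pair at ~/.ssh/key with no passphrase (-N ""),
# then print the public half. The comment (-C) simply labels the key.
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -f ~/.ssh/key -N "" -C "jumphost" -q
cat ~/.ssh/key.pub
```

&lt;p&gt;Note that an empty passphrase is convenient for demos; for anything longer-lived, set a passphrase and use an SSH agent.&lt;/p&gt;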

&lt;h2&gt;
  
  
  Add the credentials to your GitHub secrets
&lt;/h2&gt;

&lt;p&gt;Now that you have your AWS access key, secret access key, and S3 bucket name, you need to add them as secrets in GitHub Actions for your CI/CD pipeline.&lt;br&gt;
Follow these steps to add your credentials to GitHub Secrets:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to your GitHub account.&lt;/li&gt;
&lt;li&gt;Navigate to your cloned repository.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Settings&lt;/strong&gt; tab.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left sidebar, click &lt;strong&gt;Secrets and variables&lt;/strong&gt;, then select &lt;strong&gt;Actions&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcv3MVBGW0bSvAiNEVKfXy9WbK32FqpFRLb__hNa9iJDsDiHOns13sa7plEtNgG6ggMa1D6RZM4DPRrmnLsmWX8GJf9ohZ-qBvNvTJH5FdMG6Y5rrYAnobzPGOP6ptxCeOl5ULk%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcv3MVBGW0bSvAiNEVKfXy9WbK32FqpFRLb__hNa9iJDsDiHOns13sa7plEtNgG6ggMa1D6RZM4DPRrmnLsmWX8GJf9ohZ-qBvNvTJH5FdMG6Y5rrYAnobzPGOP6ptxCeOl5ULk%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="209"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;New repository secret&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Add each of the following secrets with the corresponding values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: AWS_ACCESS_KEY_ID | &lt;strong&gt;Secret&lt;/strong&gt;: your AWS access key&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: AWS_SECRET_ACCESS_KEY | &lt;strong&gt;Secret&lt;/strong&gt;: your AWS secret access key&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Name&lt;/strong&gt;: BUCKET_TF | &lt;strong&gt;Secret&lt;/strong&gt;: the name of your S3 bucket&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After entering each secret, click &lt;strong&gt;Add secret&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcJa9xorEewJfcTtJPcr6Qgou6YyheoPEvqu2c01fuS_7kA0tmQwSAtgmxbI9CN5-TDWFd8FfCBHLSIS1QMOdSdN8SZYZE0dh7baBSqDWW47G7COF7OW0C127jDMDCfkGqejBw7dw%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcJa9xorEewJfcTtJPcr6Qgou6YyheoPEvqu2c01fuS_7kA0tmQwSAtgmxbI9CN5-TDWFd8FfCBHLSIS1QMOdSdN8SZYZE0dh7baBSqDWW47G7COF7OW0C127jDMDCfkGqejBw7dw%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="668"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
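&lt;p&gt;Inside the workflow, these secrets are read through the &lt;code&gt;secrets&lt;/code&gt; context. Here is a minimal sketch of how that might look in &lt;code&gt;.github/workflows/terraform.yaml&lt;/code&gt; (the step layout and the &lt;code&gt;-backend-config&lt;/code&gt; flag are illustrative; the full pipeline is defined later in this series):&lt;/p&gt;

```yaml
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

steps:
  - uses: actions/checkout@v4
  - name: Terraform init
    working-directory: terraform
    run: terraform init -backend-config="bucket=${{ secrets.BUCKET_TF }}"
```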
&lt;h1&gt;
  
  
  Provisioning the jumphost with Terraform
&lt;/h1&gt;

&lt;p&gt;To provision the jumphost, we’ll use Terraform with a modular and organized setup. The configuration is divided into three core files: &lt;code&gt;main.tf&lt;/code&gt;, &lt;code&gt;variables.tf&lt;/code&gt;, and &lt;code&gt;outputs.tf&lt;/code&gt;, each serving a specific purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;main.tf&lt;/code&gt; defines the infrastructure resources. In this case, it describes the EC2 instance that Terraform will provision.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt; declares reusable input variables, allowing you to easily customize the configuration in &lt;code&gt;main.tf&lt;/code&gt;. This makes your setup more flexible and maintainable.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;outputs.tf&lt;/code&gt; specifies the output values you want Terraform to return after provisioning, such as public IPs or instance IDs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next step, you’ll copy the relevant code snippets into each of these files.&lt;/p&gt;
&lt;h2&gt;
  
  
  Define your Terraform variables in your &lt;code&gt;variables.tf&lt;/code&gt; file
&lt;/h2&gt;

&lt;p&gt;Create your Terraform variables by copying and pasting these variables in your &lt;code&gt;variables.tf&lt;/code&gt; file, replacing the &lt;code&gt;&amp;lt;your public key&amp;gt;&lt;/code&gt; with the public key you generated earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"region"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"vpc_cidr"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"10.0.0.0/16"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"instance_type"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.micro"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"ami_name_filter"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"allowed_ssh_cidr"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"key_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_key"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"public_key"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Your SSH public key"&lt;/span&gt;
 &lt;span class="nx"&gt;default&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;lt;your public key&amp;gt;"&lt;/span&gt; &lt;span class="c1"&gt;# Add your public key here&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"environment"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"DevOpsProject"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"owner"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;default&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Amarachi"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s a breakdown of what each variable does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;region:&lt;/strong&gt; Specifies the AWS region where resources will be deployed. Default is &lt;code&gt;us-east-1&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;vpc_cidr&lt;/strong&gt;: Sets the IP range for the Virtual Private Cloud (VPC). Default is &lt;code&gt;10.0.0.0/16&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;instance_type&lt;/strong&gt;: Defines the EC2 instance type. We’re using &lt;code&gt;t3.micro&lt;/code&gt; as a cost-effective option suitable for lightweight tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ami_name_filter&lt;/strong&gt;: Filters the Amazon Machine Image (AMI) for Ubuntu 22.04 (Jammy). Terraform will pick the latest version matching this pattern.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;allowed_ssh_cidr&lt;/strong&gt;: Determines which IP ranges are allowed to SSH into the instance. The default (&lt;code&gt;0.0.0.0/0&lt;/code&gt;) allows access from anywhere, which is fine for testing but should be tightened for production environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;key_name&lt;/strong&gt;: The name of your SSH key pair used to access the EC2 instance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;public_key&lt;/strong&gt;: Defines a default SSH public key value that will be used to provision access to the jumphost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;environment&lt;/strong&gt;: A tag to help identify which environment (e.g., Dev, Staging, Prod) the resources belong to.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;owner&lt;/strong&gt;: Tags resources with the owner’s name for accountability and resource tracking.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Next, you’ll continue by configuring the infrastructure in &lt;code&gt;main.tf&lt;/code&gt; and defining outputs in &lt;code&gt;outputs.tf&lt;/code&gt;.&lt;/p&gt;
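&lt;p&gt;Because every variable above has a default, you can also override values without editing &lt;code&gt;variables.tf&lt;/code&gt;. One common approach is a &lt;code&gt;terraform.tfvars&lt;/code&gt; file, which Terraform loads automatically (a sketch with illustrative values; this file is not part of the project structure above):&lt;/p&gt;

```terraform
# terraform.tfvars (illustrative values; variables not set here keep their defaults)
region           = "us-east-1"
instance_type    = "t3.micro"
allowed_ssh_cidr = "203.0.113.0/24"  # example: restrict SSH to your own IP range
owner            = "YourName"
```

&lt;p&gt;Keep files like this out of version control if they ever contain sensitive values.&lt;/p&gt;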

&lt;h2&gt;
  
  
  Set your EC2 instance configuration using Terraform
&lt;/h2&gt;

&lt;p&gt;Paste the following code into your &lt;code&gt;main.tf&lt;/code&gt; file, replacing &lt;code&gt;&amp;lt;name of the bucket&amp;gt;&lt;/code&gt; with the actual name of your bucket. This will define the properties of your EC2 instance.&lt;br&gt;
&lt;a href="https://gist.github.com/Iheanacho-ai/ff01f2e0ff30e30b95a6a4b5576c73d8" rel="noopener noreferrer"&gt;https://gist.github.com/Iheanacho-ai/ff01f2e0ff30e30b95a6a4b5576c73d8&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Here is a structured breakdown of what each Terraform block in the &lt;code&gt;main.tf&lt;/code&gt; file does (if you're already familiar with Terraform, feel free to skip ahead to the &lt;em&gt;Define your outputs in the &lt;code&gt;outputs.tf&lt;/code&gt; file&lt;/em&gt; section):&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform block&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
     &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"~&amp;gt; 5.0"&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"amara-jumphost"&lt;/span&gt;
   &lt;span class="nx"&gt;key&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform.tfstate"&lt;/span&gt;
   &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform block above defines two things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Provider Configuration&lt;/strong&gt;: The &lt;code&gt;required_providers&lt;/code&gt; block specifies that the AWS provider will be used.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remote State Backend&lt;/strong&gt;: The &lt;code&gt;backend "s3"&lt;/code&gt; block stores the current Terraform state file in an S3 bucket, the same bucket you created at the start of the project (the example uses the author's bucket name, &lt;code&gt;amara-jumphost&lt;/code&gt;; substitute your own).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS provider&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# This is the AWS provider&lt;/span&gt;
&lt;span class="k"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;provider "aws"&lt;/code&gt; block configures the AWS provider to operate in the region specified by the &lt;code&gt;region&lt;/code&gt; variable in your &lt;code&gt;variables.tf&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ubuntu AMI&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Get latest Ubuntu AMI&lt;/span&gt;
&lt;span class="k"&gt;data&lt;/span&gt; &lt;span class="s2"&gt;"aws_ami"&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;most_recent&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

 &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"name"&lt;/span&gt;
   &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ami_name_filter&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="nx"&gt;filter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"virtualization-type"&lt;/span&gt;
   &lt;span class="nx"&gt;values&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"hvm"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="nx"&gt;owners&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"099720109477"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code block above dynamically fetches the latest Ubuntu AMI available in AWS. Here is a breakdown of the &lt;code&gt;data&lt;/code&gt; block:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;data "aws_ami" "ubuntu":&lt;/strong&gt; Declares a data source block to look up an AWS AMI named "ubuntu".&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;most_recent = true&lt;/strong&gt;: Ensures that Terraform selects the most recently created AMI from all the results returned by the filters.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;filter block #1 — Name filter&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;name = "name"&lt;/strong&gt;: Filters AMIs by name.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;values = [var.ami_name_filter]&lt;/strong&gt;: Uses the &lt;code&gt;ami_name_filter&lt;/code&gt; variable defined in the &lt;code&gt;variables.tf&lt;/code&gt; file to match AMI names for the filter.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;filter block #2 — Virtualization type filter&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;name = "virtualization-type"&lt;/strong&gt;: Filters by virtualization type.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;values = ["hvm"]&lt;/strong&gt;: Ensures only &lt;a href="https://docs.rightscale.com/faq/What_is_Hardware_Virtual_Machine_or_HVM.html" rel="noopener noreferrer"&gt;Hardware Virtual Machine (HVM)&lt;/a&gt; AMIs are returned, which is the standard for most modern EC2 instances.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;owners = ["099720109477"]&lt;/strong&gt;: Limits the search to AMIs owned by &lt;a href="https://documentation.ubuntu.com/aws/en/latest/aws-how-to/instances/find-ubuntu-images/" rel="noopener noreferrer"&gt;Canonical, the publisher of Ubuntu&lt;/a&gt;. This ID is Canonical’s official AWS account.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Networking Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This setup defines the networking infrastructure in which your EC2 instance will live. It includes the following key components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Virtual Private Cloud (VPC)&lt;/strong&gt;: An isolated, configurable network that provides foundational connectivity for your AWS resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subnet&lt;/strong&gt;: A segmented IP range within the VPC that dictates availability zones and routing for your EC2 instance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internet Gateway&lt;/strong&gt;: Allows the EC2 instance to send and receive traffic from the internet by routing it in and out of the VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's break down how each of these components is defined in Terraform.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_vpc"&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_vpc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;cidr_block&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_cidr&lt;/span&gt;
 &lt;span class="nx"&gt;enable_dns_hostnames&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
 &lt;span class="nx"&gt;enable_dns_support&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This resource defines a dedicated VPC named &lt;code&gt;jumphost_vpc&lt;/code&gt; with the CIDR block specified in your &lt;code&gt;variables.tf&lt;/code&gt; file. DNS hostnames and DNS support are enabled to allow for easier internal and external name resolution.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subnet
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_subnet"&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_subnet"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;cidr_block&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;cidrsubnet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cidr_block&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
 &lt;span class="nx"&gt;vpc_id&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform block creates a subnet named &lt;code&gt;jumphost_subnet&lt;/code&gt; within the &lt;code&gt;jumphost_vpc&lt;/code&gt; VPC. It calculates the subnet’s CIDR block by dividing the VPC’s CIDR range into 16 smaller subnets (by increasing the subnet mask by 4 bits) and selects the second subnet (index 1). The subnet is associated with the VPC by referencing its ID. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Terraform automatically handles the creation and referencing of these resource IDs.&lt;/p&gt;
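&lt;p&gt;You can reproduce the &lt;code&gt;cidrsubnet&lt;/code&gt; math outside Terraform to confirm exactly which range the subnet receives. The sketch below assumes a &lt;code&gt;10.0.0.0/16&lt;/code&gt; VPC CIDR (your &lt;code&gt;var.vpc_cidr&lt;/code&gt; may differ) and uses Python's &lt;code&gt;ipaddress&lt;/code&gt; module to do the same split:&lt;/p&gt;

```shell
# cidrsubnet("10.0.0.0/16", 4, 1): add 4 bits to the mask (/16 -> /20),
# which yields 16 subnets, then select index 1.
python3 -c "
import ipaddress
vpc = ipaddress.ip_network('10.0.0.0/16')      # assumed vpc_cidr
subnets = list(vpc.subnets(prefixlen_diff=4))  # 16 x /20 subnets
print(len(subnets), subnets[1])
"
# prints: 16 10.0.16.0/20
```

&lt;p&gt;With that assumed CIDR, the jumphost subnet would be &lt;code&gt;10.0.16.0/20&lt;/code&gt;.&lt;/p&gt;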

&lt;ul&gt;
&lt;li&gt;Internet Gateway and Routing
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_internet_gateway"&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_igw"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

 &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route_table"&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_route_table"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

 &lt;span class="nx"&gt;route&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;cidr_block&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;
   &lt;span class="nx"&gt;gateway_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_internet_gateway&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_igw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route_table_association"&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_route_table_assoc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;subnet_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
 &lt;span class="nx"&gt;route_table_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route_table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_route_table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform configuration enables internet access for a subnet using the following AWS networking components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;aws_internet_gateway.jumphost_igw&lt;/strong&gt;: Creates an Internet Gateway and attaches it to the specified VPC using &lt;code&gt;vpc_id = aws_vpc.jumphost_vpc.id&lt;/code&gt;. This allows the VPC to communicate with the internet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;aws_route_table.jumphost_route_table&lt;/strong&gt;: Creates a route table for the VPC. It includes a route that directs all outbound traffic &lt;code&gt;(0.0.0.0/0)&lt;/code&gt; through the Internet Gateway created earlier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;aws_route_table_association.jumphost_route_table_assoc&lt;/strong&gt;: Associates the route table with a specific subnet, enabling instances within that subnet to use the routing rules, i.e., to access the internet via the Internet Gateway.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt; &lt;span class="nx"&gt;subnet_id&lt;/span&gt;      &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_subnet&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
 &lt;span class="nx"&gt;route_table_id&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route_table&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_route_table&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Security Group&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Security Group&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_security_group"&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_SG"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_SG"&lt;/span&gt;
 &lt;span class="nx"&gt;vpc_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

 &lt;span class="nx"&gt;ingress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;allowed_ssh_cidr&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
   &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;
   &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;
   &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"tcp"&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="nx"&gt;egress&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;from_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
   &lt;span class="nx"&gt;to_port&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
   &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;-1&lt;/span&gt;
   &lt;span class="nx"&gt;cidr_blocks&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"0.0.0.0/0"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform block creates a Security Group in AWS named &lt;code&gt;jumphost_SG&lt;/code&gt; that will be associated with the &lt;code&gt;jumphost_vpc&lt;/code&gt; VPC. With this security group, you specify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ingress Rule (Incoming traffic)&lt;/strong&gt;: This rule allows SSH access (port 22) from the IP range specified by the variable &lt;code&gt;var.allowed_ssh_cidr&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Egress Rule (Outgoing traffic)&lt;/strong&gt;: This allows all outbound traffic to any IP address &lt;code&gt;(0.0.0.0/0)&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
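&lt;p&gt;The tighter &lt;code&gt;var.allowed_ssh_cidr&lt;/code&gt; is, the smaller your attack surface. A sketch of how the variable might be declared in &lt;code&gt;variables.tf&lt;/code&gt; (the default address is a placeholder from the documentation range; substitute your own IP as a &lt;code&gt;/32&lt;/code&gt;):&lt;/p&gt;

```terraform
variable "allowed_ssh_cidr" {
  description = "CIDR block allowed to reach the jumphost over SSH; ideally a single /32"
  type        = string
  default     = "203.0.113.10/32" # placeholder documentation address, not a real value
}
```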

&lt;p&gt;&lt;strong&gt;Key pair&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_key_pair"&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_key"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;key_name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key_name&lt;/span&gt;
 &lt;span class="nx"&gt;public_key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_key&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform block registers the SSH public key you generated earlier with AWS as an EC2 key pair, which the instance later references so that you can connect to it via SSH.&lt;br&gt;
Here’s a breakdown of the components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;resource "aws_key_pair" "jumphost_key"&lt;/strong&gt;: Defines a new AWS EC2 key pair resource named &lt;code&gt;jumphost_key&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;key_name = var.key_name&lt;/strong&gt;: Sets the key pair’s name using the value provided in the &lt;code&gt;key_name&lt;/code&gt; variable defined in &lt;code&gt;variables.tf&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;public_key = var.public_key&lt;/strong&gt;: Passes your previously created public SSH key to AWS using the &lt;code&gt;public_key&lt;/code&gt; variable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
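&lt;p&gt;If you have not yet generated the key pair that &lt;code&gt;var.public_key&lt;/code&gt; refers to, a minimal sketch is below (the file name &lt;code&gt;jumphost_key&lt;/code&gt; is an assumption; choose your own):&lt;/p&gt;

```shell
# Generate an Ed25519 key pair with no passphrase (-N "") for the jumphost.
ssh-keygen -t ed25519 -N "" -f ./jumphost_key -C "jumphost"

# The contents of the .pub file are what you supply as var.public_key;
# the private half stays on your machine for the ssh command.
cat ./jumphost_key.pub
```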

&lt;p&gt;&lt;strong&gt;IAM Role &amp;amp; Instance Profile&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_role"&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_role"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_role"&lt;/span&gt;

 &lt;span class="nx"&gt;assume_role_policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
   &lt;span class="nx"&gt;Version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;
   &lt;span class="nx"&gt;Statement&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
     &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="nx"&gt;Action&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"sts:AssumeRole"&lt;/span&gt;
       &lt;span class="nx"&gt;Effect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
       &lt;span class="nx"&gt;Sid&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
       &lt;span class="nx"&gt;Principal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
         &lt;span class="nx"&gt;Service&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ec2.amazonaws.com"&lt;/span&gt;
       &lt;span class="p"&gt;}&lt;/span&gt;
     &lt;span class="p"&gt;},&lt;/span&gt;
   &lt;span class="p"&gt;]&lt;/span&gt;
 &lt;span class="p"&gt;})&lt;/span&gt;

 &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;tag-key&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_tag_value"&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# This is an AWS IAM policy&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_role_policy_attachment"&lt;/span&gt; &lt;span class="s2"&gt;"administrator_access_attach"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;role&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
 &lt;span class="nx"&gt;policy_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::aws:policy/AdministratorAccess"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_instance_profile"&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_instance_profile"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_instance_profile"&lt;/span&gt;
 &lt;span class="nx"&gt;role&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform code block sets up IAM permissions for your jump host with three component blocks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;aws_iam_role "jumphost_role"&lt;/strong&gt;: The role includes a trust policy that allows EC2 instances to assume the role using the &lt;code&gt;sts:AssumeRole&lt;/code&gt; action. The trusted service is &lt;code&gt;ec2.amazonaws.com&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;aws_iam_role_policy_attachment "administrator_access_attach"&lt;/strong&gt;: This block attaches the AWS-managed &lt;strong&gt;AdministratorAccess&lt;/strong&gt; policy to the &lt;code&gt;jumphost_role&lt;/code&gt;, granting it full administrative permissions. This is convenient for a walkthrough, but for production use, scope the policy down to least privilege.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;aws_iam_instance_profile "jumphost_instance_profile"&lt;/strong&gt;: This block creates an instance profile that links the IAM role to the jumphost EC2 instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This profile can be attached to EC2 instances so they can inherit the role’s permissions at runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EC2 Instance&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;# EC2 Instance&lt;/span&gt;
&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_instance"&lt;/span&gt; &lt;span class="s2"&gt;"jumphost"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;ami&lt;/span&gt;                         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_ami&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ubuntu&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
 &lt;span class="nx"&gt;instance_type&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;instance_type&lt;/span&gt;
 &lt;span class="nx"&gt;associate_public_ip_address&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
 &lt;span class="nx"&gt;key_name&lt;/span&gt;                    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_key_pair&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key_name&lt;/span&gt;
 &lt;span class="nx"&gt;vpc_security_group_ids&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_SG&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
 &lt;span class="nx"&gt;subnet_id&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
 &lt;span class="nx"&gt;iam_instance_profile&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_instance_profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost_instance_profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;

 &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;Name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Jumphost"&lt;/span&gt;
   &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;environment&lt;/span&gt;
   &lt;span class="nx"&gt;Owner&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;owner&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="nx"&gt;user_data&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"../scripts/jumphost_init.sh"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform block provisions the EC2 instance that will serve as your jumphost. It brings together all the components you defined earlier and ties them into a single resource:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ami = data.aws_ami.ubuntu.id&lt;/strong&gt;: Uses the Ubuntu AMI you previously retrieved to launch the instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;instance_type = var.instance_type&lt;/strong&gt;: Specifies the EC2 instance type based on your provided variable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;associate_public_ip_address = true&lt;/strong&gt;: Assigns a public IP address to the instance, allowing direct access over the internet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;key_name = aws_key_pair.jumphost_key.key_name&lt;/strong&gt;: Associates the instance with the SSH key pair for secure remote access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;vpc_security_group_ids = [aws_security_group.jumphost_SG.id]&lt;/strong&gt;: Attaches the instance to the predefined security group, controlling inbound and outbound traffic rules.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;subnet_id = aws_subnet.jumphost_subnet.id:&lt;/strong&gt; Places the instance in the designated subnet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;iam_instance_profile = aws_iam_instance_profile.jumphost_instance_profile.name&lt;/strong&gt;: Applies the IAM instance profile, granting the instance appropriate permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;tags = {...}&lt;/strong&gt;: Adds metadata for organizational purposes—such as the instance name, environment, and owner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;user_data = file("../scripts/jumphost_init.sh")&lt;/strong&gt;: Runs a startup script upon instance launch, allowing for automated configuration and initialization.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, this block ties everything together to create a fully functional jumphost with networking, access control, permissions, and startup configuration all pre-configured.&lt;/p&gt;

&lt;h2&gt;
  
  
  Define your outputs in the &lt;code&gt;outputs.tf&lt;/code&gt; file
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;outputs.tf&lt;/code&gt; file allows you to expose key information about your infrastructure after running &lt;code&gt;terraform apply&lt;/code&gt;. These outputs make it easy to retrieve critical values such as public IP addresses or instance IDs, useful for debugging, connectivity, or automation.&lt;/p&gt;

&lt;p&gt;To define outputs for your jumphost, paste the following code into your &lt;code&gt;outputs.tf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_public_ip"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The public IP of the jumphost"&lt;/span&gt;
 &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;public_ip&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"jumphost_ssm_instance_id"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="nx"&gt;description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Instance ID for use with AWS SSM"&lt;/span&gt;
 &lt;span class="nx"&gt;value&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_instance&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jumphost&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down what each output does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;output "jumphost_public_ip"&lt;/strong&gt;: This outputs the public IP address of the EC2 jumphost. It's particularly useful when you need to SSH into the instance or connect via tools that require the IP.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;description&lt;/strong&gt;: Explains what this output exposes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;value&lt;/strong&gt;: Points to &lt;code&gt;aws_instance.jumphost.public_ip&lt;/code&gt;, which fetches the actual IP address of the jumphost.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;output "jumphost_ssm_instance_id"&lt;/strong&gt;: This outputs the EC2 instance ID, a key value if you plan to use AWS Systems Manager (SSM) for session-based access, allowing you to connect without SSH keys.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;description&lt;/strong&gt;: Explains what this output exposes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;value&lt;/strong&gt;: Refers to &lt;code&gt;aws_instance.jumphost.id&lt;/code&gt;, which returns the unique identifier of the EC2 instance. This is useful for SSM sessions and automation scripts.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Creating the initialization script
&lt;/h1&gt;

&lt;p&gt;Once your Terraform configurations for the EC2 instance are in place, the next step is to create an initialization script that runs after the instance is launched. This script prepares the jumphost for interacting with the AWS EKS cluster, whether for creating clusters or running monitoring tools like Prometheus and Grafana, by installing the necessary dependencies.&lt;/p&gt;

&lt;p&gt;The required tools are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;kubectl&lt;/strong&gt;: Command-line tool for managing Kubernetes clusters and workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;helm&lt;/strong&gt;: Package manager for Kubernetes, used to install charts such as Prometheus, Grafana, nginx-ingress, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;eksctl&lt;/strong&gt;: Utility for creating and managing EKS clusters, including the underlying infrastructure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;awscli&lt;/strong&gt;: AWS Command Line Interface for authenticating and managing AWS resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you're not planning to follow the full tutorial series, you may safely skip this section.&lt;/p&gt;

&lt;p&gt;To install these dependencies, copy and paste the following script into your &lt;code&gt;scripts/jumphost_init.sh&lt;/code&gt; file:&lt;br&gt;
&lt;a href="https://gist.github.com/Iheanacho-ai/cd3833e34add7d1b7cc67257dbaef104" rel="noopener noreferrer"&gt;https://gist.github.com/Iheanacho-ai/cd3833e34add7d1b7cc67257dbaef104&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The script prepares the instance with tools and services needed for interacting with and managing an Amazon EKS (Kubernetes) cluster. &lt;/p&gt;

&lt;p&gt;Here's a breakdown of what it does, step by step:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Script setup and logging&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="c"&gt;# For Ubuntu 22.04&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="c"&gt;# Exit script immediately on first error.&lt;/span&gt;

&lt;span class="c"&gt;# Log all output to file&lt;/span&gt;
&lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; /var/log/init-script.log 2&amp;gt;&amp;amp;1

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Starting initialization script..."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What this does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;#!/bin/bash&lt;/strong&gt;: Specifies Bash as the interpreter for this script.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;set -e:&lt;/strong&gt; Ensures the script halts on any error, preventing unintended consequences.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;exec &amp;gt;&amp;gt; /var/log/init-script.log 2&amp;gt;&amp;amp;1&lt;/strong&gt;: Logs all output, both standard and error, to &lt;code&gt;/var/log/init-script.log&lt;/code&gt; for easier troubleshooting.&lt;/li&gt;
&lt;/ul&gt;
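&lt;p&gt;The combined effect of &lt;code&gt;set -e&lt;/code&gt; and the &lt;code&gt;exec&lt;/code&gt; redirect is easy to see in miniature. The file paths below are only for illustration:&lt;/p&gt;

```shell
# Write a tiny script that uses the same pattern, then run it.
cat > /tmp/demo_init.sh <<'EOF'
#!/bin/bash
set -e
exec >> /tmp/demo_init.log 2>&1   # everything below lands in the log file
echo "step 1 ok"
false                             # a failing command stops the script here
echo "step 2 never runs"
EOF

rm -f /tmp/demo_init.log
bash /tmp/demo_init.sh || true    # the script exits nonzero at 'false'
cat /tmp/demo_init.log            # the log contains only "step 1 ok"
```

&lt;p&gt;Because the script aborted at the failing command, only the first echo reached the log, and the log itself tells you exactly how far initialization got.&lt;/p&gt;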

&lt;p&gt;&lt;strong&gt;Update the operating system&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Update system&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt upgrade &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script above updates the system's package lists and upgrades all installed packages to their latest versions. This step helps prevent compatibility or security issues during subsequent installations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install AWS CLI&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install AWS CLI&lt;/span&gt;
curl &lt;span class="s2"&gt;"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"awscliv2.zip"&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;unzip &lt;span class="nt"&gt;-y&lt;/span&gt;
unzip awscliv2.zip
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./aws/install
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"AWS CLI installed. Remember to configure credentials with 'aws configure'"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The bash script block above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Downloads the AWS CLI installation archive.&lt;/li&gt;
&lt;li&gt;Installs the unzip utility if it’s not already present.&lt;/li&gt;
&lt;li&gt;Extracts the archive and installs the CLI.&lt;/li&gt;
&lt;li&gt;Prompts you to configure your credentials for AWS access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Install kubectl (Kubernetes CLI)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install Kubectl&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;curl &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;curl &lt;span class="nt"&gt;-LO&lt;/span&gt; &lt;span class="s2"&gt;"https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"&lt;/span&gt;
&lt;span class="nb"&gt;sudo chmod&lt;/span&gt; +x kubectl
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;kubectl /usr/local/bin/
kubectl version &lt;span class="nt"&gt;--client&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script installs the Kubernetes command-line tool, kubectl. It allows you to interact with and manage Kubernetes resources within your EKS cluster.&lt;/p&gt;
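&lt;p&gt;After the EKS cluster exists, kubectl still needs a kubeconfig entry pointing at it. A quick sketch, where the cluster name and region are placeholders for your own values:&lt;/p&gt;

```shell
# Write a kubeconfig entry for the cluster (placeholder name and region)
aws eks update-kubeconfig --name my-eks-cluster --region us-east-1

# Confirm kubectl can reach the cluster
kubectl get nodes
```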

&lt;p&gt;&lt;strong&gt;Install eksctl&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--silent&lt;/span&gt; &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="s2"&gt;"https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;_amd64.tar.gz"&lt;/span&gt; | &lt;span class="nb"&gt;tar &lt;/span&gt;xz &lt;span class="nt"&gt;-C&lt;/span&gt; /tmp
&lt;span class="nb"&gt;sudo mv&lt;/span&gt; /tmp/eksctl /usr/local/bin
eksctl version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script downloads the latest &lt;code&gt;eksctl&lt;/code&gt; release for your operating system, extracts it to &lt;code&gt;/tmp&lt;/code&gt;, moves the binary into your PATH, and prints the installed version. &lt;code&gt;eksctl&lt;/code&gt; is the official CLI for creating and managing EKS clusters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Helm&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;snap &lt;span class="nb"&gt;install &lt;/span&gt;helm &lt;span class="nt"&gt;--classic&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs Helm, a powerful package manager for Kubernetes. You'll use Helm to deploy applications like Prometheus, Grafana, and NGINX Ingress using pre-configured packages called “charts.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add Helm repositories&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These commands add external Helm repositories to your environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;prometheus-community&lt;/strong&gt;: for monitoring tools like Prometheus&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;grafana&lt;/strong&gt;: for powerful visualization dashboards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ingress-nginx&lt;/strong&gt;: for managing external access to your Kubernetes services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally, helm repo update refreshes the list of available charts so you can install the latest versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Prometheus&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;prometheus prometheus-community/kube-prometheus-stack &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs Prometheus using the kube-prometheus-stack Helm chart into the monitoring namespace. If the namespace doesn't already exist, it will be created automatically.&lt;/p&gt;
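&lt;p&gt;To confirm the stack is healthy, you can list the pods and temporarily port-forward the Prometheus service. The service name below follows the chart's default naming for a release called &lt;code&gt;prometheus&lt;/code&gt;; check &lt;code&gt;kubectl get svc -n monitoring&lt;/code&gt; if yours differs:&lt;/p&gt;

```shell
kubectl get pods -n monitoring

# Forward the Prometheus UI to http://localhost:9090
kubectl port-forward -n monitoring svc/prometheus-kube-prometheus-prometheus 9090
```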

&lt;p&gt;&lt;strong&gt;Install Grafana&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;grafana grafana/grafana &lt;span class="nt"&gt;--namespace&lt;/span&gt; monitoring &lt;span class="nt"&gt;--create-namespace&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs Grafana in the same monitoring namespace. Grafana provides a powerful dashboard interface to visualize metrics collected by Prometheus.&lt;/p&gt;
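&lt;p&gt;The chart generates an admin password and stores it in a Kubernetes Secret. Assuming the default secret name for a release called &lt;code&gt;grafana&lt;/code&gt;, you can retrieve it like this:&lt;/p&gt;

```shell
# Decode the generated admin password from the chart's Secret
kubectl get secret -n monitoring grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode; echo

# Then forward the UI to http://localhost:3000 and log in as "admin"
kubectl port-forward -n monitoring svc/grafana 3000:80
```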

&lt;p&gt;&lt;strong&gt;Install NGINX Ingress Controller&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;ingress-nginx ingress-nginx/ingress-nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The command above installs the NGINX Ingress Controller, which acts as a gateway for routing external HTTP and HTTPS traffic to services running inside your EKS cluster.&lt;/p&gt;
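&lt;p&gt;To actually route traffic through the controller, each service needs an Ingress resource. A minimal sketch, assuming a hypothetical Service named &lt;code&gt;quiz-app&lt;/code&gt; listening on port 80 and a placeholder hostname:&lt;/p&gt;

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: quiz-app-ingress
spec:
  ingressClassName: nginx    # handled by the controller installed above
  rules:
    - host: app.example.com  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: quiz-app  # hypothetical Service
                port:
                  number: 80
```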

&lt;p&gt;&lt;strong&gt;Finish Script&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo update
echo "Initialization script completed successfully."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This section:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Updates your local Helm chart repository cache, ensuring access to the latest versions of available charts.&lt;/li&gt;
&lt;li&gt;Prints a confirmation message to indicate that the initialization script has finished running successfully.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Automating Terraform deployments with GitHub Actions
&lt;/h1&gt;

&lt;p&gt;Once you’ve written your Terraform configuration and initialization script, the next step is to automate the deployment process using GitHub Actions. This ensures that any changes made to your Terraform files are automatically applied to your infrastructure, keeping everything up to date without manual intervention.&lt;/p&gt;

&lt;p&gt;To set this up, create a GitHub Actions workflow by copying the following YAML snippet into your &lt;code&gt;.github/workflows/terraform.yaml&lt;/code&gt; file:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/Iheanacho-ai/9d71fe4759dc8621de97967958cd60a5" rel="noopener noreferrer"&gt;https://gist.github.com/Iheanacho-ai/9d71fe4759dc8621de97967958cd60a5&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Here is a breakdown of the GitHub Actions pipeline:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow Name:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Jumphost Configuration&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This line gives your workflow a descriptive name, "Terraform Jumphost Configuration," which will be visible in your GitHub Actions tab.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Triggers (on):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
   &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform/**&lt;/span&gt;
 &lt;span class="na"&gt;pull_request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;main&lt;/span&gt;
   &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform/**&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This section defines when this workflow will be automatically triggered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;push&lt;/strong&gt;: This means the workflow will run when code is pushed to the repository.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;branches: - main&lt;/strong&gt;: Specifically, it will only trigger when commits are pushed to the main branch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;paths: - terraform/**&lt;/strong&gt;: It only runs if the changes affect any files or subdirectories inside the terraform/ directory. The ** wildcard ensures all nested files are included.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;pull_request&lt;/strong&gt;: The workflow will also run when a pull request is opened or updated.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;branches: - main&lt;/strong&gt;: It will only trigger for pull requests that target the main branch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;paths: - terraform/**&lt;/strong&gt;: Similar to the push event, it only runs if changes are made within the terraform/ directory.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
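&lt;p&gt;One detail worth noting: a push and an open pull request can trigger two runs that race to lock the same Terraform state. A common guard, shown here as an optional addition (it is not part of the original workflow), is a top-level &lt;code&gt;concurrency&lt;/code&gt; block:&lt;/p&gt;

```yaml
# Optional addition: serialize runs that touch the same ref
concurrency:
  group: terraform-${{ github.ref }}
  cancel-in-progress: false  # let an in-flight apply finish instead of cancelling it
```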

&lt;p&gt;&lt;strong&gt;Environment Variables (env):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;AWS_ACCESS_KEY_ID&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
 &lt;span class="na"&gt;AWS_SECRET_ACCESS_KEY&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
 &lt;span class="na"&gt;BUCKET_TF_STATE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.BUCKET_TF }}&lt;/span&gt;
 &lt;span class="na"&gt;AWS_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_REGION }}&lt;/span&gt;
 &lt;span class="na"&gt;TF_LOG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DEBUG&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This section defines environment variables that are accessible to all jobs within the workflow. Sensitive values are securely pulled from GitHub Secrets using the secrets context (secrets.NAME), ensuring credentials are not exposed in plain text.&lt;br&gt;
Here's what each variable does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS_ACCESS_KEY_ID&lt;/strong&gt;: Stores your AWS access key ID, securely retrieved from GitHub Secrets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS_SECRET_ACCESS_KEY&lt;/strong&gt;: Stores your AWS secret access key, also pulled from Secrets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BUCKET_TF_STATE&lt;/strong&gt;: Specifies the name of the S3 bucket where your Terraform state file will be stored.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS_REGION&lt;/strong&gt;: Sets the AWS region for your operations (e.g., us-east-1).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;TF_LOG&lt;/strong&gt;: Enables debug-level logging for Terraform, which provides detailed output useful for troubleshooting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Jobs (jobs):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt; &lt;span class="na"&gt;terraform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Apply&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Terraform&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;configuration&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;on&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;changes"&lt;/span&gt;
   &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
   &lt;span class="na"&gt;defaults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;shell&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;bash&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./terraform&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This section defines the tasks, called "jobs", that will be executed in your GitHub Actions workflow. In this case, there's a single job named terraform:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;name: "Apply Terraform configuration on changes"&lt;/strong&gt;: A descriptive name for the job.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;runs-on: ubuntu-latest&lt;/strong&gt;: Specifies that the job will run in a clean, virtual machine hosted on the latest version of Ubuntu provided by GitHub Actions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;defaults&lt;/strong&gt;: Defines default settings for all run steps within this job.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;shell: bash&lt;/strong&gt;: Ensures that the commands in the run steps are executed using the Bash shell.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;working-directory: ./terraform&lt;/strong&gt;: Sets the current working directory for all subsequent run steps within this job to the terraform directory in your repository. This is crucial because your Terraform configuration files are located there.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Steps (steps):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout code&lt;/span&gt;
     &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setup Terraform&lt;/span&gt;
     &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;hashicorp/setup-terraform@v3&lt;/span&gt;

   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Init&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform init -backend-config="bucket=$BUCKET_TF_STATE"&lt;/span&gt;

   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Format&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform fmt -check&lt;/span&gt;
     &lt;span class="na"&gt;continue-on-error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Validate&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform validate&lt;/span&gt;

   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Plan&lt;/span&gt;
     &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;plan&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform plan -no-color -input=false -out planfile&lt;/span&gt;
     &lt;span class="na"&gt;continue-on-error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Plan Status&lt;/span&gt;
     &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;steps.plan.outcome == 'failure'&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;exit &lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;

   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Terraform Apply&lt;/span&gt;
     &lt;span class="na"&gt;if&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;github.ref == 'refs/heads/main' &amp;amp;&amp;amp; github.event_name == 'push'&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;terraform apply -auto-approve -input=false -parallelism=1 planfile&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This section defines the individual steps that will be executed within the terraform job, in sequential order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Checkout code:&lt;/strong&gt; This step leverages the official &lt;code&gt;actions/checkout&lt;/code&gt; action (version 4) to clone your repository into the GitHub Actions runner, making your code available for the workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Setup Terraform:&lt;/strong&gt; Here, the &lt;code&gt;hashicorp/setup-terraform&lt;/code&gt; action (version 3) is used to install and configure the Terraform CLI on the runner environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Init:&lt;/strong&gt; This command initializes Terraform and configures the backend. Specifically, it uses the -backend-config option to point to your S3 bucket (&lt;code&gt;$BUCKET_TF_STATE&lt;/code&gt;) where the Terraform state is stored securely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Format&lt;/strong&gt;: The &lt;code&gt;terraform fmt -check&lt;/code&gt; command verifies that your Terraform code conforms to the standard formatting conventions. The setting &lt;code&gt;continue-on-error: true&lt;/code&gt; allows the workflow to proceed even if formatting issues are detected, preventing the entire job from failing at this stage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Validate:&lt;/strong&gt; This step runs terraform validate to ensure that the Terraform configuration files are syntactically correct and internally consistent.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Terraform Plan:&lt;/strong&gt; This generates an execution plan with the command &lt;code&gt;terraform plan -no-color -input=false -out planfile&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-no-color&lt;/code&gt; disables colored output for clearer logs.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-input=false&lt;/code&gt; prevents Terraform from prompting for input interactively.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-out planfile&lt;/code&gt; saves the generated plan to a file named planfile, ensuring that the apply step runs exactly what was planned.&lt;/li&gt;
&lt;li&gt; Similar to the formatting step, &lt;code&gt;continue-on-error: true&lt;/code&gt; lets the workflow continue even if the plan generation encounters errors.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Terraform Plan Status:&lt;/strong&gt; This step acts as a gatekeeper by checking the outcome of the plan step. If the plan failed (&lt;code&gt;steps.plan.outcome == 'failure'&lt;/code&gt;), it runs exit 1 to terminate the job immediately, preventing a potentially harmful apply.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Terraform Apply:&lt;/strong&gt; The final step applies the Terraform changes, but only when two conditions are met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The workflow was triggered by a push to the main branch (&lt;code&gt;github.ref == 'refs/heads/main'&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;The event type is a push event (&lt;code&gt;github.event_name == 'push'&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; The command &lt;code&gt;terraform apply -auto-approve -input=false -parallelism=1 planfile&lt;/code&gt; applies the saved execution plan:&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-auto-approve&lt;/code&gt; skips manual confirmation.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-input=false&lt;/code&gt; avoids interactive prompts.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-parallelism=1&lt;/code&gt; limits resource creation/modification to one at a time to avoid race conditions or ordering issues, though it may slow execution.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
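&lt;p&gt;For the &lt;code&gt;-backend-config="bucket=$BUCKET_TF_STATE"&lt;/code&gt; flag in the init step to work, the Terraform configuration needs a partial S3 backend declaration. A minimal sketch, with illustrative &lt;code&gt;key&lt;/code&gt; and &lt;code&gt;region&lt;/code&gt; values:&lt;/p&gt;

```hcl
terraform {
  backend "s3" {
    # "bucket" is intentionally omitted; the workflow supplies it at init time
    # via -backend-config="bucket=$BUCKET_TF_STATE".
    key    = "jumphost/terraform.tfstate" # illustrative state file path
    region = "us-east-1"                  # illustrative region
  }
}
```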

&lt;h1&gt;
  
  
  Running your CI/CD pipeline
&lt;/h1&gt;

&lt;p&gt;Once you’ve configured your pipeline, the next step is to trigger it to create the Terraform infrastructure. To do so, push your code to GitHub. GitHub will automatically detect your push, read your &lt;code&gt;.github/workflows&lt;/code&gt; file, and run the pipeline.&lt;/p&gt;

&lt;p&gt;Refer to the &lt;a href="https://docs.github.com/en/migrations/importing-source-code/using-the-command-line-to-import-source-code/adding-locally-hosted-code-to-github" rel="noopener noreferrer"&gt;GitHub documentation&lt;/a&gt; for guidance on pushing your locally hosted code to GitHub.&lt;/p&gt;

&lt;p&gt;After pushing your code, go to your project repository on GitHub and click on the &lt;strong&gt;Actions&lt;/strong&gt; tab to monitor your workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you do not see any workflow in the &lt;strong&gt;Actions&lt;/strong&gt; tab, double-check the folder name and make sure your &lt;code&gt;terraform.yaml&lt;/code&gt; file is correctly located in the &lt;code&gt;workflows&lt;/code&gt; folder within the &lt;code&gt;.github&lt;/code&gt; directory.&lt;/p&gt;

&lt;p&gt;Once the workflow completes, your infrastructure should be provisioned.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfjXX4iDN-FcCopk94vOJIS4YNZHzzcSTSX-zNb5qHUWVpyUkBaxFgnIZenCFePsu7bRHr4eZVJRd-DCK7fNtKYz9ZybrQ-eUyf9mUj4rZsW8oKOUAdD9LizsPzI0_o4FTJXPPWLA%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfjXX4iDN-FcCopk94vOJIS4YNZHzzcSTSX-zNb5qHUWVpyUkBaxFgnIZenCFePsu7bRHr4eZVJRd-DCK7fNtKYz9ZybrQ-eUyf9mUj4rZsW8oKOUAdD9LizsPzI0_o4FTJXPPWLA%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="668"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Verify and review your project
&lt;/h1&gt;

&lt;p&gt;To confirm that Terraform has successfully provisioned your infrastructure, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Sign in to your AWS Console&lt;/strong&gt;: Access your AWS console, then search for "EC2" in the search bar.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Check for Your EC2 Instance&lt;/strong&gt;: After searching, you should see your EC2 instance listed in the console.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Get Your EC2 Instance's Public IP&lt;/strong&gt;: To SSH into your EC2 instance, you need the public IP address. You can find this in your AWS console or from your GitHub workflow. In this case, we'll retrieve it from the GitHub workflow.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To get the public IP address from the GitHub workflow:&lt;br&gt;
  a. Click on your workflow run in the &lt;strong&gt;Actions&lt;/strong&gt; page&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcEN-pqq16wjjnXIYjVY9DAg_uUrtpxjcO6K82TUejF1msXBRd7mq4qLZd_y_4NWjA4AeE5PZQj5N_ZbsUBuj7mG-Je8a3nI3kxVtHSTMUW4gHpSAfN-Df0LpXT1EbDDzgcPfn76A%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcEN-pqq16wjjnXIYjVY9DAg_uUrtpxjcO6K82TUejF1msXBRd7mq4qLZd_y_4NWjA4AeE5PZQj5N_ZbsUBuj7mG-Je8a3nI3kxVtHSTMUW4gHpSAfN-Df0LpXT1EbDDzgcPfn76A%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="655"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;b. Select your Job on the sidebar&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdXQazEMpPMBwuaDhjJU6H3ueoGCDxudBwPMwvBUcFUnSDPuEBH1kdCoy__zDAukwdi-VYepwd68aWzSoEiRzncgWlWSUVF9rRE4JkQQs9GwalxgvRta9GZ898W-ynohEmEo0Du5A%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdXQazEMpPMBwuaDhjJU6H3ueoGCDxudBwPMwvBUcFUnSDPuEBH1kdCoy__zDAukwdi-VYepwd68aWzSoEiRzncgWlWSUVF9rRE4JkQQs9GwalxgvRta9GZ898W-ynohEmEo0Du5A%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="655"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    c. Expand the Terraform Apply step.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdAVZgZD7YYzi7f0CmWCxyH8hgt2nJwad5mQQphGEoXM7fgP6rA7P8eyUYzAiGVfjxKiweRTROKjMNGNfdrw5uv8hzufBeqMMZyFC2tbUlNB4wVU8uRasO9En8IkkqOWD10xIp23Q%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdAVZgZD7YYzi7f0CmWCxyH8hgt2nJwad5mQQphGEoXM7fgP6rA7P8eyUYzAiGVfjxKiweRTROKjMNGNfdrw5uv8hzufBeqMMZyFC2tbUlNB4wVU8uRasO9En8IkkqOWD10xIp23Q%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;d. Scroll to the end of the step, and you should see the &lt;code&gt;jumphost_public_ip&lt;/code&gt; value in the outputs section.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcEi-HVeHaspqLTZ5jeN25pKKnVpx5AHKXlH1v92oosYHqckb-KgMMhJDHAL6dwmxUwZeGIqM6iXmYLxWUWSxulMSeR9Zu1ZHwPMpPYbMZkTt2V21CPPal8-t7wyz6PsmTsRwW2oQ%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcEi-HVeHaspqLTZ5jeN25pKKnVpx5AHKXlH1v92oosYHqckb-KgMMhJDHAL6dwmxUwZeGIqM6iXmYLxWUWSxulMSeR9Zu1ZHwPMpPYbMZkTt2V21CPPal8-t7wyz6PsmTsRwW2oQ%3Fkey%3D_NDP5YV4IBA_qyOBmubuSGiE" width="1600" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;e. Copy this value; it’s your EC2 instance's public IP.&lt;/p&gt;

&lt;p&gt;4. &lt;strong&gt;SSH into Your EC2 Instance&lt;/strong&gt;: Now that you have the public IP, you can SSH into your EC2 instance. Use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &amp;lt;path to your public key&amp;gt; ubuntu@&amp;lt;public_ip&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace the placeholders:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&amp;lt;path to your private key&amp;gt;&lt;/strong&gt;: This is the location of the key you created earlier, typically &lt;code&gt;~/.ssh/&amp;lt;name of the key&amp;gt;&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&amp;lt;public_ip&amp;gt;&lt;/strong&gt;: This is the public IP you copied from the GitHub workflow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example, if your key's path is &lt;code&gt;~/.ssh/key&lt;/code&gt; and your public IP is &lt;code&gt;3.81.145.221&lt;/code&gt;, the command would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; ~/.ssh/key ubuntu@3.81.145.221
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this, you should now be logged into your EC2 instance, provisioned by Terraform.&lt;/p&gt;
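&lt;p&gt;If SSH instead rejects the key with an "UNPROTECTED PRIVATE KEY FILE" warning, tighten the key's permissions first. The snippet below demonstrates the fix on a stand-in file rather than your real key:&lt;/p&gt;

```shell
# Create a stand-in for the real key to demonstrate the permission fix
tmpdir=$(mktemp -d)
keyfile="$tmpdir/demo_key"
touch "$keyfile"

# ssh requires private keys to be readable by the owner only
chmod 400 "$keyfile"

stat -c '%a' "$keyfile"   # prints the octal mode: 400
```

&lt;p&gt;Run the same &lt;code&gt;chmod 400&lt;/code&gt; against your actual key path, then retry the SSH command.&lt;/p&gt;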

&lt;h1&gt;
  
  
  Final thoughts
&lt;/h1&gt;

&lt;p&gt;Terraform enables consistent provisioning, management, and versioning of infrastructure across multiple cloud providers. Regardless of who runs the pipeline that triggers the Terraform configuration you set up in this tutorial, the infrastructure will always be identical, with the same configuration. By automating these processes, Terraform reduces the potential for manual errors, boosts efficiency, and ensures your infrastructure is reproducible and scalable.&lt;/p&gt;

&lt;p&gt;In this tutorial, you’ve created a jumphost on AWS, a secure server that will facilitate controlled and secure access to an EKS cluster you’ll set up later in the series.&lt;/p&gt;

&lt;p&gt;But this is just the beginning of what you can achieve with Terraform and AWS. To dive deeper, check out the official &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/2.43.0/docs" rel="noopener noreferrer"&gt;Terraform with AWS documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Automate testing and Docker image deployment to Amazon ECR with CI.</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Thu, 26 Jun 2025 12:54:03 +0000</pubDate>
      <link>https://forem.com/amaraiheanacho/automate-testing-and-docker-image-deployment-to-amazon-ecr-with-ci-3k3l</link>
      <guid>https://forem.com/amaraiheanacho/automate-testing-and-docker-image-deployment-to-amazon-ecr-with-ci-3k3l</guid>
      <description>&lt;p&gt;The frequency and speed of releases in modern software development requires robust CI/CD pipelines that ensure code quality, security, and reliable deployments. These pipelines eliminate the errors, that spring up from manual integration development and deployment, some of which are due to inefficient and insufficient testing, and some are just due to human errors from performing the same steps a million times.&lt;/p&gt;

&lt;p&gt;In this article, you'll build a comprehensive continuous integration (CI) pipeline that automatically tests your application, performs security scans, and pushes Docker images to Amazon ECR (Elastic Container Registry). It is the second part of this four-part DevSecOps series.&lt;/p&gt;

&lt;p&gt;This pipeline implements DevSecOps best practices by integrating security at every stage, from code quality analysis with SonarCloud to vulnerability scanning with Snyk and Trivy. By the end of this guide, you'll have a production-ready pipeline that automatically validates your application before containerizing and storing it securely in AWS.&lt;/p&gt;

&lt;h1&gt;
  
  
  What this series contains
&lt;/h1&gt;

&lt;p&gt;This four-part series walks you through building a modern DevSecOps pipeline for a containerized quiz application:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Provision a secure EC2 jumphost using Terraform and GitHub Actions (previous article)&lt;/li&gt;
&lt;li&gt;Build a CI pipeline that tests your application and pushes Docker images to Amazon ECR (this article)&lt;/li&gt;
&lt;li&gt;Set up an Amazon EKS cluster and deploy the application with ArgoCD&lt;/li&gt;
&lt;li&gt;Add monitoring and observability using Prometheus and Grafana&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;Ensure you have the following before proceeding with this article:  &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Completed the first part of the series. You should have a secure EC2 jumphost provisioned using Terraform and GitHub Actions&lt;/li&gt;
&lt;li&gt;A basic understanding of Docker and containerization.&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Project structure overview
&lt;/h1&gt;

&lt;p&gt;This project builds on the EC2 jumphost project from the first part of the series.&lt;/p&gt;

&lt;p&gt;Create a &lt;code&gt;ci.yaml&lt;/code&gt; file in your &lt;code&gt;.github/workflows&lt;/code&gt; folder. This file will hold the code for the CI pipeline responsible for testing, validating, and pushing your application to the Amazon ECR repository. Your complete file structure should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="p"&gt;.&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;backend&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;frontend&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;github&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="n"&gt;workflows&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;       &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;terraform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yaml&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;       &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="n"&gt;ci&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yaml&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;terraform&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="n"&gt;scripts&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
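&lt;p&gt;From the repository root, you can create the new workflow file in a POSIX shell like so:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create the workflows folder if it does not exist yet
mkdir -p .github/workflows

# Create the empty CI pipeline file
touch .github/workflows/ci.yaml
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;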



&lt;h1&gt;
  
  
  Pre-project setup checklist
&lt;/h1&gt;

&lt;p&gt;Before getting into building your CI pipeline, you need to do the following: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up your AWS ECR repositories&lt;/li&gt;
&lt;li&gt;Set up SonarCloud for your repository&lt;/li&gt;
&lt;li&gt;Set up Snyk&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Follow the rest of this section to complete the steps above.&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up your AWS ECR repositories
&lt;/h2&gt;

&lt;p&gt;The Amazon ECR repositories will hold the frontend and backend Docker images your CI pipeline will create. &lt;/p&gt;

&lt;p&gt;You need your AWS account ID and your ECR repositories in place before you can push your Docker images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get your account ID&lt;/strong&gt;: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to your &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS Console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Click your account name at the top right corner of the navigation bar.&lt;/li&gt;
&lt;li&gt;Copy your account ID from the dropdown menu.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXd5DHo7VI3EoJCCNmzR9CKYcGfD9dSSyukgKZ0Lt2BvqV-IMTjYSE4VsstExDtzs5cucxZljs5lq7RDVnwOanw5tkHb21DaJUwMiR0H5JmIqvNC5vacqdCVB_8THKMSevjRuuEDHg%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1516" height="415"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Create your ECR repositories&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Search and navigate to &lt;strong&gt;Elastic Container Registry (ECR)&lt;/strong&gt; in your console.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create repositories for both frontend and backend by doing the following:&lt;/p&gt;

&lt;p&gt;a. Click &lt;strong&gt;Create repository&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfZqeD5anfmmr2BVxpwW1TPHyh_wTcwPQlmV67Y0KpMEyg3qCtVQo0hT5l0zmLAcdMwAbuv5jdttCV392tcWNEnsoRGZnQjs9O_GT3bbYVL1xgPlp0zlaY1Wcsl4MYp96T_MneM2w%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfZqeD5anfmmr2BVxpwW1TPHyh_wTcwPQlmV67Y0KpMEyg3qCtVQo0hT5l0zmLAcdMwAbuv5jdttCV392tcWNEnsoRGZnQjs9O_GT3bbYVL1xgPlp0zlaY1Wcsl4MYp96T_MneM2w%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;b. Set &lt;strong&gt;Repository name&lt;/strong&gt; to &lt;code&gt;frontend&lt;/code&gt;.&lt;br&gt;
 c. Leave the remaining settings as default and click the &lt;strong&gt;Create&lt;/strong&gt; button.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfVuUbkrf1iMiK_ynnDxlCnGTnF49Xpk-OCjIKzlAIdJUxRsb-5W8gsHOFZmNJf0dKovgckoYMjRtV37zFJ7yso50nNGI8rXE6DUQuIIvX_Lgh9134O69Ym1aXIBODVsdXd8lMdBg%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfVuUbkrf1iMiK_ynnDxlCnGTnF49Xpk-OCjIKzlAIdJUxRsb-5W8gsHOFZmNJf0dKovgckoYMjRtV37zFJ7yso50nNGI8rXE6DUQuIIvX_Lgh9134O69Ym1aXIBODVsdXd8lMdBg%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="826"&gt;&lt;/a&gt;&lt;br&gt;
  d. Repeat the steps above to create a second repository named backend.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfjaviuKNbr_TA-Isl6lX1rWe0LVOOMejmPiARLfn1khaT3TEQbkhLAECuxFIcvbp5KifAuVzSDcc6MAY_GQkYW33JZulIkbylwDMxyzKjgdARyxyrZsQwh3JZnVWtqET309nxGXw%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfjaviuKNbr_TA-Isl6lX1rWe0LVOOMejmPiARLfn1khaT3TEQbkhLAECuxFIcvbp5KifAuVzSDcc6MAY_GQkYW33JZulIkbylwDMxyzKjgdARyxyrZsQwh3JZnVWtqET309nxGXw%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="826"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Set up SonarCloud
&lt;/h2&gt;

&lt;p&gt;SonarCloud provides code quality analysis, helping you identify bugs and vulnerabilities in the code you plan to package. To enable SonarCloud, you'll need four things: a SonarCloud account and project, a SonarCloud token, an organization key, and a SonarCloud project key. &lt;/p&gt;

&lt;p&gt;Do the following to get your SonarCloud all set up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a SonarCloud account and project&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign up for a free &lt;a href="https://sonarcloud.io/" rel="noopener noreferrer"&gt;SonarCloud&lt;/a&gt; account using your GitHub account&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Analyze new project&lt;/strong&gt; to import your organization&lt;/li&gt;
&lt;li&gt;Select the quiz-application repository that will hold your CI pipeline (this is the same GitHub repository from part 1) &lt;/li&gt;
&lt;li&gt;Choose a new code definition to specify what SonarCloud should treat as new code.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Create project&lt;/strong&gt; button.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Generate a SonarCloud token&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select your account’s icon at the top right of the page&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;My Account&lt;/strong&gt; → &lt;strong&gt;Security&lt;/strong&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdCk8Bj7IiEP11YDkMVwme84i-lw89vfNllbKLaXshLX0yi1X28Gq3cDj_SDcxSwUGByyNwIpXLRtDMPz6BQnBNlzRkc0RJ7ydrmMgav30uNanrzUJlcK-ZKZbzTqfwNpLZPttEfQ%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="170"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;3. Enter a name for your token&lt;br&gt;
4. Select the &lt;strong&gt;Generate Token&lt;/strong&gt; button to create the token&lt;br&gt;
5. Copy the token&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcQy4xB_EoJSniGxzc6TrdLMNvvxh7gUUCKOaaQACFu_WN_SD_BexLznPC6raTzwb9ltFgxp6h6GNf-LUvrWVPGqNsuHlhUUzeZTAWj4UeaiKvwS_z2gLjLm4a5-E1oBluK-3Vwdg%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcQy4xB_EoJSniGxzc6TrdLMNvvxh7gUUCKOaaQACFu_WN_SD_BexLznPC6raTzwb9ltFgxp6h6GNf-LUvrWVPGqNsuHlhUUzeZTAWj4UeaiKvwS_z2gLjLm4a5-E1oBluK-3Vwdg%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="483"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Get your organization key&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select your account’s icon at the top right of the page&lt;/li&gt;
&lt;li&gt;Select your organization (the same one that holds the quiz application GitHub repository) 
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeYaX3dBsu58a0Vr1pWsRPNy-dgUvpYZWIBWcxR34PlZvTlvNXagf2_vb7Frmp-OQ66LQLjfdcOU_JFBIpK6CWqGHWz72rCx6NHQwDJIqZUCNSZilZXOfmbKGUh_rZ1rC1N-E3B0Q%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="331"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;3. Copy your organization key from the top right corner of the webpage.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdnAyTU2RzfqaxQhmUyntY-KPOvIXZG6tpv6awH8on7M6n24dkDTZ5KEh_-ylBwTeHxYOwOKSrM4mKA-HbiwGC3-61x2YRkNFiKkgPNtbeSRplAPNMRwOFmT6nCj5OdggNBN-bY%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdnAyTU2RzfqaxQhmUyntY-KPOvIXZG6tpv6awH8on7M6n24dkDTZ5KEh_-ylBwTeHxYOwOKSrM4mKA-HbiwGC3-61x2YRkNFiKkgPNtbeSRplAPNMRwOFmT6nCj5OdggNBN-bY%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="187"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Get a SonarCloud project key&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select your organization&lt;/li&gt;
&lt;li&gt;Select the project that holds your quiz-application repository.&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Administration -&amp;gt; Update Key&lt;/strong&gt;.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfwubNTEIoAqN52FKtNGiAyGgm4Dkcu2X9EMTj_-T5tBvbLjnlwsgg0JrpCr-qpx-8fXUb0UeGIaAdYJ4IXIi3zvHH_BEu54-1xNvr8yYYaAXSnOie13AcbSIsxnrYEoMXK8nBfCg%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1030" height="910"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;4. Copy your &lt;strong&gt;Project Key&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: After setting up, make sure to disable automatic analysis to avoid conflicts with your CI configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disable automatic analysis&lt;/strong&gt;&lt;br&gt;
SonarCloud recommends using only one analysis method (either CI-based or automatic) to avoid duplicate results and conflicts.&lt;/p&gt;

&lt;p&gt;Since you’re configuring analysis through a CI pipeline, you must disable Automatic Analysis for your project.&lt;/p&gt;

&lt;p&gt;Here is how to do it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select your project&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to &lt;strong&gt;Administration -&amp;gt; Analysis method&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXduO71GHqp7hsxcoGvdbdNT6IwFxVaALHLXM8VHAx253g-wx32eq5yVKuypiXsPOnAIA9pZrsk2cM5gTzTNaMRFD-WbE8dH6R0GdB_VELw8kwaKGpacFapKXLSJ2r-DxDDVyr2Ppg%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXduO71GHqp7hsxcoGvdbdNT6IwFxVaALHLXM8VHAx253g-wx32eq5yVKuypiXsPOnAIA9pZrsk2cM5gTzTNaMRFD-WbE8dH6R0GdB_VELw8kwaKGpacFapKXLSJ2r-DxDDVyr2Ppg%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1030" height="910"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Toggle the &lt;strong&gt;Automatic Analysis&lt;/strong&gt; button to disable automatic analysis.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXd-44btdga2xdYAMsAKbOoCeRJvwXqjwUxV2Co0naR8hvyMGZnVZhwB37GiPwjimG8MyHln5Ko2cfc2_OO1WwcrkO-fWDPjHzum23FVd3J82Akeh6t2GLFkU2v_X-VIkDLYFt_5%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXd-44btdga2xdYAMsAKbOoCeRJvwXqjwUxV2Co0naR8hvyMGZnVZhwB37GiPwjimG8MyHln5Ko2cfc2_OO1WwcrkO-fWDPjHzum23FVd3J82Akeh6t2GLFkU2v_X-VIkDLYFt_5%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="347"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Set up Snyk
&lt;/h2&gt;

&lt;p&gt;Snyk scans for security vulnerabilities in your dependencies and Docker images. &lt;/p&gt;

&lt;p&gt;Do the following to get a Snyk Auth Token for authenticating your CI pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a &lt;a href="https://app.snyk.io/account" rel="noopener noreferrer"&gt;Snyk account&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Select the &lt;strong&gt;Choose integration&lt;/strong&gt; option and connect Snyk to your GitHub repository.&lt;/li&gt;
&lt;li&gt;Select your account at the bottom of the sidebar.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Account settings&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfm2EjsAsonWteN1vOwxSL6ItXEnw9hiFOYgAdB5ut31tRKFxEC9WPqJF8XghnO6FXRJrNs3_T0Euht7fz9oA0IeCVKUtv7V0o9uAe0EO9JQ3CdtqoE9URZYwXlDMNF2IoVqMGjTA%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfm2EjsAsonWteN1vOwxSL6ItXEnw9hiFOYgAdB5ut31tRKFxEC9WPqJF8XghnO6FXRJrNs3_T0Euht7fz9oA0IeCVKUtv7V0o9uAe0EO9JQ3CdtqoE9URZYwXlDMNF2IoVqMGjTA%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="832" height="536"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy your authentication token from the &lt;strong&gt;Auth Token&lt;/strong&gt; section&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcsVYmqDinKF_z9CO0nkLEyLrLHqwoK7erx0VxFa3FKqiA8hDpB5mi7uRRUSbHMKSH7DKRWVd4yPgrDY1m-e_umpK_DPeAntmkFJkURjvcGV_ddg6eE0prwkOENoc73kvW7HWi1%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcsVYmqDinKF_z9CO0nkLEyLrLHqwoK7erx0VxFa3FKqiA8hDpB5mi7uRRUSbHMKSH7DKRWVd4yPgrDY1m-e_umpK_DPeAntmkFJkURjvcGV_ddg6eE0prwkOENoc73kvW7HWi1%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="311"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Configure your GitHub secrets
&lt;/h2&gt;

&lt;p&gt;Now that you have all your credentials, add them to your GitHub secrets so that your CI pipeline can pull them into the workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select your project’s GitHub repository.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate to the &lt;strong&gt;Settings&lt;/strong&gt; → &lt;strong&gt;Secrets and variables&lt;/strong&gt; → &lt;strong&gt;Actions&lt;/strong&gt; &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcTfkVuSIj7ceSJLehizU9Rm_DJa3Or_ZrDd1boSJhS0HFHpcUEIV961FUt27jDSWKUJlm77BM99ytnP37sxyq2zeUzBoaQN3VusEoQ5QBDZMOwbxFcF7jT9L_ITvqXL3cQ-96r%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcTfkVuSIj7ceSJLehizU9Rm_DJa3Or_ZrDd1boSJhS0HFHpcUEIV961FUt27jDSWKUJlm77BM99ytnP37sxyq2zeUzBoaQN3VusEoQ5QBDZMOwbxFcF7jT9L_ITvqXL3cQ-96r%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="859"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click on &lt;strong&gt;New repository secret&lt;/strong&gt; and add the following secrets, replacing the placeholders with your actual values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS_ACCOUNT_ID: your-account-id&lt;/li&gt;
&lt;li&gt;SONAR_TOKEN: your-sonarcloud-token&lt;/li&gt;
&lt;li&gt;SONAR_ORGANIZATION_KEY: your-sonarcloud-org-key&lt;/li&gt;
&lt;li&gt;SONAR_URL: &lt;a href="https://sonarcloud.io" rel="noopener noreferrer"&gt;https://sonarcloud.io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;SONAR_PROJECT_KEY: your-project-key&lt;/li&gt;
&lt;li&gt;SNYK_TOKEN: your-snyk-api-token
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXd51uBjsKHcgJS9pAmpXZn3cJYyhlBKK4tzqNHBKn6S-vVmY9lO1hp7EEiXh0IaNmyGXGawO_fHp4AJb5M2s1iDhsTfA4Pr_mEe11OpEoTKVXGC0MnGOWCbXBwBpQ-maqHW6buetQ%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="658"&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
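&lt;p&gt;Inside the workflow, these values are read from the &lt;code&gt;secrets&lt;/code&gt; context. As a minimal sketch (this fragment is illustrative, not a step from the pipeline file itself):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;env:
  AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
  SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
  SONAR_ORGANIZATION_KEY: ${{ secrets.SONAR_ORGANIZATION_KEY }}
  SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;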
&lt;h2&gt;
  
  
  Understanding the CI/CD pipeline architecture
&lt;/h2&gt;

&lt;p&gt;Now that you have the file structure in place and your credentials set up, it's important to understand what this CI pipeline does.&lt;/p&gt;

&lt;p&gt;The pipeline implements a comprehensive DevSecOps workflow with the following stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code Testing&lt;/strong&gt;: Runs unit tests, linting, and formatting checks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality Analysis&lt;/strong&gt;: Performs code quality scanning using SonarCloud.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Scanning&lt;/strong&gt;: Assesses source code vulnerabilities with Snyk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container Building&lt;/strong&gt;: Builds a Docker image and pushes it to Amazon ECR.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Security&lt;/strong&gt;: Scans the Docker image for vulnerabilities using Trivy and Snyk.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pipeline is triggered on both pull requests and pushes to the main branch, ensuring code quality and security throughout the development process.&lt;/p&gt;
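&lt;p&gt;In workflow terms, that trigger is a short &lt;code&gt;on&lt;/code&gt; block. A minimal sketch, assuming your default branch is named &lt;code&gt;main&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;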
&lt;h1&gt;
  
  
  Building the CI/CD workflow
&lt;/h1&gt;

&lt;p&gt;With your project structure, credentials, and a basic understanding of the CI pipeline all set up, copy and paste this code into your &lt;code&gt;.github/workflows/ci.yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/Iheanacho-ai/2ee426b821ddc2058c76956fafeb399e" rel="noopener noreferrer"&gt;https://gist.github.com/Iheanacho-ai/2ee426b821ddc2058c76956fafeb399e&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;After completing this section, your pipeline will be fully set up. When it runs, your applications will be tested, built into Docker images, tested again, and then pushed to AWS ECR.&lt;/p&gt;

&lt;p&gt;But now, let's understand exactly what you just created.&lt;/p&gt;
&lt;h1&gt;
  
  
  Pipeline breakdown and analysis
&lt;/h1&gt;

&lt;p&gt;Here is the breakdown of the pipeline in stages:&lt;/p&gt;
&lt;h2&gt;
  
  
  Stage 1: Application testing
&lt;/h2&gt;

&lt;p&gt;The pipeline begins by thoroughly testing both the frontend and backend applications using the &lt;code&gt;frontend-test&lt;/code&gt; and &lt;code&gt;backend-test&lt;/code&gt; jobs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontend testing through the&lt;/strong&gt; &lt;code&gt;frontend-test&lt;/code&gt; &lt;strong&gt;job&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt; &lt;span class="na"&gt;frontend-test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
   &lt;span class="na"&gt;defaults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./frontend&lt;/span&gt;
   &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;20.x&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
       &lt;span class="na"&gt;architecture&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;x64&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
   &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check-out git repository&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;USE NODEJS ${{ matrix.node-version }} - ${{ matrix.architecture }}&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@v4&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install project dependencies&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./frontend&lt;/span&gt;
       &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
         &lt;span class="s"&gt;npm i&lt;/span&gt;
        &lt;span class="s"&gt;npm run lint&lt;/span&gt;
        &lt;span class="s"&gt;npm install --save-dev --save-exact prettier&lt;/span&gt;
        &lt;span class="s"&gt;npm run prettier&lt;/span&gt;
        &lt;span class="s"&gt;npm test&lt;/span&gt;
       &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;CI&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build&lt;/span&gt;
       &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm run build&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="s"&gt;./frontend&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Analyze with SonarCloud&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sonarsource/sonarcloud-github-action@v5.0.0&lt;/span&gt;
       &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;SONAR_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SONAR_TOKEN }}&lt;/span&gt;
       &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;projectBaseDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend&lt;/span&gt;
         &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="s"&gt;-Dsonar.organization=${{ secrets.SONAR_ORGANIZATION_KEY }}&lt;/span&gt;
          &lt;span class="s"&gt;-Dsonar.projectKey=${{ secrets.SONAR_PROJECT_KEY }}&lt;/span&gt;
          &lt;span class="s"&gt;-Dsonar.host.url=${{ secrets.SONAR_URL }}&lt;/span&gt;
          &lt;span class="s"&gt;-Dsonar.login=${{ secrets.SONAR_TOKEN }}&lt;/span&gt;
          &lt;span class="s"&gt;-Dsonar.sources=src/&lt;/span&gt;
          &lt;span class="s"&gt;-Dsonar.verbose=true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code block above, the &lt;code&gt;frontend-test&lt;/code&gt; job runs tests on the frontend application with the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Check-out git repository&lt;/strong&gt;: Uses the actions/checkout@v4 action to check out the frontend application code into the GitHub Actions runner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;USE NODEJS ${{ matrix.node-version }} - ${{ matrix.architecture }}&lt;/strong&gt;: Sets the environment to use Node.js 20.x on an x64 Ubuntu runner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install project dependencies&lt;/strong&gt;:  This step does the following:

&lt;ul&gt;
&lt;li&gt;Sets the working directory to &lt;code&gt;./frontend&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Installs all the npm dependencies required to run your application&lt;/li&gt;
&lt;li&gt;Runs ESLint for code linting and Prettier for formatting&lt;/li&gt;
&lt;li&gt;Executes test suites with &lt;code&gt;npm test&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;Build&lt;/strong&gt;: Compiles the frontend application using the &lt;code&gt;npm run build&lt;/code&gt; command.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analyze with SonarCloud:&lt;/strong&gt; Uses the &lt;code&gt;sonarsource/sonarcloud-github-action@v5.0.0&lt;/code&gt; action to perform static code analysis, identifying bugs, vulnerabilities, and code smells. Refer to the &lt;a href="https://github.com/SonarSource/sonarqube-scan-action" rel="noopener noreferrer"&gt;SonarSource GitHub Action&lt;/a&gt; repository for more information on using SonarCloud in your GitHub Actions workflow.&lt;/li&gt;

&lt;/ul&gt;
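&lt;p&gt;One thing to note: &lt;code&gt;actions/setup-node&lt;/code&gt; only honors the matrix values if they are passed in as inputs. A sketch of that step with the inputs made explicit:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;- name: USE NODEJS ${{ matrix.node-version }} - ${{ matrix.architecture }}
  uses: actions/setup-node@v4
  with:
    node-version: ${{ matrix.node-version }}
    architecture: ${{ matrix.architecture }}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;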

&lt;p&gt;&lt;strong&gt;Backend testing through the&lt;/strong&gt; &lt;code&gt;backend-test&lt;/code&gt; &lt;strong&gt;job&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt; &lt;span class="na"&gt;backend-test&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
   &lt;span class="na"&gt;defaults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./backend&lt;/span&gt;
   &lt;span class="na"&gt;strategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;matrix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;node-version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;20.x&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
       &lt;span class="na"&gt;architecture&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;x64&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
   &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Check-out git repository&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;USE NODEJS ${{ matrix.node-version }} - ${{ matrix.architecture }}&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/setup-node@v4&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install project dependencies&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./backend&lt;/span&gt;
       &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
         &lt;span class="s"&gt;npm i&lt;/span&gt;
        &lt;span class="s"&gt;npm run lint&lt;/span&gt;
        &lt;span class="s"&gt;npm install --save-dev --save-exact prettier&lt;/span&gt;
        &lt;span class="s"&gt;npm run prettier&lt;/span&gt;
        &lt;span class="s"&gt;npm test&lt;/span&gt;
       &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;CI&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="kc"&gt;true&lt;/span&gt;

           &lt;span class="s"&gt;# Setup sonar-scanner&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Setup SonarQube&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;warchant/setup-sonar-scanner@v8&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Analyze with SonarCloud&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sonarsource/sonarcloud-github-action@v5.0.0&lt;/span&gt;
       &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;SONAR_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SONAR_TOKEN }}&lt;/span&gt;
       &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;projectBaseDir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend&lt;/span&gt;
         &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;&amp;gt;&lt;/span&gt;
           &lt;span class="s"&gt;-Dsonar.organization=${{ secrets.SONAR_ORGANIZATION_KEY }}&lt;/span&gt;
          &lt;span class="s"&gt;-Dsonar.projectKey=${{ secrets.SONAR_PROJECT_KEY }}&lt;/span&gt;
          &lt;span class="s"&gt;-Dsonar.host.url=${{ secrets.SONAR_URL }}&lt;/span&gt;
          &lt;span class="s"&gt;-Dsonar.login=${{ secrets.SONAR_TOKEN }}&lt;/span&gt;
          &lt;span class="s"&gt;-Dsonar.sources=.&lt;/span&gt;
          &lt;span class="s"&gt;-Dsonar.verbose=true&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;backend-test&lt;/code&gt; job above mirrors the &lt;code&gt;frontend-test&lt;/code&gt; job, running lint checks and the full test suite against the backend application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 2: Security scanning
&lt;/h2&gt;

&lt;p&gt;After the application code passes its tests, the pipeline scans both the frontend and backend applications for security vulnerabilities using the &lt;code&gt;frontend-security&lt;/code&gt; and &lt;code&gt;backend-security&lt;/code&gt; jobs, respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontend security with &lt;code&gt;frontend-security&lt;/code&gt; job&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt; &lt;span class="na"&gt;frontend-security&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend-test&lt;/span&gt;
   &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
   &lt;span class="na"&gt;defaults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./frontend&lt;/span&gt;
   &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout frontend code&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@master&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Snyk to check for vulnerabilities&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk/actions/node@master&lt;/span&gt;
       &lt;span class="na"&gt;continue-on-error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
       &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;SNYK_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SNYK_TOKEN }}&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Snyk CLI&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk/actions/setup@master&lt;/span&gt;
       &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
       &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;SNYK_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SNYK_TOKEN }}&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Snyk Authenticate&lt;/span&gt;
       &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk auth ${{ secrets.SNYK_TOKEN }}&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Snyk Code Test&lt;/span&gt;
       &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk code test --all-projects&lt;/span&gt;
       &lt;span class="na"&gt;continue-on-error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;frontend-security&lt;/code&gt; job runs on Ubuntu and starts only after the &lt;code&gt;frontend-test&lt;/code&gt; job has completed successfully.&lt;br&gt;
It performs a security vulnerability scan on the frontend application using the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Checkout frontend code&lt;/strong&gt;: Checks out the frontend application code into the GitHub Actions runner.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run Snyk to check for vulnerabilities&lt;/strong&gt;: Uses the &lt;code&gt;snyk/actions/node@master&lt;/code&gt; action to scan for security issues. The &lt;code&gt;continue-on-error: true&lt;/code&gt; setting ensures that the job won’t fail even if vulnerabilities are detected.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install Snyk CLI&lt;/strong&gt;: Installs the latest version of the Snyk Command Line Interface tool.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Snyk Authenticate:&lt;/strong&gt; Authenticates the Snyk CLI with the provided token from GitHub Secrets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Snyk Code Test&lt;/strong&gt;: Runs static code analysis on all projects in the &lt;code&gt;frontend&lt;/code&gt; directory to detect vulnerabilities.&lt;/li&gt;
&lt;/ul&gt;
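&lt;p&gt;To see why &lt;code&gt;continue-on-error: true&lt;/code&gt; matters, here is a minimal shell sketch of the same idea: a scan step that exits nonzero is recorded, but the steps after it still run. The &lt;code&gt;scan_step&lt;/code&gt; function below is a hypothetical stand-in for the Snyk scan, not the real CLI.&lt;/p&gt;

```shell
# Hypothetical stand-in for a scan step that finds issues and exits nonzero.
scan_step() {
  echo "vulnerabilities detected"
  return 1
}

# Mirrors continue-on-error: true. Record the failure, keep running.
if scan_step; then
  scan_status="passed"
else
  scan_status="failed"
fi
echo "scan ${scan_status}, moving to the next step"
```

Without the guard (or with `set -e`), the nonzero exit would stop the script, which is exactly what a failing step does to a job by default.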

&lt;p&gt;&lt;strong&gt;Backend security with &lt;code&gt;backend-security&lt;/code&gt; job&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt; &lt;span class="na"&gt;backend-security&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;backend-test&lt;/span&gt;
   &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
   &lt;span class="na"&gt;defaults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./backend&lt;/span&gt;
   &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout backend code&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@master&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Snyk to check for vulnerabilities&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk/actions/node@master&lt;/span&gt;
       &lt;span class="na"&gt;continue-on-error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;# To make sure that SARIF upload gets called&lt;/span&gt;
       &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;SNYK_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SNYK_TOKEN }}&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Snyk CLI&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk/actions/setup@master&lt;/span&gt;
       &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
       &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;SNYK_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SNYK_TOKEN }}&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Snyk Authenticate&lt;/span&gt;
       &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk auth ${{ secrets.SNYK_TOKEN }}&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Snyk Code Test&lt;/span&gt;
       &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk code test --all-projects&lt;/span&gt;
       &lt;span class="na"&gt;continue-on-error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;backend-security&lt;/code&gt; job mirrors the &lt;code&gt;frontend-security&lt;/code&gt; job and runs the same vulnerability scans on the backend application. The only difference is its default working directory, which points at the backend folder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;defaults&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./backend&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Stage 3: Container image creation and security
&lt;/h2&gt;

&lt;p&gt;This stage builds the frontend and backend applications into Docker images, pushes them to the AWS ECR repository, and then scans the images for security vulnerabilities using Trivy and Snyk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build, validate, and push your frontend Docker image with the &lt;code&gt;frontend-image&lt;/code&gt; job&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/Iheanacho-ai/e8672ae025e5d4ec5be3646f42534027" rel="noopener noreferrer"&gt;https://gist.github.com/Iheanacho-ai/e8672ae025e5d4ec5be3646f42534027&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;frontend-image&lt;/code&gt; job consists of a job definition and a series of steps that build the Dockerfile located in your frontend directory and push the resulting image to your frontend AWS ECR repository. &lt;/p&gt;

&lt;p&gt;Below is a breakdown of the job definition and its steps.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;frontend-image&lt;/code&gt; &lt;strong&gt;job definition&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt; &lt;span class="na"&gt;frontend-image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;needs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;frontend-security&lt;/span&gt;
   &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;
   &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;
     &lt;span class="na"&gt;security-events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
     &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;
     &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code block above specifies the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;needs: frontend-security&lt;/strong&gt;: Specifies that the job will only run after the &lt;code&gt;frontend-security&lt;/code&gt; job completes successfully.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;permissions&lt;/strong&gt;: Specifies the permissions required for this job, including:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;contents: read&lt;/strong&gt;: Allows the job to read the repository content.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;security-events: write&lt;/strong&gt;: Enables uploading of vulnerability scan results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;actions: read&lt;/strong&gt;: Grants read access to GitHub Actions metadata.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;id-token: write&lt;/strong&gt;: Allows the use of OpenID Connect (OIDC) tokens.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Once the environment is set up, the next step is to define the actions the &lt;code&gt;frontend-image&lt;/code&gt; job will take: building the Docker image, pushing it to a container registry, and validating its security.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;frontend-image&lt;/code&gt; &lt;strong&gt;job steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here are the steps the job takes to build, push and scan the Docker images:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;   &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout the application code&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS credentials&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v4&lt;/span&gt;
       &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;aws-access-key-id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCESS_KEY_ID }}&lt;/span&gt;
         &lt;span class="na"&gt;aws-secret-access-key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_SECRET_ACCESS_KEY }}&lt;/span&gt;
         &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_REGION }}&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build and push frontend Docker image to ECR&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./frontend&lt;/span&gt;
       &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
         &lt;span class="s"&gt;aws ecr get-login-password --region ${{ secrets.AWS_REGION }} | docker login --username AWS --password-stdin ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com&lt;/span&gt;
        &lt;span class="s"&gt;IMAGE_URI=${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com/frontend&lt;/span&gt;
        &lt;span class="s"&gt;docker build -t ${IMAGE_URI}:latest .&lt;/span&gt;
        &lt;span class="s"&gt;docker push ${IMAGE_URI}:latest&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Trivy vulnerability scanner&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aquasecurity/trivy-action@master&lt;/span&gt;
       &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;image-ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;secrets.AWS_ACCOUNT_ID&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}.dkr.ecr.${{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;secrets.AWS_REGION&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}.amazonaws.com/frontend:latest"&lt;/span&gt;
         &lt;span class="na"&gt;format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sarif"&lt;/span&gt;
         &lt;span class="na"&gt;output&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;trivy-results.sarif"&lt;/span&gt;
         &lt;span class="na"&gt;severity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CRITICAL,HIGH"&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install Snyk CLI&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk/actions/setup@master&lt;/span&gt;
       &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;snyk-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SNYK_TOKEN }}&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Snyk Authenticate&lt;/span&gt;
       &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk auth ${{ secrets.SNYK_TOKEN }}&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Snyk Container monitor&lt;/span&gt;
       &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk container monitor ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com/frontend:latest --file=Dockerfile&lt;/span&gt;
       &lt;span class="na"&gt;working-directory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./frontend&lt;/span&gt;

     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Run Snyk to check for vulnerabilities in the Docker image&lt;/span&gt;
       &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;snyk/actions/docker@master&lt;/span&gt;
       &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com/frontend:latest&lt;/span&gt;
         &lt;span class="na"&gt;args&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;--file=frontend/Dockerfile --severity-threshold=high&lt;/span&gt;
       &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;SNYK_TOKEN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SNYK_TOKEN }}&lt;/span&gt;
       &lt;span class="na"&gt;continue-on-error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;frontend-image&lt;/code&gt; job does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Configure AWS credentials&lt;/strong&gt;: Sets up AWS credentials from GitHub Secrets so the job can interact with ECR.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build and push frontend Docker image to ECR&lt;/strong&gt;: This step authenticates Docker with AWS ECR, builds the Docker image, and pushes it to your ECR repository. It includes:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;aws ecr get-login-password ... | docker login ...&lt;/strong&gt;: Logs into Amazon ECR using a token generated by AWS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IMAGE_URI…:&lt;/strong&gt; Defines the full Docker image URI, pointing to your ECR repo.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;docker build -t ${IMAGE_URI}:latest .&lt;/strong&gt;: Builds the Docker image from the &lt;code&gt;./frontend&lt;/code&gt; directory and tags it as &lt;code&gt;&amp;lt;your-ecr-repo&amp;gt;:latest&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;docker push ${IMAGE_URI}:latest&lt;/strong&gt;: Uploads the &lt;code&gt;latest&lt;/code&gt; version of your image to the &lt;code&gt;frontend&lt;/code&gt; repository in ECR.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Run Trivy vulnerability scanner&lt;/strong&gt;: Scans the pushed Docker image for known vulnerabilities using Trivy and outputs the results in SARIF format.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Install Snyk CLI&lt;/strong&gt;: Installs the Snyk CLI tool to perform additional security checks.&lt;/li&gt;

&lt;li&gt;
&lt;strong&gt;Snyk Authenticate&lt;/strong&gt;: Logs into Snyk using your Snyk token from GitHub Secrets.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Snyk Container monitor&lt;/strong&gt;: Uploads the Docker image to Snyk for continuous monitoring and alerting about new vulnerabilities as they are discovered.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Run Snyk to check for vulnerabilities in the Docker image&lt;/strong&gt;: Performs a vulnerability scan of the &lt;code&gt;frontend:latest&lt;/code&gt; Docker image in ECR. The workflow will continue even if high-severity issues are detected.&lt;/li&gt;

&lt;/ul&gt;
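&lt;p&gt;The &lt;code&gt;IMAGE_URI&lt;/code&gt; line above simply assembles the standard ECR repository address from your account ID and region. Here is the same construction in plain shell, with placeholder values standing in for the GitHub Secrets:&lt;/p&gt;

```shell
# Placeholder values; in the workflow these come from GitHub Secrets.
AWS_ACCOUNT_ID="123456789012"
AWS_REGION="us-east-1"

# ECR image URIs follow the pattern account.dkr.ecr.region.amazonaws.com/repo
IMAGE_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/frontend"
echo "${IMAGE_URI}:latest"
```

Keeping the URI in one variable means the build, push, and scan steps all refer to exactly the same image reference.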

&lt;p&gt;&lt;strong&gt;Build, validate, and push your backend Docker image with the &lt;code&gt;backend-image&lt;/code&gt; job&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/Iheanacho-ai/dabc139fe35d496fa45eb2e6bca01278" rel="noopener noreferrer"&gt;https://gist.github.com/Iheanacho-ai/dabc139fe35d496fa45eb2e6bca01278&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;backend-image&lt;/code&gt; job mirrors the &lt;code&gt;frontend-image&lt;/code&gt; job, but operates on the backend service. It builds, scans, and pushes the Docker image from the &lt;code&gt;./backend&lt;/code&gt; directory to the backend repository in AWS ECR.&lt;/p&gt;
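&lt;p&gt;The Trivy step in both image jobs writes its findings to &lt;code&gt;trivy-results.sarif&lt;/code&gt;. If you later want to fail the build when error-level findings appear, a rough shell sketch of that gate could look like the following. The SARIF snippet here is simplified and hypothetical, not real Trivy output:&lt;/p&gt;

```shell
# Simplified, hypothetical SARIF-like file for illustration only.
cat > trivy-results.sarif <<'EOF'
{"runs":[{"results":[{"level":"error"},{"level":"error"},{"level":"warning"}]}]}
EOF

# Count error-level findings (grep -o emits one line per match).
errors=$(grep -o '"level":"error"' trivy-results.sarif | wc -l)
echo "error-level findings: ${errors}"

# Gate: a nonzero count could fail the job instead of continuing.
if [ "${errors}" -gt 0 ]; then
  echo "would fail the build here"
fi
```

In practice you would parse the SARIF properly (for example with jq) rather than grep, but the gating idea is the same.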

&lt;h1&gt;
  
  
  Running your pipeline
&lt;/h1&gt;

&lt;p&gt;Once you’ve configured your pipeline, the next step is to trigger it to build, test, and push your Docker images to AWS ECR. To do this, simply push your code to GitHub. GitHub automatically detects the push, reads the workflow file in the &lt;code&gt;.github/workflows&lt;/code&gt; directory, and runs the pipeline.&lt;/p&gt;

&lt;p&gt;Refer to the &lt;a href="https://docs.github.com/en/migrations/importing-source-code/using-the-command-line-to-import-source-code/adding-locally-hosted-code-to-github" rel="noopener noreferrer"&gt;GitHub documentation&lt;/a&gt; if you are unsure how to push your local code to GitHub.&lt;/p&gt;

&lt;p&gt;After pushing your code, navigate to your project repository on GitHub and click the &lt;strong&gt;Actions&lt;/strong&gt; tab to monitor your workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you don’t see any workflows under the &lt;strong&gt;Actions&lt;/strong&gt; tab, double-check that the &lt;code&gt;ci.yaml&lt;/code&gt; file is placed correctly in the &lt;code&gt;.github/workflows&lt;/code&gt; directory. Make sure there are no typos in folder or file names.&lt;/p&gt;
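&lt;p&gt;A quick way to check the placement locally is to test for the file from your repository root. This sketch assumes the workflow file is named &lt;code&gt;ci.yaml&lt;/code&gt;, as in this series:&lt;/p&gt;

```shell
# Run from the repository root; GitHub only picks up workflows in this exact path.
if [ -f .github/workflows/ci.yaml ]; then
  status="workflow found"
else
  status="workflow missing: check folder and file names"
fi
echo "${status}"
```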

&lt;p&gt;Once the workflow runs successfully, your Docker images will be available in the AWS ECR service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcFc3NgAntnuaQzrVkNpIkXkdPB1o5J2VvTYYUkf9OWCX2kH9WoPmpKwRfQvkhhKYi6WgLWmJs-VlfAePYmQvasQZEwk2eFP81kzNp9FSal-Lvc7MdzrQVr1o8Q8groloqUHF2f%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcFc3NgAntnuaQzrVkNpIkXkdPB1o5J2VvTYYUkf9OWCX2kH9WoPmpKwRfQvkhhKYi6WgLWmJs-VlfAePYmQvasQZEwk2eFP81kzNp9FSal-Lvc7MdzrQVr1o8Q8groloqUHF2f%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="673"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdsOGE5cfurE_Zj4NKa9T_t8NbTD2CM28bt03WTFlstjkht8p4JpBLAVOUcxlSbGVBmdpzCPixfhf7hR_sQem6GioXmdKzmo5jTaUJRmL8MdJ1Fez5FVAK865yKOyCDxjWYetMPsA%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdsOGE5cfurE_Zj4NKa9T_t8NbTD2CM28bt03WTFlstjkht8p4JpBLAVOUcxlSbGVBmdpzCPixfhf7hR_sQem6GioXmdKzmo5jTaUJRmL8MdJ1Fez5FVAK865yKOyCDxjWYetMPsA%3Fkey%3DzU949nj2vJQbL3dGbLfOBw" width="1600" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  What's next
&lt;/h1&gt;

&lt;p&gt;With your CI/CD pipeline now successfully building and pushing secure Docker images to Amazon ECR, you're ready to move on to the next phase of the series.&lt;/p&gt;

&lt;p&gt;In Part 3, you will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up an Amazon EKS cluster using the jumphost from Part 1&lt;/li&gt;
&lt;li&gt;Deploy your applications using the Docker images built in this pipeline&lt;/li&gt;
&lt;li&gt;Implement GitOps practices with ArgoCD for automated deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This pipeline lays the foundation for your DevSecOps workflow, ensuring that only tested, secure, and validated code is deployed to production.&lt;/p&gt;

&lt;h1&gt;
  
  
  Final thoughts
&lt;/h1&gt;

&lt;p&gt;Building a comprehensive CI/CD pipeline requires balancing speed, security, and reliability. This pipeline demonstrates how to integrate multiple security tools and best practices while maintaining development velocity. The automated testing, security scanning, and containerization process ensures that your applications are production-ready and secure.&lt;/p&gt;

&lt;p&gt;Remember to regularly update your dependencies, review security scan results, and continuously improve your pipeline based on your team's needs and security requirements. The DevSecOps approach implemented here provides a solid foundation for scalable, secure application delivery.&lt;/p&gt;

&lt;p&gt;In the next article, you’ll leverage these Docker images to deploy your application to a fully managed Kubernetes cluster with ArgoCD, completing the deployment automation loop.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ci</category>
      <category>devops</category>
    </item>
    <item>
      <title>What are Helm charts?</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Sun, 20 Apr 2025 17:47:14 +0000</pubDate>
      <link>https://forem.com/amaraiheanacho/what-are-helm-charts-4ck3</link>
      <guid>https://forem.com/amaraiheanacho/what-are-helm-charts-4ck3</guid>
      <description>&lt;p&gt;Package managers have been the norm throughout different fields of software engineering, from npm in JavaScript to pip in Python. They help simplify the installation, configuration, upgrade, and sharing of software and its dependencies, and Helm charts are no different.&lt;/p&gt;

&lt;p&gt;When you deploy applications to Kubernetes, it often means managing dozens of YAML files for deployments, services, ingress rules, and configuration. And as your app grows, so does the complexity, making it harder to maintain consistency, update values, or replicate environments. This manual process can quickly become error-prone and time-consuming.&lt;/p&gt;

&lt;p&gt;Helm addresses this challenge by packaging all those Kubernetes resources into a single, versioned chart. With Helm:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All the Kubernetes objects for your app live in a single chart.&lt;/li&gt;
&lt;li&gt;You use &lt;code&gt;values.yaml&lt;/code&gt; to customize configs per environment without editing the templates.&lt;/li&gt;
&lt;li&gt;You can install or upgrade your entire app stack with a single command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;myapp ./myapp-chart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;You can publish your chart to a shared repository and let others reuse it with different configs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In short, Helm charts bring the familiar benefits of package management, templating, and lifecycle automation to Kubernetes. They help teams standardize deployments, eliminate unnecessary duplication, and move faster — all while preserving the power and flexibility of Kubernetes itself. This article walks you through what Helm charts are, how they work, and how you can start using and creating them with confidence.&lt;/p&gt;
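&lt;p&gt;As a quick illustration, a chart’s &lt;code&gt;values.yaml&lt;/code&gt; might expose just the image tag, replica count, and service settings, so each environment overrides only what it needs. The field names below are hypothetical, not taken from a specific chart:&lt;/p&gt;

```yaml
# values.yaml: defaults that templates reference via {{ .Values.* }}
replicaCount: 2
image:
  repository: myapp
  tag: "1.0.0"
service:
  type: ClusterIP
  port: 80
```

&lt;p&gt;An environment-specific file then overrides only the fields that differ, for example with &lt;code&gt;helm install myapp ./myapp-chart -f values-prod.yaml&lt;/code&gt;.&lt;/p&gt;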

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;To get the most out of this article, make sure you have the following in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GitHub account. If you don’t have one, create a &lt;a href="https://github.com/" rel="noopener noreferrer"&gt;GitHub account here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;An Artifact Hub account. You can sign up for a free &lt;a href="https://artifacthub.io/" rel="noopener noreferrer"&gt;Artifact Hub account here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A basic understanding of Git.&lt;/li&gt;
&lt;li&gt;A basic understanding of Kubernetes.&lt;/li&gt;
&lt;li&gt;A running Kubernetes cluster. You can use tools like &lt;a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/create-cluster/cluster-intro/" rel="noopener noreferrer"&gt;Minikube&lt;/a&gt;, &lt;a href="https://medium.com/@m0v_3r/intro-to-kind-7d553ed40ce0" rel="noopener noreferrer"&gt;Kind&lt;/a&gt;, or any cloud provider such as &lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html" rel="noopener noreferrer"&gt;AWS EKS&lt;/a&gt;, &lt;a href="https://dev.to/jdxlabs/create-an-eks-or-gke-cluster-in-minutes-1o5b"&gt;Google GKE&lt;/a&gt;, or &lt;a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-deploy-cluster" rel="noopener noreferrer"&gt;Azure AKS&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Helm installed. To verify that Helm is installed, run the following command in your terminal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see version information printed in your terminal if Helm is installed correctly:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfRrO6gEjCeAnBWTcU66WpQbkfvxYBCozJ-V8uEhU38ZhKpXMIgPhD3GAnhCjx9IQnC-FeAcyZse8Dtd_aRnqyoG9zPEHCIJ0exFGzxgjFHXcSOjj4AT1mmm7Drn-ybLPIAkMzL8Q%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfRrO6gEjCeAnBWTcU66WpQbkfvxYBCozJ-V8uEhU38ZhKpXMIgPhD3GAnhCjx9IQnC-FeAcyZse8Dtd_aRnqyoG9zPEHCIJ0exFGzxgjFHXcSOjj4AT1mmm7Drn-ybLPIAkMzL8Q%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="35"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Refer to the official documentation to &lt;a href="https://helm.sh/docs/intro/install/" rel="noopener noreferrer"&gt;install Helm&lt;/a&gt; if it is not already installed.&lt;/p&gt;

&lt;h1&gt;
  
  
  Core Helm concepts
&lt;/h1&gt;

&lt;p&gt;Before diving into building and using Helm charts, it's important to understand some core concepts: Charts, Releases, Config, and Repositories. These are the building blocks that make Helm powerful and flexible for managing Kubernetes applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chart&lt;/strong&gt;: A Helm chart is a package that contains all the necessary files to describe a Kubernetes application,  including templates for deployments, services, and other resources. Think of it as a blueprint for your app that can be reused, versioned, and shared.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Release&lt;/strong&gt;: A release is an instance of a chart that has been deployed to a Kubernetes cluster. You can deploy the same chart multiple times with different configurations, and Helm will manage each as a separate release with its own lifecycle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Config&lt;/strong&gt;: Configurations in Helm are defined using a &lt;code&gt;values.yaml&lt;/code&gt; file. This file lets you override default settings in the chart templates — such as replica counts, image tags, or environment-specific variables — without changing the chart structure itself.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repository&lt;/strong&gt;: A Helm repository is a place where charts are stored and made available for installation. Similar to npm or PyPI, you can publish your own charts or pull charts from public sources like ArtifactHub to quickly deploy popular open-source tools.&lt;/li&gt;
&lt;/ul&gt;
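&lt;p&gt;To make the chart/release distinction concrete, here is a minimal sketch of installing the same chart twice as two independent releases. The release names and the &lt;code&gt;staging-values.yaml&lt;/code&gt; file are illustrative, and the commands assume a reachable cluster:&lt;/p&gt;

```shell
# Two releases from one chart; Helm manages each lifecycle separately.
# Release names and the values file are illustrative.
helm install shop-prod ./myapp-chart
helm install shop-staging ./myapp-chart --values staging-values.yaml

# Both releases appear as separate entries.
helm list
```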

&lt;h1&gt;
  
  
  Helm chart anatomy
&lt;/h1&gt;

&lt;p&gt;Understanding the structure of a Helm chart is essential for both using and creating charts effectively. Helm follows a standardized directory layout that organizes your Kubernetes manifests, configuration defaults, metadata, and dependencies. &lt;/p&gt;

&lt;p&gt;Let’s explore each component of a typical Helm chart.&lt;/p&gt;

&lt;h2&gt;
  
  
  Directory structure
&lt;/h2&gt;

&lt;p&gt;Here’s what a typical Helm chart folder looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight cpp"&gt;&lt;code&gt;&lt;span class="n"&gt;myapp&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;chart&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;Chart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yaml&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;values&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;yaml&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;templates&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="n"&gt;charts&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="n"&gt;README&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;md&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;optional&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Chart.yaml&lt;/strong&gt;: This file contains metadata about the chart, including its name, version, description, and maintainers. Helm uses this information to identify and manage the chart.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.1.0&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;A Helm chart for deploying MyApp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;values.yaml&lt;/strong&gt;: This file defines the default configuration values used by the templates in the chart. These values control various aspects of your Kubernetes resources, for example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which Docker image to use&lt;/li&gt;
&lt;li&gt;The number of replicas&lt;/li&gt;
&lt;li&gt;Exposed ports&lt;/li&gt;
&lt;li&gt;Environment variables&lt;/li&gt;
&lt;li&gt;Type of Kubernetes service (ClusterIP, LoadBalancer, etc.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can override these values at install time using the &lt;code&gt;--values&lt;/code&gt; flag or &lt;code&gt;--set&lt;/code&gt; on the command line, making the chart highly customizable and reusable across different environments.&lt;/p&gt;
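&lt;p&gt;For example, the same chart can be installed with different settings per environment. The flag values and the &lt;code&gt;prod-values.yaml&lt;/code&gt; filename below are illustrative:&lt;/p&gt;

```shell
# Override individual defaults inline...
helm install my-app ./my-chart --set replicaCount=3 --set image.tag=v1.2.0

# ...or keep environment-specific overrides in a separate file.
helm install my-app ./my-chart --values prod-values.yaml
```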

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;replicaCount&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;templates/&lt;/strong&gt;: This directory contains Go template files that define the Kubernetes resources Helm will generate and deploy. Common files include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;deployment.yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;service.yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ingress.yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;configmap.yaml&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;hpa.yaml&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These files resemble standard Kubernetes manifests but use Go template syntax (&lt;code&gt;{{ ... }}&lt;/code&gt;) to inject dynamic values from &lt;code&gt;values.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When you run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;my-app ./my-chart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Helm reads &lt;code&gt;values.yaml&lt;/code&gt;, plugs the values into the templates, and generates valid Kubernetes YAML.&lt;/p&gt;
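&lt;p&gt;Before installing, you can preview exactly what Helm will generate. Both commands below are standard Helm subcommands; the release and chart names follow the example above:&lt;/p&gt;

```shell
# Render the templates locally and print the resulting manifests
# without contacting the cluster.
helm template my-app ./my-chart

# Simulate the install against the cluster and show what would be applied.
helm install my-app ./my-chart --dry-run --debug
```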

&lt;p&gt;To understand what a Helm template looks like, check out this &lt;a href="https://gist.github.com/Iheanacho-ai/16a102cf84c8c966facc61aec7aa4b50" rel="noopener noreferrer"&gt;demo deployment template&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In the deployment template, here are some of the most common things you will see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;include "template.name" .&lt;/code&gt; : Calls a reusable helper template (defined in &lt;code&gt;_helpers.tpl&lt;/code&gt;), passing in the current context (.).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.Values&lt;/code&gt;: References values from &lt;code&gt;values.yaml&lt;/code&gt;, or overrides via &lt;code&gt;--set&lt;/code&gt;. Example: &lt;code&gt;.Values.image.repository&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.Chart&lt;/code&gt;: Accesses the chart metadata. Example: &lt;code&gt;.Chart.Name&lt;/code&gt;, &lt;code&gt;.Chart.AppVersion&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;.Template&lt;/code&gt;: Information about the current template file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;with &amp;lt;value&amp;gt;&lt;/code&gt;: Changes the context (.) inside the block to the given value.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;if / else / end&lt;/code&gt;: Conditional logic. Example: &lt;code&gt;{{ if .Values.autoscaling.enabled }}&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;toYaml&lt;/code&gt;: Converts an object to YAML format.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;nindent&lt;/code&gt;: Indents lines to preserve YAML structure.&lt;/li&gt;
&lt;/ul&gt;
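&lt;p&gt;Putting several of these constructs together, here is a hedged fragment in the style of the default &lt;code&gt;deployment.yaml&lt;/code&gt; that &lt;code&gt;helm create&lt;/code&gt; scaffolds. It assumes &lt;code&gt;values.yaml&lt;/code&gt; defines &lt;code&gt;replicaCount&lt;/code&gt;, &lt;code&gt;resources&lt;/code&gt;, &lt;code&gt;nodeSelector&lt;/code&gt;, and &lt;code&gt;autoscaling.enabled&lt;/code&gt;:&lt;/p&gt;

```yaml
# Illustrative template fragment, not a complete Deployment.
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}   # skipped when the HPA manages replicas
  {{- end }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```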

&lt;p&gt;Other templates you will typically find in the &lt;code&gt;templates/&lt;/code&gt; folder include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;service.yaml&lt;/code&gt; – Defines how your app is exposed within or outside the cluster.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ingress.yaml&lt;/code&gt; – Configures domain-based access via Kubernetes Ingress.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;_helpers.tpl&lt;/code&gt; – Stores reusable template snippets for labels, names, etc.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;serviceaccount.yaml&lt;/code&gt; – Creates a ServiceAccount for access to the Kubernetes API.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hpa.yaml&lt;/code&gt; – Configures horizontal pod autoscaling.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;configmap.yaml&lt;/code&gt; – Injects non-sensitive configuration into your pods.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;secret.yaml&lt;/code&gt; – Injects sensitive data like credentials or API keys.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;tests/&lt;/code&gt; – Contains Helm test hooks to validate that the chart works after installation.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;NOTES.txt&lt;/code&gt; – Provides post-installation instructions or tips displayed in the CLI.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each of these templates can dynamically adjust based on values in the &lt;code&gt;values.yaml&lt;/code&gt; file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;charts/&lt;/strong&gt;: This optional directory is where you can include other Helm charts as dependencies (subcharts). For example, if your app relies on a PostgreSQL database, you can include the PostgreSQL chart here.&lt;/p&gt;
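&lt;p&gt;Dependencies are declared in &lt;code&gt;Chart.yaml&lt;/code&gt; and pulled into &lt;code&gt;charts/&lt;/code&gt; with &lt;code&gt;helm dependency update&lt;/code&gt;. A sketch of such a declaration (the version constraint shown is illustrative; check the upstream chart for a current one):&lt;/p&gt;

```yaml
# Excerpt from Chart.yaml: declares the Bitnami PostgreSQL chart as a subchart.
dependencies:
  - name: postgresql
    version: "15.x.x"
    repository: https://charts.bitnami.com/bitnami
```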

&lt;p&gt;&lt;strong&gt;README.md (optional)&lt;/strong&gt;: Although not required, it’s a good practice to include a &lt;code&gt;README.md&lt;/code&gt; file to document what the chart does, how to install it, configuration options, and usage examples. This is especially helpful when sharing your chart with others.&lt;/p&gt;

&lt;h1&gt;
  
  
  Installing and using a Helm chart
&lt;/h1&gt;

&lt;p&gt;To better understand Helm charts and their significance, let's walk through creating one by templatizing a simple HTML web server built with Nginx. This server simply serves a static webpage that says:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Hello, world! I hope you're getting the hang of this Helm chart business.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a Helm chart&lt;/strong&gt;&lt;br&gt;
Start by generating the chart structure with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm create nginx-chart
&lt;span class="nb"&gt;cd &lt;/span&gt;nginx-chart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command creates a folder named &lt;code&gt;nginx-chart&lt;/code&gt; containing the standard Helm chart directory structure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfqKC6Hnnp9RAMZJk5l2p1uLU0LQ3bZ47NmhGa35Da-kD-or_NlyWkld7RgKRAG_hGF6mhNoeqrMVAJxyqDA27oHVf3HknZkw0jbPzG2JETAYLNDUhVAjpPnQomAnvKjBjASHGBiA%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfqKC6Hnnp9RAMZJk5l2p1uLU0LQ3bZ47NmhGa35Da-kD-or_NlyWkld7RgKRAG_hGF6mhNoeqrMVAJxyqDA27oHVf3HknZkw0jbPzG2JETAYLNDUhVAjpPnQomAnvKjBjASHGBiA%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="62"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdxYgtdowQ7h3vg5BqYrLxfqs8d_Df5fbaxspo-HnC2oTVjR6vMAZTnA6tPPZ9rx9ZboxueBQxeXSXWRfED7uszKQVdrWETeqh-UcjCkgGWhTzI7sp2ih7UkK95gEfXR0mFEU9Mwg%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdxYgtdowQ7h3vg5BqYrLxfqs8d_Df5fbaxspo-HnC2oTVjR6vMAZTnA6tPPZ9rx9ZboxueBQxeXSXWRfED7uszKQVdrWETeqh-UcjCkgGWhTzI7sp2ih7UkK95gEfXR0mFEU9Mwg%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="952" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Update the values.yaml file&lt;/strong&gt;&lt;br&gt;
In the root directory of your &lt;code&gt;nginx-chart&lt;/code&gt; project, locate the &lt;code&gt;values.yaml&lt;/code&gt; file. Open it and update the &lt;code&gt;image&lt;/code&gt; and &lt;code&gt;service&lt;/code&gt; sections as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;amaraiheanacho/nginx-site&lt;/span&gt;
  &lt;span class="na"&gt;pullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;IfNotPresent&lt;/span&gt;
  &lt;span class="na"&gt;tag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;latest"&lt;/span&gt;
&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NodePort&lt;/span&gt;
  &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXef_vrwC2CE6oN-HNMJfOYlojWhzFPL7m0STNS26Ka78Cg8haK8IGl8thVacatI-d_2eKlw-1fgF6YKLco3HISjs_K-DmBXWz_9Sg9fES3iz0ttLkspMrEZJu77Hvo86Xw59EfgJg%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXef_vrwC2CE6oN-HNMJfOYlojWhzFPL7m0STNS26Ka78Cg8haK8IGl8thVacatI-d_2eKlw-1fgF6YKLco3HISjs_K-DmBXWz_9Sg9fES3iz0ttLkspMrEZJu77Hvo86Xw59EfgJg%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="952" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcy5DUxCgpa7aTVrrMMdAMLJplRe7jC7k8tHVoXJrvpLx0PoOvF9bnBJ_9KNA2fATvwEgaDQhZniXe6_k0bi5D8Z1GiAhzEJ-5vRVgreZsGWXSmyXdyh-mvBxWkE_CEJHaagVPINg%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcy5DUxCgpa7aTVrrMMdAMLJplRe7jC7k8tHVoXJrvpLx0PoOvF9bnBJ_9KNA2fATvwEgaDQhZniXe6_k0bi5D8Z1GiAhzEJ-5vRVgreZsGWXSmyXdyh-mvBxWkE_CEJHaagVPINg%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="952" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These values define the image source and how the service will be exposed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Confirm the template references&lt;/strong&gt;&lt;br&gt;
Go into the &lt;code&gt;templates/&lt;/code&gt; folder and review the &lt;code&gt;deployment.yaml&lt;/code&gt; and &lt;code&gt;service.yaml&lt;/code&gt; templates to ensure they correctly reference the values defined in &lt;code&gt;values.yaml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In the &lt;code&gt;deployment.yaml&lt;/code&gt; template, you should see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Values.image.repository&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}:{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Values.image.tag&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;|&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;default&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;.Chart.AppVersion&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;span class="na"&gt;imagePullPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.image.pullPolicy&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.service.port&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;image&lt;/code&gt;: References the image repository and tag from &lt;code&gt;values.yaml&lt;/code&gt; — in this case, &lt;code&gt;amaraiheanacho/nginx-site:latest&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;imagePullPolicy&lt;/code&gt;: Uses the value of &lt;code&gt;image.pullPolicy&lt;/code&gt; from &lt;code&gt;values.yaml&lt;/code&gt;, typically set to &lt;code&gt;IfNotPresent&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;containerPort&lt;/code&gt;: Retrieves the port number (e.g., &lt;code&gt;80&lt;/code&gt;) from &lt;code&gt;service.port&lt;/code&gt; in &lt;code&gt;values.yaml&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the &lt;code&gt;service.yaml&lt;/code&gt; file, you should see something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;.Values.service.type&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;type&lt;/code&gt;: Uses the &lt;code&gt;service.type&lt;/code&gt; value from &lt;code&gt;values.yaml&lt;/code&gt;, which is &lt;code&gt;NodePort&lt;/code&gt;. This makes your Nginx server accessible externally.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With these steps complete, you now have a working Helm chart for your Nginx web server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying the Helm chart
&lt;/h2&gt;

&lt;p&gt;Once you have configured the values for your chart, you can deploy it to your Kubernetes cluster with the following command, replacing &lt;code&gt;&amp;lt;path to nginx-chart&amp;gt;&lt;/code&gt; with the path to your chart directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install my-nginx-release &amp;lt;path to nginx-chart&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs the chart as a release named &lt;code&gt;my-nginx-release&lt;/code&gt; (you can use any name you prefer), using the configuration from the &lt;code&gt;nginx-chart&lt;/code&gt; directory.&lt;/p&gt;
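&lt;p&gt;You can confirm that the release and its Kubernetes objects came up with a few standard commands (assuming a reachable cluster):&lt;/p&gt;

```shell
# List releases and show the status of this one.
helm list
helm status my-nginx-release

# Verify the Deployment and Service created from the chart templates.
kubectl get deployments,services
```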

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc8Ae2iu2fXwTqa6_-EdLQ3xgE6MpuOTIaYe6ChfmlfkkPRmL5-HX8rkpbm0bSkLE9YCt2rjwosa06dAu9LJIT5UVNSdnBiiJGxqxyTcKe7oY8OL9MATZ2y2IBbXbwwbfw8YiiJvQ%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc8Ae2iu2fXwTqa6_-EdLQ3xgE6MpuOTIaYe6ChfmlfkkPRmL5-HX8rkpbm0bSkLE9YCt2rjwosa06dAu9LJIT5UVNSdnBiiJGxqxyTcKe7oY8OL9MATZ2y2IBbXbwwbfw8YiiJvQ%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Viewing your application
&lt;/h2&gt;

&lt;p&gt;If you're using Minikube, you can access the application directly with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube service &amp;lt;release-name&amp;gt;-&amp;lt;chart-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, if your release is named &lt;code&gt;my-nginx-release&lt;/code&gt; and your chart is named &lt;code&gt;nginx-chart&lt;/code&gt;, the command would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube service my-nginx-release-nginx-chart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfK3rmPz-alMm0raNa0rtEXMQEGAglrWPQ-ykVbP0nBH7Uy_zNQGylbNdqpqr5Ce6YaBHyKXaAeEoboTwRt6wj8RetjtLrt-kkQSRwC97jTZ2tdHRCmDvAyzdlnM18v6Ul-duyJGQ%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfK3rmPz-alMm0raNa0rtEXMQEGAglrWPQ-ykVbP0nBH7Uy_zNQGylbNdqpqr5Ce6YaBHyKXaAeEoboTwRt6wj8RetjtLrt-kkQSRwC97jTZ2tdHRCmDvAyzdlnM18v6Ul-duyJGQ%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command opens the app in your default browser using Minikube’s service tunneling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdRD-IiFuIHerEF1P_jF4HdDaiDn1lqEVailTvtOHSGCtgE-ZhcWKRjzlcrNYIvX-PmhtrSmpcQqU6JaxEHL-xIEKcJjtpAo_BAgivQAngi2PTkkX090G0_J6EUqzxJtrl563qc%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdRD-IiFuIHerEF1P_jF4HdDaiDn1lqEVailTvtOHSGCtgE-ZhcWKRjzlcrNYIvX-PmhtrSmpcQqU6JaxEHL-xIEKcJjtpAo_BAgivQAngi2PTkkX090G0_J6EUqzxJtrl563qc%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’re using any other Kubernetes cluster, use the following commands to retrieve the external URL of your app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services nginx-site)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will output the full URL to access your deployed application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Upgrading and rolling back releases
&lt;/h2&gt;

&lt;p&gt;Once deployed, you can update your Helm chart and roll out changes in your release with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm upgrade my-nginx-release ./nginx-chart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command applies any new changes in the chart (like updated images or values) to the existing release.&lt;/p&gt;
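&lt;p&gt;You can also combine an upgrade with inline overrides; the replica count below is illustrative:&lt;/p&gt;

```shell
# Scale the release without editing values.yaml; Helm records this
# change as a new revision that you can roll back to.
helm upgrade my-nginx-release ./nginx-chart --set replicaCount=3
```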

&lt;p&gt;If something breaks during an upgrade, you can roll back to a previous version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm rollback my-nginx-release &lt;span class="o"&gt;[&lt;/span&gt;REVISION]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view available revisions, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;history &lt;/span&gt;my-nginx-release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will list the release history, including version numbers you can roll back to.&lt;/p&gt;

&lt;h1&gt;
  
  
  Upload your Helm chart to a repository
&lt;/h1&gt;

&lt;p&gt;A Helm chart repository is a location where developers publish Helm charts, allowing others to easily install and deploy applications without writing the manifests from scratch. One such popular registry is Artifact Hub, which you'll use in this tutorial.&lt;/p&gt;

&lt;p&gt;To publish your Helm chart to Artifact Hub, follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Ensure that you are currently in your project’s directory&lt;/strong&gt;&lt;br&gt;
If you're not already in your Helm chart project directory (the one that contains the &lt;code&gt;Chart.yaml&lt;/code&gt; file), change to it using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd &amp;lt;name of the project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Push your project to a GitHub repository&lt;/strong&gt; &lt;br&gt;
First, push your project to a public GitHub repository. Artifact Hub doesn't store charts directly—it indexes them from externally hosted, Helm-compatible repositories. GitHub Pages is a popular and simple option for this.&lt;/p&gt;

&lt;p&gt;Check out the &lt;a href="https://gist.github.com/mindplace/b4b094157d7a3be6afd2c96370d39fad" rel="noopener noreferrer"&gt;Pushing your first project to GitHub&lt;/a&gt; guide if you are new to GitHub.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Package your Helm chart&lt;/strong&gt;&lt;br&gt;
Next, you need to package your Helm chart so it can be distributed and recognized by Helm.&lt;/p&gt;

&lt;p&gt;1. From your chart’s root directory, run the following command to generate a &lt;code&gt;.tgz&lt;/code&gt; file (e.g., &lt;code&gt;nginx-chart-0.1.0.tgz&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm package &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. Then, create an &lt;code&gt;index.yaml&lt;/code&gt; file that references your packaged chart:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo index &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--url&lt;/span&gt; https://&amp;lt;GitHub-username&amp;gt;.github.io/&amp;lt;repository-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;GitHub-username&amp;gt;&lt;/code&gt; with your GitHub username&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;repository-name&amp;gt;&lt;/code&gt; with the name of your GitHub repository&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3. You should now have two files in your project directory: the &lt;code&gt;.tgz&lt;/code&gt; archive and the &lt;code&gt;index.yaml&lt;/code&gt;. Commit and push both files to your GitHub repository with the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git add &lt;span class="nb"&gt;.&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Add packaged chart and index.yaml"&lt;/span&gt;
git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Configure GitHub Pages&lt;/strong&gt;&lt;br&gt;
To serve your Helm chart, you need to enable GitHub Pages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to your GitHub repository.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Settings&lt;/strong&gt; tab.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdKLDpBPyKHkU1YITI1g4jB_28B3JqD5lDG1EOM9R1NWBhVpkQSNbr_uv-HWh9whixwjTXgmnyNHxxIr0mxrrn5h6a2jM7QbG8x8K8_kB3OtwUPIUHCY4cmp2mcaxR_PbzCZd4k%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdKLDpBPyKHkU1YITI1g4jB_28B3JqD5lDG1EOM9R1NWBhVpkQSNbr_uv-HWh9whixwjTXgmnyNHxxIr0mxrrn5h6a2jM7QbG8x8K8_kB3OtwUPIUHCY4cmp2mcaxR_PbzCZd4k%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="102"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Pages&lt;/strong&gt; from the left sidebar.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeuBS8gOvilkx6PUTlx2DpzbhUEET60HwKlP_Do4gqus8eqGHR53i7wm3bdpQcB463VRJ-dO_cyAJhPUjK7Q1ykUOx_OQSiZruN-h21C73xi9TTc1gs9vJzJWaG9j9rTwP-G2IlNw%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeuBS8gOvilkx6PUTlx2DpzbhUEET60HwKlP_Do4gqus8eqGHR53i7wm3bdpQcB463VRJ-dO_cyAJhPUjK7Q1ykUOx_OQSiZruN-h21C73xi9TTc1gs9vJzJWaG9j9rTwP-G2IlNw%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="868"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;strong&gt;Build and deployment&lt;/strong&gt; section, under &lt;strong&gt;Branch&lt;/strong&gt;, choose the branch and folder where your chart files are located. For this guide, use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Branch&lt;/strong&gt;: &lt;code&gt;main&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Folder&lt;/strong&gt;: &lt;code&gt;/ (root)&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the &lt;strong&gt;Save&lt;/strong&gt; button.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXf_waqgA27qz4L_6YkTK-bXDJJOEocmAKEpdwDWBTT2dAq8yIdPXmvIDOUCLMXAnOznLifFrzEFbq1trhhljHNW_85lkvAOIXvitMHSnkWZdmIwwD3dLadZ2QUldjl0kBtWsl_Zyw%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXf_waqgA27qz4L_6YkTK-bXDJJOEocmAKEpdwDWBTT2dAq8yIdPXmvIDOUCLMXAnOznLifFrzEFbq1trhhljHNW_85lkvAOIXvitMHSnkWZdmIwwD3dLadZ2QUldjl0kBtWsl_Zyw%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="552"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;GitHub will now serve your Helm chart at: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;https://&amp;lt;GitHub-username&amp;gt;.github.io/&amp;lt;repository-name&amp;gt;&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you don't see the 'Your site is live' message right away, wait a few moments and refresh the page. The message should appear shortly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXedj1HV3wp8Xr86o1mwPtMTXGo9FdbtjfWA0y4q4po4w66dV8JjUl745fgZBdtXG_t1LjZJwTWbUEfKhsXY1BoB05kMOt4zxvRHWHiOyTWVS9nqSD2P4ODO_3iBy7NA8Ib52WQoag%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXedj1HV3wp8Xr86o1mwPtMTXGo9FdbtjfWA0y4q4po4w66dV8JjUl745fgZBdtXG_t1LjZJwTWbUEfKhsXY1BoB05kMOt4zxvRHWHiOyTWVS9nqSD2P4ODO_3iBy7NA8Ib52WQoag%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Register your repository on Artifact Hub&lt;/strong&gt;&lt;br&gt;
Now you’re ready to publish your chart on Artifact Hub:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to &lt;a href="https://artifacthub.io/" rel="noopener noreferrer"&gt;Artifact Hub&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click your profile icon and select &lt;strong&gt;Control Panel&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXemaTDGAIZEUT0kjf_mB0woNJ0dADwz58W1VtNeGAn08sIpvQSzctQj0fz1AMxPt4Ss5vEf-dBXpTl88CLeYnZoyknKchIyaQnP0EjlSXG67VX9mnLQYUvvx4eMGwMJZ_eIzfuaHw%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXemaTDGAIZEUT0kjf_mB0woNJ0dADwz58W1VtNeGAn08sIpvQSzctQj0fz1AMxPt4Ss5vEf-dBXpTl88CLeYnZoyknKchIyaQnP0EjlSXG67VX9mnLQYUvvx4eMGwMJZ_eIzfuaHw%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="514"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to the &lt;strong&gt;Repositories&lt;/strong&gt; tab and click &lt;strong&gt;+ Add&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Fill in the details:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Repository name&lt;/strong&gt;: e.g., &lt;code&gt;nginx-site&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Display name&lt;/strong&gt;: e.g., &lt;code&gt;nginx-site&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;URL&lt;/strong&gt;: Your GitHub Pages URL from Step 3&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click &lt;strong&gt;+ Add&lt;/strong&gt; to create your repository.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfRHOKH6C5xG4BzWndBMXHYNKMbtIRiaMN54YkFzL5aIcg2lNcbTzQp8jwd8dB2lxiJ-smdLJpyadZDbqPpv5JuSKp-Bi-GHBnAsG-syVgTOOSf3mpJUsq3-V-g24jbym4CCgRtFg%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfRHOKH6C5xG4BzWndBMXHYNKMbtIRiaMN54YkFzL5aIcg2lNcbTzQp8jwd8dB2lxiJ-smdLJpyadZDbqPpv5JuSKp-Bi-GHBnAsG-syVgTOOSf3mpJUsq3-V-g24jbym4CCgRtFg%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="889"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once added, Artifact Hub will index your chart. Congratulations! You've successfully published your Helm chart to Artifact Hub.&lt;/p&gt;
&lt;h1&gt;
  
  
  Verify that your Helm chart was uploaded successfully
&lt;/h1&gt;

&lt;p&gt;Now that your Helm chart has been uploaded, it's time to install and use it to start the Nginx server, all without writing any additional configuration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Clean up any existing Nginx release&lt;/strong&gt;&lt;br&gt;
Before proceeding, make sure there are no existing Helm Nginx releases running. This will help you avoid confusing the release created from the Artifact Hub chart with the one created from your local chart. You can check for existing releases using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdpCza7ELrftitoR7QFu1Z8hQ_RDgPxF_1sukUnJpQH6WDUli2-CD9byQafXeKNc449s3SaJrGRcDyFA5VaGGtnBzmRrsvyDBSOmCZ9kRVgOCaBQ9o_fallJKJwqmfOfQvaLfGV%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdpCza7ELrftitoR7QFu1Z8hQ_RDgPxF_1sukUnJpQH6WDUli2-CD9byQafXeKNc449s3SaJrGRcDyFA5VaGGtnBzmRrsvyDBSOmCZ9kRVgOCaBQ9o_fallJKJwqmfOfQvaLfGV%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you see an Nginx release, delete it by copying the release name and running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm uninstall &amp;lt;release-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;release-name&amp;gt;&lt;/code&gt; with the actual name of the Nginx release you want to remove.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXecwljoneEx5vVh8iYBm_avEVkglYOeBpnQ7H4fjKPxCRkPLk7JFQwfXwJjxACD_yfxS_P1Y5ZPGNSTOkQrgJxNewoEoWQFR83znh55wi6giNW2NfJ0FbDxxhIW6TWY_Kzy_xKDGA%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXecwljoneEx5vVh8iYBm_avEVkglYOeBpnQ7H4fjKPxCRkPLk7JFQwfXwJjxACD_yfxS_P1Y5ZPGNSTOkQrgJxNewoEoWQFR83znh55wi6giNW2NfJ0FbDxxhIW6TWY_Kzy_xKDGA%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1414" height="68"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Install your Helm chart&lt;/strong&gt;&lt;br&gt;
The process of installing a Helm chart is generally consistent across charts. Here's how to do it:&lt;/p&gt;

&lt;p&gt;1. Add your Helm chart repository to your local list of Helm repositories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add &amp;lt;local-repo-name&amp;gt; &amp;lt;GitHub-Pages-URL&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;&amp;lt;local-repo-name&amp;gt;&lt;/code&gt; with the name you want to use for the repository locally, and &lt;code&gt;&amp;lt;GitHub-Pages-URL&amp;gt;&lt;/code&gt; with the URL of your GitHub Pages site where the Helm chart is hosted.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfklv-32dQzegjCTnGNqsncM2SVqgViMTIwpIiGL6pRyNcJmOa9xPhdWhdfCoiziq6UYni6G3WlUEZ22Cz_aFtEf27nFeedTZm3RBp8x9gDSaZUsb1epQ8k0CKmsvx_HMTZx-iikA%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfklv-32dQzegjCTnGNqsncM2SVqgViMTIwpIiGL6pRyNcJmOa9xPhdWhdfCoiziq6UYni6G3WlUEZ22Cz_aFtEf27nFeedTZm3RBp8x9gDSaZUsb1epQ8k0CKmsvx_HMTZx-iikA%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="65"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2. Update your Helm repositories to ensure you have the latest version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Install your Helm chart by running the command below. Replace the placeholders with your actual values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install&lt;/span&gt; &amp;lt;release name&amp;gt; &amp;lt;repository name&amp;gt;/&amp;lt;chart name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;release-name&amp;gt;&lt;/code&gt;: A name you choose for this specific deployment (e.g., nginx-release)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;repository-name&amp;gt;&lt;/code&gt;: The name you gave the repository when adding it (e.g., nginx-chart)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;&amp;lt;chart-name&amp;gt;&lt;/code&gt;: The name of the Helm chart (you can find this in the &lt;code&gt;index.yaml&lt;/code&gt; file generated and pushed to GitHub)
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXepbZLLhTV_XG5TLXlrtkmb-7Lcyu8q5TpFa7zf_cxooAzHWV3xlmhD6TrH3RRUB8R1RzQ6NdTkqxRmZwAJKhDHN98ecbtdIyDxTe6teeI6p2sQwKGbfutziFzotlf1hIprvgkqeQ%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="683"&gt;
&lt;/li&gt;
&lt;/ul&gt;
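&lt;p&gt;If you're unsure of the chart name, it is the key directly under &lt;code&gt;entries&lt;/code&gt; in the repository's &lt;code&gt;index.yaml&lt;/code&gt;. A minimal sketch using a made-up, stripped-down index file (a real one is generated by &lt;code&gt;helm repo index&lt;/code&gt; and carries many more fields):&lt;/p&gt;

```shell
# Miniature index.yaml for illustration only.
cat > /tmp/index.yaml <<'EOF'
apiVersion: v1
entries:
  nginx-chart:
    - name: nginx-chart
      version: 0.1.0
EOF
# The chart name is the two-space-indented key under "entries":
CHART_NAME=$(sed -n 's/^  \([A-Za-z0-9_-]*\):$/\1/p' /tmp/index.yaml)
echo "$CHART_NAME"
```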

&lt;p&gt;For example, if your release name is &lt;code&gt;nginx-chart-release&lt;/code&gt;, the repository name is &lt;code&gt;nginx-chart&lt;/code&gt;, and the chart name is also &lt;code&gt;nginx-chart&lt;/code&gt;, the command would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install nginx-chart-release nginx-chart/nginx-chart
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXftrsOCw0SBtL2Ev3D-D2Fsx6oAIa7wm5Ve-Szcrc8A0mAOL78mhuVLw_XfeAjO7v5XrutvWzkjw_qCovJP9PKpmSRZyRtf4402arn5A5VgG5zy0WZxkwLXzNyJt0_rEnYKjil44Q%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXftrsOCw0SBtL2Ev3D-D2Fsx6oAIa7wm5Ve-Szcrc8A0mAOL78mhuVLw_XfeAjO7v5XrutvWzkjw_qCovJP9PKpmSRZyRtf4402arn5A5VgG5zy0WZxkwLXzNyJt0_rEnYKjil44Q%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Confirm the deployment&lt;/strong&gt;&lt;br&gt;
To verify that your Nginx server is running, check the pods again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see your Nginx pod listed and running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Access your Nginx web server&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;If you're using Minikube&lt;/strong&gt;, run the following to get the external URL:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube service &amp;lt;release-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;minikube service nginx-chart-release
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;If you're using another Kubernetes setup&lt;/strong&gt;, run the following commands to get the Node IP and Node Port, then open the URL in your browser:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;NODE_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get &lt;span class="nt"&gt;--namespace&lt;/span&gt; default &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.spec.ports[0].nodePort}"&lt;/span&gt; services nginx-chart-release&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;NODE_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;kubectl get nodes &lt;span class="nt"&gt;--namespace&lt;/span&gt; default &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.items[0].status.addresses[0].address}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
  &lt;span class="nb"&gt;echo &lt;/span&gt;http://&lt;span class="nv"&gt;$NODE_IP&lt;/span&gt;:&lt;span class="nv"&gt;$NODE_PORT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And that’s it! You've successfully created a Helm chart, pushed it to your repository, and deployed it to your Kubernetes cluster using Helm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeXe63KIl3RgZbep1LrBBaGy0VemeBC-62X3A5x4xCudfqcZoUnoLg2Et1lkM8k2USAMpoCH7cuGQH02sAYkuLQhCjoAYNY9JP4lpDB7Rcg0-K48lS1fb3GoFeYYQe8WnP-GSNB4Q%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeXe63KIl3RgZbep1LrBBaGy0VemeBC-62X3A5x4xCudfqcZoUnoLg2Et1lkM8k2USAMpoCH7cuGQH02sAYkuLQhCjoAYNY9JP4lpDB7Rcg0-K48lS1fb3GoFeYYQe8WnP-GSNB4Q%3Fkey%3D49sqG_QGIf01seasu1PYL_9Z" width="1600" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Wrapping up
&lt;/h1&gt;

&lt;p&gt;Helm charts are a game-changer for managing Kubernetes applications at scale. By turning complex, multi-file Kubernetes manifests into reusable, versioned packages, Helm simplifies application deployment, configuration, and maintenance. Whether you're working solo on side projects or collaborating with a team on production-grade systems, Helm lets you ship faster with fewer errors and greater consistency.&lt;/p&gt;

&lt;p&gt;As you've seen throughout this article, understanding the building blocks of Helm charts, from the basic concepts to the anatomy of a Chart, gives you the tools to build and manage your own charts with confidence.&lt;/p&gt;

&lt;p&gt;Start small by templating a basic service, tweaking some values, and installing it. Then gradually expand your charts to support real-world configurations, secrets, and dependencies. Like any good tool, the more you use Helm, the more indispensable it becomes.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>cloudcomputing</category>
      <category>tutorial</category>
      <category>helm</category>
    </item>
    <item>
      <title>Understanding Kubernetes by deploying a real-world application</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Tue, 15 Apr 2025 20:24:59 +0000</pubDate>
      <link>https://forem.com/amaraiheanacho/understanding-kubernetes-by-deploying-a-real-world-application-5ah2</link>
      <guid>https://forem.com/amaraiheanacho/understanding-kubernetes-by-deploying-a-real-world-application-5ah2</guid>
      <description>&lt;p&gt;Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Originally developed by Google and now maintained by the Cloud Native Computing Foundation, Kubernetes has become the industry standard for managing applications in production. It allows developers and operations teams to focus more on building features rather than managing infrastructure, thanks to its ability to handle things like automatic scaling, self-healing, and service discovery out of the box.&lt;/p&gt;

&lt;p&gt;In this article, we’ll explore the core components of Kubernetes by walking through the process of deploying a full-stack Todo application. You'll learn how different Kubernetes objects—such as Deployments, Services, ConfigMaps, and Persistent Volumes—work together to run and maintain an application in a cluster. This hands-on project will help you better understand how Kubernetes manages containerized workloads and why it's such a powerful tool for modern DevOps practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  The project summary
&lt;/h2&gt;

&lt;p&gt;In this project, you’ll deploy a full-stack Todo application to a Kubernetes cluster using Minikube. The front end of the application is built with HTML, CSS, and JavaScript, while the back end is powered by Node.js and Express.js. The application stores its data in a MongoDB database.&lt;/p&gt;

&lt;p&gt;To deploy this application in a Kubernetes cluster on Minikube, you'll follow these steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a MongoDB ConfigMap to store the MongoDB environment variables.&lt;/li&gt;
&lt;li&gt;Create a MongoDB Deployment that defines how MongoDB pods should be configured.&lt;/li&gt;
&lt;li&gt;Create a Persistent Volume and a Persistent Volume Claim to ensure that MongoDB data persists even if the cluster is restarted.&lt;/li&gt;
&lt;li&gt;Create an internal service to expose MongoDB to the Todo application.&lt;/li&gt;
&lt;li&gt;Create a Deployment for the Todo application that defines how its pods should run.&lt;/li&gt;
&lt;li&gt;Create a ConfigMap for the Todo application to manage its environment variables.&lt;/li&gt;
&lt;li&gt;Create an external service to expose the Todo application to the outside world.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All the configuration files for this tutorial can be found in the k8s folder of this GitHub repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Iheanacho-ai/FullStack-Todo-List-Application" rel="noopener noreferrer"&gt;https://github.com/Iheanacho-ai/FullStack-Todo-List-Application&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;What you will need to get started with this tutorial:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minikube. Run this command to check if you have Minikube installed:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Refer to the &lt;a href="https://minikube.sigs.k8s.io/docs/start/?arch=%2Fmacos%2Farm64%2Fstable%2Fbinary+download" rel="noopener noreferrer"&gt;official Minikube documentation&lt;/a&gt; to learn how to install Minikube if you do not have it installed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of Docker and Docker Compose.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Definition of terms
&lt;/h2&gt;

&lt;p&gt;You need to understand the components that make up this project and how these components interact together. These components are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pods&lt;/strong&gt;: The smallest deployable unit in Kubernetes. Even though Kubernetes is a container orchestration tool, it doesn't manage containers directly — it manages Pods. A Pod typically holds a single container, though it can run additional tightly coupled containers that share the same storage, network, and lifecycle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployments&lt;/strong&gt;: A Deployment manages the desired state of your Pods — at scale. Think of it as an instruction manual that tells Kubernetes how many replicas of your Pod should be running. It makes sure those Pods stay up and running, and handles updates seamlessly through rolling updates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ConfigMap&lt;/strong&gt;: A ConfigMap lets you separate your configuration data from your application code. It’s like a .env file but managed by Kubernetes. This allows you to inject environment-specific settings into your Pods without touching the app itself. You can update the config as often as needed, and share it across multiple Pods.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service&lt;/strong&gt;: A Service exposes a set of Pods as a network-accessible service. This matters because Pods are ephemeral — they restart often and get new IP addresses each time. Services give you a stable way to reach your Pods using consistent IPs and DNS names, no matter how often the underlying Pods change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Volume (PV)&lt;/strong&gt;: A Persistent Volume is a piece of storage within your cluster — either statically provisioned by an admin or created dynamically. Think of it as a hard drive that lives in your cluster. It helps ensure that your application’s data persists beyond your pod or cluster’s lifecycle.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Persistent Volumes can come from different sources, such as hostPath (local storage), NFS (network file systems), cloud providers like AWS EBS, GCE Persistent Disks, or dynamic provisioning through CSI drivers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Persistent Volume Claim (PVC)&lt;/strong&gt;: A Persistent Volume Claim is a user's request for storage. With a PVC, you can ask for a specific size and type of storage without knowing how it’s provided. It connects your app to a Persistent Volume behind the scenes.&lt;/li&gt;
&lt;/ul&gt;
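&lt;p&gt;To make the Service idea concrete, here is a minimal, illustrative manifest (the names here are placeholders, not necessarily the ones used in the steps below): any Pod carrying the label &lt;code&gt;app: mongodb&lt;/code&gt; becomes reachable inside the cluster at a stable name, no matter how often the underlying Pods restart:&lt;/p&gt;

```yaml
# Illustrative only: a Service routing cluster-internal traffic on
# port 27017 to every Pod labeled app: mongodb.
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - port: 27017        # port the Service listens on
      targetPort: 27017  # port the container accepts traffic on
```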

&lt;h2&gt;
  
  
  Step 1: Setting up the project
&lt;/h2&gt;

&lt;p&gt;Create a folder where your project will live and change your current directory into the folder by running this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &amp;lt;name of your project&amp;gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; &amp;lt;name of your project&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Create a ConfigMap to store MongoDB environment variables
&lt;/h2&gt;

&lt;p&gt;Now that you've set up your project directory, the next step is to create a ConfigMap that stores configuration data for your MongoDB instance. Specifically, this ConfigMap will define the name of the database that MongoDB should create when it starts up.&lt;/p&gt;

&lt;p&gt;To create the MongoDB ConfigMap:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a new file named &lt;code&gt;mongodb-configmap.yaml&lt;/code&gt; in your project’s root directory.&lt;/li&gt;
&lt;li&gt;Paste the following YAML into the file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-configmap&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;mongo-initdb-database&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo_list&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What this file does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apiVersion: v1&lt;/code&gt;: Specifies the API version used by Kubernetes for this resource. v1 is the standard version for ConfigMaps. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kind: ConfigMap&lt;/code&gt;: Declares that this file defines a Kubernetes ConfigMap, which is used to store configuration data in key-value pairs.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metadata&lt;/code&gt;:  Contains metadata about the ConfigMap. Here, we set the name as &lt;code&gt;mongodb-configmap&lt;/code&gt;, which you'll use to reference this ConfigMap in your MongoDB Deployment later.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;data&lt;/code&gt;: This section holds the actual configuration values. In this case, we’re setting a key named &lt;code&gt;mongo-initdb-database&lt;/code&gt; with a value of &lt;code&gt;todo_list&lt;/code&gt;. This value will be passed as an environment variable to the MongoDB container, instructing it to create a database named &lt;code&gt;todo_list&lt;/code&gt; when it initializes.&lt;/li&gt;
&lt;/ul&gt;
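&lt;p&gt;To see how that key actually reaches the container, here is a sketch of the relevant &lt;code&gt;env&lt;/code&gt; fragment of a container spec (the full Deployment is written in a later step; this snippet is illustrative):&lt;/p&gt;

```yaml
# Fragment of a container spec: pull the value of the ConfigMap key
# "mongo-initdb-database" into the MONGO_INITDB_DATABASE environment
# variable, which the official MongoDB image reads on first startup.
env:
  - name: MONGO_INITDB_DATABASE
    valueFrom:
      configMapKeyRef:
        name: mongodb-configmap
        key: mongo-initdb-database
```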

&lt;h2&gt;
  
  
  Step 3: Set up a Persistent Volume for storing MongoDB data
&lt;/h2&gt;

&lt;p&gt;Next, you’ll define the Persistent Volume that MongoDB will use to store its data. In this setup, the volume will point to a folder on your local machine. However, you can also point this volume to a remote storage solution such as an NFS share or a cloud provider's block storage (like AWS EBS, Google Persistent Disk, or Azure Disk). Refer to Kubernetes documentation on &lt;a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noopener noreferrer"&gt;Persistent Volume&lt;/a&gt; to learn more.&lt;/p&gt;

&lt;p&gt;To create this Persistent Volume:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a file named &lt;code&gt;mongodb.yaml&lt;/code&gt; in the root directory of your project. This file will contain all the necessary Kubernetes configurations for MongoDB: the &lt;strong&gt;Persistent Volume&lt;/strong&gt;, the &lt;strong&gt;Persistent Volume Claim&lt;/strong&gt;, the &lt;strong&gt;Deployment&lt;/strong&gt;, and the internal &lt;strong&gt;Service&lt;/strong&gt; that your Todo application will communicate with.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Paste the following YAML into your &lt;code&gt;mongodb.yaml&lt;/code&gt; file to define the Persistent Volume:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolume&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-volume&lt;/span&gt;
 &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
 &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
 &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
 &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/mnt/data"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s what each part of the code means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apiVersion: v1&lt;/code&gt;: This tells Kubernetes to use version 1 of its API, which is the standard for defining core resources like volumes.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kind: PersistentVolume&lt;/code&gt;: This specifies that you’re creating a Persistent Volume (PV) — a reusable chunk of storage that exists outside of pods.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metadata.name: mongodb-volume&lt;/code&gt;: This sets the name of the volume. You’ll refer to this name later when creating a PersistentVolumeClaim.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;labels&lt;/code&gt;: Labels are optional key-value tags that help organize and identify Kubernetes resources.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;type: local&lt;/code&gt;: This is a custom label you’ve added to indicate that this storage is local (on your own machine) rather than in the cloud. It’s just for your own reference — Kubernetes doesn’t enforce it.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;spec&lt;/code&gt;: This section defines how the volume behaves:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;storageClassName: ""&lt;/code&gt;: Leaving this blank means you don’t want Kubernetes to automatically provision storage. Instead, you’re manually defining it. If someone wants to use this storage, they’ll need to request it by name (&lt;code&gt;mongodb-volume&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;capacity.storage: 1Gi&lt;/code&gt;: This volume provides 1 gigabyte of storage space.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;accessModes: ReadWriteOnce&lt;/code&gt;: Only one pod can read and write to this volume at a time.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;hostPath.path: "/mnt/data"&lt;/code&gt;: This tells Kubernetes to store MongoDB’s data in the &lt;code&gt;/mnt/data&lt;/code&gt; folder on your computer.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;In simpler terms, the code block above is saying:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“I’m creating 1GB of storage on my local machine using the /mnt/data folder. This storage is named &lt;code&gt;mongodb-volume&lt;/code&gt;, and it can be used by only one pod at a time. MongoDB will store its data here, ensuring that even if the MongoDB pod crashes or restarts, the data remains safe."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Step 4: Create a PersistentVolumeClaim to request storage from the Persistent Volume
&lt;/h2&gt;

&lt;p&gt;Creating a Persistent Volume (PV) alone is not enough to provide storage for your pods. Your pods must request a specific amount of storage from the PV, and this is done using a PersistentVolumeClaim (PVC).&lt;/p&gt;

&lt;p&gt;To create a PVC, add the following YAML block to your &lt;code&gt;mongodb.yaml&lt;/code&gt; file, directly below the Persistent Volume definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-volume-claim&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
 &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
 &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;50Mi&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration defines a PersistentVolumeClaim named &lt;code&gt;mongodb-volume-claim&lt;/code&gt; that requests 50 mebibytes (&lt;code&gt;50Mi&lt;/code&gt;) of storage.&lt;/p&gt;

&lt;p&gt;As long as a matching Persistent Volume (in this case, &lt;code&gt;mongodb-volume&lt;/code&gt;) can satisfy this request, Kubernetes will bind the two together, allowing your pod to use that space.&lt;/p&gt;
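&lt;p&gt;Once you apply these manifests (in Step 7), a quick sanity check is to list both resources and confirm the binding happened; both should report a &lt;code&gt;Bound&lt;/code&gt; status. The commands below assume the resource names used in this tutorial, and the exact output columns may vary with your Kubernetes version:&lt;/p&gt;

```shell
# Check that the PersistentVolume and PersistentVolumeClaim bound to each other.
# Expected STATUS for both: Bound.
kubectl get pv mongodb-volume
kubectl get pvc mongodb-volume-claim
```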

&lt;p&gt;When you are done with this step, your &lt;code&gt;mongodb.yaml&lt;/code&gt; file will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolume&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-volume&lt;/span&gt;
 &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
 &lt;span class="na"&gt;capacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1Gi&lt;/span&gt;
 &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
 &lt;span class="na"&gt;hostPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/mnt/data"&lt;/span&gt;


&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PersistentVolumeClaim&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-volume-claim&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;storageClassName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
 &lt;span class="na"&gt;accessModes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ReadWriteOnce&lt;/span&gt;
 &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;50Mi&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Set up a MongoDB Deployment to manage your MongoDB pods
&lt;/h2&gt;

&lt;p&gt;Now that you’ve created both the Persistent Volume and the PersistentVolumeClaim, it’s time to create the Deployment that will run your MongoDB pod.&lt;/p&gt;

&lt;p&gt;Copy this YAML file block into your &lt;code&gt;mongodb.yaml&lt;/code&gt; file to define the MongoDB Deployment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt; 
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;27017&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MONGO_INITDB_DATABASE&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-configmap&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo-initdb-database&lt;/span&gt;
        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-volume&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data/db&lt;/span&gt;
      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-volume&lt;/span&gt;
        &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-volume-claim&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code block creates a Kubernetes Deployment for MongoDB named &lt;code&gt;todo-mongodb&lt;/code&gt; that runs one replica of the MongoDB container. Here is the breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;API Version, Kind &amp;amp; Metadata&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt; 
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apiVersion: apps/v1&lt;/code&gt;: Tells Kubernetes to use the &lt;code&gt;apps/v1&lt;/code&gt; API group, which manages Deployments and similar workload resources.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kind: Deployment&lt;/code&gt;: This specifies that you are creating a Deployment, responsible for managing and scaling your MongoDB application.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metadata.name: todo-mongodb&lt;/code&gt;:  Sets the Deployment's name to &lt;code&gt;todo-mongodb&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metadata.labels: app: todo-mongodb&lt;/code&gt;: Assigns the label &lt;code&gt;todo-mongodb&lt;/code&gt; to the Deployment.&lt;/li&gt;
&lt;/ul&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deployment specification&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;replicas: 1&lt;/code&gt;: This tells Kubernetes to run only &lt;strong&gt;one instance (or copy)&lt;/strong&gt; of the MongoDB pod at a time.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;selector.matchLabels&lt;/code&gt;: This defines &lt;strong&gt;which pods the Deployment should manage&lt;/strong&gt;. It does this by matching a label. In this case, it's looking for pods with the label &lt;code&gt;app: todo-mongodb&lt;/code&gt;. Only pods with this label will be controlled (i.e., created, updated, or deleted) by this Deployment.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;template.metadata.labels&lt;/code&gt;: These are the &lt;strong&gt;labels applied to any new pods&lt;/strong&gt; created by the Deployment. This part ensures that each pod is given the label &lt;code&gt;app: todo-mongodb&lt;/code&gt;, which matches the selector above — so Kubernetes knows these pods belong to this Deployment.&lt;/li&gt;
&lt;/ul&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pod Specification (spec)&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;27017&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;containers&lt;/code&gt;: This section defines the list of containers that will run in each pod.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;name: todo-mongodb&lt;/code&gt;: This assigns a name to the container within the pod.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;image: mongo&lt;/code&gt;: This tells Kubernetes to use the official MongoDB image from Docker Hub when creating the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ports&lt;/code&gt;: This exposes port 27017 on the container, which is MongoDB’s default port.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Environment Variable from ConfigMap&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MONGO_INITDB_DATABASE&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-configmap&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongo-initdb-database&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;env&lt;/code&gt;:  Sets environment variables for the &lt;code&gt;todo-mongodb&lt;/code&gt; container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;name: MONGO_INITDB_DATABASE&lt;/code&gt;: This environment variable specifies the default database MongoDB should create on startup.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;valueFrom.configMapKeyRef&lt;/code&gt;: Instructs Kubernetes to dynamically retrieve the environment variable’s value from the &lt;code&gt;mongodb-configmap&lt;/code&gt; ConfigMap you created earlier rather than hardcoding it.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;name: mongodb-configmap&lt;/code&gt;: Indicates the ConfigMap from which to fetch the data.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;key: mongo-initdb-database&lt;/code&gt;: Points to the specific key in the ConfigMap that holds the value for the environment variable.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
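&lt;p&gt;After the Deployment is running (Step 7), you can confirm that the value was injected from the ConfigMap by printing the variable inside the container. Recent kubectl versions accept a &lt;code&gt;deployment/&lt;/code&gt; target directly; otherwise, substitute a pod name from &lt;code&gt;kubectl get pods&lt;/code&gt;:&lt;/p&gt;

```shell
# Print the environment variable injected from the ConfigMap.
kubectl exec deployment/todo-mongodb -- printenv MONGO_INITDB_DATABASE
```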


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mounting Storage into the Container&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;volumeMounts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-volume&lt;/span&gt;
          &lt;span class="na"&gt;mountPath&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/data/db&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;volumeMounts&lt;/code&gt;&lt;strong&gt;:&lt;/strong&gt; Tells Kubernetes to attach a volume to this container.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;name: mongodb-volume&lt;/code&gt;: References the pod volume named &lt;code&gt;mongodb-volume&lt;/code&gt;, defined in the &lt;code&gt;volumes&lt;/code&gt; section below.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mountPath: /data/db&lt;/code&gt;: Mounts that volume at &lt;code&gt;/data/db&lt;/code&gt;, the directory where MongoDB stores its data files.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Defining the Volume&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-volume&lt;/span&gt;
        &lt;span class="na"&gt;persistentVolumeClaim&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;claimName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mongodb-volume-claim&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;volumes&lt;/code&gt;&lt;strong&gt;:&lt;/strong&gt; Defines the storage volumes available to the pod.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;name: mongodb-volume&lt;/code&gt;&lt;strong&gt;:&lt;/strong&gt; Indicates that one of these volumes is named &lt;code&gt;mongodb-volume&lt;/code&gt;—this is the volume you later attach to the pod via the &lt;code&gt;volumeMounts&lt;/code&gt; key.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;persistentVolumeClaim&lt;/code&gt;&lt;strong&gt;:&lt;/strong&gt; Tells Kubernetes to use a PersistentVolumeClaim for obtaining the storage resource.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;claimName: mongodb-volume-claim&lt;/code&gt;&lt;strong&gt;:&lt;/strong&gt; Specifies the name of the PersistentVolumeClaim to use, which is the one you created to request storage from the PersistentVolume.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you are done with this section, your &lt;code&gt;mongodb.yaml&lt;/code&gt; file will look like this:&lt;/p&gt;

&lt;p&gt;[&lt;a href="https://gist.github.com/Iheanacho-ai/82d1072bf832f676623c847a70b9c420" rel="noopener noreferrer"&gt;https://gist.github.com/Iheanacho-ai/82d1072bf832f676623c847a70b9c420&lt;/a&gt;]&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Create an internal Service to connect MongoDB with the Todo application
&lt;/h2&gt;

&lt;p&gt;For your application to work properly, the Todo application must connect to the MongoDB database. To allow this connection, you need to expose the MongoDB Pod using a Service.&lt;/p&gt;

&lt;p&gt;Paste the following YAML block into your &lt;code&gt;mongodb.yaml&lt;/code&gt; file to create the service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt; 
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-mongodb&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;27017&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;27017&lt;/span&gt;
  &lt;span class="na"&gt;clusterIP&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;None&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is a breakdown of the code block above:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;apiVersion: v1&lt;/code&gt;: Specifies that this YAML uses version 1 of the Kubernetes API.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kind: Service&lt;/code&gt;: Specifies that you're creating a Service resource, which helps expose a set of pods on the network.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;metadata: name: todo-mongodb-service&lt;/code&gt;: Sets the name of the Service to &lt;code&gt;todo-mongodb-service&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;spec&lt;/code&gt;: Defines the desired behavior of the Service.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;selector: app: todo-mongodb&lt;/code&gt;: Tells the Service to target pods with the label &lt;code&gt;app: todo-mongodb&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ports&lt;/code&gt;: Lists the port configuration for the Service.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;protocol: TCP&lt;/code&gt;: Uses TCP as the communication protocol.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;port: 27017&lt;/code&gt;: Exposes port 27017 on the Service, which is the port clients connect to.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;targetPort: 27017&lt;/code&gt;: Directs the traffic to port 27017 on the selected Pods.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;code&gt;clusterIP: None&lt;/code&gt;: This makes the Service headless, which means Kubernetes won’t assign it a &lt;a href="https://kubernetes.io/docs/concepts/services-networking/cluster-ip-allocation/" rel="noopener noreferrer"&gt;Cluster IP&lt;/a&gt;. Instead of routing traffic through a single, stable IP, the Service lets other Pods communicate directly with the IP addresses of individual MongoDB Pods. This is especially useful for databases, where each Pod might maintain its own state and needs to be reached directly—rather than through a load balancer.&lt;/p&gt;

&lt;p&gt;Refer to the &lt;a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noopener noreferrer"&gt;Kubernetes documentation&lt;/a&gt; on Services to learn more about different Service types and use cases.&lt;/p&gt;

&lt;p&gt;When you are done with this step, your &lt;code&gt;mongodb.yaml&lt;/code&gt; file will look like this:&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;[&lt;a href="https://gist.github.com/Iheanacho-ai/5376054e00cb9dc2571e364efc32e620" rel="noopener noreferrer"&gt;https://gist.github.com/Iheanacho-ai/5376054e00cb9dc2571e364efc32e620&lt;/a&gt;]&lt;/p&gt;
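&lt;p&gt;Because the Service is headless, in-cluster DNS resolves its name to the MongoDB pod IP(s) directly rather than to a single cluster IP. Once the resources are applied (Step 7), one way to sanity-check this is with a throwaway pod running &lt;code&gt;nslookup&lt;/code&gt;; the &lt;code&gt;busybox&lt;/code&gt; image here is just a convenient choice, not a requirement:&lt;/p&gt;

```shell
# Resolve the headless Service from inside the cluster.
# The lookup should return the MongoDB pod IP(s), not a cluster IP.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup todo-mongodb-service
```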

&lt;h2&gt;
  
  
  Step 7: Apply the configuration to deploy your MongoDB application
&lt;/h2&gt;

&lt;p&gt;Now that you’ve defined all the necessary components for MongoDB — including the ConfigMap, Deployment, Service, PersistentVolume, and PersistentVolumeClaim — it’s time to apply them and bring your MongoDB environment to life.&lt;/p&gt;

&lt;p&gt;To do this, follow the steps below:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Apply the ConfigMap first&lt;/strong&gt;: The ConfigMap contains configuration data that the MongoDB deployment will need at startup. So, it must be created before the pod tries to read from it.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; mongodb-configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. &lt;strong&gt;Apply the MongoDB resources&lt;/strong&gt;: This will create your MongoDB Deployment, Service, Persistent Volume, and PersistentVolumeClaim, all defined in your &lt;code&gt;mongodb.yaml&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; mongodb.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8ps6cbdeoydukvzufcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8ps6cbdeoydukvzufcr.png" width="800" height="79"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. &lt;strong&gt;Check that everything is running correctly&lt;/strong&gt;: Your pods and services may take a few moments to fully start. To check their status, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lvomp0601x73ypi54ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7lvomp0601x73ypi54ui.png" width="800" height="111"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89j5x90ly7mnfq8sgu44.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F89j5x90ly7mnfq8sgu44.png" alt="Image description" width="800" height="46"&gt;&lt;/a&gt;&lt;/p&gt;
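&lt;p&gt;If a pod is stuck in &lt;code&gt;Pending&lt;/code&gt; or &lt;code&gt;CrashLoopBackOff&lt;/code&gt; instead of &lt;code&gt;Running&lt;/code&gt;, the Deployment’s events and the container logs usually reveal the cause (for example, an image pull failure or a volume that could not be mounted):&lt;/p&gt;

```shell
# Inspect the MongoDB Deployment if its pod is not Running.
kubectl describe deployment todo-mongodb   # shows events such as scheduling or volume errors
kubectl logs deployment/todo-mongodb       # shows MongoDB's startup output
```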

&lt;h2&gt;
  
  
  Step 8: Set up a ConfigMap to manage environment variables for your Todo application
&lt;/h2&gt;

&lt;p&gt;Now that your MongoDB environment is up and running, you can move on to the Todo application.&lt;/p&gt;

&lt;p&gt;As with MongoDB, you need to define a ConfigMap to hold the Todo application’s configuration values.&lt;/p&gt;

&lt;p&gt;To create the ConfigMap for the Todo application:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a &lt;code&gt;todo-configmap.yaml&lt;/code&gt; file in your project’s root directory.&lt;/li&gt;
&lt;li&gt;Paste this YAML code block into the &lt;code&gt;todo-configmap.yaml&lt;/code&gt; file.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ConfigMap&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-configmap&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deployment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;local"&lt;/span&gt;
  &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;development"&lt;/span&gt;
  &lt;span class="na"&gt;db&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mongodb://todo-mongodb-service:27017/todo_list"&lt;/span&gt;
  &lt;span class="na"&gt;db_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;todo_list"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The ConfigMap above is named &lt;code&gt;todo-configmap&lt;/code&gt; and defines four configuration values:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;deployment: "local"&lt;/code&gt;: Indicates that the application is deployed locally.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;environment: "development"&lt;/code&gt;: Specifies that the application is running in a development environment.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;db: "mongodb://todo-mongodb-service:27017/todo_list"&lt;/code&gt;: This is the &lt;strong&gt;MongoDB connection string&lt;/strong&gt;. It tells your app:

&lt;ul&gt;
&lt;li&gt;Connect to the MongoDB Service called &lt;code&gt;todo-mongodb-service&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use port &lt;code&gt;27017&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Use the &lt;code&gt;todo_list&lt;/code&gt; database&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;db_name&lt;/code&gt;: This holds the name of the database, &lt;code&gt;todo_list&lt;/code&gt;.&lt;/li&gt;

&lt;/ul&gt;
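&lt;p&gt;As with the MongoDB ConfigMap, this one must exist before the Deployment that references it. A quick way to apply it and read back one of the stored values (the commands assume the file and key names used above):&lt;/p&gt;

```shell
# Create the ConfigMap, then read back the connection string it stores.
kubectl apply -f todo-configmap.yaml
kubectl get configmap todo-configmap -o jsonpath='{.data.db}'
# Should print: mongodb://todo-mongodb-service:27017/todo_list
```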

&lt;h2&gt;
  
  
  Step 9: Setting up a Deployment for your Todo application
&lt;/h2&gt;

&lt;p&gt;Next, let’s create a Deployment to manage the pods for your Todo application.&lt;/p&gt;

&lt;p&gt;Copy the YAML below into a file named &lt;code&gt;todo-deployment.yaml&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;amaraiheanacho/amaratodo:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8000&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DEPLOYMENT&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-configmap&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deployment&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENVIRONMENT&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-configmap&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;environment&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-configmap&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_NAME&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-configmap&lt;/span&gt;
                &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db_name&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Breaking down this configuration into sections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;apiVersion, kind, and metadata&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The configuration above tells Kubernetes that this manifest defines a Deployment named &lt;code&gt;todo-list&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deployment-level specification:&lt;/strong&gt; This section defines how the Deployment should behave.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This bit of configuration does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;replicas: 2&lt;/code&gt;: Runs two pods of your Todo application for high availability.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;selector.matchLabels&lt;/code&gt;: Tells Kubernetes that this Deployment manages pods with the label &lt;code&gt;app: todo-list&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;template&lt;/code&gt;: This is the blueprint Kubernetes uses to create the pods.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;metadata.labels&lt;/code&gt;: This gives every pod created by the Deployment a label of &lt;code&gt;app: todo-list&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Container Spec&lt;/strong&gt;: This describes the containers the pods should run.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;amaraiheanacho/amaratodo:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8000&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;name: todo-list&lt;/code&gt;: The name of the container.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;image: amaraiheanacho/amaratodo:latest&lt;/code&gt;: The Docker image used to run your app, pulled from Docker Hub.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ports.containerPort: 8000&lt;/code&gt;: The port your app listens on inside the container.&lt;/li&gt;
&lt;/ul&gt;


&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Environment Variables&lt;/strong&gt;: These are environment variables passed into your container from the &lt;code&gt;todo-configmap&lt;/code&gt; ConfigMap you created earlier:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DEPLOYMENT&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-configmap&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;deployment&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENVIRONMENT&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-configmap&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;environment&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-configmap&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DB_NAME&lt;/span&gt;
          &lt;span class="na"&gt;valueFrom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;configMapKeyRef&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-configmap&lt;/span&gt;
                &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;db_name&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This section retrieves values from your &lt;code&gt;todo-configmap&lt;/code&gt; ConfigMap. These values are then injected into the container as environment variables, which include:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Environment Variable&lt;/th&gt;
&lt;th&gt;Value comes from key&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DEPLOYMENT&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;deployment&lt;/code&gt; in ConfigMap&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ENVIRONMENT&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;environment&lt;/code&gt; in ConfigMap&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DB&lt;/td&gt;
&lt;td&gt;db in ConfigMap&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DB_NAME&lt;/td&gt;
&lt;td&gt;db_name in ConfigMap&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
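Inside a running pod, the application reads these injected values like any other environment variables. As a purely illustrative sketch (the Todo app's actual source is not shown in this article, and the default values below are hypothetical), reading them in Python might look like this:

```python
import os

# Read the values injected from the todo-configmap ConfigMap.
# The defaults are hypothetical; they only let this sketch run outside the cluster.
deployment = os.environ.get("DEPLOYMENT", "local")
environment = os.environ.get("ENVIRONMENT", "development")
db = os.environ.get("DB", "mongodb")
db_name = os.environ.get("DB_NAME", "todos")

print(f"Connecting to {db} database '{db_name}' ({deployment}/{environment})")
```

Because the values live in the ConfigMap rather than the image, they can be changed without rebuilding the container.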

&lt;h2&gt;
  
  
  Step 10: Create an external Service to expose your Todo application to external traffic
&lt;/h2&gt;

&lt;p&gt;To access the Todo application from outside the cluster, you need to expose it using a Kubernetes Service — just like you did with MongoDB. However, unlike MongoDB, which only needs to be accessible to other pods in the cluster, the Todo app needs to be open to the Internet.&lt;/p&gt;

&lt;p&gt;To do this, you’ll create an external Service.&lt;/p&gt;

&lt;p&gt;Paste the following YAML block into your &lt;code&gt;todo-deployment.yaml&lt;/code&gt; file to do so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list-service&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;todo-list&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TCP&lt;/span&gt;
      &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8000&lt;/span&gt;
      &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8000&lt;/span&gt;
      &lt;span class="na"&gt;nodePort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;30000&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s what this configuration does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;name: todo-list-service&lt;/code&gt;: This names the Service so you can refer to it easily.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;selector: app: todo-list&lt;/code&gt;: This Service routes traffic to any pod with the label &lt;code&gt;app: todo-list&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;type: LoadBalancer&lt;/code&gt;: This exposes the Service to the outside world. 

&lt;ul&gt;
&lt;li&gt;On cloud platforms, this creates a real load balancer (such as an AWS ELB). &lt;/li&gt;
&lt;li&gt;On Minikube, it actually creates a NodePort behind the scenes to simulate external access.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;code&gt;nodePort: 30000&lt;/code&gt;: This makes the Todo app accessible on port 30000 of your host machine (like your laptop or VM).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For example, you can open &lt;a href="http://localhost:30000" rel="noopener noreferrer"&gt;http://localhost:30000&lt;/a&gt; to access the app (depending on your Minikube driver, you may need &lt;code&gt;minikube service&lt;/code&gt; to get a reachable URL instead).&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Tip&lt;/strong&gt;: To understand how this differs from the MongoDB Service, refer back to the MongoDB section.&lt;/p&gt;
&lt;/blockquote&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;When you are done with this section, this is what your &lt;code&gt;todo-deployment.yaml&lt;/code&gt; file should look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/Iheanacho-ai/39736da41c400cefb062bd26fd4eb191" rel="noopener noreferrer"&gt;https://gist.github.com/Iheanacho-ai/39736da41c400cefb062bd26fd4eb191&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 11: Apply the configuration to launch your Todo application
&lt;/h2&gt;

&lt;p&gt;Finally, let's bring your Todo application to life:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;First, apply your ConfigMap by running the command below. This will load the environment variables your Todo app needs:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; todo-configmap.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1lkeajqbfp987e5z9md.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1lkeajqbfp987e5z9md.png" alt="Image description" width="800" height="39"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2. Next, deploy your Todo application pods and expose them via a Service by applying the deployment YAML:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl apply &lt;span class="nt"&gt;-f&lt;/span&gt; todo-deployment.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4n7j0dbuei7umyh8c82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4n7j0dbuei7umyh8c82.png" alt="Image description" width="800" height="50"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3. Your pods may take a few moments to start fully. To check their status, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8al8x2mkev8avpg3g8a6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8al8x2mkev8avpg3g8a6.png" alt="Image description" width="800" height="79"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Look for a &lt;code&gt;STATUS&lt;/code&gt; of &lt;code&gt;Running&lt;/code&gt; for the &lt;code&gt;todo-list&lt;/code&gt; pods.&lt;/p&gt;

&lt;p&gt;4. To view all the Services running in your cluster (including your Todo app), run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9cd3umktdy6suc98prb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9cd3umktdy6suc98prb.png" alt="Image description" width="800" height="79"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5. Find the name of the Todo Service (e.g., &lt;code&gt;todo-list-service&lt;/code&gt;) and use it to open your app in the browser with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube service &amp;lt;name-of-your-todo-service&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, if your service is called &lt;code&gt;todo-list-service&lt;/code&gt;, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;minikube service todo-list-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbpvkvyt724flhyvucx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbpvkvyt724flhyvucx5.png" alt="Image description" width="800" height="79"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This command will open your Todo application in your default browser. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmis9e8ocvbkblalqh60.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmis9e8ocvbkblalqh60.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have deployed a full-stack Todo application into a local Kubernetes cluster using Minikube.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;This article walked through the building blocks of Kubernetes and stacked them together to deploy a full-stack Todo application into a local cluster powered by Minikube. Check out the official &lt;a href="https://kubernetes.io/docs/home/" rel="noopener noreferrer"&gt;Kubernetes documentation&lt;/a&gt; to dive deeper and explore all the possibilities.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>What is an F1 score?</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Thu, 10 Oct 2024 06:37:35 +0000</pubDate>
      <link>https://forem.com/eyer-ai/what-is-an-f1-score-59m3</link>
      <guid>https://forem.com/eyer-ai/what-is-an-f1-score-59m3</guid>
      <description>&lt;p&gt;Artificial intelligence has integrated into different facets of our everyday lives, from virtual assistants and personalized recommendations to healthcare diagnostics and fraud detection; we are twice as likely to interact with a piece of software or tool powered by AI than we were a couple of years ago.  While this is a positive development, it raises an important question: how can we trust the predictions or outputs of AI-powered solutions? This concern is especially important in situations where inaccurate predictions could result in significant losses, both financial and even in terms of human life.&lt;/p&gt;

&lt;p&gt;In this article, we will explore the F1 score, a performance metric for evaluating the effectiveness of classification models: how it is calculated, and why it is often preferred over metrics like precision or recall alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the importance of F1 score in classification models
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.sciencedirect.com/topics/computer-science/classification-models" rel="noopener noreferrer"&gt;Classification models&lt;/a&gt; are algorithms that analyze and categorize complex data sets into predefined classes or labels. These models are used across various sectors, such as anomaly detection, medical diagnosis, text classification, and more. For example, in anomaly detection, classification models help label data points as either "anomalous" or "non-anomalous." &lt;/p&gt;

&lt;p&gt;Similarly, in medical diagnosis, a classification model might be used to detect cancer by categorizing patient data into "cancerous" or "non-cancerous" groups.&lt;/p&gt;

&lt;p&gt;In such examples, “false positives” and “false negatives” in classification models can have serious consequences. So how can we trust the predictions of these models? The F1 score offers one way to evaluate how well a classification model recognizes and categorizes data into different subsets. To fully understand the F1 score, let's explore three important concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The possible outcomes of a classification model&lt;/li&gt;
&lt;li&gt;The precision and recall performance metrics&lt;/li&gt;
&lt;li&gt;How precision and recall combine to give a more comprehensive assessment of a model’s performance, which is captured by the F1 score&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we've outlined the significance of classification models, it's important to take a closer look at their prediction outcomes. These outcomes form the foundation for performance metrics such as precision, recall, and, more importantly, the F1 score.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the possible outcomes of a classification model
&lt;/h2&gt;

&lt;p&gt;A classification model prediction typically falls into one of these four categories:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;True Positives&lt;/strong&gt;: These are events or data points that were correctly predicted as positive.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;True Negatives&lt;/strong&gt;: These are events that were correctly predicted as negative.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;False Positives&lt;/strong&gt;: These are events that were incorrectly predicted as positive but were actually negative.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;False Negatives&lt;/strong&gt;: These are events that were incorrectly predicted as negative but were actually positive.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These four outcomes form the basis of precision and recall, which together make up the F1 score.&lt;/p&gt;
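To make these four outcomes concrete, the following Python sketch tallies them from a pair of prediction and ground-truth lists (the data is invented purely for illustration):

```python
# Hypothetical predictions vs. ground truth for a binary classifier
# (1 = positive, 0 = negative); the data is made up for illustration.
predicted = [1, 1, 0, 1, 0, 0, 1, 0]
actual    = [1, 0, 0, 1, 1, 0, 1, 0]

pairs = list(zip(predicted, actual))
tp = sum(1 for p, a in pairs if p == 1 and a == 1)  # correctly predicted positive
tn = sum(1 for p, a in pairs if p == 0 and a == 0)  # correctly predicted negative
fp = sum(1 for p, a in pairs if p == 1 and a == 0)  # predicted positive, actually negative
fn = sum(1 for p, a in pairs if p == 0 and a == 1)  # predicted negative, actually positive

print(tp, tn, fp, fn)  # prints: 3 3 1 1
```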

&lt;h2&gt;
  
  
  What are precision and recall?
&lt;/h2&gt;

&lt;p&gt;Now that we understand these four outcomes, let's use them to explain precision and recall.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Precision&lt;/strong&gt; &lt;br&gt;
The &lt;a href="https://builtin.com/data-science/precision-and-recall" rel="noopener noreferrer"&gt;precision performance metric&lt;/a&gt; determines the quality of positive predictions by measuring their correctness. In other words, it measures how many of the positive predictions made by the model were actually correct. Precision is calculated by dividing the number of true positive outcomes by the sum of the true positives and false positives.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Precision = True Positives / (True Positives + False Positives)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;To better understand precision, let’s consider a pool of 500 emails, and a spam filter that has been employed to figure out how many of these emails are spam.&lt;/p&gt;

&lt;p&gt;Suppose the filter identifies 120 emails as spam, but only 100 of those emails are actually spam. In this case, the precision of the spam filter would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Precision = 100 / (100 + 20) = 0.833 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means that 83.3% of the emails that the filter identified as spam were actually spam.&lt;/p&gt;
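The same calculation can be expressed in a couple of lines of Python, using the spam-filter numbers from above:

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Fraction of positive predictions that were actually correct."""
    return true_positives / (true_positives + false_positives)

# Spam-filter example: 120 emails flagged as spam, of which 100 really were spam.
p = precision(true_positives=100, false_positives=20)
print(round(p, 3))  # prints: 0.833
```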

&lt;p&gt;While precision focuses on the accuracy of positive predictions, recall assesses the model's overall ability to identify all actual positive cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recall&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://builtin.com/data-science/precision-and-recall" rel="noopener noreferrer"&gt;Recall, also known as sensitivity,&lt;/a&gt; measures a model’s ability to accurately detect positive events. In simpler terms, it indicates how many of the actual positive instances were correctly identified by the model. Recall can be calculated using the formula below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Recall = True Positives / (True Positives + False Negatives)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's return to the spam email filter example. We saw that out of the filter’s prediction of 120 spam emails, 100 were indeed spam. However, what if there were actually 200 spam emails in total? Then in this scenario, the recall would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Recall = 100 / 200 = 0.5

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means that the filter correctly identified 50% of all actual spam emails. &lt;/p&gt;
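As with precision, this calculation is easy to verify in Python using the same spam-filter numbers:

```python
def recall(true_positives: int, false_negatives: int) -> float:
    """Fraction of actual positive cases the model managed to find."""
    return true_positives / (true_positives + false_negatives)

# Spam-filter example: 100 spam emails caught out of 200 that actually existed.
r = recall(true_positives=100, false_negatives=100)
print(r)  # prints: 0.5
```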

&lt;p&gt;While precision and recall provide valuable insights into a model’s performance, relying solely on one without considering the other can give an incomplete picture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Limitations of using precision as a classification metric without recall (and vice versa)
&lt;/h2&gt;

&lt;p&gt;Considering precision without recall, and vice versa, can lead to a misleading evaluation of a model's performance, especially in scenarios where class distribution is imbalanced or where different types of errors (false positives vs. false negatives) have varying consequences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations of Precision without Recall&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Precision alone focuses solely on the correctness of the positive predictions, ignoring how well the model captures all possible positives. A model with very high precision might seem impressive, but if it misses a large number of actual positive instances (low recall), it could be underperforming. This often occurs in cases where a model is extremely cautious about making positive predictions, leading to fewer but more accurate positive results. This cautious approach minimizes false positives but increases false negatives.&lt;/p&gt;

&lt;p&gt;For example, imagine a medical diagnosis model designed to detect a rare disease. If the model has perfect precision but low recall, every patient it flags as having the disease truly has it. However, if it flags only 2 out of 50 actual positive cases, its recall is very low. This means that while every diagnosed patient truly has the disease (precision is 100%), the model misses the vast majority of patients who actually have it, making it unreliable for early diagnosis and treatment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations of Recall without Precision&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Similarly, focusing on recall alone means you're only considering how many true positives are identified out of the total actual positives without regard to how many false positives the model produces. A high recall could indicate the model captures most positive instances, but it might be over-predicting positives, leading to a flood of false positives and reduced accuracy in actual predictions.&lt;/p&gt;

&lt;p&gt;Using the medical diagnosis example, imagine a medical diagnosis model with 100% recall that flags every patient as having the disease to ensure it never misses a single case. While the recall is perfect, the precision is incredibly low because many healthy individuals will be wrongly diagnosed. This makes the model impractical, as it would result in unnecessary anxiety and treatments for people who do not actually have the disease.&lt;/p&gt;

&lt;p&gt;This highlights the importance of a comprehensive metric combining precision and recall—the F1 score.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is an F1 score?
&lt;/h2&gt;

&lt;p&gt;An &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/119373842" rel="noopener noreferrer"&gt;F1 score&lt;/a&gt; can be understood as the harmonic mean of precision and recall, combining both these metrics into one comprehensive assessment that neither performance metric can offer alone.&lt;/p&gt;

&lt;p&gt;The F1 score is described as the harmonic mean of both precision and recall for two important reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The F1 score gives both of these metrics equal weight, ensuring that a good F1 score signifies that the model has a good balance between precision and recall.&lt;/li&gt;
&lt;li&gt;Unlike the arithmetic mean, the harmonic mean prevents a high precision score from disproportionately affecting the overall F1 score when recall is low, and vice versa.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The F1 score can be calculated as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
F1 score = 2 * (precision * recall) / (precision + recall)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So using the original example of the spam filter, with a precision of 0.8333 and a recall of 0.5, the F1 score of the spam filter would be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
F1 score = 2 * (0.8333 * 0.5) / (0.8333 + 0.5)

F1 score = 0.625

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Calculating a model's F1 score can provide a clearer, more balanced measure of its performance, especially in cases where both precision and recall are critical. &lt;/p&gt;
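&lt;p&gt;As a quick sanity check, the formula can be expressed and evaluated in a few lines of Python (a standalone helper written for illustration, not tied to any particular library):&lt;/p&gt;

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * (precision * recall) / (precision + recall)

# The spam filter from the article: precision 0.8333, recall 0.5.
print(round(f1_score(0.8333, 0.5), 3))  # 0.625

# Unlike the arithmetic mean, the harmonic mean punishes imbalance:
print((0.9 + 0.1) / 2)               # arithmetic mean: 0.5
print(round(f1_score(0.9, 0.1), 2))  # harmonic mean (F1): 0.18
```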

&lt;h2&gt;
  
  
  Interpreting the F1 score
&lt;/h2&gt;

&lt;p&gt;Similar to most performance metrics, the F1 score ranges from 0 to 1, with 0 representing the worst possible score and 1 representing the best possible score a model can get. &lt;/p&gt;

&lt;p&gt;A high F1 score indicates that the model has good precision and recall, showing a well-balanced performance. Conversely, a low F1 score may suggest a trade-off between precision and recall or indicate that the model performs poorly on both metrics.&lt;/p&gt;

&lt;p&gt;This comprehensive insight provided by the F1 score is particularly crucial in anomaly detection, as it helps evaluate the model's ability to accurately recognize and identify anomalous events.&lt;/p&gt;

&lt;h2&gt;
  
  
  F1 score in anomaly detection
&lt;/h2&gt;

&lt;p&gt;Anomaly detection, once a labor-intensive process, has become much more efficient with the rise of artificial intelligence. Advanced tools like Eyer, an AI-powered anomaly detection platform, have streamlined this process by automating the identification of unusual data patterns.&lt;/p&gt;

&lt;p&gt;At its core, anomaly detection involves analyzing data to identify patterns or behaviors that deviate significantly from the norm. These deviations, often referred to as anomalies or outliers, can signal critical events such as fraud, system failures, or network intrusions. By using Eyer's sophisticated algorithms, these anomalies can be detected earlier and with greater accuracy, enabling organizations to respond to potential threats in real-time.&lt;/p&gt;

&lt;p&gt;Given the potential consequences of relying on ineffective anomaly detection tools, it’s crucial to trust the performance of platforms like Eyer. One way to measure this trust is through the F1 score, which provides valuable insights into the balance between precision and recall.&lt;/p&gt;

&lt;p&gt;For a deeper dive into Eyer's performance, including its F1 score testing results, check out the official documentation and read all about Eyer’s findings on the &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/118554627" rel="noopener noreferrer"&gt;F1 performance testing of the core algorithm of Eyer&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  In summary
&lt;/h1&gt;

&lt;p&gt;Many of the artificial intelligence models we encounter in our daily lives are classification models. These models help us determine whether data has specific characteristics, ranging from something as simple as identifying spam emails to more critical applications like diagnosing cancer in patients.&lt;/p&gt;

&lt;p&gt;Since we often don’t know the correct answers to the questions posed by classification models, it’s essential to trust these systems to make accurate predictions and draw the right conclusions from the data. This is where the F1 score comes into play.&lt;/p&gt;

&lt;p&gt;The F1 score offers a balanced evaluation of a classification model’s performance by considering both precision and recall. Its value lies in providing a comprehensive measure that neither precision nor recall can fully capture. This makes the F1 score particularly vital in high-stakes scenarios like anomaly detection and medical diagnosis, where both false positives and false negatives can have serious consequences. By understanding and calculating the F1 score, we gain deeper insights into the effectiveness of AI-powered classification models, allowing us to develop more reliable and trustworthy systems. Tools like Eyer, which incorporate the F1 score into their evaluations, demonstrate how this metric can enhance decision-making in real-world AI applications.&lt;/p&gt;

&lt;p&gt;Ultimately, using the F1 score not only helps validate the performance of these models but also ensures that they align with the critical needs of various sectors. Whether in healthcare, finance, or cybersecurity, understanding the strengths and weaknesses of classification models through the F1 score can lead to better outcomes and increased confidence in automated decisions. As reliance on AI grows, prioritizing robust evaluation metrics like the F1 score will be essential for building the next generation of intelligent systems that we can trust.&lt;/p&gt;

&lt;p&gt;Lastly, check out &lt;a href="https://eyer.ai/" rel="noopener noreferrer"&gt;Eyer&lt;/a&gt; for an F1 score-approved, AI-powered anomaly detection tool to monitor your systems.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>The role of baselines in anomaly detection</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Mon, 29 Jul 2024 08:51:15 +0000</pubDate>
      <link>https://forem.com/eyer-ai/the-role-of-baselines-in-anomaly-detection-3o0f</link>
      <guid>https://forem.com/eyer-ai/the-role-of-baselines-in-anomaly-detection-3o0f</guid>
      <description>&lt;p&gt;Artificial intelligence and machine learning are quickly making their way into every facet of life, including art, customer service, engineering, and, more recently, anomaly detection, particularly through tools like &lt;a href="https://eyer.ai/" rel="noopener noreferrer"&gt;Eyer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Anomaly detection was once a repetitive and labor-intensive task, involving countless hours of peering into and analyzing large datasets to identify irregularities or anomalies. However, tools like Eyer now leverage artificial intelligence to automate the process of reading and analyzing large datasets to detect anomalies. But how do you determine if a data point is an anomaly? What constitutes normal behavior? These are the questions that baselines provide some answers to, and this article explores what baselines are and how they are used in anomaly detection and other industries.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a baseline?
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://eyer.ai/blog/how-to-use-eyer-and-grafana-to-query-and-visualize-anomalies-in-cpu-and-memory-metrics/" rel="noopener noreferrer"&gt;anomaly detection&lt;/a&gt;, baselines serve as reference points or models that represent the normal behavior of a system or dataset under normal conditions. These baselines are crucial for identifying deviations in the data that may indicate anomalies or outliers. A baseline includes lower and upper boundaries, creating a band within which a metric is expected to stay under normal conditions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.stackstate.com/v/4.0/use/baselining" rel="noopener noreferrer"&gt;Baselines&lt;/a&gt; are typically created using historical data. They can be derived from the mean or median of the dataset.  Alternatively, you can define baselines using percentiles. For example, any data point outside the 5th or 95th percentile, which in this case are the lower and upper threshold of the baseline, might be flagged as an anomaly.&lt;/p&gt;

&lt;p&gt;You can also derive baselines with machine learning models like linear regression or decision trees, which capture relationships in the data and highlight deviations from those relationships. Similarly, clustering algorithms like K-means can define normal clusters of data points; points that don't fit well into any cluster can be considered anomalies.&lt;/p&gt;

&lt;p&gt;While baselines are normally derived from historical data, note that they are subject to change and are continuously updated as new data flows in.&lt;/p&gt;

&lt;p&gt;Now that you understand baselines, let's dive into the various methods for identifying baselines for your dataset.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is baseline detection?
&lt;/h2&gt;

&lt;p&gt;As the name suggests, baseline detection is the process of discovering what the baseline for a dataset is. As briefly introduced in the previous section, there are several methods of baseline detection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Statistical methods:&lt;/strong&gt; These techniques rely on statistical properties of the data to define a range of normalcy; these techniques include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mean and standard deviation&lt;/strong&gt;: This approach defines a normal range based on the mean value and its standard deviation. Data points outside a certain number of standard deviations from the mean can be considered anomalies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Percentiles&lt;/strong&gt;: This approach defines normal behavior using percentiles. For example, the 5th percentile and the 95th percentile might represent the lower and upper bounds of normal behavior. Points that fall outside this range are flagged as anomalies.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Time series analysis:&lt;/strong&gt; When dealing with data collected over time, specific methods can be used to identify the underlying baseline trend: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Autoregressive models&lt;/strong&gt;: These models predict future values based on past data points, essentially creating a baseline for what the next data point should look like.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Moving average&lt;/strong&gt;: This method smooths out short-term fluctuations by averaging a series of past data points. This helps highlight the longer-term trends, making it easier to identify deviations from the baseline.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;&lt;strong&gt;Machine learning models:&lt;/strong&gt; Machine learning offers powerful tools to automatically learn the baseline from your data. Some of these tools are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simple models&lt;/strong&gt;: &lt;a href="https://www.ibm.com/topics/linear-regression#:~:text=Linear%20regression%20analysis%20is%20used,is%20called%20the%20independent%20variable." rel="noopener noreferrer"&gt;Linear regression&lt;/a&gt;, for instance, can establish a baseline by capturing the underlying relationships within the data. Deviations from this baseline might indicate anomalies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clustering&lt;/strong&gt;: &lt;a href="https://www.simplilearn.com/tutorials/machine-learning-tutorial/k-means-clustering-algorithm#:~:text=K%2DMeans%20clustering%20is%20an,'K'%20is%20a%20number." rel="noopener noreferrer"&gt;Clustering algorithms like K-means&lt;/a&gt; can group similar data points together. Points that don't fit well into any cluster are potential outliers or anomalies.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
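&lt;p&gt;Two of these methods, the mean-and-standard-deviation band and the moving average, can be sketched briefly in Python. The metric values are invented for illustration:&lt;/p&gt;

```python
import statistics

def std_baseline(history, k=3):
    # Band of mean +/- k standard deviations around the historical data.
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return mu - k * sigma, mu + k * sigma

def moving_average(history, window=5):
    # Smooth short-term fluctuations to expose the longer-term trend.
    return [statistics.mean(history[i - window:i])
            for i in range(window, len(history) + 1)]

history = [10, 11, 9, 10, 12, 10, 11, 50, 10, 9]  # 50 is a spike
low, high = std_baseline(history[:7])  # learn the band from pre-spike data

print(low <= history[0] <= high)  # True: a typical value sits inside
print(low <= 50 <= high)          # False: the spike falls outside
```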

&lt;h2&gt;
  
  
  Why does baseline detection matter?
&lt;/h2&gt;

&lt;p&gt;Baselines and baseline detection are important for different applications. Some of these applications are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Anomaly detection&lt;/strong&gt;: This is one of the primary use cases that comes to mind when discussing baseline detection. By identifying data points or events that stray significantly from the established norm, anomaly detection helps us spot potential problems. This is crucial in areas like observability, where tools like Eyer leverage &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/9076743" rel="noopener noreferrer"&gt;baselines to flag anomalies&lt;/a&gt; for further investigation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality control&lt;/strong&gt;:  In manufacturing processes, baselines can be established for various parameters like temperature, pressure, or component dimensions. Baseline detection helps identify products deviating from these expected values, potentially indicating defects. This allows for early intervention and ensures product quality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictive maintenance&lt;/strong&gt;: Baseline detection can be used to monitor equipment performance over time. By establishing baselines for normal operating parameters such as vibration levels, temperature, and energy consumption, deviations can be identified before they become critical failures. This allows for proactive maintenance, minimizing downtime and repair costs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finding these deviations from the norm is precisely what makes baseline detection so valuable. Next, let's take a closer look at a specific use case, anomaly detection with Eyer, and dissect how this tool approaches baselining.&lt;/p&gt;

&lt;h2&gt;
  
  
  Eyer’s approach to baselining using multiple baselines
&lt;/h2&gt;

&lt;p&gt;Eyer is an &lt;a href="https://eyer.ai/blog/observability-with-grafana-and-eyer/" rel="noopener noreferrer"&gt;AI-powered observability tool&lt;/a&gt; that leverages baselining to discover anomalies in a system. Eyer approaches baselining in an interesting way: it recognizes that each metric is unique and treats it as such. For each metric, Eyer builds baselines using a combination of autoregressive and clustering models. These baselines, built from historical data, consist of upper and lower thresholds.&lt;/p&gt;

&lt;p&gt;The term "baselines" is intentional because Eyer can build up to &lt;a href="https://www.youtube.com/watch?v=oU5Q97tpXl8" rel="noopener noreferrer"&gt;three baselines&lt;/a&gt; for a single metric: a primary (or main) baseline and one to two secondary baselines. These baselines can account for different normal behaviors of the same metric on the same day. For example, on some Mondays at noon, CPU utilization might be at 30%, while on others, it could be at 70%, and both are considered normal. However, if 30% utilization is slightly more frequent, it will be the primary baseline, with 70% as a secondary baseline.&lt;/p&gt;

&lt;p&gt;The main baseline represents the most frequent behavior and is considered anomaly-free. The secondary baselines represent less frequent behaviors that could still be normal but might occasionally conceal some anomalies.&lt;/p&gt;

&lt;p&gt;The thresholds that make up baselines are learned automatically and are dynamic. They are learned and relearned based on past behaviors, and they adapt if any changes occur in the system. So there is no need for manual actions to set up the monitoring system, as the AI algorithm learns by itself.&lt;/p&gt;

&lt;p&gt;But what role does baselining play in an Eyer anomaly alert?&lt;/p&gt;

&lt;h2&gt;
  
  
  How does Eyer build out an anomaly alert using baselining?
&lt;/h2&gt;

&lt;p&gt;With these multiple baselines defining normal behavior, it becomes easier to spot anomalies in the data. &lt;/p&gt;

&lt;p&gt;It is easy to think of any data point outside the established baselines as an anomaly, but it isn't always marked as one. A data point's behavior needs to meet a couple of requirements before it is classified as an anomaly.&lt;/p&gt;

&lt;p&gt;The verification phase determines whether a deviation is an anomaly. In the first part of the verification phase, some deviations can be ruled out through trend analysis. For example, if the data points are only slightly outside the baselines but the overall trend appears normal, they are not considered deviations and thus not considered anomalous.&lt;/p&gt;

&lt;p&gt;After this, a 15-minute verification window is used to monitor data for anomalies. If data deviates from normal behavior for at least 8 minutes within this window, that behavior is classified as anomalous, and the corresponding data point is flagged as an anomaly.&lt;/p&gt;

&lt;p&gt;Conversely, if a data point falls outside the baseline for less than 8 minutes within the 15-minute verification window, the anomaly is considered closed.&lt;/p&gt;
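&lt;p&gt;A minimal sketch of this windowed check, assuming one boolean sample per minute of the window (this illustrates the rule as described above, not Eyer's actual implementation):&lt;/p&gt;

```python
def verify_anomaly(deviated, min_deviation=8):
    # `deviated` holds 15 booleans, one per minute of the verification
    # window: True if that minute's data point fell outside the baselines.
    assert len(deviated) == 15
    # Behavior is anomalous only if at least 8 of the 15 minutes deviated.
    return sum(deviated) >= min_deviation

print(verify_anomaly([True] * 10 + [False] * 5))  # True: 10 deviating minutes
print(verify_anomaly([True] * 3 + [False] * 12))  # False: only 3 deviating minutes
```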

&lt;p&gt;However, identifying a data point as anomalous is just the beginning. The next step is figuring out how anomalous that data point really is.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Classification of anomaly alerts&lt;/strong&gt;&lt;br&gt;
An alert can include anomalies on several metrics. Each anomaly on each metric has an assigned severity. The overall &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/9076908" rel="noopener noreferrer"&gt;severity of the alert&lt;/a&gt; is based on the severity of the anomalies contained in the alert.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The severity of the anomaly on a single metric&lt;/strong&gt;&lt;br&gt;
After confirming a data point as an anomaly, Eyer assigns it a weight based on how significantly it deviates from the baselines. These weights are categorized as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Maximum weight&lt;/strong&gt;: A data point receives a maximum weight of 2 if it exists far outside all predefined baselines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium weight&lt;/strong&gt;: This weight, valued at 1, is assigned to a data point that exists beyond the primary baseline but remains within one of the secondary baselines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero weight&lt;/strong&gt;: When a data point temporarily returns to the main baseline after deviating, it receives a weight of zero.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Depending on how long an anomaly remained outside the main or secondary baselines, these weighted deviations are averaged to form the anomaly's history. This average of weighted deviations is then translated into an anomaly score ranging from 0 to 100, where 0 indicates a critical anomaly and 100 indicates an anomaly-free state.&lt;/p&gt;

&lt;p&gt;This anomaly score, which you can refer to as AS, is then used to describe the severity and likelihood of behavior in a data point being anomalous and potentially impactful. The higher the AS, the less likely the behavior is anomalous. Here's a breakdown of what the AS signifies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AS &amp;gt; 85&lt;/strong&gt;: &lt;strong&gt;No anomaly&lt;/strong&gt;. Anomaly scores above 85 indicate that the behavior in the data point can be thought of as primary expected behavior, with minor deviations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;60 &amp;lt; AS &amp;lt;= 85&lt;/strong&gt;: &lt;strong&gt;Low severity&lt;/strong&gt;. If the anomaly score is greater than 60 and less than or equal to 85, it indicates a low-severity anomaly. This means the data point exhibits minor anomalous behaviors similar to those observed in recent days, weeks, and months. Although the likelihood of the behavior being an anomaly is low, it may occasionally conceal anomalous behavior.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;30 &amp;lt; AS &amp;lt;= 60&lt;/strong&gt;: &lt;strong&gt;Medium severity&lt;/strong&gt;. If the anomaly score is between 30 and 60, it indicates a medium-severity anomaly. This means that the data point behavior may be anomalous but also resembles patterns seen previously, making it less certain as an anomaly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AS &amp;lt;= 30&lt;/strong&gt;: &lt;strong&gt;Severe&lt;/strong&gt;. If the anomaly score is less than or equal to 30, it indicates that the anomaly is severe. This means that there is a prevalence of new unseen behavior.&lt;/li&gt;
&lt;/ul&gt;
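&lt;p&gt;The weighting and the severity bands above can be sketched as follows. The baseline bands and point values are hypothetical; only the weights (2, 1, 0) and the AS thresholds come from the description above.&lt;/p&gt;

```python
def deviation_weight(point, main, secondary):
    # `main` and each entry of `secondary` are (low, high) baseline bands.
    in_band = lambda x, band: band[0] <= x <= band[1]
    if in_band(point, main):
        return 0  # temporarily back inside the main baseline
    if any(in_band(point, band) for band in secondary):
        return 1  # beyond the main baseline, within a secondary one
    return 2      # far outside all predefined baselines

def severity(anomaly_score):
    # Map an anomaly score (AS, 0-100) to the severity bands.
    if anomaly_score > 85:
        return "no anomaly"
    if anomaly_score > 60:
        return "low"
    if anomaly_score > 30:
        return "medium"
    return "severe"

print(deviation_weight(35, main=(20, 40), secondary=[(60, 80)]))  # 0
print(deviation_weight(70, main=(20, 40), secondary=[(60, 80)]))  # 1
print(deviation_weight(95, main=(20, 40), secondary=[(60, 80)]))  # 2
print(severity(90))  # no anomaly
print(severity(20))  # severe
```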

&lt;p&gt;In addition to classifying anomalies by their severity, another strength of Eyer's anomaly detection is that metrics are not learned only in isolation. Eyer also &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/9076887" rel="noopener noreferrer"&gt;uses correlations&lt;/a&gt; to group related metrics and their anomalies together, combining them in a single alert and making root cause analysis easier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Correlations in Eyer alerts&lt;/strong&gt;&lt;br&gt;
Correlations help describe the degree to which two or more variables move in relation to one another. In Eyer, correlations help identify how different metrics influence each other or exhibit similar patterns.&lt;/p&gt;

&lt;p&gt;Most metrics have a natural correlation. For example, Process CPU is correlated with the number of executions. This is because each execution of a process consumes CPU resources. As the number of executions increases, the cumulative CPU load from these executions also increases.&lt;/p&gt;

&lt;p&gt;After using these baselines to identify anomalies in a metric, determining the severity of those anomalies, and understanding which metrics might be affected by correlations, Eyer packages all this information and sends it out in a comprehensive and succinct alert.&lt;/p&gt;

&lt;p&gt;You can see an example of an Eyer anomaly alert in the code block below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"new"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"started"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T18:43:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ended"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:27:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"667c6193d58419f64f4cb403"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"items"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Operating System. undefined"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"system"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2ce746c5-1ee3-45d1-b23f-bae56bc5d51a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Committed Virtual Memory Size"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"metric_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"int"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"aggregation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"avg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"severe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"started"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T18:42:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:12:00Z"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"5523ee20-2af2-4b8e-8390-3d2cb4410018"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"System CPU Load"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"metric_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"double"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"aggregation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"avg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"started"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:25:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:26:00Z"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a59df24a-e9ec-4c4c-a087-ea1375d4b9c7"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Process CPU Load"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"metric_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"double"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"aggregation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"avg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"started"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:26:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:27:00Z"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"closed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"low"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"started"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T18:49:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ended"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:37:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:37:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"667c62f7d58419f64f4cb426"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"items"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The alert above contains three arrays: the new alerts, the updated alerts, and the closed alerts. Check out &lt;a href="https://antteam.atlassian.net/wiki/spaces/EKB/pages/69369863/Alerts+-+structure+and+data+explained" rel="noopener noreferrer"&gt;Alerts - structure and data explained&lt;/a&gt; to understand the structure of these alerts.&lt;/p&gt;

&lt;p&gt;According to this alert, an anomaly update has occurred in the Operating System &lt;a href="https://antteam.atlassian.net/wiki/spaces/EKB/pages/47153153/Boomi+data+collector+metrics+structure" rel="noopener noreferrer"&gt;node&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This anomaly alert has an overall medium severity because it includes one severe anomaly in the Committed Virtual Memory Size metric, while the other metrics in the alert, System CPU Load and Process CPU Load, show medium anomalies.&lt;/p&gt;

&lt;p&gt;The metrics array, which contains both affected and correlated metrics, shows anomalies in the Committed Virtual Memory Size, System CPU Load, and Process CPU Load metrics.&lt;/p&gt;
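&lt;p&gt;As a minimal illustration of that structure, the following Python sketch walks a payload shaped like the alert above and lists each anomalous metric with its severity; the field names ("new", "updated", "closed", "items", "metrics") follow the sample response shown in this article.&lt;/p&gt;

```python
# Sketch: traverse an alert payload like the one above and collect every
# anomalous metric with its severity. Field names follow the sample response.
def list_anomalous_metrics(payload):
    """Return (group, metric name, severity) for each metric in each alert."""
    findings = []
    for group in ("new", "updated", "closed"):
        for alert in payload.get(group, []):
            for item in alert.get("items", []):
                for metric in item.get("metrics", []):
                    findings.append((group, metric["name"], metric["severity"]))
    return findings

# Trimmed-down version of the alert shown in this article, for illustration.
sample = {
    "new": [],
    "updated": [{"severity": "medium", "items": [{"metrics": [
        {"name": "Committed Virtual Memory Size", "severity": "medium"},
        {"name": "System CPU Load", "severity": "medium"},
    ]}]}],
    "closed": [],
}
for group, name, severity in list_anomalous_metrics(sample):
    print(f"{group}: {name} ({severity})")
```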

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This article explored the role baselining plays in machine learning, specifically in anomaly detection using historical data.&lt;/p&gt;

&lt;p&gt;While "baseline" might seem like a simple reference point, it is the foundation upon which many crucial models and their results are built. Anomaly &lt;a href="https://eyer.ai/" rel="noopener noreferrer"&gt;detection tools&lt;/a&gt; like Eyer use baselines to determine if a data point's behavior is anomalous and to gauge the extent of the anomaly. This discernment sets the stage for proactive monitoring and timely intervention, ensuring system reliability and performance.&lt;/p&gt;

&lt;p&gt;To learn more about Eyer baselines and start using the Eyer anomaly detection solution, visit the &lt;a href="https://eyer.ai/" rel="noopener noreferrer"&gt;Eyer website&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aiops</category>
      <category>monitoring</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How to use Eyer and Grafana to query and visualize anomalies in CPU and memory metrics</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Mon, 29 Jul 2024 08:43:03 +0000</pubDate>
      <link>https://forem.com/eyer-ai/how-to-use-eyer-and-grafana-to-query-and-visualize-anomalies-in-cpu-and-memory-metrics-12kj</link>
      <guid>https://forem.com/eyer-ai/how-to-use-eyer-and-grafana-to-query-and-visualize-anomalies-in-cpu-and-memory-metrics-12kj</guid>
      <description>&lt;p&gt;Anomaly detection, also known as outlier detection, is the practice of identifying data points that deviate significantly from the rest of a data set. Traditionally, this was the domain of statisticians and analysts who spent hours poring over data to find these anomalies. However, like many fields, anomaly detection has evolved over time, leading to the development of solutions like &lt;a href="https://eyer.ai/" rel="noopener noreferrer"&gt;Eyer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With the rise of machine learning (ML) and artificial intelligence (AI),  ML algorithms can now automatically learn underlying patterns within vast datasets,  process the data, and effectively identify anomalies that might escape even the most trained human eye.&lt;/p&gt;

&lt;p&gt;This article introduces Eyer, an AI-powered anomaly detection tool, and demonstrates how to use it to identify anomalies in CPU and memory metrics on a host server or machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To get started with the tutorial, you must have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Eyer connector agents installed on production or production-like hosts. If you have not installed these agents, refer to the &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/30015491" rel="noopener noreferrer"&gt;Eyer documentation&lt;/a&gt; for installation instructions.&lt;/li&gt;
&lt;li&gt;The installed agents must be running continuously for at least a week. This allows the Eyer machine learning pipeline to learn the normal behavior of your Boomi integrations. For more information,  refer to the official documentation on &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/9043994" rel="noopener noreferrer"&gt;Onboarding, preprocessing, and filtering data&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;A Boomi Atom installed locally.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Understanding what Eyer is and how it works
&lt;/h2&gt;

&lt;p&gt;Eyer is an AI-powered observability tool that provides deep insights into your Boomi integrations. It utilizes machine learning to analyze various metrics and identify unusual patterns or data points that deviate significantly from the norm. This anomaly detection capability helps you proactively address potential issues before they impact your integrations.&lt;/p&gt;

&lt;p&gt;To gather and deliver data to Eyer's machine learning pipeline, the connector employs a range of agents, including web servers (like Jetty or Tomcat), Jolokia, and Telegraf. Each agent plays a crucial role in this process. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The web server hosts and serves the Jolokia agent. During installation, you can choose to use Apache Tomcat to serve the Jolokia agent instead of Jetty or any other preferred server.&lt;/li&gt;
&lt;li&gt;The Jolokia agent helps monitor and manage Java applications through a web browser. It acts as a bridge, allowing you to access and control parts of your Java program using simple web requests, and returns the information in an easy-to-read format (JSON).&lt;/li&gt;
&lt;li&gt;The Telegraf agent collects and sends metrics and events from various sources to different databases and systems. It will be responsible for collecting data from your Boomi Atom and sending it to the machine learning pipeline.&lt;/li&gt;
&lt;/ul&gt;
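&lt;p&gt;For a feel of what those simple web requests look like, the Python sketch below builds a Jolokia read URL and parses a canned JSON reply. The host, port (8778 is a common Jolokia default), and sample response body are assumptions for illustration, not values from this tutorial's environment.&lt;/p&gt;

```python
# Sketch of a Jolokia-style request: a plain HTTP GET against the "read"
# endpoint returns a JMX attribute as JSON. Host/port and the canned reply
# are illustrative assumptions.
import json

host, port = "localhost", 8778                      # assumed Jolokia defaults
mbean, attribute = "java.lang:type=OperatingSystem", "SystemCpuLoad"
url = f"http://{host}:{port}/jolokia/read/{mbean}/{attribute}"

# In a live setup you would fetch the body over HTTP, e.g. with urllib.
body = '{"value": 0.42, "status": 200}'             # canned response
reply = json.loads(body)
if reply["status"] == 200:
    print(f"SystemCpuLoad = {reply['value']}")
```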

&lt;p&gt;Eyer’s anomaly detection depends on a continuous data stream from your Boomi Atom to Eyer’s machine learning pipeline. It is important to note that the remaining steps in this tutorial, which involve querying and visualizing anomalies, take place after anomaly detection has been enabled on the Boomi Atom, which requires at least 7 days of a steady data stream.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simulating stress for the servers holding the Boomi Atom
&lt;/h2&gt;

&lt;p&gt;Once the Eyer team confirms your anomaly detection enablement, you can begin querying for anomalies in your environment.&lt;/p&gt;

&lt;p&gt;This guide simulates a production environment by running a Windows virtual machine continuously (24/7) for at least a week. It also injects anomalies by increasing the CPU load on the virtual machine hosting the Boomi Atom you're monitoring.&lt;/p&gt;

&lt;p&gt;Since this guide uses a Windows virtual machine, it utilizes the Windows Sysinternals tool CpuStres v2.0 to maximize CPU utilization.&lt;/p&gt;

&lt;p&gt;While maximizing CPU load offers a valuable way to understand Eyer's capabilities, you can introduce anomalies across different Boomi Atom metrics, including memory, disk usage, and system load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting started with CpuStres&lt;/strong&gt;&lt;br&gt;
To get started with CpuStres, download the executable from the &lt;a href="https://learn.microsoft.com/en-us/sysinternals/downloads/cpustres" rel="noopener noreferrer"&gt;CpuStres v2.0 download page&lt;/a&gt;. Once the download is complete, extract the &lt;strong&gt;CPUSTRES.zip&lt;/strong&gt; archive and run the executable to open the &lt;strong&gt;CPU Stress&lt;/strong&gt; modal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXe9GFttLSFmnsA8DGtPbt6DSkKnwq8qSKLRAfHERwu0P0HEOSgTy3iSyCgZIaROD2i_cMoe7LAR1ViyQUmh09GT5vwE6Fp9-r3Znpiei_kGSpn8gmnahE6zHGnKA47uuifTciwhKMwvzaAJoyQuMfCkrjOl%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXe9GFttLSFmnsA8DGtPbt6DSkKnwq8qSKLRAfHERwu0P0HEOSgTy3iSyCgZIaROD2i_cMoe7LAR1ViyQUmh09GT5vwE6Fp9-r3Znpiei_kGSpn8gmnahE6zHGnKA47uuifTciwhKMwvzaAJoyQuMfCkrjOl%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will see your CPU cores in this modal. To activate three or four of these cores, click on each one, navigate to the &lt;strong&gt;Thread&lt;/strong&gt; tab, and select the &lt;strong&gt;Activate&lt;/strong&gt; button from the &lt;strong&gt;Thread&lt;/strong&gt; dropdown.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXf99yE_mo4EyPfu2WDPPxuCaTn0xgMeYWr274t_WIDGssALwffZL2Hjfu3qoVTIdzXV6ngsjjnPP1PYdApIScwjHPKU7BDSi61ziE5VKuDEVSgBPSd7I8m7aUaFzeuSVEAOJ626j6tlcs_buSOhGN4ou2c%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXf99yE_mo4EyPfu2WDPPxuCaTn0xgMeYWr274t_WIDGssALwffZL2Hjfu3qoVTIdzXV6ngsjjnPP1PYdApIScwjHPKU7BDSi61ziE5VKuDEVSgBPSd7I8m7aUaFzeuSVEAOJ626j6tlcs_buSOhGN4ou2c%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdJvJ8pfWlEO9_5KE75L44TH3PsCLfUWKzK3okEMFgbXwii7ziVplh4wKEJWrF_hc6CO8b-27StKp2hAAC3_vCFEIF3lEA5343OulvkkBs2fEbay2KSTlqARsf5Rtg5nR-qMaBPeKiLP3fJMuqKSueXuY5a%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdJvJ8pfWlEO9_5KE75L44TH3PsCLfUWKzK3okEMFgbXwii7ziVplh4wKEJWrF_hc6CO8b-27StKp2hAAC3_vCFEIF3lEA5343OulvkkBs2fEbay2KSTlqARsf5Rtg5nR-qMaBPeKiLP3fJMuqKSueXuY5a%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, set the &lt;strong&gt;Activity level&lt;/strong&gt; to high to increase the load on the CPU cores. To ensure these CPU load stress tests are registered as anomalies, keep CpuStres running for at least 8 minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcng4VGkueuUnkbhekZCn6rwZEtfsZjbeNR7nAJ7maDtgM1Kkr1gYpli6xPKO8IxlgBlU6X6Fgxf995_IC0Upi96ZsYTvyKLAoHupJ8theg2OkezmcfD5ZLGQPQb3_uo4BXMAFShpaZu_X1vWx1h2T28pA2%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcng4VGkueuUnkbhekZCn6rwZEtfsZjbeNR7nAJ7maDtgM1Kkr1gYpli6xPKO8IxlgBlU6X6Fgxf995_IC0Upi96ZsYTvyKLAoHupJ8theg2OkezmcfD5ZLGQPQb3_uo4BXMAFShpaZu_X1vWx1h2T28pA2%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXexctxe1oMkA6kYdrmyF5uuDNqpxtUJ3GBZS0GTBYp5XTJmi3jiFFlsCO8Jk350BFrReGsEci9N-F2Z66s9JHpU1nPWiOyGTM4peAWI8euBlOgTMmS_vEzDUanExPeddViLJd4V-j1TVcaH2KXipHgcFLxu%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXexctxe1oMkA6kYdrmyF5uuDNqpxtUJ3GBZS0GTBYp5XTJmi3jiFFlsCO8Jk350BFrReGsEci9N-F2Z66s9JHpU1nPWiOyGTM4peAWI8euBlOgTMmS_vEzDUanExPeddViLJd4V-j1TVcaH2KXipHgcFLxu%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Configuring the Eyer connector to query the data on anomalies
&lt;/h2&gt;

&lt;p&gt;After simulating anomalous behavior in your host machine, use the Eyer connector to query the information on these anomalies.&lt;/p&gt;

&lt;p&gt;To query this data on anomalies, log into your &lt;a href="http://platform.boomi.com/" rel="noopener noreferrer"&gt;Boomi Atmosphere account&lt;/a&gt;. Go to the &lt;strong&gt;Integration page&lt;/strong&gt;, click &lt;strong&gt;Create New&lt;/strong&gt;, and select &lt;strong&gt;Process&lt;/strong&gt; from the dropdown menu.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXf-aRP73-NTAMdJkL0sWcX9UyleXjSv7CTdyjgzzFbr-GXwAaM2EC2R75m-gEH4icK0HD4jwG4DBqRvAsHCcIuyxfcPN7Wyt34vgQMYF_eXB1FldIQnGasi7hKQGxxI0VSI9wfztRaroTtfJfx4phtcL6wh%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXf-aRP73-NTAMdJkL0sWcX9UyleXjSv7CTdyjgzzFbr-GXwAaM2EC2R75m-gEH4icK0HD4jwG4DBqRvAsHCcIuyxfcPN7Wyt34vgQMYF_eXB1FldIQnGasi7hKQGxxI0VSI9wfztRaroTtfJfx4phtcL6wh%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This action will open the &lt;strong&gt;Start Shape&lt;/strong&gt; sidebar. Choose the &lt;strong&gt;Connector&lt;/strong&gt; radio button. Next, in the &lt;strong&gt;Connector&lt;/strong&gt; field, search for and select the &lt;strong&gt;Eyer-Partner connector&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXecziF_eTOQ75QNEyTrbjXNkl3res8zGQgGre4zaZW0JSe6paxMyFBYPNPkeOPO7AdwzVnB9WWzbhcGPo8_ap4X0TFXZrJ79-mRnpXfwtsrnRVKw9Gduf6i-dk2n02TUcdOIddOrvnamEztn91x-WbdfadO%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXecziF_eTOQ75QNEyTrbjXNkl3res8zGQgGre4zaZW0JSe6paxMyFBYPNPkeOPO7AdwzVnB9WWzbhcGPo8_ap4X0TFXZrJ79-mRnpXfwtsrnRVKw9Gduf6i-dk2n02TUcdOIddOrvnamEztn91x-WbdfadO%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click the + button in the &lt;strong&gt;Connection&lt;/strong&gt; field to open the connection page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXclogpfWIUGQRL0XHZJxCV_UU0XPBB07BMNsRGZ3ewWhyuz_M1hkkVwdqCcEsMkiMqH71h2P6Xbt-8xcKyE84nksTtgOBOTN2sh72B-o2F41hjYfItn2rLYwO-9LCTEC3osPXqf8Z5mgqyQAHyeVDNpuBlb%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXclogpfWIUGQRL0XHZJxCV_UU0XPBB07BMNsRGZ3ewWhyuz_M1hkkVwdqCcEsMkiMqH71h2P6Xbt-8xcKyE84nksTtgOBOTN2sh72B-o2F41hjYfItn2rLYwO-9LCTEC3osPXqf8Z5mgqyQAHyeVDNpuBlb%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave the &lt;strong&gt;Server&lt;/strong&gt; and the &lt;strong&gt;Eyer authentication key&lt;/strong&gt; fields as their default values.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;Custom Authentication Credentials&lt;/strong&gt; field, click the &lt;strong&gt;Encrypted&lt;/strong&gt; button and fill it out with your Eyer authentication key.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXft5g5DkDXYPwmNzInF9tMCTOLZQYLYY8alaPUbV6Bqj3xOaTHR8dtz0ge6aECFfGK2_k_mZckXeBedpGIx9tgIIWhkIpXO-WX29yn0Qy1BN7G-2UbB22P-1ickuxC0oGM2G6giXeWoVLYIpKjDphYdWEjK%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXft5g5DkDXYPwmNzInF9tMCTOLZQYLYY8alaPUbV6Bqj3xOaTHR8dtz0ge6aECFfGK2_k_mZckXeBedpGIx9tgIIWhkIpXO-WX29yn0Qy1BN7G-2UbB22P-1ickuxC0oGM2G6giXeWoVLYIpKjDphYdWEjK%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the &lt;strong&gt;Save and Close&lt;/strong&gt; button to return to the Eyer connector sidebar.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating an Eyer-Partner operation&lt;/strong&gt;&lt;br&gt;
In the sidebar, select the &lt;strong&gt;Get&lt;/strong&gt; action and then click the &lt;strong&gt;+&lt;/strong&gt; button on the &lt;strong&gt;Operation&lt;/strong&gt; field to create a new Eyer operation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdT0c3vL0dnST25urioVNsjHOa_hsy9g0WhpcAy64re10CW0yAO_e1JhXtyP3Gl6lHwkyZqvvqJQTLvPefhajpp2374mGhZ-WBtU8HJq9jIT3N2lmYO8wRUf5ZUbgHSovnNiP56K3qUBzhFjfR7GErvXvR7%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdT0c3vL0dnST25urioVNsjHOa_hsy9g0WhpcAy64re10CW0yAO_e1JhXtyP3Gl6lHwkyZqvvqJQTLvPefhajpp2374mGhZ-WBtU8HJq9jIT3N2lmYO8wRUf5ZUbgHSovnNiP56K3qUBzhFjfR7GErvXvR7%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clicking the &lt;strong&gt;+&lt;/strong&gt; button opens up the Eyer operation’s page. On this page, click the &lt;strong&gt;Import Operation&lt;/strong&gt; button to create a new operation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXe_Xi45i74ae9MBNejpOaO3z_ZJn2E8CCYbRLoz9kYV_ob0jjPDXDbVgzvE_Zr1S7IO2zlaFjwxowny-s6BrP2fmlH7zHKLbFLxZM8Hg5Sd2mSxJBnxUSYpCa0BqEnREBz4HwgmQuY9m1Bvf81MVJO15eNt%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXe_Xi45i74ae9MBNejpOaO3z_ZJn2E8CCYbRLoz9kYV_ob0jjPDXDbVgzvE_Zr1S7IO2zlaFjwxowny-s6BrP2fmlH7zHKLbFLxZM8Hg5Sd2mSxJBnxUSYpCa0BqEnREBz4HwgmQuY9m1Bvf81MVJO15eNt%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This action opens up the &lt;strong&gt;Eyer-Partner Connector Operation Import&lt;/strong&gt; modal. Fill out this modal with the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Atom&lt;/strong&gt;: Select the Atom you are running the process in from your dropdown&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Connection&lt;/strong&gt;: Select the Eyer connection you made for this process
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXd-xYl5fTBCrQjcIAv6whPcOOpT9tB7b7xH7SJzgmWLT3Q-lfQ17FmAow9KyPHdIRSr9EuIvkCZ-L_k7LMFAirVxXhuZfcvKUMGqb6E5cnj4AdBZB7GU2kNWubm35ipBKoESIKEPgiJomkqz-5M8aGcryEI%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Click on the &lt;strong&gt;Next&lt;/strong&gt; button to save your operation. Then, select the Object Type that fits your purpose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Anomalies&lt;/strong&gt; returns a list of anomaly alerts grouped by correlation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anomalies with metrics&lt;/strong&gt; returns a list of anomaly alerts grouped by correlation metrics, including their respective values and baseline values at the time of the alert (new/updated)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this example, we select the &lt;strong&gt;Anomalies with metrics&lt;/strong&gt; object type. Click on the &lt;strong&gt;Next&lt;/strong&gt; button to save your &lt;strong&gt;Object Type&lt;/strong&gt; preference, and click the &lt;strong&gt;Finish&lt;/strong&gt; button to see your Eyer response profile loaded on your &lt;strong&gt;Operation&lt;/strong&gt; page.&lt;/p&gt;

&lt;p&gt;Next, you need to define the operation values. These values define the information required in anomaly alerts. For the &lt;strong&gt;Eyer-Partner connector&lt;/strong&gt;, you can define operation values using either the &lt;strong&gt;Options&lt;/strong&gt; or &lt;strong&gt;Dynamic operation property&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Options&lt;/strong&gt; are great for static operation values. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic operation properties&lt;/strong&gt; are better when the start and end values are always changing. To learn more about the distinction between options and dynamic operation properties, check the official documentation on &lt;a href="https://eyer-docs.netlify.app/docs/getting-started-with-eyer/configuring-the-eyer-connector" rel="noopener noreferrer"&gt;Configuring the Eyer connector&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This guide uses Dynamic Operation Properties to set the operation values.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting operation values with Dynamic operation properties&lt;/strong&gt;&lt;br&gt;
To set up the Dynamic Operation Properties, navigate to the &lt;strong&gt;Dynamic Operation Properties&lt;/strong&gt; tab and click the &lt;strong&gt;Add Dynamic Operation Property&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdo8mYpkZnqVUOA5nPeFYivQ7xwP_AqypYaamtIS2mLa9dFJAt-CI5ELoS8lnmYWurrY7o3z3NM3uIilU5ziZy9D6ogHHBwXDT0JyWgcbcE6YajPveWwFXObeO3It_ISR-vE2LBkOUc_SFfQbTbkQvryEJD%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdo8mYpkZnqVUOA5nPeFYivQ7xwP_AqypYaamtIS2mLa9dFJAt-CI5ELoS8lnmYWurrY7o3z3NM3uIilU5ziZy9D6ogHHBwXDT0JyWgcbcE6YajPveWwFXObeO3It_ISR-vE2LBkOUc_SFfQbTbkQvryEJD%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This action opens up a &lt;strong&gt;Parameter Value&lt;/strong&gt; modal; in this modal, select the following options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input → Query from&lt;/li&gt;
&lt;li&gt;Type → Date/Time&lt;/li&gt;
&lt;li&gt;Date Mask → yyyy-MM-dd'T'HH:mm:ssZ&lt;/li&gt;
&lt;li&gt;Date Type → Last Successful Run Date&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These Parameter Value options tell the Eyer-Partner Connector to start the anomaly query from the last successful run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdDxHRWetYjTrFDJJc_XWtK7uXkNyeHxneaqMGw0IBX7H9C4i8PLb8V0QMdUlDvR_kNZzK1H7clFDq_hW_yjyS57r1yg7ss5tvJL1mnlsDnO-BDCqaQU-vESftWy52-UMv3dtwYdvhwVGEThPvF-vH2TnML%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdDxHRWetYjTrFDJJc_XWtK7uXkNyeHxneaqMGw0IBX7H9C4i8PLb8V0QMdUlDvR_kNZzK1H7clFDq_hW_yjyS57r1yg7ss5tvJL1mnlsDnO-BDCqaQU-vESftWy52-UMv3dtwYdvhwVGEThPvF-vH2TnML%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the &lt;strong&gt;OK&lt;/strong&gt; button to return to the &lt;strong&gt;Dynamic Operation Properties&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;Next, create a new Dynamic Operation Property, filling in the Parameter Value with the following information:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input → Query to&lt;/li&gt;
&lt;li&gt;Type → Date/Time&lt;/li&gt;
&lt;li&gt;Date Mask → yyyy-MM-dd'T'HH:mm:ssZ&lt;/li&gt;
&lt;li&gt;Date Type → Current Date&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These values tell the Eyer-Partner Connector to end the anomaly query at the current date.&lt;/p&gt;
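&lt;p&gt;Together, the two dynamic operation properties define a time window: "Query from" the last successful run to "Query to" the current date, both rendered with the yyyy-MM-dd'T'HH:mm:ssZ mask. The Python sketch below shows the equivalent formatting; the timestamps are stand-ins, not values from a real run.&lt;/p&gt;

```python
# Sketch of the query window the two dynamic operation properties produce.
# The timestamps below are illustrative stand-ins.
from datetime import datetime, timedelta, timezone

MASK = "%Y-%m-%dT%H:%M:%S%z"          # Python spelling of yyyy-MM-dd'T'HH:mm:ssZ

now = datetime(2024, 6, 26, 19, 40, tzinfo=timezone.utc)
last_run = now - timedelta(hours=1)   # stand-in for "Last Successful Run Date"

query_from, query_to = last_run.strftime(MASK), now.strftime(MASK)
print(query_from, "→", query_to)      # the window the connector queries
```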

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc0U-2l_9gZzwWT8haT5_3jnmPztysi7TKTGVmILW9ENXD7WI_0J33xOUi5x8obBhtAG_eCQPCIFUPHLSimgUD_CP-fVTBGCYZV_EBxKrXBV4SQlmhwIOiiJyW616nlDCzPDkZGV2a05O6-2HqzU2031Rxq%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc0U-2l_9gZzwWT8haT5_3jnmPztysi7TKTGVmILW9ENXD7WI_0J33xOUi5x8obBhtAG_eCQPCIFUPHLSimgUD_CP-fVTBGCYZV_EBxKrXBV4SQlmhwIOiiJyW616nlDCzPDkZGV2a05O6-2HqzU2031Rxq%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the &lt;strong&gt;OK&lt;/strong&gt; button to save the parameter value and return to the sidebar.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdxS1vOukpJbwJaqedjVb1Q6iv5LjQMhMX0kRWvukHv7Osg_jDNAvY9pXtjinqBi4gobSP_x_q3Mr7E0I7-85c6LG1gjPWEBShfvNETfwbVBK_73ZTDPMAugAERvr4NnOTX7akBzUDVy_fyKojHJhVB6Sw%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdxS1vOukpJbwJaqedjVb1Q6iv5LjQMhMX0kRWvukHv7Osg_jDNAvY9pXtjinqBi4gobSP_x_q3Mr7E0I7-85c6LG1gjPWEBShfvNETfwbVBK_73ZTDPMAugAERvr4NnOTX7akBzUDVy_fyKojHJhVB6Sw%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the &lt;strong&gt;OK&lt;/strong&gt; button to save the Dynamic Operation Property configuration and return to the Boomi process canvas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sending out the email&lt;/strong&gt;&lt;br&gt;
With Boomi, you have multiple options for receiving these anomalies. This guide uses the Boomi Mail connector. To learn how to configure the Mail connector,  check out the &lt;a href="https://help.boomi.com/docs/atomsphere/integration/connectors/r-atm-mail_connector_4e32e771-5351-4e2c-b1fd-d7bd1bd82f1a/#:~:text=Use%20the%20Mail%20connector%20to,exchanging%20data%20between%20trading%20partners." rel="noopener noreferrer"&gt;Boomi Mail connector&lt;/a&gt; documentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcm3IToDYv0Go3Tfxt7xhlYf-WeAiebi0O2MdvwcWZinrJGJ-AR48vyOratxlSf4r15m2xOepLJ6tSxmb-rZ7ju5zHOY3lflqYsGDGBzlTe6QmwhFbl-Hr_Ifhba5G5TwKpoQ0DkUQHwatCbZnUHpKEAcc%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcm3IToDYv0Go3Tfxt7xhlYf-WeAiebi0O2MdvwcWZinrJGJ-AR48vyOratxlSf4r15m2xOepLJ6tSxmb-rZ7ju5zHOY3lflqYsGDGBzlTe6QmwhFbl-Hr_Ifhba5G5TwKpoQ0DkUQHwatCbZnUHpKEAcc%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are the anomalies from the CPU stress test received in the mail.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"new"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"started"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T18:43:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ended"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:27:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"667c6193d58419f64f4cb403"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"items"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"node"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Operating System. undefined"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"system"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2ce746c5-1ee3-45d1-b23f-bae56bc5d51a"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Committed Virtual Memory Size"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"metric_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"int"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"aggregation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"avg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"severe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"started"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T18:42:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:12:00Z"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"5523ee20-2af2-4b8e-8390-3d2cb4410018"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"System CPU Load"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"metric_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"double"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"aggregation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"avg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"started"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:25:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:26:00Z"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"a59df24a-e9ec-4c4c-a087-ea1375d4b9c7"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Process CPU Load"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"metric_type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"double"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"aggregation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"avg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"started"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:26:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:27:00Z"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"closed"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"severity"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"low"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"started"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T18:49:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"ended"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:37:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"updated"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2024-06-26T19:37:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"667c62f7d58419f64f4cb426"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"items"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This alert contains anomalies whose values have already been updated. This is because the environment used in this tutorial has been running for a while and has seen several anomalies appear and change over time. In your environment, newly detected anomalies may appear in the new object instead.&lt;/p&gt;

&lt;p&gt;The anomalies are in the Operating System node. A node is a group of metrics that work together. Refer to the official documentation to understand the &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/47153153" rel="noopener noreferrer"&gt;list of nodes and the metrics underneath these nodes&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This node has anomalies on the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Committed Virtual Memory Size&lt;/strong&gt;: This metric is flagged as severe, indicating that it significantly deviates from past observed behavior and has the highest likelihood of being a disruptive anomaly. To learn more about the severity property, check out the official &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/9076908" rel="noopener noreferrer"&gt;documentation on Alerting&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System CPU Load&lt;/strong&gt;: This metric indicates the overall CPU load on the system. It has a medium severity, meaning that the metric occasionally deviates from the previously observed and learned behavior. A medium severity indicates a moderate probability that this is an anomaly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process CPU Load&lt;/strong&gt;: This metric indicates the CPU load of a specific process and has a severity value of medium.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, you can see a Closed Anomalies array containing a previously detected low-severity anomaly that has since been resolved.&lt;/p&gt;
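&lt;p&gt;Before opening Grafana, it can help to flatten the alert payload into a simple list of metrics and severities. The sketch below is a minimal Python example, not part of the Eyer tooling; it assumes the open alerts live under an &lt;code&gt;open&lt;/code&gt; key (the counterpart of the &lt;code&gt;closed&lt;/code&gt; array shown above), so adjust the key names to match your actual response.&lt;/p&gt;

```python
import json

# Trimmed version of the alert payload above. The "open" key is an
# assumption mirroring the visible "closed" array; the other field
# names come directly from the excerpt.
payload = json.loads("""
{
  "open": [
    {"severity": "medium",
     "items": [
       {"node": {"id": 64, "name": "Operating System. undefined"},
        "metrics": [
          {"name": "Committed Virtual Memory Size", "severity": "severe"},
          {"name": "System CPU Load", "severity": "medium"},
          {"name": "Process CPU Load", "severity": "medium"}
        ]}
     ]}
  ],
  "closed": [{"severity": "low", "items": []}]
}
""")

def summarize(alerts):
    # Flatten every open alert into (severity, node, metric) tuples
    # so the worst offenders stand out at a glance.
    rows = []
    for alert in alerts.get("open", []):
        for item in alert["items"]:
            node = item["node"]["name"]
            for metric in item["metrics"]:
                rows.append((metric["severity"], node, metric["name"]))
    return rows

for severity, node, name in summarize(payload):
    print(severity, node, name, sep="\t")
```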

&lt;h2&gt;
  
  
  Deciphering the data using Grafana
&lt;/h2&gt;

&lt;p&gt;Now that you have received the alert about the anomalies in the host system, let's view these alerts on a Grafana dashboard.&lt;/p&gt;

&lt;p&gt;Grafana is fantastic for many reasons, one of which is that it simplifies the visualization of your JSON data and aids in monitoring system metrics. To learn more about how Grafana can benefit you, check out this article on &lt;a href="https://eyer.ai/blog/observability-with-grafana-and-eyer/" rel="noopener noreferrer"&gt;Observability with Grafana and Eyer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up Grafana with Eyer&lt;/strong&gt;&lt;br&gt;
To set up and connect Grafana to visualize your Eyer data, follow these steps:&lt;/p&gt;

&lt;p&gt;1. Log in to your Grafana account.&lt;br&gt;
2. Launch Grafana Cloud:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the Grafana Cloud portal page, click the "Launch" button to open your Grafana Cloud page, then click "Launch" again to go to the Dashboard page.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdK9FQaAsZHp5CMeFKwgQedvXORlNiFfmjUg7d3uDVuiaTY-y2I8XL-rL4u4-_dhJPdvYfv_fnEecq76UyeNyhWbsInLYE1htm5rJfrm8czuDq9qxlzjg7OaLkVGroHnosv29fVT23PeJhwTQVexOuwiUCl%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3. Add a new connection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the Dashboard page, navigate to the left-hand side menu and select the "Add new connection" tab.&lt;/li&gt;
&lt;li&gt;In the "Add new connection" page, select "InfluxDB."
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeYt3g83d9gEcE_nvAOFbyTWtLx20Weei2oMrcAJXjEIzjgnsOgMjllVWzzbVvrolD6Yo4Sts3HAqiAIIR1X00I5tsBwmVI29pkC3tJ5FWQufCKXzjYFxlO82zuyXCn3VZb-JB81MFItoE9-RgNU6Ru3HKs%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc9nyGlbrr0B8iHXZWdeEln55QoU_Zq1slVCYtFvcJDy4ey6-aeIZ46dcHk541oCskx0sxv1KU4VaRsKulGZ_3I-H42DSh4hIvqxg5uelUlVLeBXSqSZSO0ivVTF_JcW0Ki8SAiAZH8bJtX8ERGyYc7rDqw%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc9nyGlbrr0B8iHXZWdeEln55QoU_Zq1slVCYtFvcJDy4ey6-aeIZ46dcHk541oCskx0sxv1KU4VaRsKulGZ_3I-H42DSh4hIvqxg5uelUlVLeBXSqSZSO0ivVTF_JcW0Ki8SAiAZH8bJtX8ERGyYc7rDqw%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. Add a new data source:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the InfluxDB page, click “Add new data source.”
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdnHLgY9W-9AOaFS2NPVoeyTmtqIPY1EylEps0DKVL6m1o_g3uOqNOmy_-C6j3xCq3EAmm6lOpTc3YYzn5a_kbR_DXS15jF17o167emZzlX3i8FznPoW5P9MR4cMr35ivtf8O4uDOzhV-Ei_Jm32V6FEdXR%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;5. Configure the data source settings. On the settings page, set the following configuration fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set “Query language” to “Flux.”&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set “URL” to “&lt;a href="https://westeurope-1.azure.cloud2.influxdata.com" rel="noopener noreferrer"&gt;https://westeurope-1.azure.cloud2.influxdata.com&lt;/a&gt;.”&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdJR5EimJ6uDskE9bD7cRyUTNM_kRNjL3hR-cKlYvEutWXXjV0_pvr6sDfHmOAxXT6ZhHkU8jWLVnX2LcJi81l5E8nwVYRgnEsYnxUgf3tSOJq8giOhsnP2hRxRXvss1oRyYIrrGDNzBB96Q92MvIo43Gqn%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdJR5EimJ6uDskE9bD7cRyUTNM_kRNjL3hR-cKlYvEutWXXjV0_pvr6sDfHmOAxXT6ZhHkU8jWLVnX2LcJi81l5E8nwVYRgnEsYnxUgf3tSOJq8giOhsnP2hRxRXvss1oRyYIrrGDNzBB96Q92MvIo43Gqn%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the “Auth” section, turn off basic auth.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under “InfluxDB Details,” enter the “Organization” and “Token” values you received with your InfluxDB details.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXd6giWBXqBFxEYXINN2RVuROwCSuLVSIRLo6Wwjb3R-wOhsfQ9_W2fk1yOP3QfSINjMh6WfTdemn1WOtLJ6N0lRK4FQTNh3z8v8Fj4CgpYR0MkmCMzU2sK57ifoZoJ7y8jVbpaRZ37r9opA3K1ULMvGtJ8U%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXd6giWBXqBFxEYXINN2RVuROwCSuLVSIRLo6Wwjb3R-wOhsfQ9_W2fk1yOP3QfSINjMh6WfTdemn1WOtLJ6N0lRK4FQTNh3z8v8Fj4CgpYR0MkmCMzU2sK57ifoZoJ7y8jVbpaRZ37r9opA3K1ULMvGtJ8U%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;6. Save and test the connection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click “Save and test.” If the connection works, a notification should pop up, and you can proceed to the next step. 
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXd4z9OqkLe-HTd4KIeOA0AQxI57mw9T6Twwr2lZ3rRRHY1H9WGv-IcfsEmVq-gEEuGx65I4jL6czsPRdzdOd4YG2gdhtN0DS8w_0iD_j0fITNSyCkzWbiDGEJ6v3SmuXY6n6Ta4cIKMb3Ico7TP3JWDHVd2%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If it doesn't work, double-check your settings. If the problem persists, &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1" rel="noopener noreferrer"&gt;send a support request&lt;/a&gt; or contact us on the &lt;a href="https://discord.gg/yCeM3NFcQM" rel="noopener noreferrer"&gt;official Eyer Discord channel&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;7. Create a dashboard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the "building a dashboard" link in this connection pop-up. This will take you to the “Start your new dashboard by adding a visualization” page.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfZrmw7Pf7g1hXa_7lFph45rdcD6KIWJDuXHzMav8-KvbG8yrm0_PvcDSJVL-de48OOMzx4RDLgPFvfs0xA4jTjpjPg_77GuuWHWrvjtkb4HWKnLgIEWi5epjNRVUJ0KX0HbPWFy1VoTe8HrFmYVsA1fR0d%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;8. Import a dashboard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;On the next page, click “Import a dashboard” to go to the Import dashboard page.&lt;/li&gt;
&lt;li&gt;Click “Upload dashboard JSON file” on this page and select the JSON file you received with your InfluxDB details.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeJ3PGdSuTgSlTxRpRMtQJHvv3h4yikP60rxQYY0MY0rNHmQBd1edmYhDGkLIf6dCbH-TcfoGggs3gvCjIJgbvTCj9i36e9yV26eiSDPtgN4NfekxPP2YLH3RF7TRAfvHbEi1hviK7L4nPFvytdoMKMHiDN%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should now have a dashboard containing the core metrics monitored by Eyer, including multiple baselines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeuf_2BMUWkwKxVPMBcITY8Z3IuL57NpHo5xxIybUIFvQs4FOdXglnYqmslVLUWf6IGbTXXCOwLQKcK6bm3TQgTQvyVNHUb7ezEzzZ2g4UItjdR-CyPCFkmwV6Cqmz4l7ejYXY_e61xjmqUeqTRyVnBN4tG%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeuf_2BMUWkwKxVPMBcITY8Z3IuL57NpHo5xxIybUIFvQs4FOdXglnYqmslVLUWf6IGbTXXCOwLQKcK6bm3TQgTQvyVNHUb7ezEzzZ2g4UItjdR-CyPCFkmwV6Cqmz4l7ejYXY_e61xjmqUeqTRyVnBN4tG%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the image above, you can see some of the core metrics monitored by Eyer, including their data points and primary and secondary baselines, represented in different colors. These data points and baselines are plotted on a time axis (x-axis) and a value axis (y-axis). The different colored lines and shaded areas represent the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data points on your metrics are in yellow.&lt;/li&gt;
&lt;li&gt;The primary baselines, which indicate the main behaviors of your system, are in red.&lt;/li&gt;
&lt;li&gt;The secondary behaviors and baselines are distinguished by two different shades of blue. For more information on baselines and the behaviors they represent, refer to the documentation on &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/9043994" rel="noopener noreferrer"&gt;Onboarding, preprocessing, and filtering of the data&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is important to note that the purple shading in the graph results from the overlap between the primary and secondary baselines, which are red and blue, respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding the anomalies from the Grafana dashboards&lt;/strong&gt;&lt;br&gt;
To understand how to recognize anomalies from the Grafana dashboards, this section looks at the Committed Virtual Memory Size, Process CPU Load, and System CPU Load metrics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdiF5DuQB9t10_MrX-PZBjZ9SQhktaoZunXh7GI0OqCNgDbaEwTNNWrCW8WSg1UQnvTXC6Vpa_k85XLE9paI08ts-es-DT6WSgF3d49FK40mix6rwzzHYVF9L5bP2R75EOeNvyBV7D0hgahpF1TJg8DSkTt%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdiF5DuQB9t10_MrX-PZBjZ9SQhktaoZunXh7GI0OqCNgDbaEwTNNWrCW8WSg1UQnvTXC6Vpa_k85XLE9paI08ts-es-DT6WSgF3d49FK40mix6rwzzHYVF9L5bP2R75EOeNvyBV7D0hgahpF1TJg8DSkTt%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" alt="Process CPU Load"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc1tTaIUoSj20ne1PB75GUrSoVtqiFIakhBWmbUe7EwrRwTJWKifHZIrzoSzFtGtUmEDY3VAGXs4RYDSZrOSXVninZhgD9PgY--zWTkxPBKY0_ZmeShqsDDBB0-n63JQzhqH-RONEQFeRlgcCq1VmGVz6B0%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc1tTaIUoSj20ne1PB75GUrSoVtqiFIakhBWmbUe7EwrRwTJWKifHZIrzoSzFtGtUmEDY3VAGXs4RYDSZrOSXVninZhgD9PgY--zWTkxPBKY0_ZmeShqsDDBB0-n63JQzhqH-RONEQFeRlgcCq1VmGVz6B0%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" alt="System CPU Load"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the images above, the Process CPU Load and System CPU Load data points fall outside the primary baseline (the red-shaded areas) but within the secondary baselines, consistent with their medium severity level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc802u5qcwTv-Deh4l8NgCpdhtwxF4xelzg7_VPETD3YSFrVBFo0PJvY75SpGC92q_NcHs9JFOWxMEk1HO7VTELMNEm70oX8oNl7cELbuC-O_EfCCsBoxsGb1TL2emYbMQnkbtphGmLJpRF-UVLXw47-eA%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc802u5qcwTv-Deh4l8NgCpdhtwxF4xelzg7_VPETD3YSFrVBFo0PJvY75SpGC92q_NcHs9JFOWxMEk1HO7VTELMNEm70oX8oNl7cELbuC-O_EfCCsBoxsGb1TL2emYbMQnkbtphGmLJpRF-UVLXw47-eA%3Fkey%3DUsOZgP7RJZA9pZe7mJU_fA" alt="Committed Virtual Memory Size"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Committed Virtual Memory Size metric, however, stands out: its data points fall outside both the primary and secondary baselines, consistent with its severe severity level.&lt;/p&gt;

&lt;h2&gt;
  
  
  In summary
&lt;/h2&gt;

&lt;p&gt;Machine learning and artificial intelligence are changing almost everything around us, especially the monitoring and observability space. With modern software development becoming increasingly complex, AI-powered insights can be the difference between quickly identifying and resolving issues and experiencing prolonged downtime and performance degradation.&lt;/p&gt;

&lt;p&gt;This article demonstrates the power of AI-powered observability by walking through the process of injecting anomalies into a system, querying these anomalies with the Eyer connector, and visualizing them using Grafana.&lt;/p&gt;

&lt;p&gt;However, this is just the beginning of what AI-powered insights can do for you. To learn more about Eyer and Grafana and to get started, check out &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portals" rel="noopener noreferrer"&gt;the official Eyer documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>grafana</category>
      <category>aiops</category>
      <category>observability</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Automating user creation with Bash Scripting</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Thu, 04 Jul 2024 11:23:41 +0000</pubDate>
      <link>https://forem.com/amaraiheanacho/user-creation-in-bash-script-1617</link>
      <guid>https://forem.com/amaraiheanacho/user-creation-in-bash-script-1617</guid>
      <description>&lt;p&gt;Managing users and groups is a fundamental aspect of system administration. Whether you're overseeing a small personal server or a large enterprise network, having a streamlined process for user creation is essential for maintaining security, organization, and efficiency. One of the most powerful tools for automating this task is Bash scripting.&lt;/p&gt;

&lt;p&gt;This article walks you through solving user management issues with a Bash script. The script will cover creating accounts, assigning them to personal and general groups, and managing and securing their passwords.&lt;/p&gt;

&lt;p&gt;This project is available on GitHub. Check out the &lt;a href="https://github.com/Iheanacho-ai/User-creation-script" rel="noopener noreferrer"&gt;repository for the complete script&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem statement
&lt;/h2&gt;

&lt;p&gt;You are presented with a problem.&lt;/p&gt;

&lt;p&gt;Your company has hired many new developers, and you need to automate the creation of user accounts and passwords for each of them.&lt;/p&gt;

&lt;p&gt;As a SysOps engineer, write a Bash script that reads a text file containing the employees’ usernames and group names, where each line is formatted as &lt;code&gt;username;groups&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The text file can also specify multiple groups for the user, formatted as &lt;code&gt;username; group1, group2&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;In addition to the multiple groups specified by the text file, each user must have a personal group named after their username.&lt;/p&gt;

&lt;p&gt;The script should create users and groups as specified, set up home directories with appropriate permissions and ownership, and generate random user passwords.&lt;/p&gt;

&lt;p&gt;Additionally, store the generated passwords securely in &lt;code&gt;/var/secure/user_passwords.csv&lt;/code&gt;, and log all actions to &lt;code&gt;/var/log/user_management.log&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;How do you automate this workflow with Bash scripting?&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up the project
&lt;/h2&gt;

&lt;p&gt;Before diving headfirst into creating the script itself, let's define what it needs to automate. The script must:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Read User and Group Information:&lt;/strong&gt; The script will rely on a text file containing user and group information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create User and Group:&lt;/strong&gt; For each user specified in the file, the script will create a user account and a personal group.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Assign Additional Groups:&lt;/strong&gt; If additional groups are listed for a user (e.g., &lt;code&gt;user;group1,group2&lt;/code&gt;), the script will also assign the user to those groups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Home Directories:&lt;/strong&gt; Each user will have a dedicated home directory created.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate Random Passwords:&lt;/strong&gt; Secure random passwords will be generated for each user and stored in &lt;code&gt;/var/secure/user_passwords.csv&lt;/code&gt;.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log Actions:&lt;/strong&gt; The script will log all its activities to &lt;code&gt;/var/log/user_management.log&lt;/code&gt;.
&lt;/li&gt;
&lt;/ol&gt;
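&lt;p&gt;As a preview of the password handling in step 5, here is a minimal sketch in Python; the article's script does this in Bash, and the function names here are illustrative only.&lt;/p&gt;

```python
import csv
import secrets
import string

# Character set for generated passwords: letters and digits.
ALPHABET = string.ascii_letters + string.digits

def generate_password(length=16):
    # secrets, unlike random, is designed for security-sensitive values.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def record_password(path, username, password):
    # Append a "username,password" row, mirroring what the script
    # will write to /var/secure/user_passwords.csv.
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([username, password])
```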

&lt;p&gt;&lt;strong&gt;Creating the user and group text file&lt;/strong&gt;&lt;br&gt;
The Bash script relies on a text file to define the users and groups it needs to create.&lt;/p&gt;

&lt;p&gt;To create this text file, navigate to your project’s root directory and create a file named &lt;code&gt;text_file.txt&lt;/code&gt;. This file should contain lines formatted as follows:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

    villanelle; sudo, dev
    eve; dev, www-data


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can replace the usernames and groups with any names you’d like.&lt;/p&gt;
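&lt;p&gt;Each line splits into a username and a comma-separated group list, with whitespace to trim on both sides of the semicolon. This quick Python sketch (illustrative only; the script itself does this in Bash) shows the parsing:&lt;/p&gt;

```python
def parse_line(line):
    # Split "username; group1, group2" into a username and a list of
    # groups, trimming stray whitespace around each field.
    username, _, groups_field = line.partition(";")
    groups = [g.strip() for g in groups_field.split(",") if g.strip()]
    return username.strip(), groups

print(parse_line("villanelle; sudo, dev"))  # ('villanelle', ['sudo', 'dev'])
```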

&lt;p&gt;Now that you have the text file prepared, let's create the Bash script that interacts with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating the Bash script&lt;/strong&gt;&lt;br&gt;
Within your project's root directory, create a new file named &lt;code&gt;create_user.sh&lt;/code&gt;. This script will handle user creation, group assignment, activity logging, and more.&lt;/p&gt;

&lt;p&gt;This script will be made up of different parts, and these are:&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;Ensuring root privileges&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The script requires elevated privileges to perform actions like creating users and groups and modifying permissions. So, you will begin by checking whether the user running the script has root access.&lt;br&gt;
In your &lt;code&gt;create_user.sh&lt;/code&gt; file, add the following commands to check if the user is root:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
    &lt;span class="c"&gt;# Check if the current user is a superuser, exit if the user is not&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$EUID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-ne&lt;/span&gt; 0 &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Please run as root"&lt;/span&gt;
      &lt;span class="nb"&gt;exit &lt;/span&gt;1
    &lt;span class="k"&gt;fi&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This code ensures only users with root access can execute the script. If you're not logged in as root, running the script will display an error message and exit.&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;Check if the text file was passed in as an argument&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, you need to ensure the script receives the text file (&lt;code&gt;text_file.txt&lt;/code&gt;) as an argument. To perform this check, add the following lines of code to your &lt;code&gt;create_user.sh&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="c"&gt;# Check if the file was passed into the script&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then 
        &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Please pass the file parameter"&lt;/span&gt;
        &lt;span class="nb"&gt;exit &lt;/span&gt;1
    &lt;span class="k"&gt;fi&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The commands in the code block above do the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;[ -z "$1" ]&lt;/code&gt;: This checks if the first argument ($1) is empty.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;echo "Please pass the file parameter"&lt;/code&gt;: This message informs the user if the script is missing the required file.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;exit 1&lt;/code&gt;: Exits the script with an error code if the check fails.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By including this code, you guarantee the script receives the necessary text file to function correctly.&lt;/p&gt;

&lt;p&gt;3. &lt;strong&gt;Create environment variables for the file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, you will create environment variables to hold the paths for the input text file (&lt;code&gt;text_file.txt&lt;/code&gt;), the log file (&lt;code&gt;/var/log/user_management.log&lt;/code&gt;), and the password file (&lt;code&gt;/var/secure/user_passwords.csv&lt;/code&gt;).&lt;br&gt;
To create these variables, add the following lines of code to your &lt;code&gt;create_user.sh&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="c"&gt;# Define the file paths for the log file, and the password file&lt;/span&gt;
    &lt;span class="nv"&gt;INPUT_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nv"&gt;LOG_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/var/log/user_management.log"&lt;/span&gt;
    &lt;span class="nv"&gt;PASSWORD_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/var/secure/user_passwords.csv"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;4. &lt;strong&gt;Create the log and password files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, create the log and password files and give them the necessary permissions with this command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="c"&gt;# Generate logfiles and password files and grant the user the permissions to edit the password file&lt;/span&gt;
    &lt;span class="nb"&gt;touch&lt;/span&gt; &lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;
    &lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /var/secure
    &lt;span class="nb"&gt;chmod &lt;/span&gt;700 /var/secure
    &lt;span class="nb"&gt;touch&lt;/span&gt; &lt;span class="nv"&gt;$PASSWORD_FILE&lt;/span&gt;
    &lt;span class="nb"&gt;chmod &lt;/span&gt;600 &lt;span class="nv"&gt;$PASSWORD_FILE&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The commands in the code block above are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;touch $LOG_FILE&lt;/code&gt;: Creates a &lt;code&gt;/var/log/user_management.log&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;mkdir -p /var/secure&lt;/code&gt;: Creates a &lt;code&gt;/var/secure&lt;/code&gt; directory that will hold the password file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;chmod 700 /var/secure&lt;/code&gt;: Sets the permissions so that only the user has read, write, and execute permissions for the &lt;code&gt;/var/secure&lt;/code&gt; directory&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;touch $PASSWORD_FILE&lt;/code&gt;: Creates a &lt;code&gt;/var/secure/user_passwords.csv&lt;/code&gt; file&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;chmod 600 $PASSWORD_FILE&lt;/code&gt;: Sets the permissions so that only the user has read and write permissions for the &lt;code&gt;/var/secure/user_passwords.csv&lt;/code&gt; file&lt;/li&gt;
&lt;/ul&gt;
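&lt;p&gt;If you want to see what these permission bits look like before touching &lt;code&gt;/var&lt;/code&gt;, you can repeat the same steps against a throwaway directory. This is a sketch only: the &lt;code&gt;/tmp/secure_demo&lt;/code&gt; path is illustrative, not part of the script, and it assumes GNU coreutils &lt;code&gt;stat&lt;/code&gt;:&lt;/p&gt;

```shell
# Recreate the same permission setup in a scratch directory (no root needed)
mkdir -p /tmp/secure_demo
chmod 700 /tmp/secure_demo                 # owner-only: read, write, execute
touch /tmp/secure_demo/passwords.csv
chmod 600 /tmp/secure_demo/passwords.csv   # owner-only: read, write
stat -c '%a' /tmp/secure_demo /tmp/secure_demo/passwords.csv
# prints 700, then 600
```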

&lt;p&gt;5. &lt;strong&gt;Generate the passwords&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, create the &lt;code&gt;log_message()&lt;/code&gt; and &lt;code&gt;generate_password()&lt;/code&gt; functions. These functions will handle creating log messages for each action and generating user passwords:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="c"&gt;# Generate logs and passwords &lt;/span&gt;
    log_message&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s1"&gt;'+%Y-%m-%d %H:%M:%S'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; - &lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    generate_password&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        openssl rand &lt;span class="nt"&gt;-base64&lt;/span&gt; 12
    &lt;span class="o"&gt;}&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here's what each function does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;log_message()&lt;/code&gt;: This function appends a log message with the current date and time (formatted as &lt;code&gt;%Y-%m-%d %H:%M:%S&lt;/code&gt;) to the &lt;code&gt;$LOG_FILE&lt;/code&gt;. It takes a positional parameter, &lt;code&gt;$1&lt;/code&gt;, representing the message to be logged.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;generate_password()&lt;/code&gt;: This function uses OpenSSL to generate a random password each time it is called, which is then printed to standard output.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To learn more about the OpenSSL library, refer to the &lt;a href="https://www.openssl.org/docs/" rel="noopener noreferrer"&gt;official OpenSSL documentation&lt;/a&gt;.&lt;/p&gt;
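&lt;p&gt;As a quick sanity check, you can run the same OpenSSL command directly in your terminal. Twelve random bytes always encode to a 16-character Base64 string:&lt;/p&gt;

```shell
# Preview the kind of password generate_password() produces
pw=$(openssl rand -base64 12)
echo "$pw"      # a random 16-character Base64 string
echo "${#pw}"   # prints 16
```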

&lt;p&gt;6. &lt;strong&gt;Creating the users and groups&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You have verified that the user is running this script as a superuser. Additionally, you have created variables that hold different file paths pointing to the log, password, and input files.&lt;/p&gt;

&lt;p&gt;Next, the script should loop through each entry in the input text file, split these entries by usernames and groups, and then create the users and their respective groups.&lt;/p&gt;
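&lt;p&gt;Each line of the input file pairs a username with a comma-separated list of groups, separated by a semicolon. A sample &lt;code&gt;text_file.txt&lt;/code&gt; (the usernames and group names here are purely illustrative) might look like this:&lt;/p&gt;

```
light; sudo,dev,www-data
idimma; sudo
mayowa; dev,www-data
```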

&lt;p&gt;To achieve this, include the following commands in your &lt;code&gt;create_user.sh&lt;/code&gt; file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="c"&gt;# Read the input file line by line and save them into variables&lt;/span&gt;
    &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;';'&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; username &lt;span class="nb"&gt;groups&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
        &lt;/span&gt;&lt;span class="nv"&gt;username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | xargs&lt;span class="si"&gt;)&lt;/span&gt;
        &lt;span class="nb"&gt;groups&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$groups&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | xargs&lt;span class="si"&gt;)&lt;/span&gt;
        &lt;span class="c"&gt;# Check if the personal group exists, create one if it doesn't&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; getent group &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Group &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt; does not exist, adding it now"&lt;/span&gt;
            groupadd &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            log_message &lt;span class="s2"&gt;"Created personal group &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;fi&lt;/span&gt;

        &lt;span class="c"&gt;# Check if the user exists&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"User &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt; exists"&lt;/span&gt;
            log_message &lt;span class="s2"&gt;"User &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt; already exists"&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;

            &lt;span class="c"&gt;# Create a new user with the created group if the user does not exist&lt;/span&gt;
            useradd &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="nv"&gt;$username&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /bin/bash &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            log_message &lt;span class="s2"&gt;"Created a new user &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;fi&lt;/span&gt;

        &lt;span class="c"&gt;# Check if the groups were specified&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$groups&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
            &lt;span class="c"&gt;# Read through the groups saved in the groups variable created earlier and split each group by ','&lt;/span&gt;
            &lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;','&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; group_array &lt;span class="o"&gt;&amp;lt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$groups&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            &lt;span class="c"&gt;# Loop through the groups &lt;/span&gt;
            &lt;span class="k"&gt;for &lt;/span&gt;group &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;group_array&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;

                &lt;span class="c"&gt;# Remove the trailing and leading whitespaces and save each group to the group variable&lt;/span&gt;
                &lt;span class="nv"&gt;group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | xargs&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# Remove leading/trailing whitespace&lt;/span&gt;
                &lt;span class="c"&gt;# Check if the group already exists&lt;/span&gt;
                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; getent group &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
                    &lt;span class="c"&gt;# If the group does not exist, create a new group&lt;/span&gt;
                    groupadd &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
                    log_message &lt;span class="s2"&gt;"Created group &lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;."&lt;/span&gt;
                &lt;span class="k"&gt;fi&lt;/span&gt;

                &lt;span class="c"&gt;# Add the user to each group&lt;/span&gt;
                usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
                log_message &lt;span class="s2"&gt;"Added user &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt; to group &lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;."&lt;/span&gt;
            &lt;span class="k"&gt;done
        fi&lt;/span&gt;

        &lt;span class="c"&gt;# Create and set a user password&lt;/span&gt;
        &lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;generate_password&lt;span class="si"&gt;)&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$password&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | chpasswd
        &lt;span class="c"&gt;# Save user and password to a file&lt;/span&gt;
        &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="nv"&gt;$password&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$PASSWORD_FILE&lt;/span&gt;
    &lt;span class="k"&gt;done&lt;/span&gt; &amp;lt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    log_message &lt;span class="s2"&gt;"Users created successfully"&lt;/span&gt;

    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Users have been created and added to their groups successfully"&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s break down the code snippet to understand what each part does:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Read the Input File Line by Line:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

   &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;';'&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; username &lt;span class="nb"&gt;groups&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
       &lt;span class="c"&gt;# Code to create the user groups&lt;/span&gt;
    &lt;span class="k"&gt;done&lt;/span&gt; &amp;lt; “&lt;span class="nv"&gt;$INPUT_FILE&lt;/span&gt;”


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;while …; do … done&lt;/code&gt;: This loop iterates over each line in the input file and executes the commands within the &lt;code&gt;do&lt;/code&gt; block for each line.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;IFS=';'&lt;/code&gt;: Sets the &lt;a href="https://unix.stackexchange.com/questions/184863/what-is-the-meaning-of-ifs-n-in-bash-scripting" rel="noopener noreferrer"&gt;Internal Field Separator (IFS)&lt;/a&gt; to a semicolon (;). This tells the read command to split each line in the input file based on semicolons.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;read -r username groups&lt;/code&gt;: Reads each line of the input file and splits it into two variables: &lt;code&gt;username&lt;/code&gt; and &lt;code&gt;groups&lt;/code&gt;. You will need these variables to create the users and their groups.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;|| [ -n "$username" ]&lt;/code&gt;: This ensures that the loop continues processing the last line even if it doesn't end with a newline character.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trimming Whitespace:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="nv"&gt;username&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | xargs&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;groups&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$groups&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | xargs&lt;span class="si"&gt;)&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;br&gt;
    - This step removes any leading or trailing spaces from the &lt;code&gt;username&lt;/code&gt; and &lt;code&gt;groups&lt;/code&gt; variables. Extra spaces can cause issues with user and group creation.&lt;br&gt;
    - &lt;code&gt;xargs&lt;/code&gt;: This command removes any whitespace at the beginning or end of the variable values.&lt;/p&gt;
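&lt;p&gt;You can observe the trimming behaviour in isolation: &lt;code&gt;xargs&lt;/code&gt; with no arguments echoes its input back with the surrounding whitespace stripped (the &lt;code&gt;deploy&lt;/code&gt; value below is just an example):&lt;/p&gt;

```shell
# xargs strips the leading/trailing whitespace around a value
raw="   deploy  "
trimmed=$(echo "$raw" | xargs)
echo "[$trimmed]"   # prints [deploy]
```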

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Checking and Creating the Personal Group:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; getent group &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Group &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt; does not exist, adding it now"&lt;/span&gt;
      groupadd &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      log_message &lt;span class="s2"&gt;"Created personal group &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;br&gt;
    - &lt;code&gt;getent group "$username" &amp;amp;&amp;gt;/dev/null&lt;/code&gt;: This checks if a group with the username exists. If it doesn't, the &lt;code&gt;groupadd&lt;/code&gt; command creates the group, and a log message is recorded.&lt;/p&gt;
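&lt;p&gt;&lt;code&gt;getent&lt;/code&gt; exits with a non-zero status when the group is missing, which is what the negated &lt;code&gt;if !&lt;/code&gt; test keys off. On most Linux systems you can observe this directly; the group name below is deliberately one that should not exist:&lt;/p&gt;

```shell
# A failed group lookup: getent exits non-zero, so the negated test succeeds
if ! getent group no_such_group_xyz >/dev/null; then
    echo "Group no_such_group_xyz does not exist, adding it now"
fi
```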

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Checking and Creating the User:&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt; &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
      &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"User &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt; exists"&lt;/span&gt;
      log_message &lt;span class="s2"&gt;"User &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt; already exists"&lt;/span&gt;
    &lt;span class="k"&gt;else
      &lt;/span&gt;useradd &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; /bin/bash &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      log_message &lt;span class="s2"&gt;"Created a new user &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;fi&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;id -u "$username" &amp;amp;&amp;gt;/dev/null&lt;/code&gt;: This checks if a user with the specified username exists. If the user does not exist, the &lt;code&gt;useradd&lt;/code&gt; command creates one, and a log message is recorded.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;useradd -m -g "$username" -s /bin/bash "$username"&lt;/code&gt;: This command creates a new user with the specified name (&lt;code&gt;$username&lt;/code&gt;), a home directory (&lt;code&gt;-m&lt;/code&gt;), the user's name as the primary group (&lt;code&gt;-g $username&lt;/code&gt;), and the Bash shell as the default login shell (&lt;code&gt;-s /bin/bash&lt;/code&gt;). For more details, check out &lt;a href="https://linuxize.com/post/how-to-create-users-in-linux-using-the-useradd-command/" rel="noopener noreferrer"&gt;how to create users in Linux using the &lt;code&gt;useradd&lt;/code&gt; command&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Checking and Assigning Groups:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

     &lt;span class="c"&gt;# Check if the groups were specified&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$groups&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
        &lt;span class="c"&gt;# Read through the groups saved in the groups variable created earlier and split each group by ','&lt;/span&gt;
        &lt;span class="nv"&gt;IFS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;','&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; group_array &lt;span class="o"&gt;&amp;lt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$groups&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

        &lt;span class="c"&gt;# Loop through the groups &lt;/span&gt;
        &lt;span class="k"&gt;for &lt;/span&gt;group &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;group_array&lt;/span&gt;&lt;span class="p"&gt;[@]&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
            &lt;span class="c"&gt;# Remove the trailing and leading whitespaces and save each group to the group variable&lt;/span&gt;
            &lt;span class="nv"&gt;group&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | xargs&lt;span class="si"&gt;)&lt;/span&gt; &lt;span class="c"&gt;# Remove leading/trailing whitespace&lt;/span&gt;

            &lt;span class="c"&gt;# Check if the group already exists&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; getent group &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then&lt;/span&gt;
                &lt;span class="c"&gt;# If the group does not exist, create a new group&lt;/span&gt;
                groupadd &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
                log_message &lt;span class="s2"&gt;"Created group &lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;."&lt;/span&gt;
            &lt;span class="k"&gt;fi&lt;/span&gt;

            &lt;span class="c"&gt;# Add the user to each group&lt;/span&gt;
            usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
            log_message &lt;span class="s2"&gt;"Added user &lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt; to group &lt;/span&gt;&lt;span class="nv"&gt;$group&lt;/span&gt;&lt;span class="s2"&gt;."&lt;/span&gt;
        &lt;span class="k"&gt;done
    fi&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;[ -n "$groups" ]&lt;/code&gt;: This checks if the &lt;code&gt;$groups&lt;/code&gt; variable is not empty&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;IFS=',' read -r -a group_array &amp;lt;&amp;lt;&amp;lt; "$groups"&lt;/code&gt;: This splits the &lt;code&gt;$groups&lt;/code&gt; variable by commas, storing each group name as a separate element in an array named &lt;code&gt;group_array&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;for&lt;/code&gt; loop: This iterates through each group name in the &lt;code&gt;group_array&lt;/code&gt; and runs the following commands:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;group=$(echo "$group" | xargs)&lt;/code&gt;: This removes any leading or trailing spaces from the current group name in the loop.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;if ! getent group "$group" &amp;amp;&amp;gt;/dev/null&lt;/code&gt;: This checks if a group exists. If the group does not exist, the &lt;code&gt;groupadd&lt;/code&gt; command creates the group, and a log message is recorded.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;code&gt;usermod -aG "$group" "$username"&lt;/code&gt;: This command adds the user (&lt;code&gt;$username&lt;/code&gt;) to the current group (&lt;code&gt;$group&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Generating and Setting a User Password:&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="nv"&gt;password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;generate_password&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$password&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | chpasswd
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$username&lt;/span&gt;&lt;span class="s2"&gt;,&lt;/span&gt;&lt;span class="nv"&gt;$password&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$PASSWORD_FILE&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;password=$(generate_password)&lt;/code&gt;: Calls the &lt;code&gt;generate_password()&lt;/code&gt; function created earlier and stores its output in a &lt;code&gt;password&lt;/code&gt; variable&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;echo "$username:$password" | chpasswd&lt;/code&gt;: Sets the user's password using the &lt;code&gt;password&lt;/code&gt; variable&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;echo "$username,$password" &amp;gt;&amp;gt; $PASSWORD_FILE&lt;/code&gt;: Saves the username and password to the password file at the path &lt;code&gt;var/secure/user_passwords.csv&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feeding the Input File into the While Loop&lt;/strong&gt;:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

    &lt;span class="k"&gt;done&lt;/span&gt; &amp;lt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$INPUT_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Explanation:&lt;/strong&gt;&lt;br&gt;
    - &lt;code&gt;done &amp;lt; "$INPUT_FILE"&lt;/code&gt;: This redirection operator, &lt;code&gt;&amp;lt;&lt;/code&gt;, tells the while loop to read its input from the file specified by &lt;code&gt;$INPUT_FILE&lt;/code&gt;&lt;/p&gt;
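&lt;p&gt;To convince yourself the parsing works before the script starts creating real accounts, you can run just the read-and-split logic against a throwaway file. This dry run only prints what it parses; the file path and sample line are illustrative:&lt;/p&gt;

```shell
# Dry run: parse a sample line the same way the script does, without creating users
printf 'light; sudo,dev\n' > /tmp/sample_users.txt
while IFS=';' read -r username groups || [ -n "$username" ]; do
    username=$(echo "$username" | xargs)
    groups=$(echo "$groups" | xargs)
    echo "user=$username groups=$groups"   # prints user=light groups=sudo,dev
done < /tmp/sample_users.txt
```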

&lt;p&gt;When you are done with this section, your &lt;code&gt;create_user.sh&lt;/code&gt; file should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/Iheanacho-ai/a572c299428c68e309b549d8d4d4cb4e" rel="noopener noreferrer"&gt;https://gist.github.com/Iheanacho-ai/a572c299428c68e309b549d8d4d4cb4e&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this, you have successfully created a script that effectively manages users and groups in your system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing this script
&lt;/h2&gt;

&lt;p&gt;Once you've written your script, it's time to verify that it works as intended. Here's how to test it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Running the Script in a Linux Environment&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You'll need a terminal that supports Linux commands to run the script. Some options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ubuntu.com/desktop/wsl" rel="noopener noreferrer"&gt;Windows Subsystem for Linux (WSL)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/downloads" rel="noopener noreferrer"&gt;Git Bash&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.redhat.com/en/topics/virtualization/what-is-a-virtual-machine#:~:text=Linux%20containers%20and%20virtual%20machines,the%20rest%20of%20the%20system." rel="noopener noreferrer"&gt;A Linux virtual machine&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2. &lt;strong&gt;Making the Script Executable&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before running the script, you need to grant it execute permission with this command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

    chmod +x create_user.sh


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3. &lt;strong&gt;Running the Script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, execute the script with this command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

    ./create_user.sh ./text_file.txt


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If your script runs as expected, you should see this in your terminal:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_DF2D4B371372CCF8FEECCA2C9FBE5B56C3A32A198F67E92F574E6C90BC984A94_1720190754904_Screenshot%2B2024-07-05%2Bat%2B15.43.48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_DF2D4B371372CCF8FEECCA2C9FBE5B56C3A32A198F67E92F574E6C90BC984A94_1720190754904_Screenshot%2B2024-07-05%2Bat%2B15.43.48.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, check your &lt;code&gt;/var/log/user_management.log&lt;/code&gt; file to see your logs by running this command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /var/log/user_management.log
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_DF2D4B371372CCF8FEECCA2C9FBE5B56C3A32A198F67E92F574E6C90BC984A94_1720190805609_Screenshot%2B2024-07-05%2Bat%2B15.44.30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_DF2D4B371372CCF8FEECCA2C9FBE5B56C3A32A198F67E92F574E6C90BC984A94_1720190805609_Screenshot%2B2024-07-05%2Bat%2B15.44.30.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, check your &lt;code&gt;/var/secure/user_passwords.csv&lt;/code&gt; file with this command to see the users and their passwords:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /var/secure/user_passwords.csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_DF2D4B371372CCF8FEECCA2C9FBE5B56C3A32A198F67E92F574E6C90BC984A94_1720190889214_Screenshot%2B2024-07-05%2Bat%2B15.45.07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpaper-attachments.dropboxusercontent.com%2Fs_DF2D4B371372CCF8FEECCA2C9FBE5B56C3A32A198F67E92F574E6C90BC984A94_1720190889214_Screenshot%2B2024-07-05%2Bat%2B15.45.07.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  In summary
&lt;/h2&gt;

&lt;p&gt;Throughout this article, you have walked through using Bash to manage users, groups, and their passwords. You have also logged all the actions into a file so that you and anybody else can look back on them and troubleshoot. But this is only the tip of the iceberg when it comes to Bash's capabilities.&lt;/p&gt;

&lt;p&gt;With Bash, you can automate a wide variety of administrative tasks, streamline your workflows, and enhance the efficiency of your system management. Whether it's scheduling regular maintenance tasks with cron jobs, managing system updates, or monitoring system performance, Bash provides a powerful toolset for system administrators.&lt;/p&gt;

&lt;p&gt;So stay curious and check out these &lt;a href="https://linuxcommand.org/lc3_resources.php" rel="noopener noreferrer"&gt;Bash resources&lt;/a&gt; to learn more about scripting with Bash for Linux systems.&lt;/p&gt;

</description>
      <category>bash</category>
      <category>linux</category>
      <category>devops</category>
      <category>sysops</category>
    </item>
    <item>
      <title>Deploying a static website with AWS EC2 using Nginx</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Mon, 01 Jul 2024 13:26:55 +0000</pubDate>
      <link>https://forem.com/amaraiheanacho/deploying-a-static-website-with-aws-ec2-using-nginx-2pc3</link>
      <guid>https://forem.com/amaraiheanacho/deploying-a-static-website-with-aws-ec2-using-nginx-2pc3</guid>
      <description>&lt;p&gt;&lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;Amazon Web Services (AWS)&lt;/a&gt; is one of the most popular cloud computing platforms worldwide. It offers a comprehensive suite of services that enable developers and businesses to build, deploy, and scale applications with ease. &lt;/p&gt;

&lt;p&gt;One of the key advantages of AWS is its flexibility, allowing users to choose from a variety of services to suit their specific needs. This guide will focus on the AWS EC2 service, teaching you how to leverage its virtual machine capabilities to create your own server environment specifically designed to host your static website using the efficient and lightweight Nginx web server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To get started with this tutorial, you must have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;An AWS account&lt;/strong&gt;: If you don't have one already, you can create a free tier account on the &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS website&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Basic understanding of HTML and CSS&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Creating the Static website
&lt;/h2&gt;

&lt;p&gt;To start this project, you need to create the static website you want to serve with your NGINX server. This tutorial uses a simple HTML and CSS webpage.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;

&lt;span class="cp"&gt;&amp;lt;!DOCTYPE html&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;html&lt;/span&gt; &lt;span class="na"&gt;lang=&lt;/span&gt;&lt;span class="s"&gt;"en"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;head&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;charset=&lt;/span&gt;&lt;span class="s"&gt;"UTF-8"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;meta&lt;/span&gt; &lt;span class="na"&gt;name=&lt;/span&gt;&lt;span class="s"&gt;"viewport"&lt;/span&gt; &lt;span class="na"&gt;content=&lt;/span&gt;&lt;span class="s"&gt;"width=device-width, initial-scale=1.0"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;title&amp;gt;&lt;/span&gt;My Information&lt;span class="nt"&gt;&amp;lt;/title&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;style&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;body&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nl"&gt;font-family&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Arial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;sans-serif&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;text-align&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;center&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nc"&gt;.container&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nl"&gt;max-width&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;margin&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="nb"&gt;auto&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;border&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1px&lt;/span&gt; &lt;span class="nb"&gt;solid&lt;/span&gt; &lt;span class="m"&gt;#ccc&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;padding&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;20px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;border-radius&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8px&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nl"&gt;box-shadow&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="m"&gt;10px&lt;/span&gt; &lt;span class="n"&gt;rgba&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="m"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nt"&gt;h1&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;#333&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/style&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/head&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;body&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;div&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"container"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Hello there, its great that you are checking out my article!&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;    
    &lt;span class="nt"&gt;&amp;lt;/div&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/html&amp;gt;&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfntwJAUEoVN99FZK6qIvZXAXonAMFhrHL_qje7kyoBOZ39WwqzY-sS8aQ94js9XDNQwH38WyP_MGYSL1Vp6BoN3Q77WUu3-64EqinEHthZ2RBxVm2By5wAyejQXc0Q1I_4V6JIeWA0GfZlKgEKCcZnt_yU%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfntwJAUEoVN99FZK6qIvZXAXonAMFhrHL_qje7kyoBOZ39WwqzY-sS8aQ94js9XDNQwH38WyP_MGYSL1Vp6BoN3Q77WUu3-64EqinEHthZ2RBxVm2By5wAyejQXc0Q1I_4V6JIeWA0GfZlKgEKCcZnt_yU%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating an EC2 instance in your AWS console
&lt;/h2&gt;

&lt;p&gt;An &lt;a href="https://aws.amazon.com/ec2/" rel="noopener noreferrer"&gt;Amazon Web Service EC2 instance&lt;/a&gt; is one of the most popular AWS services worldwide.  It is a virtual server in the AWS Cloud that provides the computing resources your applications and services need to run, such as CPU, memory, storage, and networking.&lt;/p&gt;

&lt;p&gt;To create an EC2 instance, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to your &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS console&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, click the search icon at the top, type in EC2, and select it from the menu. This action redirects you to the &lt;strong&gt;Resources&lt;/strong&gt; page. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXdIS8nHuT4JKjd8AgX4S3I20Wd1G6IcT-WHucso08HRWDaLh-msIhFkTIBePgGppmhSGhZ5gdP9B33y26A5qVd6ljgVtgRAaMNNS9lp4okr3EXCR6OHD0-Ni-8Qyxycr_0NAQ4dkr1x9TDQFpQnwScIaFs%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXdIS8nHuT4JKjd8AgX4S3I20Wd1G6IcT-WHucso08HRWDaLh-msIhFkTIBePgGppmhSGhZ5gdP9B33y26A5qVd6ljgVtgRAaMNNS9lp4okr3EXCR6OHD0-Ni-8Qyxycr_0NAQ4dkr1x9TDQFpQnwScIaFs%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Resources page, click on &lt;strong&gt;Instances (running)&lt;/strong&gt; to get redirected to the &lt;strong&gt;Instances&lt;/strong&gt; page on your AWS console.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;On the &lt;strong&gt;Instances&lt;/strong&gt; page, click the &lt;strong&gt;Launch Instance&lt;/strong&gt; button and configure your instance as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Name and Tags:&lt;/strong&gt; Give your EC2 instance a recognizable name.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Machine Image (AMI):&lt;/strong&gt; An AMI is a template used to create virtual servers in Amazon EC2. It contains the operating system, software packages, and configurations needed for launching instances.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This tutorial will use the default Ubuntu AMI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfy4aJa1S9XwPEawicHb8TituCAt3oPo_S7MBEwIN8sNw37K6scc-TlmjPc3W46eDEN7esuaX9vvYH-ViTfeWqM7MQy33IfYLrsYrqonCCnbHWwk-kzoJzoP0eDub1NnK8RxyovaU8Ff4LbIru-ojKYoqby%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfy4aJa1S9XwPEawicHb8TituCAt3oPo_S7MBEwIN8sNw37K6scc-TlmjPc3W46eDEN7esuaX9vvYH-ViTfeWqM7MQy33IfYLrsYrqonCCnbHWwk-kzoJzoP0eDub1NnK8RxyovaU8Ff4LbIru-ojKYoqby%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Instance Type:&lt;/strong&gt; Select an instance type. This tutorial will use the default t2.micro instance type, which offers 1 vCPU and 1 GiB memory. If you prefer a different instance type with greater system capabilities, simply click the dropdown and choose from the various available options.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXeTGujte1pljMbmewumiIR4i45vjCSws5Sy3sRMqat-buQD4mbZFTGhvY_HXOoiWTin8Kr-W5avrqNDCRmrSbhzyz3cvqO57oMDDb9y2S9casxziFMPXz97vIPrtY9Be5sPCBXAuLZoyxwSiix7buJL45yc%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXeTGujte1pljMbmewumiIR4i45vjCSws5Sy3sRMqat-buQD4mbZFTGhvY_HXOoiWTin8Kr-W5avrqNDCRmrSbhzyz3cvqO57oMDDb9y2S9casxziFMPXz97vIPrtY9Be5sPCBXAuLZoyxwSiix7buJL45yc%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Key Pair (Login):&lt;/strong&gt; Key pairs provide a secure method for connecting to your instance. To create a new key pair, click the &lt;strong&gt;Create new key pair&lt;/strong&gt; link, enter a name for the key pair, and click &lt;strong&gt;Create key pair&lt;/strong&gt; once more to download your newly created key pair.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfjCFpVd2g5mYLnboeeOU4XFnXK8HLBxkD-1iSq16KGaDqZsbPHtSj-hT3N_nuAghpTtqUlakcx1EXA9nxCgFxKEIWnwnvE_aBB1bcShp9h5Yr18zYmvAA5am_pLAwrn-hGMFwELvO3thSJ_LM0hazVUzE%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfjCFpVd2g5mYLnboeeOU4XFnXK8HLBxkD-1iSq16KGaDqZsbPHtSj-hT3N_nuAghpTtqUlakcx1EXA9nxCgFxKEIWnwnvE_aBB1bcShp9h5Yr18zYmvAA5am_pLAwrn-hGMFwELvO3thSJ_LM0hazVUzE%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXdMhtEixG6JdQYw4yDZius4MdVlNvX__vF-XgtxkW6Vso7rvb7KQe-SXulyz9D6ZzR5q1eUlTG-0goy5aC8hniUZbYpRnA2J6vWiCowljut9b7bFTS8ZTt76XC1WTSOC982id9O3xeTQcPZUri8AT8r26nL%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXdMhtEixG6JdQYw4yDZius4MdVlNvX__vF-XgtxkW6Vso7rvb7KQe-SXulyz9D6ZzR5q1eUlTG-0goy5aC8hniUZbYpRnA2J6vWiCowljut9b7bFTS8ZTt76XC1WTSOC982id9O3xeTQcPZUri8AT8r26nL%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Network Settings:&lt;/strong&gt; Network settings define the firewall rules that limit or allow website traffic to your instance. In this section, check the boxes for &lt;strong&gt;Allow SSH traffic from Anywhere&lt;/strong&gt;, &lt;strong&gt;Allow HTTPS traffic from the Internet&lt;/strong&gt;, and &lt;strong&gt;Allow HTTP traffic from the Internet.&lt;/strong&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXdmCSnlA1lP0RRHGwqaiGQXLyuf3JAwgprmTSGfacnaPdZ3gqlU7pE0NBmuvvG7hgYJ01AFCVhAZL-pN-kmgIxzrB6t64IDkQROxFdrFhWq38vrcEWpWw21LZEu-GYf4b46Q2eczP70icJWDxxznn2ZpCQ9%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;5. &lt;strong&gt;Launch your Instance:&lt;/strong&gt; Once you've reviewed your configuration, click the &lt;strong&gt;Launch Instances&lt;/strong&gt; button to create your virtual machine on AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Logging into your AWS virtual machine&lt;/strong&gt;&lt;br&gt;
After creating your AWS EC2 instance, click on the instance ID to navigate to the Instances page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfYZKm-HVHSSRaPwpfTxuRzngVOrL9g2eMrg0Sz9tcKPXfzHQYW3k8c1LEmknxE2TkqDwruwhqFwT1nwwnO8BtSe6LpMaicMfMi6SpwX9D6li1eZp0Y9as6kYxhbejyS8Nql9VzS_102Wn00jsO3azc1BrO%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfYZKm-HVHSSRaPwpfTxuRzngVOrL9g2eMrg0Sz9tcKPXfzHQYW3k8c1LEmknxE2TkqDwruwhqFwT1nwwnO8BtSe6LpMaicMfMi6SpwX9D6li1eZp0Y9as6kYxhbejyS8Nql9VzS_102Wn00jsO3azc1BrO%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On this page, select your instance by checking the checkbox next to it, then click the &lt;strong&gt;Connect&lt;/strong&gt; button at the top to initiate the connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfPqwpTRYueFZ0P28u40ENa7ACITUUUXwMWlY9R23TgM14VnxeTL-6LQ1V8UzKB0YULzjFOYA7LMbOiW0u6hjzfsMGaI6lN3vZAAJO6P1eOdGL-HIQfjp2_VWLhQ3cv4EsuQL_vLcAhxgMAYPbHYw9aCdkJ%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfPqwpTRYueFZ0P28u40ENa7ACITUUUXwMWlY9R23TgM14VnxeTL-6LQ1V8UzKB0YULzjFOYA7LMbOiW0u6hjzfsMGaI6lN3vZAAJO6P1eOdGL-HIQfjp2_VWLhQ3cv4EsuQL_vLcAhxgMAYPbHYw9aCdkJ%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This action will take you to the &lt;strong&gt;Connect to instance&lt;/strong&gt; page, where you should click the &lt;strong&gt;Connect&lt;/strong&gt; button to establish a connection to your EC2 instance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXc5KxK0BvbP5_-arRRjOEP21ccJ7JBOFY3E_6KDuNdAx0MjNigFwJ3P7gGl1WAkbvIPRvkjOkyYsnO7F7zCavg-W31WmqsDniFglmn4VLANVL40xEQbtvrrWnZP7bVxMAtMgV4KlSIeZ0m-X26amJzr3Pc%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXc5KxK0BvbP5_-arRRjOEP21ccJ7JBOFY3E_6KDuNdAx0MjNigFwJ3P7gGl1WAkbvIPRvkjOkyYsnO7F7zCavg-W31WmqsDniFglmn4VLANVL40xEQbtvrrWnZP7bVxMAtMgV4KlSIeZ0m-X26amJzr3Pc%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once connected, the EC2 instance will open, and you will see the Ubuntu terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXeLx_lMB_OqokAsV30RYo3z_7YiLp1vvuwsg0K0duEF3DAur4D8koEPtcPV8F7QqBI7AuDX5vL-Dnma_HFt787biCwRGq5JX7P-CE-slkUwJvSWgUUEqpGx4__qqssi_enbHJLkfNpKTueJQpbTRAhU4NYX%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXeLx_lMB_OqokAsV30RYo3z_7YiLp1vvuwsg0K0duEF3DAur4D8koEPtcPV8F7QqBI7AuDX5vL-Dnma_HFt787biCwRGq5JX7P-CE-slkUwJvSWgUUEqpGx4__qqssi_enbHJLkfNpKTueJQpbTRAhU4NYX%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that your EC2 instance is all set up, the next step is to configure the NGINX web server to handle web requests and serve your static website.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing the Nginx server
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://nginx.org/en/" rel="noopener noreferrer"&gt;Nginx&lt;/a&gt; is an open-source web server designed to handle HTTP requests. It processes requests from web browsers and delivers the corresponding web content.&lt;br&gt;
Nginx excels at efficiently delivering static content; it can handle many connections at once, making it suitable for high-traffic websites. Additionally, Nginx works well as a reverse proxy, forwarding requests to other servers or services running on your EC2 instance. To learn more, please check out the &lt;a href="https://nginx.org/en/docs/" rel="noopener noreferrer"&gt;official Nginx documentation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To set up an Nginx server on your EC2 instance, run the following commands in your virtual machine's Ubuntu terminal:&lt;/p&gt;

&lt;p&gt;1. Run this command to switch to the root user and gain the elevated privileges needed to install Nginx:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

sudo -i


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;2. After logging in as the root user, use these commands to update the package index and install NGINX on your system:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

apt-get update
apt-get install nginx


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;3. Next, check the status of the Nginx service with this command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

service nginx status


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Running the &lt;code&gt;service nginx status&lt;/code&gt; command provides information about the status of the NGINX service, including whether it is running or stopped. If the NGINX server is running correctly, you will see the corresponding status in your terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXesbbj2KIVCpeMIbQX48Fu4lBxuIaGMKQW35KrEYH74vvC906KWeDq0T9hY7zW_MAihT-Ha8rnLoAHYvIS2ACdA6A1tj9LuBRJYaBMf-2G5uHBWljLuizb2zmmkT_E9sCftpqisUQ2-pa5YL7sLDXpcta79%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXesbbj2KIVCpeMIbQX48Fu4lBxuIaGMKQW35KrEYH74vvC906KWeDq0T9hY7zW_MAihT-Ha8rnLoAHYvIS2ACdA6A1tj9LuBRJYaBMf-2G5uHBWljLuizb2zmmkT_E9sCftpqisUQ2-pa5YL7sLDXpcta79%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4. To view the default webpage hosted by the NGINX server, copy the Public IP address of your EC2 instance and paste it into your web browser. &lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXe8Qn7kSd07ZlVuhvo4Q6Je4jmsUPtaa31aTraw8TO-uFD63KMgxd7dCa3TXe2B7stneEn7e19AT5FtTLvrQocna3wjMXwrtl31IsLX0HGHVjtjeUyOsmoLJeojUgoyyoI-zFJch-DV4K9TIVbE0Ui0lHAn%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXe8Qn7kSd07ZlVuhvo4Q6Je4jmsUPtaa31aTraw8TO-uFD63KMgxd7dCa3TXe2B7stneEn7e19AT5FtTLvrQocna3wjMXwrtl31IsLX0HGHVjtjeUyOsmoLJeojUgoyyoI-zFJch-DV4K9TIVbE0Ui0lHAn%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should see a static website like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXeA-xp2ncTAwGKudcSORYTbbIvql-nUPf6NyNaHGEgnXm24bAIRYn_1JD_JhyBlkj6FIoUFxpe5AWYuzFEhcW-TorW7O0QBsv3SHm-SHiXRMXC-SwWE-y5LD1CEetCOXqVlZMg3M3SrfcDBuRrKtE2MrTX9%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXeA-xp2ncTAwGKudcSORYTbbIvql-nUPf6NyNaHGEgnXm24bAIRYn_1JD_JhyBlkj6FIoUFxpe5AWYuzFEhcW-TorW7O0QBsv3SHm-SHiXRMXC-SwWE-y5LD1CEetCOXqVlZMg3M3SrfcDBuRrKtE2MrTX9%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Serving the static website
&lt;/h2&gt;

&lt;p&gt;To serve your static website, replace the HTML code on the index page of the NGINX server with your own website’s HTML code.&lt;br&gt;
To find the index webpage in the NGINX server, navigate to the &lt;code&gt;/var/www/html&lt;/code&gt; directory and list the files located there using this command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cd /var/www/html/
ls


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfue6OTBeCm2VL5A4yCrKZxiS9QXkE8XQw9LQ-06ggH_Xil4uEy15_RcYJpLVE7RR2nBcGw0XSP2AoO0miU3GetugZ3RIrZ5Js_X8Vxi1QPYRKXz0TTYKMgltpAtUWd4hkfIDsQjiLMfQxnM_cfMFVmEnoA%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXfue6OTBeCm2VL5A4yCrKZxiS9QXkE8XQw9LQ-06ggH_Xil4uEy15_RcYJpLVE7RR2nBcGw0XSP2AoO0miU3GetugZ3RIrZ5Js_X8Vxi1QPYRKXz0TTYKMgltpAtUWd4hkfIDsQjiLMfQxnM_cfMFVmEnoA%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this directory, you will see the &lt;code&gt;index.nginx-debian.html&lt;/code&gt; file, which is the index page of the NGINX server.  To view the content of this file, run the following command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

cat index.nginx-debian.html


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXe1DMkQTzae0fmaMrvrvCuOAiCmOmLMMPfwW2Gq1XL095_l4gr_v0IUweL_eyRfLjdMjNdSxch6v1GtJT6xSIXgbWSLaQjCXfcqGVLcnMks9WF_8F13zSRHth4Ec_NAWCxmkRuK7NOGktBpDfyfG83NPnsz%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXe1DMkQTzae0fmaMrvrvCuOAiCmOmLMMPfwW2Gq1XL095_l4gr_v0IUweL_eyRfLjdMjNdSxch6v1GtJT6xSIXgbWSLaQjCXfcqGVLcnMks9WF_8F13zSRHth4Ec_NAWCxmkRuK7NOGktBpDfyfG83NPnsz%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to replace the HTML code in the &lt;code&gt;index.nginx-debian.html&lt;/code&gt; file with the HTML code of your static website.&lt;/p&gt;

&lt;p&gt;To do this, open the &lt;code&gt;index.nginx-debian.html&lt;/code&gt; file in an editor of your choice. This guide will use the nano editor:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

nano index.nginx-debian.html


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next, use the arrow keys to navigate and edit the file. When you are done with your edits, press Ctrl + O (the letter 'O', not zero) to save the file. Nano will then prompt you for the file name, &lt;code&gt;index.nginx-debian.html&lt;/code&gt;. Press Enter to confirm and save the file.&lt;/p&gt;

&lt;p&gt;To exit Nano, press Ctrl + X.&lt;/p&gt;
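
&lt;p&gt;If you prefer a non-interactive alternative to nano, you can overwrite the index page in a single command by piping new content through &lt;code&gt;tee&lt;/code&gt; (on the server itself you would typically run it as &lt;code&gt;sudo tee&lt;/code&gt;). A sketch with a local stand-in file and placeholder markup rather than your real HTML:&lt;/p&gt;

```shell
# Local stand-in for /var/www/html/index.nginx-debian.html,
# which requires root privileges to modify on the server
INDEX_FILE="index.nginx-debian.html"

# tee writes its input to the file and echoes it to the terminal,
# so you see exactly what was saved
printf '%s\n' 'hypothetical replacement markup' | tee "$INDEX_FILE"
```

&lt;p&gt;This approach is handy in deployment scripts, where opening an interactive editor is not an option.&lt;/p&gt;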

&lt;p&gt;Refresh your Public IP page in your web browser to see your static website.&lt;br&gt;
Congratulations, you have successfully deployed your static website on an AWS EC2 instance using NGINX.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXetL5fCi3oaDLeiSEs8Q0BQZyL9HD2i6m2r3pBFRC-hlBZI5ZbLUGkkaydCSkeqFiKcdiuukWRFG5vOJxdRs3QrI2VwxZOaearimzsEUpU2HcyhWtobRyh1dO9var5ET29jXNxChTBeX3-H-mg33-8ZQkY%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-us.googleusercontent.com%2Fdocsz%2FAD_4nXetL5fCi3oaDLeiSEs8Q0BQZyL9HD2i6m2r3pBFRC-hlBZI5ZbLUGkkaydCSkeqFiKcdiuukWRFG5vOJxdRs3QrI2VwxZOaearimzsEUpU2HcyhWtobRyh1dO9var5ET29jXNxChTBeX3-H-mg33-8ZQkY%3Fkey%3Dns8a3bbSBgFGW8-8bXzdug"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  In summary
&lt;/h2&gt;

&lt;p&gt;This article introduces AWS EC2 instances, one of Amazon Web Services' most popular offerings. It guides you through creating and configuring your own EC2 instance, connecting to it, and setting up NGINX to serve your static website. This hands-on approach equips you with the fundamentals of web hosting and cloud infrastructure management.&lt;/p&gt;

&lt;p&gt;However, this is just the beginning. AWS offers a wide range of services that are worth exploring. As you follow my journey, we will cover some of these services. Additionally, you can dive deeper into NGINX for more complex configuration options and learn about security best practices for managing EC2 instances.&lt;/p&gt;

&lt;p&gt;Happy coding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>nginx</category>
      <category>css</category>
      <category>ec2</category>
    </item>
    <item>
      <title>Observability with Grafana and Eyer</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Sat, 22 Jun 2024 10:57:48 +0000</pubDate>
      <link>https://forem.com/eyer-ai/observability-with-grafana-and-eyer-5de3</link>
      <guid>https://forem.com/eyer-ai/observability-with-grafana-and-eyer-5de3</guid>
<description>&lt;p&gt;Modern infrastructure is becoming increasingly complex, with microservices, cloud deployments, and distributed architectures making it challenging to understand how everything functions together. This complexity has created a pressing need for the visibility that observability promises.&lt;/p&gt;

&lt;p&gt;Observability provides a comprehensive view of your system, allowing you to identify issues before they escalate. Tools like Eyer play a crucial role in achieving it. Eyer gathers and analyzes system data, revealing anomalies, affected nodes, and potential future problems. With this insight, you can quickly pinpoint issues, leading to less downtime and a smoother user experience.&lt;/p&gt;

&lt;p&gt;However, the raw data from Eyer might be difficult for non-technical individuals or teams to understand. This is where &lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt; comes in. As a powerful visualization tool, Grafana transforms this data into clear and insightful dashboards, making it accessible to everyone who needs it.&lt;/p&gt;

&lt;p&gt;This article explores Eyer, its importance in modern observability discussions, and the added value of integrating it with Grafana.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Eyer and its capabilities
&lt;/h2&gt;

&lt;p&gt;Eyer is an AI-powered observability tool that provides insights into your Boomi integrations. &lt;a href="http://boomi.com/"&gt;Boomi&lt;/a&gt; has become an integration superpower, uniting diverse applications and data sources with its simple and intuitive drag-and-drop design. With Eyer, you can take that impeccable user experience to the next level.&lt;/p&gt;

&lt;p&gt;By &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portal/1/article/30015491"&gt;installing and using the Eyer connector&lt;/a&gt;, you can collect data from your Boomi integrations, send it to the Eyer machine learning pipeline, and gain insights into what's wrong with your Boomi process. &lt;/p&gt;

&lt;p&gt;The machine learning pipeline learns your Boomi Atom’s behavior and establishes baselines for what normal looks like. Any significant and prolonged deviation from those baselines is flagged. &lt;/p&gt;
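&lt;p&gt;To make the idea of baselines concrete, here is a toy sketch of deviation-based flagging. It is purely illustrative: Eyer's actual pipeline uses machine learning rather than a fixed standard-deviation rule, and the numbers below are made up:&lt;/p&gt;

```python
# Toy illustration of baseline-based anomaly flagging (NOT Eyer's actual
# machine-learning pipeline): build a mean/stddev baseline from history,
# then flag samples that deviate too far from it.
from statistics import mean, stdev

history = [58.0, 61.5, 60.2, 63.1, 59.8, 62.4]   # past memory usage (%)
baseline, spread = mean(history), stdev(history)

def is_anomalous(sample, threshold=3.0):
    """Flag samples more than `threshold` standard deviations from baseline."""
    return abs(sample - baseline) > threshold * spread

print(is_anomalous(60.5))   # a normal reading: False
print(is_anomalous(91.4))   # far above baseline: True
```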

&lt;p&gt;For example, if your Boomi Atom is using more memory than normal or CPU utilization is higher than usual, the Eyer connector will send you a JSON alert. You can choose to receive this alert conveniently via email or even as a file saved directly on your host machine, thanks to the flexibility of Boomi's connectors.&lt;/p&gt;

&lt;p&gt;JSON format alerts are advantageous for many reasons: they are structured, human-readable, lightweight, language-agnostic, and easily integrated into various systems for automated processing and response. However, JSON alerts have no inherent visualization capabilities; tools like Grafana fill that gap by visualizing the data over time.&lt;/p&gt;
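&lt;p&gt;As a quick illustration of that machine-friendliness, the snippet below writes a made-up alert to disk and extracts a couple of fields with one line of Python. The field names are assumptions for the example, not Eyer's actual alert schema:&lt;/p&gt;

```shell
# Illustrative only: the field names here are assumptions, not the real
# Eyer alert schema. First, save a sample alert to disk...
printf '%s\n' '{"severity": "warning", "metric": "memory_used_percent", "value": 91.4, "baseline": 62.0}' > sample-alert.json

# ...then let any scriptable tool process it -- here, one line of Python:
python3 -c "import json; a = json.load(open('sample-alert.json')); print(a['severity'], 'on', a['metric'], '- deviation', round(a['value'] - a['baseline'], 1))"
# prints: warning on memory_used_percent - deviation 29.4
```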

&lt;h2&gt;
  
  
  Grafana: The solution to all your visualization problems
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt; is an open-source analytics and interactive visualization web application tool used to monitor application performance. This section explores how Grafana allows Eyer users to query, visualize, and understand their JSON alerts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of the Grafana integration with Eyer
&lt;/h3&gt;

&lt;p&gt;By integrating Grafana with Eyer, developers have access to powerful visualization capabilities. Some key benefits of this integration include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visualization&lt;/strong&gt;: If you only remember one thing from this article, remember that Grafana is the king of data visualization. This is invaluable for democratizing data: visual representation allows users to quickly understand the data and spot patterns and trends that might be missed when looking at raw JSON. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Historical analysis&lt;/strong&gt;: Another powerful feature of Grafana is its ability to store and visualize historical alert data.  Imagine trying to understand a month's worth of system activity by manually reviewing individual JSON alerts; this gets tiring quickly.  Grafana offers a much better solution.  By aggregating all the data for a specific metric into a single graph, you can easily see trends over days, weeks, or months.  This historical view allows you to identify potential issues, forecast future resource needs, and gain insights into long-term performance patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Collaboration and sharing&lt;/strong&gt;: Grafana makes it easy to share dashboards and visualizations with team members. That way, more people can watch and understand what's happening in your systems. These shared insights make it easier for teams to work effectively to address and resolve issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Open-source, extensible&lt;/strong&gt;: As an open-source tool, Grafana is free to use and allows users to access and modify the source code, enabling customization to meet specific needs. Its extensibility is one of its core strengths, with a vast ecosystem of plugins available that extend its functionality, including integrations with a wide range of data sources, custom visualizations, and alerting mechanisms. This flexibility makes Grafana adaptable to various use cases and industries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Large community support:&lt;/strong&gt;  Grafana benefits from a large and active community of users and developers. This community support is invaluable, providing a wealth of shared knowledge, tutorials, forums, and plugins. The collaborative nature of the community ensures continuous improvements and updates, keeping Grafana at the forefront of monitoring and visualization tools. This robust support network also means that users can easily find help and resources to solve problems and optimize their platform use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrating Eyer with Grafana:&lt;/strong&gt; In addition to being easy to use, one of the things that makes Eyer stand out is its clear documentation. To learn how to integrate Eyer with Grafana, check out the &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/topic/4a74722a-1bf5-46d8-8b40-6352ecd62cfb"&gt;Grafana section&lt;/a&gt; in the &lt;a href="https://customer.support.eyer.ai/servicedesk/customer/portals"&gt;Eyer documentation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summing it up
&lt;/h2&gt;

&lt;p&gt;The more complex modern infrastructure becomes, the more visibility you need to ensure it does not collapse under its own complexity. Observability tools like Eyer give you this visibility. With Eyer acting as a watchdog over your processes and Boomi integrations, you can sleep well at night knowing that if something is about to go wrong, or does, you will be alerted immediately.&lt;/p&gt;

&lt;p&gt;Eyer's strengths are elevated even further by integrating visualization tools like Grafana, which translate Eyer's JSON alerts into clear dashboards. These dashboards let you see, at a glance, the health of your Boomi integrations. &lt;/p&gt;

&lt;p&gt;With Grafana visualizations, you can quickly identify trends, predict potential problems, and troubleshoot issues. In short, Eyer and Grafana working together provide you with the comprehensive visibility you need to ensure the smooth operation of your complex modern infrastructure, giving you peace of mind and allowing you to focus on more strategic initiatives.&lt;/p&gt;

&lt;p&gt;To gain AI-powered insights and visualization for your Boomi integration, check the &lt;a href="https://eyer.ai/"&gt;Eyer website&lt;/a&gt; and join the &lt;a href="https://discord.gg/gjTfhHTvBt"&gt;Discord community&lt;/a&gt; for more information and support.&lt;/p&gt;

</description>
      <category>grafana</category>
      <category>observability</category>
      <category>aiops</category>
      <category>ai</category>
    </item>
    <item>
      <title>Giving Back to the Boomi Community: How Your Contributions Make a Difference</title>
      <dc:creator>Amarachi Iheanacho</dc:creator>
      <pubDate>Fri, 07 Jun 2024 20:11:02 +0000</pubDate>
      <link>https://forem.com/eyer-ai/giving-back-to-the-boomi-community-how-your-contributions-make-a-difference-3g4a</link>
      <guid>https://forem.com/eyer-ai/giving-back-to-the-boomi-community-how-your-contributions-make-a-difference-3g4a</guid>
<description>&lt;p&gt;Everybody wants to be part of a community—a group of people who validate your thoughts and feelings and help you through difficult situations. In the &lt;a href="https://discord.gg/SyTRyWpbgq"&gt;Boomi community by Eyer&lt;/a&gt;, that group is made up of Boomi engineers. &lt;/p&gt;

&lt;p&gt;The community provides a sense of belonging and support that transcends individual achievements,  fosters a culture of collaboration, and forges friendships that can last a lifetime. However, as with any partnership or relationship of any substance, the community and its people thrive on the principle of give and take. &lt;/p&gt;

&lt;p&gt;In this article, you will learn what giving back to the Boomi community can do for you and, more importantly, the best way to give back.&lt;/p&gt;

&lt;h2&gt;
  
  
  What do you gain from giving back to the Boomi community?
&lt;/h2&gt;

&lt;p&gt;Helping your community is undeniably a good thing, but sometimes that “warm fuzzy feeling" isn't enough motivation for everyone. The good news is that giving back offers a ton of benefits beyond just feeling good. Here are some of the advantages you can gain by getting involved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Skills development&lt;/strong&gt;: Helping out community members is a guaranteed way to enhance your Boomi integration skills. It's a fantastic opportunity to learn new techniques, refine existing skills, and apply your experience to solve interesting problems in new ways.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Networking opportunities&lt;/strong&gt;: Giving back and volunteering are amazing ways to meet like-minded people who share your passion for Boomi and mentorship. These activities help you build valuable connections that can benefit you personally and professionally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build a professional brand&lt;/strong&gt;: By consistently sharing your Boomi knowledge or volunteering to help others, you'll rapidly establish yourself as the go-to expert for all things Boomi. That reputation brings opportunities, inquiries, and much more, and the best part is that it extends beyond the Boomi ecosystem!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Leaving a positive impact&lt;/strong&gt;: Contributing to the Boomi community is a powerful way to leave a positive mark. Sharing your knowledge and skills empowers others to grow and achieve their goals. Your insights can inspire and uplift fellow developers, creating a supportive and innovative space for everyone. This not only strengthens the entire community but also solidifies your reputation as a valuable and generous member.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What are the different ways to contribute to the Boomi community?
&lt;/h2&gt;

&lt;p&gt;While how you contribute to a community can vary based on its needs (some might value open-source contributions or leadership roles), here are some universal ways to give back, especially within the Boomi community by Eyer:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Time commitment:&lt;/strong&gt; Actively participate in discussions within the Boomi community by Eyer. Provide insightful solutions and clear explanations to Boomi developers.&lt;/p&gt;

&lt;p&gt;Volunteer at Boomi-sponsored, Eyer-sponsored, and Boomi-related events in your local tech community. Lend a hand with logistics and setup or even lead breakout sessions on specific Boomi topics.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skill-based contributions:&lt;/strong&gt; One of the most efficient ways to become a thought leader in the Boomi community is to share your expertise. You can create blog posts, tutorials, or short guides addressing common integration challenges other Boomi developers face. &lt;/p&gt;

&lt;p&gt;Guide and support fellow Boomi users, particularly those new to the platform. Offer advice, answer questions on the Boomi community forum or the &lt;a href="https://discord.gg/SyTRyWpbgq"&gt;Boomi community by Eyer discord&lt;/a&gt;, and help them navigate the integration world.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Acts of kindness (within the Boomiverse):&lt;/strong&gt; Finally, if you are short on time and cannot commit as much as you would like, you can still contribute by recognizing other members' valuable contributions: upvote their responses in &lt;a href="https://community.boomi.com/s/"&gt;forum discussions&lt;/a&gt;. This helps elevate quality content and ensures others find the information they need.&lt;/p&gt;

&lt;p&gt;Extend a warm welcome to new members in the forums or online events. Offer to answer basic questions and help them navigate the Boomiverse’s resources.&lt;/p&gt;

&lt;p&gt;Additionally, when someone goes above and beyond to help you, acknowledge their effort with a positive comment or a "thank you."&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping up
&lt;/h2&gt;

&lt;p&gt;Hopefully, this article was enough to incentivize you to contribute more to your Boomi communities. Helping Boomi developers out, volunteering, and showing gratitude in the smallest ways are some of the best ways to provide value. &lt;/p&gt;

&lt;p&gt;Sure, the idea is to give without expecting anything in return, but you can think of the rewards as a natural consequence of doing good. People will trust your Boomi expertise more because they've seen it in action and benefited from it. They can vouch for your character because you chose to volunteer when you didn't have to. This is an amazing position in a world run by referrals and recommendations.&lt;/p&gt;

&lt;p&gt;So, take the first steps, join the &lt;a href="https://discord.gg/SyTRyWpbgq"&gt;Boomi community by Eyer&lt;/a&gt;, and start your journey today!&lt;/p&gt;

</description>
      <category>community</category>
      <category>boomi</category>
    </item>
  </channel>
</rss>
