<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Justine Devasia</title>
    <description>The latest articles on Forem by Justine Devasia (@justinepdevasia).</description>
    <link>https://forem.com/justinepdevasia</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F259760%2Fed92d002-69c5-4976-8db3-6acebb4e13f3.jpg</url>
      <title>Forem: Justine Devasia</title>
      <link>https://forem.com/justinepdevasia</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/justinepdevasia"/>
    <language>en</language>
    <item>
      <title>Building Production-Ready Nomad Clusters on AWS with Terraform</title>
      <dc:creator>Justine Devasia</dc:creator>
      <pubDate>Fri, 11 Jul 2025 22:23:21 +0000</pubDate>
      <link>https://forem.com/justinepdevasia/building-production-ready-nomad-clusters-on-aws-with-terraform-14e</link>
      <guid>https://forem.com/justinepdevasia/building-production-ready-nomad-clusters-on-aws-with-terraform-14e</guid>
      <description>&lt;p&gt;Setting up a proper production Nomad cluster on AWS involves significant infrastructure complexity. After implementing this setup across multiple projects, I've created a reusable Terraform infrastructure for teams with existing AWS and infrastructure automation experience.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;: This requires solid experience with AWS, Terraform, and preferably some Nomad knowledge. The infrastructure is designed for teams who understand these tools but want to avoid rebuilding service discovery and cluster management from scratch.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What's Included
&lt;/h2&gt;

&lt;p&gt;This infrastructure provides a complete AWS setup for running Nomad clusters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multi-AZ VPC&lt;/strong&gt; with proper subnet design&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consul cluster&lt;/strong&gt; for service discovery and configuration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nomad servers&lt;/strong&gt; with auto-scaling groups&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Specialized client pools&lt;/strong&gt; for different workload types&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Load Balancers&lt;/strong&gt; with SSL termination&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security groups&lt;/strong&gt; following least-privilege principles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 + CloudFront&lt;/strong&gt; for static asset delivery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD pipeline&lt;/strong&gt; configurations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Choose Nomad?
&lt;/h2&gt;

&lt;p&gt;Kubernetes excels with dedicated platform engineering teams, but Nomad offers a simpler alternative for smaller teams or when operational complexity needs to be minimized. Nomad provides straightforward container orchestration with a significantly reduced learning curve.&lt;/p&gt;

&lt;p&gt;The job specification syntax is minimal and readable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;job&lt;/span&gt; &lt;span class="s2"&gt;"my-app"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;datacenters&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"dc1"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="s2"&gt;"web"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;

    &lt;span class="nx"&gt;task&lt;/span&gt; &lt;span class="s2"&gt;"app"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;driver&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"docker"&lt;/span&gt;
      &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my-app:latest"&lt;/span&gt;
        &lt;span class="nx"&gt;ports&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach eliminates the complexity of services, ingresses, config maps, and other Kubernetes abstractions while maintaining production-grade orchestration capabilities.&lt;/p&gt;
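&lt;p&gt;For example, service discovery that would require a Service plus an Ingress in Kubernetes is a single &lt;code&gt;service&lt;/code&gt; block in the Nomad job, registered automatically in the included Consul cluster. A minimal sketch, extending the job above (the service name and health endpoint are illustrative, not taken from the repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;task "app" {
  # driver and config as in the job above

  service {
    name = "my-app"        # registered in Consul automatically
    port = "http"

    check {
      type     = "http"
      path     = "/health" # illustrative health endpoint
      interval = "10s"
      timeout  = "2s"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;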

&lt;h2&gt;
  
  
  Deployment Process
&lt;/h2&gt;

&lt;p&gt;Getting this infrastructure running involves several steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Build custom AMI&lt;/strong&gt; using the included Packer configuration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure remote state&lt;/strong&gt; with the provided Terraform backend setup&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy infrastructure&lt;/strong&gt; after updating variables for your environment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy applications&lt;/strong&gt; using Nomad job specifications&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The infrastructure implements security best practices:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No hardcoded secrets&lt;/li&gt;
&lt;li&gt;Least-privilege IAM roles&lt;/li&gt;
&lt;li&gt;Private subnets for workloads&lt;/li&gt;
&lt;li&gt;Configurable CIDR blocks&lt;/li&gt;
&lt;/ul&gt;
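&lt;p&gt;The configurable CIDR blocks, for instance, are exposed as plain Terraform variables rather than hardcoded rules. A hedged sketch of the pattern (the variable and resource names here are illustrative, not the repository's actual identifiers):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;variable "allowed_ssh_cidrs" {
  description = "CIDR blocks permitted to reach the bastion over SSH"
  type        = list(string)
  default     = [] # deny by default; override per environment
}

resource "aws_security_group_rule" "ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = var.allowed_ssh_cidrs
  security_group_id = aws_security_group.bastion.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;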

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Important&lt;/strong&gt;: This isn't a one-click deployment. You'll need to understand the provisioning scripts, adjust networking configurations, and modify instance sizes for your specific requirements.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Specialized Node Pools
&lt;/h2&gt;

&lt;p&gt;The infrastructure creates different node pools optimized for specific workloads:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pool Type&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Constraints&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Django&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Python web applications&lt;/td&gt;
&lt;td&gt;&lt;code&gt;node.class = "django"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Elixir&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Phoenix applications&lt;/td&gt;
&lt;td&gt;&lt;code&gt;node.class = "elixir"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Celery&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Background job processing&lt;/td&gt;
&lt;td&gt;&lt;code&gt;node.class = "celery"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;RabbitMQ&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Message queue services&lt;/td&gt;
&lt;td&gt;&lt;code&gt;node.class = "rabbit"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Datastore&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Database workloads&lt;/td&gt;
&lt;td&gt;&lt;code&gt;node.class = "datastore"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;APM&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Monitoring tools&lt;/td&gt;
&lt;td&gt;&lt;code&gt;node.class = "apm"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Nomad's constraint system automatically places jobs on appropriate nodes.&lt;/p&gt;
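&lt;p&gt;Targeting a pool is a one-line constraint in the job specification. A sketch, assuming the &lt;code&gt;node.class&lt;/code&gt; values from the table above (the job and group names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;job "worker" {
  group "tasks" {
    # place this group only on the Celery node pool
    constraint {
      attribute = "${node.class}"
      value     = "celery"
    }

    # background-processing tasks go here
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;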

&lt;h2&gt;
  
  
  Multi-Environment Support
&lt;/h2&gt;

&lt;p&gt;The configuration supports different environments with varying security profiles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Development&lt;/strong&gt;: More permissive settings for easier testing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Staging&lt;/strong&gt;: Production-like with additional debugging capabilities
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production&lt;/strong&gt;: Locked-down security with comprehensive monitoring&lt;/li&gt;
&lt;/ul&gt;
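&lt;p&gt;In practice the environments diverge mainly through their &lt;code&gt;tfvars&lt;/code&gt; files. A hedged sketch of how a production file might differ from development (the variable names are illustrative, not the repository's actual inputs):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;# develop.tfvars
environment        = "develop"
allowed_ssh_cidrs  = ["0.0.0.0/0"]  # permissive for easier testing
nomad_server_count = 1

# production.tfvars
environment        = "production"
allowed_ssh_cidrs  = ["10.0.0.0/8"] # office/VPN ranges only
nomad_server_count = 3              # raft quorum
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;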

&lt;h2&gt;
  
  
  Implementation Considerations
&lt;/h2&gt;

&lt;p&gt;Most infrastructure examples online fall into two categories: simplified demos that fail in production environments, or enterprise-grade solutions requiring dedicated platform teams. This infrastructure targets the middle ground - production-ready without excessive complexity.&lt;/p&gt;

&lt;p&gt;The implementation has proven reliable across multiple projects, significantly reducing time spent on service discovery configuration and cluster bootstrapping.&lt;/p&gt;

&lt;p&gt;However, this represents opinionated infrastructure decisions based on specific use cases. Production deployments will likely require modifications for different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instance types and sizing&lt;/li&gt;
&lt;li&gt;Networking requirements&lt;/li&gt;
&lt;li&gt;Compliance standards&lt;/li&gt;
&lt;li&gt;Organizational policies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The codebase serves as a foundation for teams with the infrastructure expertise to adapt it appropriately.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;The complete infrastructure is available as open source:&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;&lt;a href="https://github.com/justinepdevasia/aws-nomad-terraform-infrastructure" rel="noopener noreferrer"&gt;AWS Nomad Terraform Infrastructure&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Quick Start
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. Build custom AMI&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;packer/
packer build ami.pkr.hcl

&lt;span class="c"&gt;# 2. Setup remote state&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;tf-remote-state/dev/
terraform init &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; terraform apply

&lt;span class="c"&gt;# 3. Deploy infrastructure  &lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;tf-infra/
terraform init &lt;span class="nt"&gt;-backend-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"backend_develop.conf"&lt;/span&gt;
terraform apply &lt;span class="nt"&gt;-var-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"develop.tfvars"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The solution is suitable for teams experienced with Kubernetes complexity who want to evaluate Nomad, or those already familiar with HashiCorp tooling. The README provides deployment instructions, though understanding the underlying Terraform modules is recommended before production use.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Notes
&lt;/h2&gt;

&lt;p&gt;This infrastructure is provided as-is for teams comfortable with the required technology stack. The code is meant to be read, understood, and modified for your specific environment rather than used as a black box.&lt;/p&gt;

&lt;p&gt;Bug reports and improvements via pull requests are welcome.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What has been your experience with container orchestration platforms in production environments? How do you evaluate trade-offs between operational complexity and feature completeness?&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>nomad</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>How to Deploy Hashicorp Nomad Cluster on Vultr</title>
      <dc:creator>Justine Devasia</dc:creator>
      <pubDate>Tue, 18 Jun 2024 07:43:30 +0000</pubDate>
      <link>https://forem.com/justinepdevasia/how-to-deploy-hashicorp-nomad-cluster-on-vultr-3f6c</link>
      <guid>https://forem.com/justinepdevasia/how-to-deploy-hashicorp-nomad-cluster-on-vultr-3f6c</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;HashiCorp Nomad is a workload orchestrator for deploying and managing applications across large clusters of servers. Nomad ships as a single binary that can run in either server or client mode: servers manage the cluster state, while applications run on the client machines.&lt;/p&gt;

&lt;p&gt;Nomad supports several deployment types, including Docker containers, standalone binaries, Java JAR files, and Linux VMs via the QEMU driver.&lt;/p&gt;

&lt;p&gt;This article demonstrates the step-by-step process of building a Nomad cluster on Vultr.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin, you must have basic knowledge of Linux and Vultr services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.vultr.com/introduction-to-vultr-virtual-private-cloud-2-0" rel="noopener noreferrer"&gt;Vultr VPC 2.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.vultr.com/vultr-startup-scripts-quickstart-guide" rel="noopener noreferrer"&gt;Vultr Startup Script&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.vultr.com/vultr-load-balancers" rel="noopener noreferrer"&gt;Vultr Load Balancer&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example Nomad Cluster
&lt;/h2&gt;

&lt;p&gt;The cluster consists of one Nomad server running in a VM and three Nomad clients, each on its own VM, all forming a single compute plane. After deployment, the server will be accessible on its public IP on port 4646. To reach client applications, a load balancer is attached to all clients on port 80. The entire cluster sits inside a VPC with a private IP range.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/N29Q1Qhb" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfc4uoidqmpyfzteblgn.png" alt="nomad-diagram.png" width="639" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set Up VPC 2.0
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://my.vultr.com/" rel="noopener noreferrer"&gt;Vultr Customer Portal&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Network&lt;/strong&gt; and click &lt;strong&gt;VPC 2.0&lt;/strong&gt; on the main navigation menu.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose &lt;strong&gt;Add VPC 2.0 Network&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/w7n74b07" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1k58on35oyee7nzz4rtp.png" alt="create-vpc2-0.png" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select a location.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure an IP range in the VPC, for example, 10.1.0.0/20.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Name the VPC 2.0 and click &lt;strong&gt;Add Network&lt;/strong&gt;  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/N2fHY9HF" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fce9qa5zyovricfuw8wh4.png" alt="choose-vpc2-0-options.png" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This creates a Vultr VPC 2.0 network where the Nomad cluster will be deployed.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use Startup Script to Install Nomad and Docker
&lt;/h2&gt;

&lt;p&gt;The bash script given below installs HashiCorp Nomad and Docker on an Ubuntu VM.&lt;br&gt;
Saving it as a Vultr Startup Script allows it to be reused by multiple VMs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
    &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;

    &lt;span class="c"&gt;# Disable interactive apt prompts&lt;/span&gt;
    &lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;DEBIAN_FRONTEND&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;noninteractive

    &lt;span class="nv"&gt;NOMAD_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NOMAD_VERSION&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="p"&gt;.7.3&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="c"&gt;# Update packages&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; update
    &lt;span class="c"&gt;# Install software-properties-common&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; software-properties-common

    &lt;span class="c"&gt;# Add HashiCorp GPG key&lt;/span&gt;
    curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://apt.releases.hashicorp.com/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-key add -
    &lt;span class="c"&gt;# Add HashiCorp repository&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-add-repository &lt;span class="s2"&gt;"deb [arch=amd64] https://apt.releases.hashicorp.com &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; main"&lt;/span&gt;
    &lt;span class="c"&gt;# Update packages again&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; update

    &lt;span class="c"&gt;# Install Nomad&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nv"&gt;nomad&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;NOMAD_VERSION&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nt"&gt;-1&lt;/span&gt;


    &lt;span class="c"&gt;# Disable the firewall&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;ufw disable &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"ufw not installed"&lt;/span&gt;

    &lt;span class="c"&gt;# Install Docker and associated dependencies&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; update
    &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        ca-certificates &lt;span class="se"&gt;\&lt;/span&gt;
        curl &lt;span class="se"&gt;\&lt;/span&gt;
        gnupg
    &lt;span class="c"&gt;# Add Docker’s official GPG key&lt;/span&gt;
    &lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/apt/keyrings
    curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.gpg
    &lt;span class="nb"&gt;sudo chmod &lt;/span&gt;a+r /etc/apt/keyrings/docker.gpg
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
        &lt;span class="s2"&gt;"deb [arch="&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
        "&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; /etc/os-release &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$VERSION_CODENAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;" stable"&lt;/span&gt; |
        &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null  
    &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; update

    &lt;span class="c"&gt;# Install Docker and required packages&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

    &lt;span class="c"&gt;# Create daemon.json if it doesn't exist&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; /etc/docker/daemon.json &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;&lt;span class="nb"&gt;sudo touch&lt;/span&gt; /etc/docker/daemon.json
    &lt;span class="k"&gt;fi&lt;/span&gt;

    &lt;span class="c"&gt;# Restart Docker&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart docker

    &lt;span class="c"&gt;# Add the current user to the docker group&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; docker ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://my.vultr.com/" rel="noopener noreferrer"&gt;Vultr Customer Portal&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Orchestration&lt;/strong&gt; and click &lt;strong&gt;Scripts&lt;/strong&gt; on the main navigation menu.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose &lt;strong&gt;Add Startup Script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/PNZbMYtb" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpmr9yrt085pt1kfsdjd.png" alt="create-startup-script.png" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Give the script a &lt;strong&gt;name&lt;/strong&gt; and choose Type as &lt;strong&gt;Boot&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the bash script given above and paste it into the &lt;strong&gt;Script&lt;/strong&gt; field&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Add Script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/0rJnHhCw" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7e6t1jg4z1w2b98g6d4.png" alt="choose-startup-script-options.png" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This creates a Vultr Startup Script that can be reused when deploying multiple VMs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nomad server User Data
&lt;/h2&gt;

&lt;p&gt;Nomad uses a config file to run the server with its required configuration; the default location is &lt;code&gt;/etc/nomad.d/nomad.hcl&lt;/code&gt;. The bash script shown below writes the Nomad server configuration to that file and runs the server as a systemd service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
    &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-Eeuo&lt;/span&gt; pipefail

    &lt;span class="c"&gt;# Add Nomad client configuration to /etc/nomad.d/ folder&lt;/span&gt;

    &lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; &amp;gt;/etc/nomad.d/nomad.hcl
    datacenter = "dc1"
    data_dir   = "/opt/nomad/data"
    bind_addr = "0.0.0.0"
    log_level = "INFO"

    advertise {
      http = "{{ GetInterfaceIP &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt;enp8s0&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt; }}"
      rpc  = "{{ GetInterfaceIP &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt;enp8s0&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt; }}"
      serf = "{{ GetInterfaceIP &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt;enp8s0&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt; }}"
    }

    server {
      enabled          = true
      bootstrap_expect = "1"
      # Replace this sample key; generate your own with: nomad operator gossip keyring generate
      encrypt          = "z8geXx7U+JPk6u/vlBRDhh81h5W12AXBN+7AUo5eXMI="
      server_join {
        retry_join = ["127.0.0.1"]
      }
    }

    acl {
      enabled = false
    }
&lt;/span&gt;&lt;span class="no"&gt;    EOF

&lt;/span&gt;    &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; nomad
    &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart nomad
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The Vultr &lt;strong&gt;Cloud-Init User-Data&lt;/strong&gt; is a script that runs when the server boots for the first time.&lt;/li&gt;
&lt;li&gt;Add the script to the &lt;strong&gt;Cloud-Init User-Data&lt;/strong&gt; during the server launch.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Deploying Nomad server
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://my.vultr.com/" rel="noopener noreferrer"&gt;Vultr Customer Portal&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Compute&lt;/strong&gt; on the main navigation menu.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose &lt;strong&gt;Deploy Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/ThH0kfmG" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhuez8kkmk634pi3hgr4h.png" alt="deploy-server.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;strong&gt;Type&lt;/strong&gt; of Server&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the same Location as the VPC&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Ubuntu 22.04 LTS&lt;/strong&gt; as the Operating System&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select a &lt;strong&gt;Plan&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;strong&gt;Additional Features&lt;/strong&gt; section select &lt;strong&gt;Virtual Private Cloud 2.0&lt;/strong&gt; and &lt;strong&gt;Cloud-Init User-Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/75tW00V1" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69oo3idpu20nwohv374v.png" alt="server-config-1.png" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the Nomad server User Data from the previous section and add it in the user data input section&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the VPC name created in the previous section&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/fSYVjWZD" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r1enp9w9w9m65qq8cwy.png" alt="server-vpc-config.png" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Server Settings&lt;/strong&gt; section, choose the &lt;strong&gt;nomad-startup-script&lt;/strong&gt; created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add &lt;strong&gt;Server Hostname &amp;amp; Label&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Deploy Now&lt;/strong&gt; to launch the Nomad server&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/mzWkZc0R" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9khd03ik72qhrkmvymm1.png" alt="server-config-2.png" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Nomad server is now deployed in the VPC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fetching the Private IP of Nomad server
&lt;/h2&gt;

&lt;p&gt;To connect the Nomad clients to the server, the private IP address of the Nomad server is required.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;VPC 2.0&lt;/strong&gt; in the navigation menu and click the VPC ID&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/vgnYpQBj" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv65ynbl4o3wu244o63vr.png" alt="private-IP-1.png" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Note down the private IP of the Nomad server from the &lt;strong&gt;Attached Nodes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/7JT2t88Y" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5508tvhomuodllxf5iiv.png" alt="private-IP-2.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Nomad client User Data
&lt;/h2&gt;

&lt;p&gt;The bash script given below writes the client configuration and runs a Nomad client as a systemd service during the first boot. Update the &lt;strong&gt;private IP&lt;/strong&gt; of the Nomad server in the &lt;strong&gt;server_join&lt;/strong&gt; block as shown below so that the client can connect to the server.&lt;/p&gt;

&lt;p&gt;Add the modified script to the &lt;strong&gt;Cloud-Init User-Data&lt;/strong&gt; during the server launch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    &lt;span class="c"&gt;#!/usr/bin/env bash&lt;/span&gt;
    &lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-Eeuo&lt;/span&gt; pipefail

    &lt;span class="c"&gt;# Add Nomad server configuration to /etc/nomad.d/ folder&lt;/span&gt;

    &lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt; &amp;gt;/etc/nomad.d/nomad.hcl
    datacenter = "dc1"
    data_dir  = "/opt/nomad/data"
    bind_addr = "0.0.0.0"
    log_level = "INFO"

    # add nomad advertise address
    advertise {
      http = "{{ GetInterfaceIP &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt;enp8s0&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt; }}"
      rpc  = "{{ GetInterfaceIP &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt;enp8s0&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt; }}"
      serf = "{{ GetInterfaceIP &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt;enp8s0&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="sh"&gt; }}"
    }


    # Enable the client
    client {
      enabled = true
      options {
        "driver.raw_exec.enable"    = "1"
        "docker.privileged.enabled" = "true"
      }
      # Configure the server_join option
      server_join {
        retry_join = [ "10.1.0.3" ]
      }

      # Configure the network interface
      network_interface = "enp8s0"

    }

    # Enable the docker plugin
    plugin "docker" {
      config {
        endpoint = "unix:///var/run/docker.sock"

        volumes {
          enabled      = true
          selinuxlabel = "z"
        }
        allow_privileged = true
      }
    }
&lt;/span&gt;&lt;span class="no"&gt;    EOF

&lt;/span&gt;    &lt;span class="c"&gt;# Enables nomad systemd service&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable&lt;/span&gt; &lt;span class="nt"&gt;--now&lt;/span&gt; nomad

    &lt;span class="c"&gt;# Runs nomad service&lt;/span&gt;
    &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart nomad
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
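Rather than hand-editing the IP each time, the **server_join** line can be patched with a quick `sed` one-liner before the script is pasted into the portal. A minimal sketch; the file name `client-user-data.sh`, the one-line stand-in for the full script, and the IPs are placeholders:

```shell
# Sketch: patch the Nomad server's private IP into the retry_join line of
# the saved user-data script. A one-line stand-in replaces the full script
# here; the file name and IP values are placeholders.
SERVER_IP="10.1.0.3"   # the private IP noted from Attached Nodes
echo 'retry_join = [ "REPLACE_ME" ]' > client-user-data.sh
sed -i "s/retry_join = \[ \".*\" \]/retry_join = [ \"${SERVER_IP}\" ]/" client-user-data.sh
cat client-user-data.sh   # -> retry_join = [ "10.1.0.3" ]
```

The patched file can then be pasted into the Cloud-Init User-Data field as-is.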



&lt;h2&gt;
  
  
  Deploying Nomad clients
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://my.vultr.com/" rel="noopener noreferrer"&gt;Vultr Customer Portal&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Compute&lt;/strong&gt; on the main navigation menu.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose &lt;strong&gt;Deploy Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/ThH0kfmG" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhuez8kkmk634pi3hgr4h.png" alt="deploy-server.png" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the &lt;strong&gt;Type&lt;/strong&gt; of Server&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the same Location as the VPC&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;Ubuntu 22.04 LTS&lt;/strong&gt; as the Operating System&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select a &lt;strong&gt;Plan&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;In the &lt;strong&gt;Additional Features&lt;/strong&gt; section select &lt;strong&gt;Virtual Private Cloud 2.0&lt;/strong&gt; and &lt;strong&gt;Cloud-Init User-Data&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/75tW00V1" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69oo3idpu20nwohv374v.png" alt="server-config-1.png" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the &lt;strong&gt;Nomad client User Data&lt;/strong&gt; from the previous section and add it to the user data input section&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select the VPC name created in the previous section&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/fSYVjWZD" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2r1enp9w9w9m65qq8cwy.png" alt="server-vpc-config.png" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the &lt;strong&gt;Server Settings&lt;/strong&gt; section, choose the &lt;strong&gt;nomad-startup-script&lt;/strong&gt; you created.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Increase the &lt;strong&gt;Server Qty&lt;/strong&gt; to 3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add &lt;strong&gt;Server Hostname &amp;amp; Label&lt;/strong&gt; for each server&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Deploy Now&lt;/strong&gt; to launch all 3 Nomad clients&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/G4MT4rW4" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdkbk4cpnba5y8iimbvtl.png" alt="nomad-client-deploy.png" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After the server and clients are deployed in the VPC, they automatically form a cluster with the help of the &lt;strong&gt;server_join&lt;/strong&gt; configuration in Nomad.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/qtmM8h38" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3jo14h7inxnyjqm2phg.png" alt="vultr-servers.png" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Accessing Nomad UI
&lt;/h2&gt;

&lt;p&gt;Fetch the public IP of the &lt;strong&gt;Nomad server&lt;/strong&gt; from the Vultr UI and browse to &lt;a href="http://public-ip:4646" rel="noopener noreferrer"&gt;http://public-ip:4646&lt;/a&gt;. The Nomad UI is accessible on port 4646, as shown in the screenshot below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/MMRbcScJ" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8ud4urf0cg9dchalna3.png" alt="nomad-ui-front.png" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the &lt;strong&gt;Clients&lt;/strong&gt; and &lt;strong&gt;Servers&lt;/strong&gt; pages in the sidebar to view the connected clients and servers.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nomad server&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/RW6mJsTL" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrvprvae3tdjo6ca5h5i.png" alt="nomad-server.png" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nomad clients&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/VSZNPr4K" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7gw7rimp2o98wxq4xxp.png" alt="nomad-ui-clients.png" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Attach Vultr Load Balancer to Nomad Clients
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Open the &lt;a href="https://my.vultr.com/" rel="noopener noreferrer"&gt;Vultr Customer Portal&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Select &lt;strong&gt;Load Balancers&lt;/strong&gt; on the main navigation menu and choose the &lt;strong&gt;Add Load Balancer&lt;/strong&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/hJcmtdLg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwkhzj9gqzazv3mm1nkl.png" alt="create-load-balancer.png" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the same location as the VPC 2.0&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the default &lt;strong&gt;Load Balancer Configuration&lt;/strong&gt; and &lt;strong&gt;Forwarding Rules&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the VPC Network created in the previous section&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose &lt;strong&gt;Add Load Balancer&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After the Load Balancer is created, add the Nomad clients as targets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/VdWHNjML" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko954d51ccnlzyqq9d0f.png" alt="load-balancer-attach-instances.png" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You have attached a Load Balancer to the Nomad clients.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying Sample Web Application
&lt;/h2&gt;

&lt;p&gt;You can deploy web applications in a Nomad cluster using Nomad job files. A job file uses the &lt;strong&gt;.nomad&lt;/strong&gt; extension and is written in HCL (HashiCorp Configuration Language).&lt;/p&gt;

&lt;p&gt;A sample Nomad job file is shown below, which deploys a web API on the Nomad cluster.&lt;br&gt;
The API is exposed on port 80 and uses a Docker image called &lt;strong&gt;traefik/whoami&lt;/strong&gt;, which returns the client IP address and port number when invoked.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;    &lt;span class="nx"&gt;job&lt;/span&gt; &lt;span class="s2"&gt;"webapp"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;datacenters&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"dc1"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

      &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"service"&lt;/span&gt;

      &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="s2"&gt;"webapp"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;

        &lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
           &lt;span class="nx"&gt;mode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"host"&lt;/span&gt;
           &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="s2"&gt;"http"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
             &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
             &lt;span class="nx"&gt;static&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
           &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="nx"&gt;task&lt;/span&gt; &lt;span class="s2"&gt;"server"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;WHOAMI_PORT_NUMBER&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${NOMAD_PORT_http}"&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;

          &lt;span class="nx"&gt;driver&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"docker"&lt;/span&gt;

          &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"traefik/whoami"&lt;/span&gt;
            &lt;span class="nx"&gt;ports&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"http"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
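The UI steps below are not the only way to submit this job; it can also be planned and run from the Nomad CLI. A minimal sketch, where the file name `webapp.nomad` is an assumption, the job body is abbreviated, and the `nomad` commands are commented out because they need a reachable cluster:

```shell
# Save the job definition to a file, then submit it with the Nomad CLI
# instead of the UI. The nomad commands are commented out because they
# require a running cluster.
printf '%s\n' \
  'job "webapp" {' \
  '  datacenters = ["dc1"]' \
  '  type        = "service"' \
  '  # ... group/task stanzas from the article go here ...' \
  '}' > webapp.nomad
# nomad job plan webapp.nomad   # dry run: shows the scheduler placement plan
# nomad job run webapp.nomad    # submit the job to the cluster
```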



&lt;ol&gt;
&lt;li&gt;Open Nomad UI&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Navigate to &lt;strong&gt;Jobs&lt;/strong&gt; and select &lt;strong&gt;Run Job&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/d7JcDHk3" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2a5lr67g9b5ozu0egeff.png" alt="nomad-job.png" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Paste the above job file into the text input and click &lt;strong&gt;Plan&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/7bXh4R6B" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjcu5f2b6ia7c31d57w8h.png" alt="nomad-job-plan.png" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click &lt;strong&gt;Run&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/Lgpb0htX" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuqbkhoj572upi6ov4rd.png" alt="nomad-job-run.png" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After the job is in the running state, open a browser and navigate to the load balancer IP on port 80 to see the response from the application container. Refresh the page to see the load balancer in action.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/ZWtx8XLb" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frt3db7ucevtr58arx52y.png" alt="nomad-job-running.png" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://postimg.cc/3WjSxX5f" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1szuit7xtrzfizl18gt.png" alt="final-response.png" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
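Each whoami response includes a `Hostname:` line identifying the container that served it, so capturing a handful of responses (for example with `curl` in a loop against the load balancer IP) makes the distribution visible. A sketch with sample captured lines standing in for real responses; the file `responses.txt` and the hostnames are assumptions:

```shell
# Count how many requests each backend served, given captured whoami
# responses. The sample lines below stand in for output gathered from
# repeated curl calls against the load balancer IP.
printf '%s\n' \
  'Hostname: nomad-client-1' \
  'Hostname: nomad-client-2' \
  'Hostname: nomad-client-1' \
  'Hostname: nomad-client-3' > responses.txt
grep '^Hostname:' responses.txt | sort | uniq -c | sort -rn
```

With a working load balancer, the counts should be roughly even across the three clients.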

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You have created a Nomad cluster in the VPC and deployed a web application behind a load balancer. You can now access the Nomad UI and deploy further applications on the cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Information
&lt;/h2&gt;

&lt;p&gt;For more information, please see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/nomad/intro" rel="noopener noreferrer"&gt;Hashicorp Nomad documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nomadproject.io/" rel="noopener noreferrer"&gt;Hashicorp Nomad website&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>docker</category>
      <category>infrastructureascode</category>
      <category>kubernetes</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Docker Log Observability: Analyzing Container Logs in HashiCorp Nomad with Vector, Loki, and Grafana</title>
      <dc:creator>Justine Devasia</dc:creator>
      <pubDate>Fri, 19 Apr 2024 17:45:44 +0000</pubDate>
      <link>https://forem.com/justinepdevasia/docker-log-observability-analyzing-container-logs-in-hashicorp-nomad-with-vector-loki-and-grafana-4dp4</link>
      <guid>https://forem.com/justinepdevasia/docker-log-observability-analyzing-container-logs-in-hashicorp-nomad-with-vector-loki-and-grafana-4dp4</guid>
      <description>&lt;p&gt;Monitoring application logs is a crucial aspect of the software development and deployment lifecycle. In this post, we'll delve into the process of observing logs generated by Docker container applications operating within HashiCorp Nomad. With the aid of &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt;, &lt;a href="https://vector.dev/" rel="noopener noreferrer"&gt;Vector&lt;/a&gt;, and &lt;a href="https://grafana.com/oss/loki/" rel="noopener noreferrer"&gt;Loki&lt;/a&gt;, we'll explore effective strategies for log analysis and visualization, enhancing visibility and troubleshooting capabilities within your Nomad environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Nomad in Linux&lt;/li&gt;
&lt;li&gt;Install Docker&lt;/li&gt;
&lt;li&gt;Run Nomad in both server and client mode&lt;/li&gt;
&lt;li&gt;Run Loki in Nomad&lt;/li&gt;
&lt;li&gt;Deploy Logging app&lt;/li&gt;
&lt;li&gt;Deploy Vector in Nomad&lt;/li&gt;
&lt;li&gt;Deploy Grafana&lt;/li&gt;
&lt;li&gt;Observe logs in Grafana using Loki Datasource&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Application structure
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90jzhlc3qhvq7gpz0a4a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90jzhlc3qhvq7gpz0a4a.png" alt=" " width="591" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Nomad locally (Linux)
&lt;/h2&gt;

&lt;p&gt;Nomad, much like Kubernetes, serves as a powerful container orchestration tool, facilitating seamless application deployment and management. In this guide, we'll walk through the installation process on a Linux machine, specifically Ubuntu 22.04 LTS. Let's dive in:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install required packages&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;wget gpg coreutils
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Add the HashiCorp GPG key&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;wget &lt;span class="nt"&gt;-O-&lt;/span&gt; https://apt.releases.hashicorp.com/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /usr/share/keyrings/hashicorp-archive-keyring.gpg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Add the official HashiCorp Linux repository&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;lsb_release &lt;span class="nt"&gt;-cs&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; main"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/hashicorp.list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Update and Install&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;nomad
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command installs the latest Nomad binary. Confirm the installation by checking the Nomad version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;nomad &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install Docker
&lt;/h2&gt;

&lt;p&gt;Docker is required to deploy containers and can be installed with the following steps. Nomad detects Docker on the system through the docker driver and uses it to deploy containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up Docker's &lt;code&gt;apt&lt;/code&gt; repository&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Add Docker's official GPG key:&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;ca-certificates curl
&lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/apt/keyrings
&lt;span class="nb"&gt;sudo &lt;/span&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.asc
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;a+r /etc/apt/keyrings/docker.asc

&lt;span class="c"&gt;# Add the repository to Apt sources:&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
  &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; /etc/os-release &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$VERSION_CODENAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Install Docker&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install &lt;/span&gt;docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Restart docker and update permission&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart docker
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; docker ubuntu
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Run Nomad in both server and client mode
&lt;/h2&gt;

&lt;p&gt;The Nomad binary can run in both server and client mode. The server manages the state of the cluster, while clients run the deployed applications. In production, multiple Nomad servers and clients running on separate machines are interconnected to form a &lt;a href="https://developer.hashicorp.com/nomad/tutorials/enterprise/production-reference-architecture-vm-with-consul#ra" rel="noopener noreferrer"&gt;cluster&lt;/a&gt;. Here, we will use a single machine to run both server and client.&lt;/p&gt;

&lt;p&gt;Nomad requires a config file to run in server and client mode. The config file also enables additional capabilities such as the Docker driver, telemetry, autopilot, and so on.&lt;/p&gt;

&lt;p&gt;Here is a Nomad configuration file that can be used to run Nomad; it also contains the settings required to run Docker containers. Create a file named &lt;code&gt;nomad.hcl&lt;/code&gt; and copy the content below into it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;datacenter&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dc1"&lt;/span&gt;
&lt;span class="nx"&gt;data_dir&lt;/span&gt;  &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/opt/nomad/data"&lt;/span&gt;
&lt;span class="nx"&gt;bind_addr&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"0.0.0.0"&lt;/span&gt;
&lt;span class="nx"&gt;log_level&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"INFO"&lt;/span&gt;

&lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;enabled&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;bootstrap_expect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
   &lt;span class="nx"&gt;search&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;fuzzy_enabled&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;limit_query&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
    &lt;span class="nx"&gt;limit_results&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;
    &lt;span class="nx"&gt;min_term_length&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Enable the client&lt;/span&gt;
&lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="s2"&gt;"driver.raw_exec.enable"&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1"&lt;/span&gt;
    &lt;span class="s2"&gt;"docker.privileged.enabled"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"true"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;server_join&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;retry_join&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="s2"&gt;"127.0.0.1"&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;plugin&lt;/span&gt; &lt;span class="s2"&gt;"docker"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;endpoint&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"unix:///var/run/docker.sock"&lt;/span&gt;

    &lt;span class="nx"&gt;extra_labels&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"job_name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"job_id"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"task_group_name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"task_name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"namespace"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"node_name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"node_id"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="nx"&gt;volumes&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;enabled&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="nx"&gt;selinuxlabel&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"z"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;allow_privileged&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;

  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;telemetry&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;collection_interval&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"15s"&lt;/span&gt;
  &lt;span class="nx"&gt;disable_hostname&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;prometheus_metrics&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;publish_allocation_metrics&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;publish_node_metrics&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following command from the directory containing the file to start Nomad.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nomad agent &lt;span class="nt"&gt;-config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;nomad.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts the Nomad agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;==&amp;gt; Loaded configuration from nomad.hcl
==&amp;gt; Starting Nomad agent...

==&amp;gt; Nomad agent configuration:

       Advertise Addrs: HTTP: 192.168.1.32:4646; RPC: 192.168.1.32:4647; Serf: 192.168.1.32:4648
            Bind Addrs: HTTP: [0.0.0.0:4646]; RPC: 0.0.0.0:4647; Serf: 0.0.0.0:4648
                Client: true
             Log Level: INFO
               Node Id: 2921dae9-99dc-a65d-1a1f-25d9822c1500
                Region: global (DC: dc1)
                Server: true
               Version: 1.7.7

==&amp;gt; Nomad agent started! Log data will stream in below:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Nomad UI is served at the &lt;strong&gt;advertise address&lt;/strong&gt;, here &lt;code&gt;192.168.1.32:4646&lt;/code&gt;. This address depends on the network interface you are connected to, so it will differ between machines. Opening this IP and port in a web browser shows the Nomad UI, as pictured below.&lt;/p&gt;
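To find the address on your own machine, the primary IP can be read from the shell. A sketch, assuming a typical Linux setup where `hostname -I` lists the host's addresses with the primary one first:

```shell
# Determine the IP Nomad will likely advertise (assumption: hostname -I
# lists this machine's addresses, primary interface first) and print the
# corresponding UI URL.
NOMAD_ADDR_IP="$(hostname -I | awk '{print $1}')"
echo "Nomad UI: http://${NOMAD_ADDR_IP}:4646"
```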

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzuhbg9gt7m5vj0qamddg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzuhbg9gt7m5vj0qamddg.png" alt=" " width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With both Nomad and Docker installed on the system, you are ready to deploy the observability applications!&lt;/p&gt;

&lt;h2&gt;
  
  
  Run Loki in Nomad
&lt;/h2&gt;

&lt;p&gt;Loki is a scalable log aggregation system designed to be cost-effective and easy to operate. It does not index the contents of the logs, only a set of labels for each log stream. Here it is configured to persist logs to the local file system, but it can also be configured to store data in object stores such as AWS S3 or MinIO.&lt;/p&gt;
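&lt;p&gt;As an illustration of the object-store option, the &lt;code&gt;storage_config&lt;/code&gt; block used later in this post could point at an S3-compatible store instead of the local file system. This is only a sketch; the credentials, MinIO endpoint, and bucket name below are hypothetical placeholders:&lt;/p&gt;

```yaml
# Hypothetical variant of the storage_config used in this post:
# ship boltdb-shipper indexes and chunks to an S3-compatible store
# (AWS S3 or MinIO) instead of the local filesystem.
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    shared_store: s3
  aws:
    # endpoint, credentials, and bucket are placeholders
    s3: s3://ACCESS_KEY:SECRET_KEY@minio.example.internal:9000/loki-data
    s3forcepathstyle: true
```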

&lt;p&gt;The Nomad job file used to run Loki is given below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;job&lt;/span&gt; &lt;span class="s2"&gt;"loki"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;datacenters&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"dc1"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"service"&lt;/span&gt;

  &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="s2"&gt;"loki"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

    &lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;mode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"host"&lt;/span&gt;
      &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="s2"&gt;"loki"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3100&lt;/span&gt;
        &lt;span class="nx"&gt;static&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3100&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;service&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"loki"&lt;/span&gt;
      &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"loki"&lt;/span&gt;
      &lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"nomad"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;task&lt;/span&gt; &lt;span class="s2"&gt;"loki"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;driver&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"docker"&lt;/span&gt;
      &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"root"&lt;/span&gt;
      &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"grafana/loki:2.9.7"&lt;/span&gt;
        &lt;span class="nx"&gt;args&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="s2"&gt;"-config.file"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="s2"&gt;"local/config.yml"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="nx"&gt;volumes&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/loki_data:/loki"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="nx"&gt;ports&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"loki"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOH&lt;/span&gt;&lt;span class="sh"&gt;
auth_enabled: false
server:
  http_listen_port: 3100
ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  # Any chunk not receiving new logs in this time will be flushed
  chunk_idle_period: 1h
  # All chunks will be flushed when they hit this age, default is 1h
  max_chunk_age: 1h
  # Loki will attempt to build chunks up to 1.5MB, flushing if chunk_idle_period or max_chunk_age is reached first
  chunk_target_size: 1048576
  # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  chunk_retain_period: 30s
  max_transfer_retries: 0     # Chunk transfers disabled
  wal:
    enabled: true
    dir: "/loki/wal"
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: filesystem
  filesystem:
    directory: /loki/chunks
compactor:
  working_directory: /loki/boltdb-shipper-compactor
  shared_store: filesystem
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
chunk_store_config:
  max_look_back_period: 0s
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
&lt;/span&gt;&lt;span class="no"&gt;EOH
&lt;/span&gt;        &lt;span class="nx"&gt;destination&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"local/config.yml"&lt;/span&gt;
        &lt;span class="nx"&gt;change_mode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"restart"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;cpu&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt; &lt;span class="c1"&gt;#Mhz&lt;/span&gt;
        &lt;span class="nx"&gt;memory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt; &lt;span class="c1"&gt;#MB&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the above job configuration and submit it as a new job in the Nomad UI.&lt;/p&gt;
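&lt;p&gt;If you prefer the command line to the web UI, the same spec can be submitted with the Nomad CLI; a sketch, assuming the HCL above has been saved locally as &lt;code&gt;loki.nomad.hcl&lt;/code&gt; (a hypothetical filename):&lt;/p&gt;

```shell
# Submit the job and confirm its allocation becomes healthy.
nomad job run loki.nomad.hcl
nomad job status loki
```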

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzk60s6qyfmbo87x4bmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzk60s6qyfmbo87x4bmw.png" alt=" " width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1v2ibeupukr9a4pcwcpl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1v2ibeupukr9a4pcwcpl.png" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy the Logging App
&lt;/h2&gt;

&lt;p&gt;The logging app is a simple Docker container that emits logs at random intervals. It sends four log levels, INFO, ERROR, WARNING, and DEBUG, to stdout. The job file is given below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;job&lt;/span&gt; &lt;span class="s2"&gt;"logger"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;datacenters&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"dc1"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"service"&lt;/span&gt;

  &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="s2"&gt;"logger"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

    &lt;span class="nx"&gt;task&lt;/span&gt; &lt;span class="s2"&gt;"logger"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

      &lt;span class="nx"&gt;driver&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"docker"&lt;/span&gt;

      &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"chentex/random-logger:latest"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;cpu&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="c1"&gt;# 100 MHz&lt;/span&gt;
        &lt;span class="nx"&gt;memory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="c1"&gt;# 100MB&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;


    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the above configuration and deploy it as another job from the Nomad UI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy Vector in Nomad
&lt;/h2&gt;

&lt;p&gt;Vector is a log collection agent that supports many sources and sinks for log ingestion and export. Here, Vector collects log data using the &lt;strong&gt;docker_logs&lt;/strong&gt; source and forwards it to &lt;strong&gt;Loki&lt;/strong&gt;. The job file is given below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;job&lt;/span&gt; &lt;span class="s2"&gt;"vector"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;datacenters&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"dc1"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="c1"&gt;# system job, runs on all nodes&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"system"&lt;/span&gt;

  &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="s2"&gt;"vector"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

    &lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="s2"&gt;"api"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8686&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;ephemeral_disk&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;size&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;
      &lt;span class="nx"&gt;sticky&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;task&lt;/span&gt; &lt;span class="s2"&gt;"vector"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;driver&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"docker"&lt;/span&gt;
      &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"timberio/vector:0.30.0-debian"&lt;/span&gt;
        &lt;span class="nx"&gt;ports&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"api"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="nx"&gt;volumes&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/var/run/docker.sock:/var/run/docker.sock"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;VECTOR_CONFIG&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"local/vector.toml"&lt;/span&gt;
        &lt;span class="nx"&gt;VECTOR_REQUIRE_HEALTHY&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"false"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;cpu&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="c1"&gt;# 100 MHz&lt;/span&gt;
        &lt;span class="nx"&gt;memory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="c1"&gt;# 100MB&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="c1"&gt;# template with Vector's configuration&lt;/span&gt;
      &lt;span class="nx"&gt;template&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;destination&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"local/vector.toml"&lt;/span&gt;
        &lt;span class="nx"&gt;change_mode&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"signal"&lt;/span&gt;
        &lt;span class="nx"&gt;change_signal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SIGHUP"&lt;/span&gt;
        &lt;span class="c1"&gt;# overriding the delimiters to [[ ]] to avoid conflicts with Vector's native templating, which also uses {{ }}&lt;/span&gt;
        &lt;span class="nx"&gt;left_delimiter&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"[["&lt;/span&gt;
        &lt;span class="nx"&gt;right_delimiter&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"]]"&lt;/span&gt;
        &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOH&lt;/span&gt;&lt;span class="sh"&gt;
          data_dir = "alloc/data/vector/"
          [api]
            enabled = true
            address = "0.0.0.0:8686"
            playground = true
          [sources.logs]
            type = "docker_logs"
          [sinks.out]
            type = "console"
            inputs = [ "logs" ]
            encoding.codec = "json"
            target = "stdout"
          [sinks.loki]
            type = "loki"
            compression = "snappy"
            encoding.codec = "json"
            inputs = ["logs"] 
            endpoint = "http://[[ range nomadService "loki" ]][[.Address]]:[[.Port]][[ end ]]"
            healthcheck.enabled = true
            out_of_order_action = "drop"
            # remove fields that have been converted to labels to avoid having the field twice
            remove_label_fields = true
              [sinks.loki.labels]
              # See https://vector.dev/docs/reference/vrl/expressions/#path-example-nested-path
              job = "{{label.\"com.hashicorp.nomad.job_name\" }}"
              task = "{{label.\"com.hashicorp.nomad.task_name\" }}"
              group = "{{label.\"com.hashicorp.nomad.task_group_name\" }}"
              namespace = "{{label.\"com.hashicorp.nomad.namespace\" }}"
              node = "{{label.\"com.hashicorp.nomad.node_name\" }}"
&lt;/span&gt;&lt;span class="no"&gt;        EOH
&lt;/span&gt;      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nx"&gt;kill_timeout&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"30s"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the above configuration and deploy the job from the Nomad UI. Once Vector is deployed, it starts collecting logs from the Docker daemon and shipping them to Loki.&lt;/p&gt;
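&lt;p&gt;For illustration, once Nomad renders the &lt;code&gt;nomadService&lt;/code&gt; template above, the Loki sink endpoint resolves to the address and port that the Loki job registered. On my machine the rendered line would look like this (the IP is specific to my network and will differ on yours):&lt;/p&gt;

```toml
# Rendered form of the templated Loki endpoint; the address comes
# from Nomad's service catalog, so it varies per cluster.
[sinks.loki]
endpoint = "http://192.168.1.32:3100"
```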

&lt;h2&gt;
  
  
  Deploy Grafana
&lt;/h2&gt;

&lt;p&gt;Grafana is a popular tool for visualizing logs, metrics, and traces. Here we will use Grafana and its Loki data source to view and explore the log data sent by Docker applications running in Nomad.&lt;br&gt;
The Grafana Nomad job file is given below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;job&lt;/span&gt; &lt;span class="s2"&gt;"grafana"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;datacenters&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"dc1"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"service"&lt;/span&gt;


  &lt;span class="nx"&gt;group&lt;/span&gt; &lt;span class="s2"&gt;"grafana"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

    &lt;span class="nx"&gt;network&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;mode&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"host"&lt;/span&gt;

      &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="s2"&gt;"grafana"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;
        &lt;span class="nx"&gt;static&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3000&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nx"&gt;task&lt;/span&gt; &lt;span class="s2"&gt;"grafana"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;driver&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"docker"&lt;/span&gt;

      &lt;span class="nx"&gt;env&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;GF_LOG_LEVEL&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ERROR"&lt;/span&gt;
        &lt;span class="nx"&gt;GF_LOG_MODE&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"console"&lt;/span&gt;
        &lt;span class="nx"&gt;GF_PATHS_DATA&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/var/lib/grafana"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"root"&lt;/span&gt;

      &lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"grafana/grafana:10.4.2"&lt;/span&gt;
        &lt;span class="nx"&gt;ports&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"grafana"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="nx"&gt;volumes&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"/grafana_volume:/var/lib/grafana"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

      &lt;span class="nx"&gt;resources&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;cpu&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2000&lt;/span&gt;
        &lt;span class="nx"&gt;memory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2000&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Deploy this job using the Nomad UI as described in the previous sections.&lt;/p&gt;

&lt;p&gt;After deploying all the jobs, the Nomad UI looks like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawx4gmtsvql872k25q96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawx4gmtsvql872k25q96.png" alt=" " width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connecting to Loki Data source in Grafana
&lt;/h2&gt;

&lt;p&gt;After confirming all jobs are running as expected, open the Grafana UI on port 3000. For me, the address is my host machine's private IP, &lt;a href="http://192.168.1.32:3000" rel="noopener noreferrer"&gt;http://192.168.1.32:3000&lt;/a&gt;. Log in to the dashboard using the default username &lt;code&gt;admin&lt;/code&gt; and password &lt;code&gt;admin&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnz6eofec79rwzche2ebc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnz6eofec79rwzche2ebc.png" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the &lt;strong&gt;Data sources&lt;/strong&gt; option and press &lt;strong&gt;Add data source&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobhx9urc72eb9ws84aqg.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobhx9urc72eb9ws84aqg.jpeg" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search for &lt;strong&gt;loki&lt;/strong&gt; and select it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm018p1oux9nhcf90nvb.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm018p1oux9nhcf90nvb.jpeg" alt=" " width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Update the Loki connection URL. Based on the Loki Nomad job file, Loki listens on static port &lt;strong&gt;3100&lt;/strong&gt;, so the connection URL takes the form &lt;a href="http://(machine-ip):3100" rel="noopener noreferrer"&gt;http://(machine-ip):3100&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxo229pse6kda6w87xju.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxo229pse6kda6w87xju.jpeg" alt=" " width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep the other options at their defaults and click &lt;strong&gt;Save &amp;amp; test&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;If the connection is successful, it shows &lt;strong&gt;Data source successfully connected&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Visualising Loki Logs in Grafana
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Select the &lt;strong&gt;Explore&lt;/strong&gt; option in Grafana&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbufsc44h1q2z8a0aotj2.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbufsc44h1q2z8a0aotj2.jpeg" alt=" " width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select Loki&lt;/li&gt;
&lt;li&gt;Select the label &lt;strong&gt;job&lt;/strong&gt; and choose the &lt;strong&gt;logger&lt;/strong&gt; Nomad job.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6amcf3iv0dwyd8rfd6v.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi6amcf3iv0dwyd8rfd6v.jpeg" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click &lt;strong&gt;Run Query&lt;/strong&gt; to view the logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0zrilgcrxx4fzuhyxg5.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0zrilgcrxx4fzuhyxg5.jpeg" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have successfully installed Nomad and Docker, run applications on top of them, and queried the logs those applications emit in Grafana.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://atodorov.me/2021/07/09/logging-on-nomad-and-log-aggregation-with-loki/" rel="noopener noreferrer"&gt;https://atodorov.me/2021/07/09/logging-on-nomad-and-log-aggregation-with-loki/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developer.hashicorp.com/nomad/docs" rel="noopener noreferrer"&gt;https://developer.hashicorp.com/nomad/docs&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>docker</category>
      <category>devops</category>
      <category>sre</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Running Docker based web applications in Hashicorp Nomad with Traefik Load balancing</title>
      <dc:creator>Justine Devasia</dc:creator>
      <pubDate>Fri, 15 Mar 2024 09:19:37 +0000</pubDate>
      <link>https://forem.com/justinepdevasia/running-docker-based-web-applications-in-hashicorp-nomad-with-traefik-load-balancing-1el6</link>
      <guid>https://forem.com/justinepdevasia/running-docker-based-web-applications-in-hashicorp-nomad-with-traefik-load-balancing-1el6</guid>
      <description>&lt;p&gt;In &lt;a href="https://dev.to/justinepdevasia/building-hashicorp-nomad-cluster-in-vultr-cloud-using-terraform-55mp"&gt;previous post&lt;/a&gt;, we discussed creating a basic Nomad cluster in the Vultr cloud. Here, we will use the cluster created to deploy a load-balanced sample web app using the service discovery capability of &lt;a href="https://www.nomadproject.io/" rel="noopener noreferrer"&gt;Nomad&lt;/a&gt; and its native integration with the &lt;a href="https://traefik.io/traefik/" rel="noopener noreferrer"&gt;Traefik&lt;/a&gt; load balancer. The source code is available &lt;a href="https://github.com/justinepdevasia/nomad-traefik" rel="noopener noreferrer"&gt;here&lt;/a&gt; for the reference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvrliwd8h8frvf1zk6s0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvrliwd8h8frvf1zk6s0.png" alt="nomad-traefik" width="651" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Traefik acts as an API gateway for your online services, ensuring that incoming requests are properly routed to the appropriate parts of your system. It can perform host-based or path-based routing, allowing the server to run a wide variety of services with varied domain names.&lt;/p&gt;

&lt;p&gt;What makes Traefik stand out is its ability to automatically detect and configure services without any extra effort on your part. It's like having a helpful assistant that knows exactly where everything belongs. Traefik obtains this information by connecting to Nomad's service discovery, a setup we will walk through in this article.&lt;/p&gt;

&lt;p&gt;To run the service, all we need is a &lt;a href="https://github.com/justinepdevasia/nomad-traefik/blob/main/traefik/traefik.nomad" rel="noopener noreferrer"&gt;Nomad job file&lt;/a&gt; and a &lt;a href="https://dev.to/justinepdevasia/building-hashicorp-nomad-cluster-in-vultr-cloud-using-terraform-55mp"&gt;Nomad cluster&lt;/a&gt;. A Nomad job file is a configuration file that describes the application to run, its networking, and its service discovery information.&lt;/p&gt;

&lt;p&gt;This is what the job file looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;job "traefik" {
  datacenters = ["dc1"]
  type        = "system"

  group "traefik" {

    network {
      mode = "host"
      port  "http"{
         static = 80
      }
      port  "admin"{
         static = 8080
      }
    }

    task "server" {
      driver = "docker"
      config {
        image = "traefik:2.11"
        ports = ["admin", "http"]
        args = [
          "--api.dashboard=true",
          "--api.insecure=true", # not for production
          "--entrypoints.web.address=:${NOMAD_PORT_http}",
          "--entrypoints.traefik.address=:${NOMAD_PORT_admin}",
          "--providers.nomad=true",
          "--providers.nomad.endpoint.address=http://&amp;lt;nomad server ip&amp;gt;:4646" 
        ]
      }

      resources {
        cpu    = 100 # Mhz
        memory = 100 # MB
      }
    }

    service {
      name = "traefik-http"
      provider = "nomad"
      port = "http"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The main components of a job file are the job, group, and task stanzas. A job file can have multiple groups, and each group can have multiple tasks. The group contains networking information such as the network mode, the ports exposed, and the service definition. If you are interested in Nomad networking, this &lt;a href="https://mrkaran.dev/posts/nomad-networking-explained/" rel="noopener noreferrer"&gt;article&lt;/a&gt; is a great source of information. The task runs Docker containers using the Docker driver and its config block.&lt;/p&gt;

&lt;p&gt;To route traffic successfully, the Traefik proxy needs the IP address and port of each application running in the cluster. The arguments in the configuration tell Traefik how to obtain these details about applications running in Nomad. Nomad offers a native service discovery option, and in this case Traefik takes advantage of this service discovery information to retrieve application details. When running the job, it is important to change the endpoint address to your Nomad server address, here: &lt;code&gt;providers.nomad.endpoint.address=http://&amp;lt;nomad server ip&amp;gt;:4646&lt;/code&gt;. More configuration options are available in the Traefik &lt;a href="https://doc.traefik.io/traefik/providers/nomad/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;
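
&lt;p&gt;As a rough sketch, the same settings passed as CLI arguments could equivalently live in a static configuration file; the file name and the placeholder address here are assumptions you would adapt to your own setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# traefik.yml (hypothetical static configuration, mirroring the CLI args)
entryPoints:
  web:
    address: ":80"
  traefik:
    address: ":8080"

api:
  dashboard: true
  insecure: true   # not for production

providers:
  nomad:
    endpoint:
      address: "http://&amp;lt;nomad server ip&amp;gt;:4646"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;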

&lt;p&gt;&lt;strong&gt;Running Traefik Job file in the cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let us run the job file in a Nomad cluster. In my &lt;a href="https://dev.to/justinepdevasia/building-hashicorp-nomad-cluster-in-vultr-cloud-using-terraform-55mp"&gt;previous&lt;/a&gt; blog post, I created a Nomad cluster in Vultr cloud, and I will use the same cluster to deploy this job file. The cluster consists of 1 server and 3 clients; both Traefik and a sample web app will be deployed in the cluster, and we will observe Traefik distributing traffic effectively with minimal configuration.&lt;/p&gt;

&lt;p&gt;There are two ways to run a job: deploying the job file manually from the Nomad UI, or running it with the Nomad CLI. In this case, triggering the job manually from the UI is suitable. Before triggering the job, I will update the job file with the Nomad server endpoint address; here, the endpoint is the address of the Nomad server load balancer we created earlier.&lt;/p&gt;
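
&lt;p&gt;For reference, the same deployment can also be done with the Nomad CLI; the server address below is a placeholder for your own endpoint:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# point the CLI at the cluster (placeholder address)
export NOMAD_ADDR=http://&amp;lt;nomad server ip&amp;gt;:4646

nomad job plan traefik.nomad    # preview the scheduling decisions
nomad job run traefik.nomad     # submit the job
nomad job status traefik        # verify the allocations are running
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;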

&lt;p&gt;Traefik job file in nomad UI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmybnev28r6e5pupjl6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmybnev28r6e5pupjl6i.png" alt="traefik job" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Planning:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4w2f2xyc3ip4h306e80m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4w2f2xyc3ip4h306e80m.png" alt="traefik plan" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Job run Result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4oq4ap0d4ezojf9tktq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4oq4ap0d4ezojf9tktq4.png" alt="traefik running" width="800" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once Traefik is running, it will be accessible on port 80 of the Vultr load balancer we created. Attaching the load balancer to the VMs is done with the help of Terraform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xiwqx3u4rnwh4wqrcpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xiwqx3u4rnwh4wqrcpi.png" alt="404 error" width="617" height="38"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The 404 error is expected, as there are no services running yet. Now we are confident that Traefik is up and waiting to route requests.&lt;/p&gt;
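
&lt;p&gt;The same check can be done from a terminal; the load balancer IP is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl -i http://&amp;lt;client lb ip&amp;gt;/
HTTP/1.1 404 Not Found
...
404 page not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;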

&lt;p&gt;&lt;strong&gt;Running sample webapp in the cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A sample web app is also deployed alongside Traefik; its function is to return the IP of the machine it is running on. This is ideal for verifying that the proxy is working as expected. The job file for the web app is available &lt;a href="https://github.com/justinepdevasia/nomad-traefik/blob/main/traefik/webapp.nomad" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;job "webapp" {
  datacenters = ["dc1"]

  type = "service"

  group "webapp" {
    count = 1

    network {
       mode = "host"
       port "http" {
         to = 80
       }
    }

    service {
      name = "webapp"
      port = "http"
      provider = "nomad"

      tags = [
        "traefik.enable=true",
        "traefik.http.routers.webapp.rule=Path(`/`)",
      ]
    }

    task "server" {
      env {
        WHOAMI_PORT_NUMBER = "${NOMAD_PORT_http}"
      }

      driver = "docker"

      config {
        image = "traefik/whoami"
        ports = ["http"]
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a simple job configuration for the sample web app, and the important thing to note is the tags.&lt;/p&gt;

&lt;p&gt;The Traefik proxy finds the services to load balance using these tags, and based on the tag information it can do host-based or path-based routing. With this approach, the proxy gets information about the web app without any configuration changes or restarts, unlike proxies such as Nginx. This is a great advantage of using Traefik for load balancing.&lt;/p&gt;
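
&lt;p&gt;For example, switching the web app from path-based to host-based routing is just a matter of changing the rule tag; the domain below is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tags = [
  "traefik.enable=true",
  # route by Host header instead of path (placeholder domain):
  "traefik.http.routers.webapp.rule=Host(`webapp.example.com`)",
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;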

&lt;p&gt;Deploying the job:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxjslogb0qjaoblbzr50.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxjslogb0qjaoblbzr50.png" alt="Job deployment" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jobs successfully ran:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg16boz9bxxjhapqsiqbh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg16boz9bxxjhapqsiqbh.png" alt="Job ran" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we have both the web app and Traefik running in the Nomad cluster. Doing a curl on the client load balancer gives us back a response from any of the clients, with Traefik doing the routing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo5tgkyx6zrk954b3ghx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo5tgkyx6zrk954b3ghx.png" alt="curl" width="462" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we curl the client load balancer, we get back a different remote address each time, indicating that the proxy is load balancing between the three instances of the web app deployed.&lt;/p&gt;
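
&lt;p&gt;A quick way to observe this is to curl the load balancer in a loop and watch the address change; the IP is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for i in 1 2 3 4 5; do
  curl -s http://&amp;lt;client lb ip&amp;gt;/ | grep RemoteAddr
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;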

&lt;p&gt;In this blog post, we explored the integration of HashiCorp Nomad and Traefik for deploying a load-balanced web application infrastructure in the Vultr cloud. By utilizing Nomad's service discovery and Traefik's dynamic routing capabilities, we demonstrated the straightforward setup and management of a scalable cluster environment. In upcoming posts, we will deploy more services and dig deeper into the Nomad ecosystem.&lt;/p&gt;

&lt;p&gt;Related blog posts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://dev.to/justinepdevasia/accelerating-deployment-creating-hashicorp-nomad-machine-images-on-vultr-cloud-via-packer-16c8"&gt;VM image creation&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/justinepdevasia/building-hashicorp-nomad-cluster-in-vultr-cloud-using-terraform-55mp"&gt;Nomad infrastructure creation&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>webdev</category>
      <category>devops</category>
      <category>cloud</category>
      <category>docker</category>
    </item>
    <item>
      <title>Building HashiCorp Nomad Cluster in Vultr Cloud using Terraform</title>
      <dc:creator>Justine Devasia</dc:creator>
      <pubDate>Mon, 11 Mar 2024 13:31:45 +0000</pubDate>
      <link>https://forem.com/justinepdevasia/building-hashicorp-nomad-cluster-in-vultr-cloud-using-terraform-55mp</link>
      <guid>https://forem.com/justinepdevasia/building-hashicorp-nomad-cluster-in-vultr-cloud-using-terraform-55mp</guid>
      <description>&lt;p&gt;&lt;a href="https://www.nomadproject.io/" rel="noopener noreferrer"&gt;Nomad&lt;/a&gt; is really awesome!&lt;/p&gt;

&lt;p&gt;In this blog post, let us see how to build and automate a Nomad cluster using Terraform and the Vultr cloud computing platform. The &lt;a href="https://dev.to/justinepdevasia/accelerating-deployment-creating-hashicorp-nomad-machine-images-on-vultr-cloud-via-packer-16c8"&gt;previous blog post&lt;/a&gt; discusses how to create a machine image in Packer. Here we will use the machine image created to deploy a Nomad cluster with both servers and clients. They are attached to Vultr load balancers and protected by firewalls.&lt;br&gt;
The source code is available &lt;a href="https://github.com/justinepdevasia/nomad-terraform-vultr-cloud" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;HashiCorp Nomad is a workload orchestrator for deploying applications in the cloud or on-premises. It enables the deployment and management of various workloads, including containers, JAR files, exec jobs, and more. It is highly scalable and can run millions of containers in a single cluster.&lt;/p&gt;

&lt;p&gt;Terraform is an infrastructure as code application where cloud infrastructure deployment can be automated and codified.&lt;/p&gt;

&lt;p&gt;This Nomad cluster will have a single server and 3 clients. The server manages the state of the cluster and job deployments. The clients are the machines on which actual applications run.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99473v3lzz5eu70fhxzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99473v3lzz5eu70fhxzv.png" alt="Nomad Cluster" width="581" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To deploy the infrastructure, we will require Terraform, Vultr cloud, and some shell scripts. All the scripts required to spawn up infra are available &lt;a href="https://github.com/justinepdevasia/nomad-terraform-vultr-cloud" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Firstly, let us install Terraform using HashiCorp's &lt;a href="https://developer.hashicorp.com/terraform/install" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. Once Terraform is set up, we need to create a &lt;a href="https://www.vultr.com/?ref=9581131" rel="noopener noreferrer"&gt;Vultr&lt;/a&gt; account. Vultr cloud is providing free $250 credits for first-time signup. Generate an API key from the dashboard and keep it safe, as it is required for creating the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps to build the cluster&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Clone the git repo &lt;a href="https://github.com/justinepdevasia/nomad-terraform-vultr-cloud" rel="noopener noreferrer"&gt;https://github.com/justinepdevasia/nomad-terraform-vultr-cloud&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open terminal inside the repo and &lt;code&gt;cd terraform&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Replace the &lt;a href="https://github.com/justinepdevasia/nomad-terraform-vultr-cloud/blob/main/terraform/terraform.tfvars" rel="noopener noreferrer"&gt;terraform.tfvars&lt;/a&gt; variables with the values you require. The VMs will be created from the snapshot built earlier, and the proper snapshot ID needs to be provided in the file. If you haven't created the snapshot yet, please read this &lt;a href="https://dev.to/justinepdevasia/accelerating-deployment-creating-hashicorp-nomad-machine-images-on-vultr-cloud-via-packer-16c8"&gt;article&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The terraform.tfvars file looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;region = "bom"
plan = "vc2-1c-2gb"
snapshot_id = "c31a3a09-8b8b-4b96-a56f-a020606d4cd4"
private_network_label = "nomad-network"
nomad_server_hostname_prefix = "nomad-server"
nomad_client_hostname_prefix = "nomad-client"
lb_server_name = "nomad-servers-lb"
lb_client_name = "nomad-clients-lb"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Run command &lt;code&gt;export VULTR_API_KEY="your-vultr-api-key"&lt;/code&gt; - this is to store the API key value in an environment variable. We do not want to expose it in the code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run command &lt;code&gt;terraform init&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run command &lt;code&gt;terraform plan -var="vultr_api_key=${VULTR_API_KEY}"&lt;/code&gt; to preview the changes that will happen to the infrastructure when it is deployed. Here the Vultr API key variable is fetched from the environment variable.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
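
&lt;p&gt;Putting the steps above together as a single shell session (the API key value is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export VULTR_API_KEY="your-vultr-api-key"   # placeholder value
cd terraform
terraform init
terraform plan -var="vultr_api_key=${VULTR_API_KEY}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;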

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73xvhjb34m5fwnqkq8ra.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73xvhjb34m5fwnqkq8ra.png" alt="terraform plan" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run command &lt;code&gt;terraform apply -var="vultr_api_key=${VULTR_API_KEY}"&lt;/code&gt; to build the infra.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomg7fdgowjbk5w3i372t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomg7fdgowjbk5w3i372t.png" alt="terraform apply" width="579" height="133"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this process, Terraform connects to Vultr cloud and creates the various services: virtual machines, load balancers, and firewalls. The Terraform output shows the load balancer IPs.&lt;/p&gt;

&lt;p&gt;The Vultr cloud shows both Nomad servers and clients:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxa0azbs80nv4obwa3s7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcxa0azbs80nv4obwa3s7.png" alt="vultr cloud" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can verify the cluster is active by pasting the IP of the Nomad server into the browser. It will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4926jlf6c27wzeeviud8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4926jlf6c27wzeeviud8.png" alt="Nomad server" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Clients and servers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukrw0fkc2q4l3pgmyu9w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukrw0fkc2q4l3pgmyu9w.png" alt="clients and servers" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using Terraform, we can easily add more clients to the cluster by increasing the count of Nomad clients in the client configuration. After updating the number of clients to 3 and running &lt;code&gt;terraform apply&lt;/code&gt;, I was able to add 2 more clients to the system. More clients, more compute!&lt;/p&gt;
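
&lt;p&gt;As an illustrative sketch (the variable and resource names here are assumptions, not necessarily the ones used in the repo), the cluster size is driven by a &lt;code&gt;count&lt;/code&gt; on the client instance resource:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# hypothetical variable controlling the number of clients
variable "nomad_client_count" {
  type    = number
  default = 3
}

resource "vultr_instance" "nomad_client" {
  count       = var.nomad_client_count
  plan        = var.plan
  region      = var.region
  snapshot_id = var.snapshot_id
  hostname    = "${var.nomad_client_hostname_prefix}-${count.index}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Bumping the count and re-running &lt;code&gt;terraform apply&lt;/code&gt; creates only the additional instances, leaving the existing ones untouched.&lt;/p&gt;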

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F817camf1jmf3wnaijtfg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F817camf1jmf3wnaijtfg.png" alt="three clients" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Destroying the Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The entire infrastructure can be dismantled by running the command &lt;code&gt;terraform destroy -var="vultr_api_key=${VULTR_API_KEY}"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Terraform gives us a quick and convenient way to build and tear down infrastructure on demand. This enables the creation of multiple identical environments for test, prod, and QA. In the current setup the Terraform state file is stored on the local machine, but it can also be stored in a remote backend such as S3 or Terraform Cloud.&lt;/p&gt;
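
&lt;p&gt;For example, a remote S3 backend is only a few lines of configuration; the bucket name, key, and region below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket = "my-terraform-state"            # placeholder bucket name
    key    = "nomad-vultr/terraform.tfstate" # placeholder state path
    region = "us-east-1"                     # placeholder region
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;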

&lt;p&gt;Source GitHub repo: &lt;a href="https://github.com/justinepdevasia/nomad-terraform-vultr-cloud/tree/main" rel="noopener noreferrer"&gt;https://github.com/justinepdevasia/nomad-terraform-vultr-cloud/tree/main&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the upcoming post, we will discuss how to run various workloads in Nomad including stateless and stateful applications.&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>kubernetes</category>
      <category>devops</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Accelerating Deployment: Creating HashiCorp Nomad Machine Images on Vultr Cloud via Packer</title>
      <dc:creator>Justine Devasia</dc:creator>
      <pubDate>Sun, 03 Mar 2024 06:16:12 +0000</pubDate>
      <link>https://forem.com/justinepdevasia/accelerating-deployment-creating-hashicorp-nomad-machine-images-on-vultr-cloud-via-packer-16c8</link>
      <guid>https://forem.com/justinepdevasia/accelerating-deployment-creating-hashicorp-nomad-machine-images-on-vultr-cloud-via-packer-16c8</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fje5trq6nqroqvq4mhr3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fje5trq6nqroqvq4mhr3u.png" alt="packer vultr logo" width="655" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Immutable infrastructure creation in the cloud requires on-demand machine images. Whenever we need to make changes to the system, the old image is torn down and a new one is deployed. Packer is a tool provided by HashiCorp to build machine images easily, which can then be used to deploy cloud VMs. In this post, we will see how we can create a machine image in Vultr Cloud using Packer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.vultr.com/?ref=9581131" rel="noopener noreferrer"&gt;Vultr&lt;/a&gt; is a cloud computing platform which enables us to deploy virtual machines and various cloud services like loadbalancers, CDNs etc. . In Vultr, the machine image is called snapshots. We will install Nomad and Docker within the snapshot. Later, when a cluster is created, this snapshot ID can be utilized to swiftly spawn the cluster with all the packages pre-installed.&lt;/p&gt;

&lt;p&gt;First, let us install Packer using HashiCorp's &lt;a href="https://developer.hashicorp.com/packer/tutorials/docker-get-started/get-started-install-cli#installing-packer" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. Once Packer is set up, we need to create a &lt;a href="https://www.vultr.com/?ref=9581131" rel="noopener noreferrer"&gt;Vultr&lt;/a&gt; account. Vultr cloud provides $250 in free credits on first-time signup. Generate an API key from the dashboard and keep it safe, as it is required for creating the snapshot.&lt;/p&gt;

&lt;p&gt;Once both Packer and Vultr are ready, let us look at the script and config file. The source code is available &lt;a href="https://github.com/justinepdevasia/nomad-terraform-vultr-cloud/tree/main/packer" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Two files are used in creating the image:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://github.com/justinepdevasia/nomad-terraform-vultr-cloud/blob/main/packer/setup.sh" rel="noopener noreferrer"&gt;setup.sh&lt;/a&gt; - contains some basic shell script to install docker and nomad in an ubuntu machine.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/justinepdevasia/nomad-terraform-vultr-cloud/blob/main/packer/ubuntu.pkr.hcl" rel="noopener noreferrer"&gt;ubuntu.pkr.hcl&lt;/a&gt; - packer template to build the image&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let us go through the contents of ubuntu.pkr.hcl file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt; &lt;span class="nx"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"vultr_api_key"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;type&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;
   &lt;span class="nx"&gt;default&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${env("&lt;/span&gt;&lt;span class="nx"&gt;VULTR_API_KEY&lt;/span&gt;&lt;span class="s2"&gt;")}"&lt;/span&gt;
   &lt;span class="nx"&gt;sensitive&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="nx"&gt;packer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;required_plugins&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;vultr&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
       &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"&amp;gt;=v2.3.2"&lt;/span&gt;
       &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"github.com/vultr/vultr"&lt;/span&gt;
     &lt;span class="p"&gt;}&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="s2"&gt;"vultr"&lt;/span&gt; &lt;span class="s2"&gt;"ubuntu-nomad"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;api_key&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.vultr_api_key}"&lt;/span&gt;
   &lt;span class="nx"&gt;os_id&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2179"&lt;/span&gt;
   &lt;span class="nx"&gt;plan_id&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vc2-1c-2gb"&lt;/span&gt;
   &lt;span class="nx"&gt;region_id&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"bom"&lt;/span&gt;
   &lt;span class="nx"&gt;snapshot_description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Ubuntu 23.04 Nomad ${formatdate("&lt;/span&gt;&lt;span class="nx"&gt;YYYY-MM-DD&lt;/span&gt; &lt;span class="nx"&gt;hh&lt;/span&gt;&lt;span class="err"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;mm&lt;/span&gt;&lt;span class="s2"&gt;", timestamp())}"&lt;/span&gt;
   &lt;span class="nx"&gt;ssh_username&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"root"&lt;/span&gt;
   &lt;span class="nx"&gt;state_timeout&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"25m"&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;

 &lt;span class="nx"&gt;build&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="nx"&gt;sources&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"source.vultr.ubuntu-nomad"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

   &lt;span class="nx"&gt;provisioner&lt;/span&gt; &lt;span class="s2"&gt;"shell"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;script&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"setup.sh"&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The variable &lt;strong&gt;vultr_api_key&lt;/strong&gt; is used to connect to Vultr cloud securely. It acquires its value from the &lt;strong&gt;VULTR_API_KEY&lt;/strong&gt; environment variable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;packer&lt;/code&gt; stanza downloads the correct Vultr plugin to use in the machine image creation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;source&lt;/code&gt; stanza contains information about the cloud provider, the region in which the image is required, the base OS used, and so on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;build&lt;/code&gt; stanza connects to a source and uses a provisioner to build the image. Here it executes the build process by running the setup.sh file we provide.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now let us see how to run this setup to build our customized OS image.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Clone the git repo &lt;a href="https://github.com/justinepdevasia/nomad-terraform-vultr-cloud" rel="noopener noreferrer"&gt;https://github.com/justinepdevasia/nomad-terraform-vultr-cloud&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open a terminal inside the repo and &lt;code&gt;cd packer&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run command &lt;code&gt;export VULTR_API_KEY="your-vultr-api-key"&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run command &lt;code&gt;packer init .&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run command &lt;code&gt;packer build .&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbdmygcc5zwgfctgd338.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdbdmygcc5zwgfctgd338.png" alt="packer build ongoing" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will trigger a Packer build session: the application connects to Vultr cloud, spawns a temporary VM, and installs the packages. Once the setup.sh file has fully executed, snapshot creation is triggered and we get back the snapshot ID.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2e5xyauh05ui47l7vfv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2e5xyauh05ui47l7vfv.png" alt="packer build completion" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This snapshot ID can be used to spawn new VMs with all the packages, in this case Nomad and Docker, built in.&lt;br&gt;
Finally, we can see the created snapshot in the Vultr cloud dashboard.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk364ufke0pc3dbm9l5sc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk364ufke0pc3dbm9l5sc.png" alt=" " width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Packer has made the creation of machine images a smooth and enjoyable process. Combined with the simplicity and ease of use of &lt;a href="https://www.vultr.com/?ref=9581131" rel="noopener noreferrer"&gt;vultr&lt;/a&gt;, we get a powerful system for automating cloud VM image creation. With this approach we can pre-install any package in a snapshot and spawn VMs without any additional installation after launch.&lt;/p&gt;

&lt;p&gt;In the upcoming article, we will discuss how to use this snapshot to deploy a Nomad cluster using Terraform.&lt;/p&gt;

</description>
      <category>nomad</category>
      <category>packer</category>
      <category>infrastructureascode</category>
      <category>devops</category>
    </item>
    <item>
      <title>Embracing Simplicity: The Advantages of Nomad over Kubernetes</title>
      <dc:creator>Justine Devasia</dc:creator>
      <pubDate>Sat, 16 Dec 2023 13:07:29 +0000</pubDate>
      <link>https://forem.com/justinepdevasia/embracing-simplicity-the-advantages-of-nomad-over-kubernetes-3kjm</link>
      <guid>https://forem.com/justinepdevasia/embracing-simplicity-the-advantages-of-nomad-over-kubernetes-3kjm</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5u8sq9tgephouhj1q8p.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5u8sq9tgephouhj1q8p.jpg" alt=" " width="695" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the rapidly evolving landscape of container orchestration and management, two prominent players have emerged: &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; and HashiCorp's &lt;a href="https://www.nomadproject.io/" rel="noopener noreferrer"&gt;Nomad&lt;/a&gt;. While Kubernetes has gained widespread adoption and popularity, Nomad provides a compelling alternative that stands out for its simplicity and efficiency. In this blog post, we'll explore the advantages of using Nomad over Kubernetes and why it might be the right choice for certain use cases.&lt;/p&gt;

&lt;p&gt;HashiCorp Nomad stands out as an easy-to-use and flexible cluster scheduler, capable of efficiently running a diverse range of workloads, including micro-services, batch processes, and both containerized and non-containerized applications. This powerful tool seamlessly integrates with HashiCorp Consul for service discovery and HashiCorp Vault for secrets management, enhancing its capabilities. Nomad's architecture is notably simpler than that of Kubernetes, as it operates as a single binary for both clients and servers, eliminating the need for external coordination or storage services. Its distributed, highly available, and operationally simple design has made it a go-to choice for companies looking to deploy containerized workloads efficiently and manage clusters at any scale, all while minimizing costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simplicity&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In terms of simplicity, Nomad offers a streamlined architecture compared to Kubernetes. While Kubernetes relies on over half a dozen interoperating services, including etcd for coordination and storage, Nomad stands out as a single binary for both clients and servers, requiring no external services. Nomad combines a lightweight resource manager and scheduler into one system, defaulting to a distributed, highly available, and operationally simple setup. This architectural simplicity makes Nomad an efficient choice for users seeking straightforward container orchestration.&lt;/p&gt;
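&lt;p&gt;To illustrate the single-binary design: the same &lt;code&gt;nomad&lt;/code&gt; executable acts as a server or a client depending on a small config file. A minimal sketch (file names, paths, and the server address are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# server.hcl - run with: nomad agent -config=server.hcl
data_dir = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 3   # expected number of servers in the cluster
}

# client.hcl - run with: nomad agent -config=client.hcl
data_dir = "/opt/nomad/data"

client {
  enabled = true
  servers = ["10.0.0.10:4647"]   # placeholder server address
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;There is no separate etcd, API server, or controller manager to deploy; the agents coordinate among themselves.&lt;/p&gt;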

&lt;p&gt;&lt;strong&gt;Wide variety of workload support&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unlike Kubernetes, which is tailored for Docker containers, Nomad boasts a broader purpose. Nomad accommodates virtualized, containerized, and standalone applications, encompassing technologies such as Docker, Java, IIS on Windows, Qemu, and more. Nomad's design emphasizes extensible drivers, and ongoing efforts aim to expand support for all prevalent drivers.&lt;/p&gt;
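&lt;p&gt;A Nomad job file selects the workload type via the task driver. For example, a minimal service job using the Docker driver (a sketch; the job name, image, and resource values are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;job "web" {
  datacenters = ["dc1"]
  type        = "service"

  group "app" {
    count = 2

    network {
      port "http" {
        to = 80   # container port
      }
    }

    task "nginx" {
      driver = "docker"   # swap for "java", "exec", "qemu", etc.

      config {
        image = "nginx:alpine"
        ports = ["http"]
      }

      resources {
        cpu    = 100   # MHz
        memory = 128   # MB
      }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Switching to a non-containerized workload is a matter of changing the &lt;code&gt;driver&lt;/code&gt; and its &lt;code&gt;config&lt;/code&gt; block, while the rest of the job specification stays the same.&lt;/p&gt;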

&lt;p&gt;&lt;strong&gt;Consistency in Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Challenges in achieving consistent deployment often arise with full Kubernetes installations in production due to their time-consuming, operationally complex, and resource-intensive nature. Although the Kubernetes community has introduced streamlined versions like minikube, kubeadm, and k3s to ease development and testing, this approach results in inconsistencies in capabilities, configuration, and management when transitioning to production.&lt;/p&gt;

&lt;p&gt;Nomad, in contrast, offers a solution to this issue. As a single lightweight binary, Nomad can be deployed consistently across various environments, including local development, production, on-premises, at the edge, and in the cloud. This uniform deployment approach ensures the same level of operational ease-of-use throughout all these environments, setting Nomad apart from the fragmented distributions of Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kubernetes, according to its documentation, supports clusters of up to 5,000 nodes and 300,000 total containers. However, as the environment expands, operational complexity increases due to the compounded constraints of its interoperating components.&lt;/p&gt;

&lt;p&gt;In contrast, Nomad has demonstrated real-world scalability, surpassing 10,000 nodes in production environments. It excels in handling multi-cluster deployments seamlessly across availability zones, regions, and data centers. Nomad's native support for multi-cluster deployments simplifies scaling applications across various data centers, regions, and clouds without introducing additional complexity. Rigorous scalability benchmarks, including the &lt;a href="https://www.hashicorp.com/c1m" rel="noopener noreferrer"&gt;1 million container challenge&lt;/a&gt; in 2016 and the &lt;a href="https://www.hashicorp.com/c2m" rel="noopener noreferrer"&gt;2 million container challenge&lt;/a&gt; in 2020, validate Nomad's architectural design and its ability to perform under the most demanding requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, Nomad proves to be a compelling alternative to Kubernetes, especially when prioritizing operational efficiency, cost-effectiveness, and streamlined maintenance. Its versatility extends to non-containerized workloads, making it a flexible choice for diverse application requirements. Notably, Nomad stands out for its user-friendly maintenance, well-suited for smaller teams of infrastructure engineers. With its simplicity and robust capabilities, Nomad offers an attractive solution that aligns with the evolving needs of efficient and cost-conscious deployment environments.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>ai</category>
      <category>kubernetes</category>
      <category>infrastructureascode</category>
    </item>
  </channel>
</rss>
