<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Omolayo Victor</title>
    <description>The latest articles on Forem by Omolayo Victor (@kingkonsole).</description>
    <link>https://forem.com/kingkonsole</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F215827%2F458d2703-22c3-47e4-81c2-660b984d49b6.jpg</url>
      <title>Forem: Omolayo Victor</title>
      <link>https://forem.com/kingkonsole</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/kingkonsole"/>
    <language>en</language>
    <item>
      <title>This came to mind after reading some articles. I had to come right in and correct some fundamental mistakes. - Placement groups work within a region and AZ. This flexibility provides a huge advantage to your infrastructure in terms of latency or reliability.</title>
      <dc:creator>Omolayo Victor</dc:creator>
      <pubDate>Fri, 30 May 2025 20:19:37 +0000</pubDate>
      <link>https://forem.com/kingkonsole/this-came-to-mind-after-reading-some-articles-i-had-to-come-right-in-and-correct-some-fundamental-2k0i</link>
      <guid>https://forem.com/kingkonsole/this-came-to-mind-after-reading-some-articles-i-had-to-come-right-in-and-correct-some-fundamental-2k0i</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/kingkonsole/aws-placement-groups-the-lego-analogy-4bkj" class="crayons-story__hidden-navigation-link"&gt;AWS Placement Groups: The Lego Analogy&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/kingkonsole" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F215827%2F458d2703-22c3-47e4-81c2-660b984d49b6.jpg" alt="kingkonsole profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/kingkonsole" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Omolayo Victor
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Omolayo Victor
                
              
              &lt;div id="story-author-preview-content-1589024" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/kingkonsole" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F215827%2F458d2703-22c3-47e4-81c2-660b984d49b6.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Omolayo Victor&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/kingkonsole/aws-placement-groups-the-lego-analogy-4bkj" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Sep 6 '23&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/kingkonsole/aws-placement-groups-the-lego-analogy-4bkj" id="article-link-1589024"&gt;
          AWS Placement Groups: The Lego Analogy
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/aws"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;aws&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/placementgroup"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;placementgroup&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/architecture"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;architecture&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/kingkonsole/aws-placement-groups-the-lego-analogy-4bkj" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;3&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/kingkonsole/aws-placement-groups-the-lego-analogy-4bkj#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            4 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>aws</category>
      <category>placementgroup</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Introducing vulne-soldier: A Modern AWS EC2 Vulnerability Remediation Tool</title>
      <dc:creator>Omolayo Victor</dc:creator>
      <pubDate>Tue, 14 Jan 2025 16:19:56 +0000</pubDate>
      <link>https://forem.com/kingkonsole/introducing-vulne-soldier-a-modern-aws-ec2-vulnerability-remediation-tool-3j7a</link>
      <guid>https://forem.com/kingkonsole/introducing-vulne-soldier-a-modern-aws-ec2-vulnerability-remediation-tool-3j7a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As cloud computing platforms like AWS become increasingly widespread, organisations are embracing them for their flexibility and autonomy in managing workloads and services. AWS, in particular, offers robust infrastructure and flexible migration services that allow businesses to take control of their infrastructure destiny, whether on-site, hybrid, or in the cloud. However, with the growing adoption of cloud services, the threat landscape also expands, necessitating effective vulnerability management tools.&lt;/p&gt;

&lt;p&gt;Most existing vulnerability management tools require manual intervention, where engineers must address each vulnerability individually. As workloads grow, more effort is required to perform these actions. AWS provides tools like AWS Inspector and AWS Systems Manager (SSM), among others, to assess and manage software vulnerabilities and unintended network exposures. Amazon Inspector, for instance, uses the SSM agent to collect software inventory from connected resources (EC2, ECR, and Lambda), scans this data, and identifies software vulnerabilities, a crucial step in vulnerability management.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Need for Automation
&lt;/h2&gt;

&lt;p&gt;In today's fast-paced digital environment, manual vulnerability management is not only time-consuming but also prone to human error. As organizations scale their cloud infrastructure, the number of vulnerabilities that need to be managed grows exponentially; this is where automation becomes essential. Automating the vulnerability remediation process ensures that security patches are applied consistently and promptly, reducing the risk of exploitation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing vulne-soldier
&lt;/h2&gt;

&lt;p&gt;Here we present &lt;a href="https://github.com/iKnowJavaScript/terraform-aws-vulne-soldier" rel="noopener noreferrer"&gt;vulne-soldier&lt;/a&gt;, an AWS EC2 vulnerability remediation tool designed to automate the process of patching nodes managed by AWS Systems Manager. With a cup of coffee in hand, we package vulne-soldier as a gift to every organization and cloud professional concerned about the security of their systems.&lt;br&gt;
Take, for example, security issues like the CrowdStrike outage (caused by a software update) or the Log4j vulnerability (CVE-2021-44228): these were critical vulnerabilities that affected many applications, and the need to patch them was urgent. With a tool like vulne-soldier working via Amazon Inspector, the process of identifying and remediating such vulnerabilities would have been automated, reducing the risk of exploitation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://registry.terraform.io/modules/iKnowJavaScript/vulne-soldier/aws/latest" rel="noopener noreferrer"&gt;vulne-soldier&lt;/a&gt; leverages Amazon Inspector findings for EC2 instances, using resource tags and finding severity to group and address vulnerabilities. It automates the remediation process by applying patches only to the affected EC2 instances, making vulnerability management as simple as possible.&lt;/p&gt;
&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated Remediation&lt;/strong&gt;: Uses AWS Systems Manager Patch Manager to automate the patching process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with Amazon Inspector&lt;/strong&gt;: Gathers findings from Amazon Inspector and groups them by severity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Targeted Patching&lt;/strong&gt;: Applies patches only to affected EC2 instances based on resource tags and severity levels.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Integration&lt;/strong&gt;: Provisions all necessary resources using Terraform, ensuring a seamless deployment process.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;AWS Inspector Findings&lt;/strong&gt;: Amazon Inspector scans EC2 instances and identifies vulnerabilities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Grouping by Severity&lt;/strong&gt;: vulne-soldier groups the findings by severity levels (e.g., CRITICAL, HIGH).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Patching&lt;/strong&gt;: Uses AWS Systems Manager Patch Manager to apply patches to the affected instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform Provisioning&lt;/strong&gt;: Deploys the necessary resources using Terraform, ensuring a consistent and repeatable setup.&lt;/li&gt;
&lt;/ol&gt;
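&lt;p&gt;The grouping step above can be sketched in a few lines of Python. Note that the finding shape and function name here are simplified illustrations, not the actual Amazon Inspector finding format:&lt;/p&gt;

```python
from collections import defaultdict

def group_findings_by_severity(findings, allowed=("CRITICAL", "HIGH")):
    """Group Inspector-style findings by severity, keeping only allowed levels."""
    groups = defaultdict(list)
    for finding in findings:
        severity = finding.get("severity")
        if severity in allowed:
            groups[severity].append(finding["instance_id"])
    return dict(groups)

# Simplified stand-ins for Inspector findings
findings = [
    {"instance_id": "i-0abc", "severity": "CRITICAL"},
    {"instance_id": "i-0def", "severity": "HIGH"},
    {"instance_id": "i-0123", "severity": "LOW"},  # filtered out
]
print(group_findings_by_severity(findings))
# {'CRITICAL': ['i-0abc'], 'HIGH': ['i-0def']}
```

&lt;p&gt;Each severity bucket can then be handed to Patch Manager as a distinct target group.&lt;/p&gt;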
&lt;h2&gt;
  
  
  Using vulne-soldier
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Download the Lambdas
&lt;/h3&gt;

&lt;p&gt;To apply the Terraform module, the compiled lambdas (.zip files) need to be available locally. They can either be downloaded from the GitHub release page or built locally.&lt;/p&gt;

&lt;p&gt;The lambdas can be downloaded manually from the &lt;a href="https://github.com/iKnowJavaScript/terraform-aws-vulne-soldier/releases" rel="noopener noreferrer"&gt;release page&lt;/a&gt; or by building the Lambda folder using Node.&lt;/p&gt;

&lt;p&gt;For local development, you can build all the lambdas at once from the &lt;code&gt;/lambda&lt;/code&gt; directory or individually using &lt;code&gt;npm zip&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Here is an example configuration for deploying the &lt;code&gt;vulne-soldier&lt;/code&gt; module:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"remediation"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"iKnowJavaScript/vulne-soldier/aws"&lt;/span&gt;
  &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1.0.2"&lt;/span&gt;

  &lt;span class="nx"&gt;name&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vulne-soldier-compliance-remediate"&lt;/span&gt;
  &lt;span class="nx"&gt;environment&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dev"&lt;/span&gt;
  &lt;span class="nx"&gt;aws_region&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
  &lt;span class="nx"&gt;account_id&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2132323212"&lt;/span&gt;
  &lt;span class="nx"&gt;lambda_log_group&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/aws/lambda/vulne-soldier-compliance-remediate"&lt;/span&gt;
  &lt;span class="nx"&gt;lambda_zip&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"../../lambda.zip"&lt;/span&gt;
  &lt;span class="nx"&gt;remediation_options&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt;                                     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
    &lt;span class="nx"&gt;reboot_option&lt;/span&gt;                              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"NoReboot"&lt;/span&gt;
    &lt;span class="nx"&gt;target_ec2_tag_name&lt;/span&gt;                        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AmazonECSManaged"&lt;/span&gt;
    &lt;span class="nx"&gt;target_ec2_tag_value&lt;/span&gt;                       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"true"&lt;/span&gt;
    &lt;span class="nx"&gt;vulnerability_severities&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"CRITICAL"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"HIGH"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;override_findings_for_target_instances_ids&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Triggering the Remediation Process
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbcgjn6cux6hnuplbhq9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbcgjn6cux6hnuplbhq9.png" alt="Vulnerability Remediation Trigger" width="800" height="245"&gt;&lt;/a&gt;&lt;br&gt;
On successful deployment, navigate to the AWS Systems Manager console and search for the SSM document created by the module (vulne-soldier-compliance-remediate-inspector-findings) or similar. You can trigger the remediation process by running the document on the affected EC2 instances. You can also create an AWS CloudWatch event rule to automate the process based on AWS Inspector findings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;vulne-soldier simplifies the process of managing and remediating vulnerabilities in AWS EC2 instances. By automating the patching process and integrating seamlessly with AWS Inspector, it enables you to scale your cloud security as your infrastructure grows with minimal manual intervention. Deploy vulne-soldier today and take control of your cloud security.&lt;/p&gt;

&lt;h3&gt;
  
  
  Links
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/iKnowJavaScript/terraform-aws-vulne-soldier" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/iKnowJavaScript/terraform-aws-vulne-soldier/releases" rel="noopener noreferrer"&gt;GitHub Releases&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://registry.terraform.io/modules/iKnowJavaScript/vulne-soldier/aws/latest" rel="noopener noreferrer"&gt;vulne-soldier Terraform Module&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.hashicorp.com/tutorials/terraform/install-cli" rel="noopener noreferrer"&gt;Terraform Installation guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>vulnerabilities</category>
      <category>terraform</category>
      <category>ec2</category>
    </item>
    <item>
      <title>Deploying AI Models with Amazon Web Services: A Practical Guide</title>
      <dc:creator>Omolayo Victor</dc:creator>
      <pubDate>Wed, 11 Dec 2024 12:22:43 +0000</pubDate>
      <link>https://forem.com/kingkonsole/deploying-ai-models-with-amazon-web-services-a-practical-guide-14bc</link>
      <guid>https://forem.com/kingkonsole/deploying-ai-models-with-amazon-web-services-a-practical-guide-14bc</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;This is my first rodeo in AI model development, and it has been an incredible learning journey. As part of Blackthorn’s just-concluded company-wide hackathon on AI and Agent Force, I embarked on a project that involved deploying an AI model that generates niche (event-related) images. This project offered an opportunity to gain in-depth knowledge about AI development, models, datasets, and the infrastructure required to support them. The results were both enlightening and rewarding, showcasing the power of modern AI and cloud technologies.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;If you’re here for the source code, it’s available on GitHub:&lt;/em&gt; &lt;a href="https://github.com/iKnowJavaScript/terraform-aws-stable-diffusion" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Why Control Your AI Infrastructure?
&lt;/h3&gt;

&lt;p&gt;One of the core advantages of deploying our AI model on AWS was gaining complete control over data handling and retention. By hosting the model on our infrastructure, we were able to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintain strict control over sensitive data, ensuring secure storage and retention policies.&lt;/li&gt;
&lt;li&gt;Implement data TTL (time-to-live) mechanisms to meet compliance requirements.&lt;/li&gt;
&lt;li&gt;Tailor the environment for optimal performance, resource allocation, and cost efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach highlighted the importance of balancing privacy, performance, and scalability in AI solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Choosing an AI Model
&lt;/h3&gt;

&lt;p&gt;We are all familiar with popular AI tools like ChatGPT, Gemini, and Claude, which showcase the power of conversational AI. While browsing the vast ocean of datasets and models available on Hugging Face was tempting, we decided to focus on leveraging an open-source model for our hackathon project. This led us to explore Stable Diffusion—a remarkable latent text-to-image diffusion model.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stable Diffusion&lt;/strong&gt; (&lt;a href="https://github.com/CompVis/stable-diffusion" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stable Diffusion stood out for its versatility as a latent text-to-image diffusion model pre-trained on a subset of the LAION-5B dataset. Some key features include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Text Encoder:&lt;/strong&gt; It uses a text encoder to condition the model on text prompts, enabling intuitive image generation from descriptions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Efficiency:&lt;/strong&gt; Lightweight enough to run on GPUs with at least 10GB VRAM, making it accessible for medium-scale deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Default Model:&lt;/strong&gt; The model "CompVis/stable-diffusion-v1-4" is pre-trained and ready for adaptation, although other versions offer varying trade-offs in terms of fidelity and inference time.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Hugging Face&lt;/strong&gt; (&lt;a href="https://huggingface.co/" rel="noopener noreferrer"&gt;Hugging Face Hub&lt;/a&gt;)&lt;br&gt;
 played a significant role in this journey. As a leading platform for sharing pre-trained AI models and datasets, Hugging Face provided access to a wide range of resources. From discovering datasets to fine-tuning models, the platform proved invaluable for quickly iterating and adapting Stable Diffusion to our project’s needs.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Infrastructure on AWS
&lt;/h3&gt;

&lt;p&gt;To host the AI model, we chose the Deep Learning OSS Nvidia Driver AMI (Amazon Linux 2) with the AMI ID &lt;code&gt;ami-002a53be89c7bb5de&lt;/code&gt;. This decision was driven by the need for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High GPU Performance:&lt;/strong&gt; The AMI’s compatibility with Nvidia drivers ensures efficient usage of GPUs for model inference.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility with Docker:&lt;/strong&gt; Using the &lt;code&gt;stable-diffusion-docker&lt;/code&gt; repository (&lt;a href="https://github.com/fboulnois/stable-diffusion-docker" rel="noopener noreferrer"&gt;GitHub Repository&lt;/a&gt;), we adapted the model for containerized deployment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Efficiency:&lt;/strong&gt; EC2’s on-demand pricing allowed us to scale resources as needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally, we explored &lt;strong&gt;&lt;em&gt;Amazon SageMaker&lt;/em&gt;&lt;/strong&gt; for training models internally and deploying them directly within the AWS ecosystem. This service provided seamless integration for training and inference, leveraging AWS’s robust infrastructure. We also explored &lt;strong&gt;&lt;em&gt;AWS Batch&lt;/em&gt;&lt;/strong&gt; to efficiently run AI tasks as batch-processing jobs, which is invaluable for handling workloads at scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  Diving into Hugging Face
&lt;/h3&gt;

&lt;p&gt;Hugging Face is a platform that provides a repository of pre-trained models, datasets, and tools for AI development. We used it to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Discover Datasets:&lt;/strong&gt; Identify relevant datasets for fine-tuning Stable Diffusion.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create Custom Datasets:&lt;/strong&gt; Curate and upload datasets with selective questions and answers, tailored to our project needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Train the Model:&lt;/strong&gt; Fine-tune Stable Diffusion to align more closely with our domain-specific requirements.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Challenges and Solutions
&lt;/h3&gt;

&lt;p&gt;The project wasn’t without hurdles. Some notable challenges and how we addressed them include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;API Gateway Timeout:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem:&lt;/strong&gt; The default API Gateway timeout caused issues when EC2 took longer to generate images.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution:&lt;/strong&gt; We implemented an S3-based placeholder system where:

&lt;ul&gt;
&lt;li&gt;The AI-generated image was stored in an S3 bucket.&lt;/li&gt;
&lt;li&gt;A response was sent back to the client with a reference to the S3 location.

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Alternative Approaches:&lt;/strong&gt; Bidirectional communication with WebSockets, queues like SQS, or real-time protocols could have mitigated this issue further.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;
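&lt;p&gt;The S3 placeholder flow described above can be sketched as follows. The bucket name, key scheme, and function name are illustrative, and the actual boto3 upload performed by the EC2 worker is indicated only as a comment so the sketch stays self-contained:&lt;/p&gt;

```python
import json
import uuid

BUCKET = "ai-banners-example"  # hypothetical bucket name

def make_placeholder_response(request_id=None):
    """Respond immediately with an S3 reference; the generated image lands there later."""
    request_id = request_id or uuid.uuid4().hex
    key = f"generated/{request_id}.png"
    # In the real flow, the EC2 worker would eventually run something like:
    #   boto3.client("s3").upload_file(local_path, BUCKET, key)
    return {
        "statusCode": 202,  # Accepted: generation is still in progress
        "body": json.dumps({"bucket": BUCKET, "key": key, "status": "PENDING"}),
    }

resp = make_placeholder_response("abc123")
print(resp["statusCode"])  # 202
```

&lt;p&gt;The client then polls or fetches the S3 object once generation completes, keeping the API Gateway request well under its timeout.&lt;/p&gt;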

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Fine-Tuning Stable Diffusion:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem:&lt;/strong&gt; Achieving accurate and domain-specific image generation required additional fine-tuning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution:&lt;/strong&gt; Leveraged Hugging Face datasets to train the model with targeted data, iterating to improve outcomes.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Latency Optimization:&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Problem:&lt;/strong&gt; Initial inference times averaged 32 seconds per banner, which may not scale well for high-volume usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Solution:&lt;/strong&gt; Optimized Docker configurations, utilized larger GPU instances during high-load periods, and explored model quantization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Open Source Contribution
&lt;/h3&gt;

&lt;p&gt;The entire infrastructure-as-code for this project has been made open source. The Terraform scripts used to create necessary AWS resources, pull the model, and set up datasets are available at the following repository: &lt;a href="https://github.com/iKnowJavaScript/terraform-aws-stable-diffusion" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Lessons Learned
&lt;/h3&gt;

&lt;p&gt;The project was a crash course in AI and cloud engineering. Key takeaways include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Model Choice Matters:&lt;/strong&gt; Different versions of Stable Diffusion offer varying benefits; understanding these trade-offs is essential.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure Optimization:&lt;/strong&gt; Balancing cost and performance is critical when scaling AI workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System Design:&lt;/strong&gt; Asynchronous processing with S3 helped circumvent API limitations, emphasizing the need for resilient architectures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration Tools:&lt;/strong&gt; Platforms like Hugging Face streamline model development and dataset curation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Future Directions
&lt;/h3&gt;

&lt;p&gt;Beyond this POC, additional considerations include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scaling Infrastructure:&lt;/strong&gt; Implement autoscaling to handle varying demand.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Communication:&lt;/strong&gt; Explore WebSocket-based communication for live updates.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring and Observability:&lt;/strong&gt; Integrate CloudWatch to monitor GPU usage, latency, and system health.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Security:&lt;/strong&gt; Implement stricter IAM roles and encryption mechanisms for data in transit and at rest.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Deploying AI models with AWS provides unparalleled flexibility and control, making it an ideal choice for custom AI projects. This journey, from Stable Diffusion exploration to creating an optimized cloud-based infrastructure, has been both challenging and rewarding. The experience has laid a strong foundation for tackling future AI endeavors and scaling them to production-ready solutions.&lt;/p&gt;

&lt;p&gt;As I look forward, I’m excited to continue exploring AI models, refining cloud-based architectures, and driving innovation in AI-powered solutions.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>hackathon</category>
    </item>
    <item>
      <title>Building High-Performance, Secure Static Websites on a Budget with AWS and Terraform</title>
      <dc:creator>Omolayo Victor</dc:creator>
      <pubDate>Wed, 15 May 2024 17:50:08 +0000</pubDate>
      <link>https://forem.com/kingkonsole/building-high-performance-secure-static-websites-on-a-budget-with-aws-and-terraform-1eed</link>
      <guid>https://forem.com/kingkonsole/building-high-performance-secure-static-websites-on-a-budget-with-aws-and-terraform-1eed</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the ever-evolving landscape of the internet, websites have transitioned from simple information pages to complex systems that power businesses and personal platforms. This journey has been marked by numerous technological advancements, from the early days of static web pages viewed in Netscape, through the PHP era, to contemporary frameworks brimming with features that make development a breeze.&lt;/p&gt;

&lt;p&gt;Today's digital age demands not only functionally rich websites but also ones that are secure, performant, and cost-effective. Amazon Web Services (AWS) has emerged as a leading cloud provider offering tools to meet these needs.&lt;/p&gt;

&lt;p&gt;Herein lies the charm of this guide: we'll embark on a clear and concise journey to deploy a robust frontend architecture using AWS, all orchestrated with the mighty Terraform—an open-source Infrastructure as Code (IaC) tool that simplifies and automates deployment.&lt;/p&gt;

&lt;p&gt;This walkthrough is tailored for individuals or businesses striving for an efficient and secure online presence without breaking the bank. We will meticulously set up storage with S3, manage domain names with Route 53, handle access with IAM, accelerate content delivery with CloudFront, and shield the site with WAF. &lt;/p&gt;

&lt;p&gt;The best part? No prior extensive knowledge is required—as long as you grasp the basics of AWS services and Terraform, you're good to go. I bet it will be the easiest way for anyone to create a secure, compliant, highly available, and highly performant website from scratch before it becomes a task for AI assistance.&lt;/p&gt;

&lt;p&gt;Feel free to leap over to the &lt;a href="https://github.com/iKnowJavaScript/terraform-static-server" rel="noopener noreferrer"&gt;completed code repository on GitHub&lt;/a&gt; &lt;em&gt;(and don't forget to star it!)&lt;/em&gt; if you're eager to get your hands on the code right away.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Basic knowledge of AWS services and Terraform.&lt;/li&gt;
&lt;li&gt;Terraform installed on your local machine. If not, you can follow this &lt;a href="https://learn.hashicorp.com/tutorials/terraform/install-cli" rel="noopener noreferrer"&gt;guide&lt;/a&gt; to install it.&lt;/li&gt;
&lt;li&gt;A domain name, if you choose to use a custom domain.&lt;/li&gt;
&lt;li&gt;An AWS account with credentials configured for Terraform to use.&lt;/li&gt;
&lt;/ul&gt;
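
&lt;p&gt;Before the steps below, you will also need an AWS provider configuration. Note that ACM certificates used by CloudFront and WAFv2 Web ACLs with &lt;code&gt;CLOUDFRONT&lt;/code&gt; scope must be created in &lt;code&gt;us-east-1&lt;/code&gt;, so the simplest setup targets that region. A minimal sketch (a hypothetical &lt;code&gt;provider.tf&lt;/code&gt;; adjust the version constraint to your environment):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 5.0"
    }
  }
}

provider "aws" {
  # us-east-1 is required for CloudFront-scoped ACM certificates and WAF Web ACLs
  region = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;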

&lt;h2&gt;
  
  
  Terraform Configuration Explained
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Define Variables
&lt;/h3&gt;

&lt;p&gt;First, we define variables that will be used throughout our Terraform configuration. These include the name of the application, the environment, and optional custom domain settings. Add these to a new &lt;code&gt;inputs.tf&lt;/code&gt; file and supply the values in an &lt;code&gt;inputs.auto.tfvars&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;&lt;span class="n"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Name of the application"&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"environment"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Name of the environment"&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"hosted_zone_domain"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;
  &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Hosted zone to add domain and CloudFront CNAME to"&lt;/span&gt;
  &lt;span class="n"&gt;nullable&lt;/span&gt;    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"create_custom_domain"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bool&lt;/span&gt;
  &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Whether to use a custom domain or not"&lt;/span&gt;
  &lt;span class="n"&gt;default&lt;/span&gt;     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;variable&lt;/span&gt; &lt;span class="s2"&gt;"custom_domain_name"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nb"&gt;type&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;string&lt;/span&gt;
  &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Custom domain name"&lt;/span&gt;
  &lt;span class="n"&gt;nullable&lt;/span&gt;    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Create S3 Bucket
&lt;/h3&gt;

&lt;p&gt;Next, in a new &lt;code&gt;main.tf&lt;/code&gt; file, we'll create an S3 bucket to store our static website content.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket"&lt;/span&gt; &lt;span class="s2"&gt;"static_bucket"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.name}-${var.environment}"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
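
&lt;p&gt;One caveat: S3 bucket names are globally unique, so &lt;code&gt;${var.name}-${var.environment}&lt;/code&gt; may already be taken by another AWS account. If you hit a naming collision, one option (a sketch using the &lt;code&gt;hashicorp/random&lt;/code&gt; provider, not part of the repository code) is to append a random suffix:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;resource "random_id" "bucket_suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "static_bucket" {
  # e.g. "myapp-prod-1a2b3c4d"
  bucket = "${var.name}-${var.environment}-${random_id.bucket_suffix.hex}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;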



&lt;h3&gt;
  
  
  Step 3: Create CloudFront Origin Access Identity
&lt;/h3&gt;

&lt;p&gt;Then, in a &lt;code&gt;cloudfront.tf&lt;/code&gt; file, we create a CloudFront Origin Access Identity (OAI). This allows CloudFront to fetch objects from our private S3 bucket on viewers' behalf.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_cloudfront_origin_access_identity"&lt;/span&gt; &lt;span class="s2"&gt;"newOAI"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;comment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"OAI for ${var.name} S3 bucket"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Create CloudFront Distribution
&lt;/h3&gt;

&lt;p&gt;Still in the &lt;code&gt;cloudfront.tf&lt;/code&gt; file, we create a CloudFront distribution to deliver our content to users. We configure it to use our S3 bucket as the origin and our OAI for access, and set up caching, HTTPS redirection, and custom error responses.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_cloudfront_distribution"&lt;/span&gt; &lt;span class="s2"&gt;"static_content_distribution"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;origin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;domain_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;static_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;bucket_regional_domain_name&lt;/span&gt;
    &lt;span class="n"&gt;origin_id&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"S3Origin"&lt;/span&gt;

    &lt;span class="n"&gt;s3_origin_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;origin_access_identity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws_cloudfront_origin_access_identity&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;newOAI&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;cloudfront_access_identity_path&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;


  &lt;span class="n"&gt;enabled&lt;/span&gt;             &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="n"&gt;is_ipv6_enabled&lt;/span&gt;     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="n"&gt;default_root_object&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"index.html"&lt;/span&gt;
  &lt;span class="n"&gt;comment&lt;/span&gt;             &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.name} - frontend deployment"&lt;/span&gt;

  &lt;span class="n"&gt;default_cache_behavior&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;allowed_methods&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"DELETE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"HEAD"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"OPTIONS"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"PATCH"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"POST"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"PUT"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;cached_methods&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"HEAD"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;target_origin_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"S3Origin"&lt;/span&gt;

    &lt;span class="n"&gt;forwarded_values&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;query_string&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;

      &lt;span class="n"&gt;cookies&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;forward&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"none"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;viewer_protocol_policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"redirect-to-https"&lt;/span&gt;
    &lt;span class="n"&gt;min_ttl&lt;/span&gt;                &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="n"&gt;default_ttl&lt;/span&gt;            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;
    &lt;span class="n"&gt;max_ttl&lt;/span&gt;                &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;86400&lt;/span&gt;
    &lt;span class="n"&gt;compress&lt;/span&gt;               &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;custom_error_response&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;error_code&lt;/span&gt;            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;404&lt;/span&gt;
    &lt;span class="n"&gt;response_page_path&lt;/span&gt;    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/index.html"&lt;/span&gt;
    &lt;span class="n"&gt;response_code&lt;/span&gt;         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
    &lt;span class="n"&gt;error_caching_min_ttl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;custom_error_response&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;error_code&lt;/span&gt;            &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;403&lt;/span&gt;
    &lt;span class="n"&gt;response_page_path&lt;/span&gt;    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/index.html"&lt;/span&gt;
    &lt;span class="n"&gt;response_code&lt;/span&gt;         &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;
    &lt;span class="n"&gt;error_caching_min_ttl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;price_class&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"PriceClass_100"&lt;/span&gt;

  &lt;span class="n"&gt;restrictions&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;geo_restriction&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;restriction_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"none"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;aliases&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;create_custom_domain&lt;/span&gt; &lt;span class="err"&gt;?&lt;/span&gt; &lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;custom_domain_name&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;null&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;


  &lt;span class="n"&gt;dynamic&lt;/span&gt; &lt;span class="s2"&gt;"viewer_certificate"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;for_each&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;create_custom_domain&lt;/span&gt; &lt;span class="err"&gt;?&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

    &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;acm_certificate_arn&lt;/span&gt;      &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dns&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;certificate_arn&lt;/span&gt;
      &lt;span class="n"&gt;ssl_support_method&lt;/span&gt;       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"sni-only"&lt;/span&gt;
      &lt;span class="n"&gt;minimum_protocol_version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"TLSv1.2_2018"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;


  &lt;span class="n"&gt;dynamic&lt;/span&gt; &lt;span class="s2"&gt;"viewer_certificate"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;for_each&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;create_custom_domain&lt;/span&gt; &lt;span class="err"&gt;?&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;cloudfront_default_certificate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;web_acl_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws_wafv2_web_acl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;web_acl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;arn&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CloudFront distribution is configured with cache behavior, custom error responses, and TLS settings. It serves content over HTTPS, redirects HTTP traffic, and compresses content for better performance.&lt;br&gt;
We also use the &lt;code&gt;create_custom_domain&lt;/code&gt; variable to decide whether to use a custom domain or the CloudFront-provided default domain name.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 5: DNS Module for Custom Domain
&lt;/h3&gt;

&lt;p&gt;Still in the &lt;code&gt;cloudfront.tf&lt;/code&gt; file, add the following module call.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;&lt;span class="n"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"dns"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;create_custom_domain&lt;/span&gt; &lt;span class="err"&gt;?&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;

  &lt;span class="n"&gt;source&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"./modules/dns"&lt;/span&gt;

  &lt;span class="n"&gt;hosted_zone_domain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hosted_zone_domain&lt;/span&gt;
  &lt;span class="n"&gt;custom_domain_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;custom_domain_name&lt;/span&gt;
  &lt;span class="n"&gt;cloudflare_domain&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws_cloudfront_distribution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;static_content_distribution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;domain_name&lt;/span&gt;
  &lt;span class="n"&gt;cloudflare_zone_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws_cloudfront_distribution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;static_content_distribution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;hosted_zone_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If a custom domain is preferred, this module sets up the necessary DNS records, provisions a TLS certificate with ACM, and associates the custom domain with the CloudFront distribution. See the &lt;a href="https://github.com/iKnowJavaScript/terraform-static-server" rel="noopener noreferrer"&gt;code on GitHub&lt;/a&gt;.&lt;/p&gt;
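
&lt;p&gt;The module itself lives under &lt;code&gt;./modules/dns&lt;/code&gt; in the repository. If you are writing it from scratch, its core is roughly an ACM certificate (issued in &lt;code&gt;us-east-1&lt;/code&gt; for CloudFront), DNS validation, and an alias record pointing at the distribution. A simplified sketch (input names mirror the module call above; the certificate validation records and the &lt;code&gt;certificate_arn&lt;/code&gt; output are omitted for brevity):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;data "aws_route53_zone" "zone" {
  name = var.hosted_zone_domain
}

resource "aws_acm_certificate" "cert" {
  domain_name       = var.custom_domain_name
  validation_method = "DNS"
}

# Alias A record: custom domain -&gt; CloudFront distribution
resource "aws_route53_record" "alias" {
  zone_id = data.aws_route53_zone.zone.zone_id
  name    = var.custom_domain_name
  type    = "A"

  alias {
    name                   = var.cloudflare_domain
    zone_id                = var.cloudflare_zone_id
    evaluate_target_health = false
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;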

&lt;h3&gt;
  
  
  Step 6: Create IAM User and Policy
&lt;/h3&gt;

&lt;p&gt;In the &lt;code&gt;iam-user.tf&lt;/code&gt; file, we create an IAM user and policy with full access to our S3 bucket. This user can be used to upload content to the bucket via the AWS CLI or a GitHub Action. Please comment below if you'd like an article on either one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_user"&lt;/span&gt; &lt;span class="s2"&gt;"s3_user"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s3_full_access_user_for_${var.name}"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_access_key"&lt;/span&gt; &lt;span class="s2"&gt;"s3_user_key"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws_iam_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_user_policy"&lt;/span&gt; &lt;span class="s2"&gt;"s3_full_access"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s3_full_access"&lt;/span&gt;
  &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws_iam_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;s3_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;

  &lt;span class="n"&gt;policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="n"&gt;Version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;
    &lt;span class="n"&gt;Statement&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Action&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"s3:*"&lt;/span&gt;
        &lt;span class="n"&gt;Effect&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
        &lt;span class="n"&gt;Resource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="n"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;static_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;arn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="s2"&gt;"${aws_s3_bucket.static_bucket.arn}/*"&lt;/span&gt;
        &lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 7: Apply S3 Bucket Policy
&lt;/h3&gt;

&lt;p&gt;In the &lt;code&gt;policy.tf&lt;/code&gt; file, we apply a policy to our S3 bucket that allows our CloudFront OAI to get objects and enforces server-side encryption for all uploaded objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_s3_bucket_policy"&lt;/span&gt; &lt;span class="s2"&gt;"s3policyforOAI"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;static_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;id&lt;/span&gt;

  &lt;span class="n"&gt;policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="n"&gt;Version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;Statement&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Action&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"s3:GetObject"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="n"&gt;Effect&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Resource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${aws_s3_bucket.static_bucket.arn}/*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;Principal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="n"&gt;AWS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ${aws_cloudfront_origin_access_identity.newOAI.id}"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"Sid"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"enforce-encryption-method"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"Effect"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"Deny"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"Principal"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"Action"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"s3:PutObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"Resource"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"${aws_s3_bucket.static_bucket.arn}/*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="s2"&gt;"Condition"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="s2"&gt;"StringNotEquals"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="s2"&gt;"s3:x-amz-server-side-encryption"&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"AES256"&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 8: Create WAF Web ACL
&lt;/h3&gt;

&lt;p&gt;Finally, in the &lt;code&gt;waf.tf&lt;/code&gt; file, we create a WAF Web ACL backed by AWS managed rulesets and attach it to our CloudFront distribution to protect the website from common web exploits.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight lua"&gt;&lt;code&gt;&lt;span class="n"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_wafv2_web_acl"&lt;/span&gt; &lt;span class="s2"&gt;"web_acl"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;name&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.name}-waf"&lt;/span&gt;
  &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"WAF ACL for ${var.name} CloudFront distribution"&lt;/span&gt;
  &lt;span class="n"&gt;scope&lt;/span&gt;       &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"CLOUDFRONT"&lt;/span&gt;

  &lt;span class="n"&gt;default_action&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;allow&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;visibility_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;cloudwatch_metrics_enabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="n"&gt;metric_name&lt;/span&gt;                &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.name}-web-acl-metric"&lt;/span&gt;
    &lt;span class="n"&gt;sampled_requests_enabled&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="n"&gt;rule&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;     &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS-AWSManagedRulesCommonRuleSet"&lt;/span&gt;
    &lt;span class="n"&gt;priority&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;

    &lt;span class="n"&gt;override_action&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;none&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;statement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;managed_rule_group_statement&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;        &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWSManagedRulesCommonRuleSet"&lt;/span&gt;
        &lt;span class="n"&gt;vendor_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS"&lt;/span&gt;
        &lt;span class="n"&gt;rule_action_override&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SizeRestrictions_BODY"&lt;/span&gt;
          &lt;span class="n"&gt;action_to_use&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;count&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;visibility_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="n"&gt;cloudwatch_metrics_enabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="n"&gt;metric_name&lt;/span&gt;                &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"AWS-AWSManagedRulesCommonRuleSet"&lt;/span&gt;
      &lt;span class="n"&gt;sampled_requests_enabled&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="o"&gt;#&lt;/span&gt; &lt;span class="n"&gt;Associate&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;WAF&lt;/span&gt; &lt;span class="n"&gt;Web&lt;/span&gt; &lt;span class="n"&gt;ACL&lt;/span&gt; &lt;span class="n"&gt;with&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;CloudFront&lt;/span&gt; &lt;span class="n"&gt;distribution&lt;/span&gt;
&lt;span class="o"&gt;#&lt;/span&gt; &lt;span class="n"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_wafv2_web_acl_association"&lt;/span&gt; &lt;span class="s2"&gt;"waf_assoc"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="o"&gt;#&lt;/span&gt;   &lt;span class="n"&gt;resource_arn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws_cloudfront_distribution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;static_content_distribution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;web_acl_id&lt;/span&gt;
&lt;span class="o"&gt;#&lt;/span&gt;   &lt;span class="n"&gt;web_acl_arn&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aws_wafv2_web_acl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;web_acl&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;arn&lt;/span&gt;
&lt;span class="o"&gt;#&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The creation of a WAF ACL adds a strong layer of security to our CloudFront distribution. We configure it with AWS Managed Rules for common threats, which is an excellent starting point for protecting against a wide range of attacks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Commands
&lt;/h2&gt;

&lt;p&gt;To complete these configurations and have everything running, apply your Terraform configuration by executing &lt;code&gt;terraform apply&lt;/code&gt; in your terminal. This command will provision all the defined resources in your AWS account.&lt;/p&gt;

&lt;p&gt;Remember to review the changes before applying them, ensuring that you understand what resources will be created or modified.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# To initialize Terraform and install required providers&lt;/span&gt;
terraform init

&lt;span class="c"&gt;# To plan and review the infrastructure changes&lt;/span&gt;
terraform plan

&lt;span class="c"&gt;# To apply changes and create the infrastructure&lt;/span&gt;
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After you confirm and apply the changes, Terraform will provide outputs containing useful information such as your website URL, S3 bucket name, and IAM user credentials, which you should keep secure.&lt;/p&gt;
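
&lt;p&gt;These outputs are not automatic: they come from &lt;code&gt;output&lt;/code&gt; blocks you define yourself. A minimal &lt;code&gt;outputs.tf&lt;/code&gt; sketch (the output names here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;output "website_url" {
  value = aws_cloudfront_distribution.static_content_distribution.domain_name
}

output "bucket_name" {
  value = aws_s3_bucket.static_bucket.bucket
}

output "deploy_user_access_key_id" {
  value = aws_iam_access_key.s3_user_key.id
}

output "deploy_user_secret_access_key" {
  value     = aws_iam_access_key.s3_user_key.secret
  sensitive = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Sensitive outputs are redacted in the CLI summary; retrieve them with &lt;code&gt;terraform output -raw deploy_user_secret_access_key&lt;/code&gt;.&lt;/p&gt;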

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Deploying a high-performing and secure static website on AWS using Terraform can significantly simplify the process of infrastructure management. The power of Infrastructure as Code (IaC) allows you to version control your infrastructure, track changes, and quickly replicate or destroy environments as needed.&lt;/p&gt;

&lt;p&gt;In this article, we've outlined the steps to set up your static hosting environment with security and performance best practices in mind. Our approach ensures that your website remains highly available, performant under load, and resilient against common web vulnerabilities at an optimized cost.&lt;/p&gt;

&lt;p&gt;Now that you have your website deployed, you can focus on uploading your content, monitoring performance, and enhancing user experience. As your needs evolve, you can update your Terraform configurations to scale your infrastructure or integrate additional services.&lt;/p&gt;

&lt;p&gt;Be sure to check out the &lt;a href="https://github.com/iKnowJavaScript/terraform-static-server" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; for the complete code and leave a star behind if you found it helpful. If you encounter any issues or have questions, don't hesitate to comment below or open an issue on GitHub. Your contributions to improving the code are welcome!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Update:&lt;br&gt;
Checkout a simplified Terraform package based on this here &lt;a href="https://github.com/iKnowJavaScript/terraform-aws-complete-static-site" rel="noopener noreferrer"&gt;https://github.com/iKnowJavaScript/terraform-aws-complete-static-site&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Links
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/iKnowJavaScript/terraform-static-server" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;Terraform AWS Provider Documentation&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/index.html" rel="noopener noreferrer"&gt;AWS Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://learn.hashicorp.com/tutorials/terraform/install-cli" rel="noopener noreferrer"&gt;Terraform Installation guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy Terraforming!!!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>architecture</category>
      <category>security</category>
      <category>frontend</category>
    </item>
    <item>
      <title>Building the Self-Hosted On-Demand Runner Infrastructure with Terraform</title>
      <dc:creator>Omolayo Victor</dc:creator>
      <pubDate>Thu, 23 Nov 2023 05:57:19 +0000</pubDate>
      <link>https://forem.com/kingkonsole/building-the-self-hosted-on-demand-runner-infrastructure-with-terraform-2nej</link>
      <guid>https://forem.com/kingkonsole/building-the-self-hosted-on-demand-runner-infrastructure-with-terraform-2nej</guid>
      <description>&lt;p&gt;In the previous installment, we established a robust foundation of components that will power our self-hosted on-demand runner infrastructure. These components, including the GitHub App, API Gateway, Lambda functions, SQS, S3, EC2, SSM Parameters, Amazon EventBridge, and CloudWatch, work in concert to provide a scalable and cost-effective solution for GitHub runners.&lt;/p&gt;

&lt;p&gt;Now, we'll configure the GitHub App to serve as a nexus between GitHub and AWS, triggering the creation or removal of EC2 instances based on webhook events. This dynamic mechanism ensures that the number of available runners always aligns with the current workload, optimizing resource utilization and costs.&lt;/p&gt;

&lt;p&gt;We'll also delve into the practical implementation of this infrastructure using Terraform, an infrastructure as code (IaC) tool that streamlines the provisioning and management of AWS resources. With Terraform, we'll automate the deployment of EC2 instances, VPCs, and IAM roles, ensuring consistent and repeatable infrastructure setups.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create GitHub App
&lt;/h2&gt;

&lt;p&gt;To begin, navigate to GitHub and establish a new app. Bear in mind that you have the option to create apps for either your organization or a specific user. For the time being, we'll use an organization-level app.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 1: Create a GitHub App
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Access GitHub and navigate to the "Settings" section for your organization.&lt;/li&gt;
&lt;li&gt;Select "Developer settings" from the left-hand sidebar.&lt;/li&gt;
&lt;li&gt;Click on "New GitHub App" in the "GitHub Apps" section.&lt;/li&gt;
&lt;li&gt;Provide a name for your app, such as "Self-Hosted Runner App".&lt;/li&gt;
&lt;li&gt;Enter a website URL for your app (the field is mandatory, but the URL itself is not used by this module).&lt;/li&gt;
&lt;li&gt;Uncheck the "Enable webhook" option for now, as we will configure this later or create an alternative webhook.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Step 2: Define App Permissions
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Scroll down to the "Permissions" section and define the following permissions for all runners:

&lt;ul&gt;
&lt;li&gt;Repository: Actions: Read-only (check for queued jobs)&lt;/li&gt;
&lt;li&gt;Repository: Checks: Read-only (receive events for new builds)&lt;/li&gt;
&lt;li&gt;Repository: Metadata: Read-only (default/required)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Next, define the following permissions specifically for repo-level runners:

&lt;ul&gt;
&lt;li&gt;Repository: Administration: Read &amp;amp; write (to register runner)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Finally, define the following permissions specifically for organization-level runners:

&lt;ul&gt;
&lt;li&gt;Organization: Self-hosted runners: Read &amp;amp; write (to register runner)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Step 3: Save and Note App Details
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Click the "Save" button to finalize the app creation process.&lt;/li&gt;
&lt;li&gt;On the General page, make a note of the "App ID" and "Client ID" parameters. These will be used later in the process.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Step 4: Generate Private Key
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Generate a new private key from the app's settings page on GitHub.&lt;/li&gt;
&lt;li&gt;Save the generated private key as &lt;code&gt;app.private-key.pem&lt;/code&gt;. This file will be used later to authenticate the app with GitHub.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create Infrastructure with Terraform
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;p&gt;The following tools are required to perform this step:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A GitHub account with an organization or personal access token&lt;/li&gt;
&lt;li&gt;An AWS account with VPC and subnets already created&lt;/li&gt;
&lt;li&gt;Terraform installed and configured on your system&lt;/li&gt;
&lt;li&gt;Node.js and Yarn (for Lambda development)&lt;/li&gt;
&lt;li&gt;Bash shell or compatible shell&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once you have installed and configured all the required tools, you are ready to proceed to the next step, where we will create the necessary AWS resources using Terraform.&lt;/p&gt;

&lt;h4&gt;
  
  
  Creating Resources
&lt;/h4&gt;

&lt;p&gt;We'll utilize a highly configurable and well-maintained Terraform module, &lt;a href="https://github.com/philips-labs/terraform-aws-github-runner"&gt;terraform-aws-github-runner&lt;/a&gt;, to streamline the implementation of our infrastructure. It offers various approaches through internal modules, allowing you to adapt the infrastructure to your project's specific needs. In this article, we'll focus on implementing the simple runner configuration provided by the module, which aligns with this article's requirements.&lt;/p&gt;

&lt;p&gt;All Terraform code is available &lt;a href="https://github.com/iKnowJavaScript/terraform-aws-runner-resources"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Before diving into the intricacies of the runner infrastructure, we'll begin by downloading the essential lambda function code required by the module to dynamically create and destroy our resources as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--CzEcwArW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xvzece00h5xh1idaz1g1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--CzEcwArW--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xvzece00h5xh1idaz1g1.png" alt="Lamda module code snipet" width="800" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, we'll create our runner module, which will provision the Lambda functions (webhook, scale-up, scale-down, syncer), SQS queues (workflow-queue, queue-builds), an EC2 launch template, and S3 buckets, among other resources.&lt;/p&gt;
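&lt;p&gt;A minimal sketch of such a module call; the input names below follow the philips-labs module's README, while the region, prefix, and variable values are placeholders you would adapt to your environment:&lt;/p&gt;

```hcl
# Sketch of the runner module call (values are placeholders).
module "runners" {
  source = "philips-labs/github-runner/aws"

  aws_region = "us-east-1"
  vpc_id     = var.vpc_id
  subnet_ids = var.subnet_ids
  prefix     = "gh-ci"

  # Details noted when creating the GitHub App earlier.
  github_app = {
    id             = var.github_app_id
    key_base64     = var.github_app_key_base64
    webhook_secret = var.webhook_secret
  }

  # Lambda artifacts downloaded in the previous step.
  webhook_lambda_zip                = "lambdas-download/webhook.zip"
  runner_binaries_syncer_lambda_zip = "lambdas-download/runner-binaries-syncer.zip"
  runners_lambda_zip                = "lambdas-download/runners.zip"

  enable_organization_runners = true
}
```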

&lt;h5&gt;
  
  
  Lambda Functions: The Heart of the Infrastructure
&lt;/h5&gt;

&lt;p&gt;Lambda functions serve as the brains of our infrastructure, orchestrating various operations and ensuring seamless responsiveness to changing demands. Each Lambda function fulfills a specific role:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;webhook: This function acts as the gateway for incoming webhook events, meticulously verifying their authenticity and ensuring they align with the expected criteria. It only processes events related to &lt;code&gt;workflow_job&lt;/code&gt;, status &lt;code&gt;queued&lt;/code&gt;, and matching the runner labels.&lt;/li&gt;
&lt;li&gt;scale-up: Continuously monitoring the SQS queue, this Lambda function awaits incoming events. Upon receiving an event, it performs a series of checks to determine whether a new EC2 spot instance is required to accommodate the workload. If so, it uses the predefined EC2 launch template to spin up new instances, expanding the runner pool.&lt;/li&gt;
&lt;li&gt;scale-down: Triggered at a regular interval by Amazon EventBridge, this Lambda function monitors the runner pool for idle runners. When one is detected, it removes the runner from GitHub and terminates the corresponding EC2 instance, optimizing resource utilization and minimizing costs.&lt;/li&gt;
&lt;li&gt;syncer: Downloading the GitHub Actions runner distribution can occasionally be time-consuming, sometimes exceeding ten minutes. To alleviate this bottleneck, a dedicated Lambda function synchronizes the runner binary from GitHub to a designated S3 bucket. EC2 instances then fetch the distribution from that bucket rather than relying on slower internet downloads.&lt;/li&gt;
&lt;/ul&gt;
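&lt;p&gt;The webhook function's authenticity check is worth understanding: GitHub signs each delivery with an HMAC-SHA256 of the request body using the app's webhook secret and sends it in the X-Hub-Signature-256 header. This standalone Python sketch illustrates the idea; the module's actual Lambdas are written in TypeScript and handle more cases:&lt;/p&gt;

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Return True if signature_header matches HMAC-SHA256(secret, body)."""
    expected = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature_header)

def should_enqueue(event: dict) -> bool:
    """Only queued workflow_job events with a matching runner label are processed."""
    job = event.get("workflow_job", {})
    return (
        event.get("action") == "queued"
        and "self-hosted" in job.get("labels", [])
    )
```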

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dCnU4_6u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xzgdzpepjl770zkn6zmp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dCnU4_6u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xzgdzpepjl770zkn6zmp.png" alt="Runner code snipet" width="800" height="1416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  GitHub App Module: Bridging the Gap Between GitHub and AWS
&lt;/h5&gt;

&lt;p&gt;To establish seamless communication between GitHub and AWS, a GitHub app module is meticulously implemented. This module integrates seamlessly with our GitHub app, enabling the creation of an API gateway. This gateway serves as the intermediary, securely handling webhook events sent by the GitHub App over HTTPS. Additionally, it relays responses back to the GitHub app, ensuring a robust and reliable communication channel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L-eqdryb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gjxbb4eg7bimk3p49qjs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L-eqdryb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gjxbb4eg7bimk3p49qjs.png" alt="GitHub app code snipet" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this comprehensive guide, we have embarked on a journey to build a scalable and cost-effective self-hosted runner infrastructure on AWS using Terraform, an infrastructure as code (IaC) tool. Through this exploration, we have delved into the key components that power this infrastructure, including the GitHub App, API Gateway, Lambda functions, SQS, S3, EC2, SSM Parameters, Amazon EventBridge, and CloudWatch.&lt;/p&gt;

&lt;p&gt;By leveraging these components, we ensure optimal resource utilization and cost-efficiency. We have also explored the practical implementation of this infrastructure using Terraform, automating the provisioning and management of AWS resources for consistent and repeatable setups.&lt;/p&gt;

&lt;p&gt;The self-hosted runner infrastructure we have created provides a powerful foundation for organizations seeking to enhance their CI/CD capabilities and streamline their software development processes as we currently do at &lt;a href="https://blackthorn.io/"&gt;Blackthorn&lt;/a&gt;. By harnessing the scalability and flexibility of AWS, organizations can effectively manage runner capacity and costs, ensuring that their CI/CD infrastructure aligns with their evolving needs.&lt;/p&gt;

&lt;p&gt;As you embark on your own journey to build self-hosted runner infrastructures, remember the key takeaways from this guide:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Utilize Infrastructure as Code (IaC): Leverage IaC tools like Terraform to automate the provisioning and management of AWS resources, ensuring consistent and repeatable infrastructure setups.&lt;/li&gt;
&lt;li&gt;Design for Scalability: Build your infrastructure with scalability in mind, employing components like Lambda functions and SQS to dynamically adjust runner capacity based on workload demands.&lt;/li&gt;
&lt;li&gt;Optimize Resource Utilization: Monitor resource utilization metrics and proactively scale your infrastructure to avoid resource bottlenecks and unnecessary costs.&lt;/li&gt;
&lt;li&gt;Embrace Continuous Improvement: Continuously evaluate and refine your infrastructure to adapt to changing requirements and optimize performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By adhering to these principles, you can effectively build and maintain a self-hosted runner infrastructure that empowers your organization to achieve continuous delivery excellence.&lt;/p&gt;

&lt;p&gt;Links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All Terraform code is available &lt;a href="https://github.com/iKnowJavaScript/terraform-aws-runner-resources"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/philips-labs/terraform-aws-github-runner"&gt;terraform-aws-github-runner
&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>cicd</category>
      <category>infrastructureascode</category>
      <category>aws</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Scaling GitHub Actions Runners on AWS: A Cost-Effective and Scalable Approach</title>
      <dc:creator>Omolayo Victor</dc:creator>
      <pubDate>Tue, 14 Nov 2023 16:51:41 +0000</pubDate>
      <link>https://forem.com/kingkonsole/scaling-github-actions-runners-on-aws-a-cost-effective-and-scalable-approach-12en</link>
      <guid>https://forem.com/kingkonsole/scaling-github-actions-runners-on-aws-a-cost-effective-and-scalable-approach-12en</guid>
      <description>&lt;p&gt;In the realm of software development, continuous integration (CI) and continuous delivery (CD) have become indispensable practices for ensuring the quality and timely release of software applications. GitHub Actions, a cloud-based CI/CD platform, has emerged as a popular choice among developers for its ease of use and flexibility. However, as the number of repositories and workflows under management grows, the need for scalable and cost-effective runner infrastructure becomes increasingly important.&lt;/p&gt;

&lt;p&gt;To address this challenge, we have developed a self-hosted on-demand runner infrastructure on AWS that utilizes a combination of GitHub, Amazon Web Services (AWS), and other tools. This infrastructure enables us to scale our runner capacity up or down based on demand, ensuring that we have enough runners to handle the workload without incurring unnecessary costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Design Considerations
&lt;/h2&gt;

&lt;p&gt;In designing the self-hosted on-demand runner infrastructure, we focused on several key considerations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cost-effectiveness:&lt;/strong&gt; The infrastructure should minimize cloud resource consumption and avoid unnecessary costs when not in use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability:&lt;/strong&gt; The infrastructure should be able to handle fluctuating workloads by scaling up or down the number of runners dynamically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reliability:&lt;/strong&gt; The infrastructure should be highly available and ensure consistent execution of workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ease of management:&lt;/strong&gt; The infrastructure should be easy to deploy, manage, and maintain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Components of the Infrastructure
&lt;/h2&gt;

&lt;p&gt;The key components of our self-hosted on-demand runner infrastructure include:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub App:&lt;/strong&gt; This GitHub App acts as a bridge between GitHub and AWS, receiving webhook events from GitHub repositories and triggering the creation or removal of EC2 instances based on those events.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API Gateway:&lt;/strong&gt; API Gateway serves as an HTTP endpoint for the webhook events sent by the GitHub App, providing a secure and reliable channel for communication.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lambda Functions:&lt;/strong&gt; Lambda functions are the workhorses of the infrastructure, handling the incoming webhook events, verifying their authenticity, and triggering the scaling up or scaling down of EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SQS (Simple Queue Service):&lt;/strong&gt; SQS acts as a message queue, decoupling the receipt of webhook events from their processing. This ensures that events are not lost if there are temporary delays in processing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 (Simple Storage Service):&lt;/strong&gt; S3 serves as a repository for storing the runner binaries that are downloaded from GitHub. This allows EC2 instances to fetch the runner binaries locally instead of downloading them from the internet, improving performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EC2 (Elastic Compute Cloud):&lt;/strong&gt; EC2 instances provide the computational resources for running GitHub Actions workflows. The number of EC2 instances is dynamically scaled up or down based on the demand for runners.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SSM Parameters:&lt;/strong&gt; SSM Parameters store configuration information for the runners, registration tokens, and secrets for the Lambdas. This centralized approach simplifies management and access control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon EventBridge:&lt;/strong&gt; Amazon EventBridge schedules Lambda functions to execute at regular intervals, ensuring that idle runners are detected and terminated when no longer needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CloudWatch:&lt;/strong&gt; CloudWatch provides real-time monitoring of the resources and applications in the AWS environment, enabling us to collect and track metrics for debugging and performance optimization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RHyjPqVn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35ki2q10a8go7tp4f4s4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RHyjPqVn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35ki2q10a8go7tp4f4s4.png" alt="Architectural overview" width="800" height="762"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Workflow and Scalability
&lt;/h2&gt;

&lt;p&gt;The self-hosted on-demand runner infrastructure operates seamlessly to handle the scaling of runners based on workflow demands. When a workflow is triggered on a pull request action, the GitHub App sends a webhook event to the API Gateway, which in turn triggers the webhook Lambda function. The Lambda function verifies the event authenticity, processes it, and posts it to an SQS queue.&lt;/p&gt;

&lt;p&gt;The scale-up Lambda function monitors the SQS queue for new events and evaluates various conditions to determine if a new EC2 spot instance needs to be created. If a new instance is required, the Lambda function requests a JIT configuration or registration token from GitHub, creates an EC2 spot instance using the launch template and user data script, and fetches the runner binary from the S3 bucket for installation. The runner registers with GitHub and starts executing workflows once it is fully configured.&lt;/p&gt;

&lt;p&gt;In contrast, the scale-down Lambda function is triggered by Amazon EventBridge at regular intervals to check for idle runners. If a runner is not busy, the Lambda function removes it from GitHub and terminates the corresponding EC2 instance, ensuring efficient resource utilization and cost savings.&lt;/p&gt;
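&lt;p&gt;The core of the scale-up decision can be illustrated with a small, hypothetical helper; the actual Lambda logic is richer (it is written in TypeScript and also considers runner labels and registration state), but the arithmetic is the same:&lt;/p&gt;

```python
# Illustrative sketch of the scale-up decision; names are hypothetical.
def instances_to_launch(queued_jobs: int, idle_runners: int,
                        running: int, max_runners: int) -> int:
    """How many new spot instances to request for the queued jobs."""
    needed = queued_jobs - idle_runners  # jobs no idle runner can absorb
    headroom = max_runners - running     # cap enforced by the pool limit
    return max(0, min(needed, headroom))
```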

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;So far, we've defined a solid foundation of components that are crucial for building a cost-effective and scalable solution for GitHub runners. These components include the GitHub App, API Gateway, Lambda functions, SQS, S3, EC2, SSM Parameters, Amazon EventBridge, and CloudWatch. These components work together to provide a robust and dynamic infrastructure that can seamlessly handle fluctuating workloads.&lt;/p&gt;

&lt;p&gt;Next, we'll embark on the practical implementation of this infrastructure using Terraform, an infrastructure as code (IaC) tool. Terraform will enable us to automate the provisioning of AWS resources, ensuring consistency and repeatability in our infrastructure setup. We'll delve into the process of creating the necessary AWS resources, including EC2 instances, VPCs, and IAM roles.&lt;br&gt;
We'll also configure the GitHub App to act as a bridge between GitHub and AWS, triggering the creation or removal of EC2 instances based on webhook events. This will ensure that we always have the right number of runners available to handle the current workload.&lt;/p&gt;

&lt;p&gt;Finally, we'll set up Lambda functions to orchestrate the scaling of runner instances. Lambda functions will be responsible for verifying the authenticity of incoming webhook events, processing them, and triggering the scaling up or scaling down of EC2 instances based on demand. This will ensure that our infrastructure is always optimized for cost and performance.&lt;br&gt;
By the end of this series, you'll have a comprehensive understanding of how to build and deploy a scalable and cost-effective runner infrastructure on AWS using Terraform. You'll be able to leverage this infrastructure to improve your CI/CD performance and reduce your infrastructure costs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay tuned for Part 2: Building the Self-Hosted On-Demand Runner Infrastructure with Terraform&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>aws</category>
      <category>githubactions</category>
      <category>architecture</category>
    </item>
    <item>
      <title>AWS Placement Groups: The Lego Analogy</title>
      <dc:creator>Omolayo Victor</dc:creator>
      <pubDate>Wed, 06 Sep 2023 19:21:53 +0000</pubDate>
      <link>https://forem.com/kingkonsole/aws-placement-groups-the-lego-analogy-4bkj</link>
      <guid>https://forem.com/kingkonsole/aws-placement-groups-the-lego-analogy-4bkj</guid>
      <description>&lt;p&gt;AWS has a lot of exciting features which you can utilise while building highly optimised applications, in this article, we’ll explore Placement Groups and it’s strategies to architect better optimised application, looking at it from Lego perspective.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Placement Group?
&lt;/h2&gt;

&lt;p&gt;Placement groups let you choose the logical placement of your EC2 instances to achieve more optimised communication, performance, or fault tolerance within a Region.&lt;/p&gt;

&lt;p&gt;At heart, working with placement groups is similar to creating intricate structures with Lego blocks. Each block represents a component of your design, and how you place these blocks together can impact the overall stability and performance of your creation.&lt;br&gt;
In the world of AWS, similar principles apply when it comes to deploying your infrastructure. AWS Placement Groups offer a way to control how Amazon EC2 instances are placed within a region, much like the way you strategically position Lego blocks for optimal structural integrity.&lt;br&gt;
By understanding this Lego analogy, you can grasp the concept of AWS Placement Groups more easily and make informed decisions when it comes to architecting your AWS environment.&lt;/p&gt;

&lt;p&gt;Choosing the appropriate Placement Group strategy may result in optimal performance and reliability in your AWS environment. AWS provides three placement strategies which you can use based on the type of your workload:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-limitations-general:~:text=Cluster%20%E2%80%93%20Packs%20instances%20close%20together%20inside%20an%20Availability%20Zone.%20This%20strategy%20enables%20workloads%20to%20achieve%20the%20low%2Dlatency%20network%20performance%20necessary%20for%20tightly%2Dcoupled%20node%2Dto%2Dnode%20communication%20that%20is%20typical%20of%20high%2Dperformance%20computing%20(HPC)%20applications." rel="noopener noreferrer"&gt;Cluster Placement Groups&lt;/a&gt; - A logical grouping of instances into a single AZ&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-limitations-general:~:text=Partition%20%E2%80%93%20Spreads%20your,Cassandra%2C%20and%20Kafka." rel="noopener noreferrer"&gt;Partition Placement Groups&lt;/a&gt; - Spread instances across logical partitions to reduce likelihood of correlated hardware failure for your application.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-limitations-general:~:text=Spread%20%E2%80%93%20Strictly%20places%20a%20small%20group%20of%20instances%20across%20distinct%20underlying%20hardware%20to%20reduce%20correlated%20failures." rel="noopener noreferrer"&gt;Spread Placement Groups&lt;/a&gt; - Layout instances across distinct underlying hardware.&lt;/li&gt;
&lt;/ul&gt;
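&lt;p&gt;If you manage infrastructure as code, all three strategies map to a single Terraform resource type. A minimal sketch, with hypothetical names:&lt;/p&gt;

```hcl
# One aws_placement_group resource per strategy (names are placeholders).
resource "aws_placement_group" "cluster" {
  name     = "low-latency-cluster"
  strategy = "cluster"
}

resource "aws_placement_group" "partition" {
  name            = "kafka-partitions"
  strategy        = "partition"
  partition_count = 3 # up to 7 partitions per Availability Zone
}

resource "aws_placement_group" "spread" {
  name     = "critical-spread"
  strategy = "spread"
}
```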

&lt;h2&gt;
  
  
  Cluster Placement Group 
&lt;/h2&gt;

&lt;p&gt;Imagine constructing a massive Lego tower where each block is tightly integrated with one another. This technique optimizes communication between blocks and ensures stability. In AWS, Cluster Placement Groups offer similar benefits by maximizing network performance and minimizing inter-instance latency within a single AZ.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58tyi5vtounvx17h829h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58tyi5vtounvx17h829h.png" alt="Cluster Placement Group" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Cluster Placement Group is recommended when the instances in the group benefit from close proximity and exchange heavy network traffic, as it achieves low latency and high network throughput.&lt;/p&gt;

&lt;h2&gt;
  
  
  Partition Placement Group
&lt;/h2&gt;

&lt;p&gt;Picture creating a partitioned Lego structure where each section represents a specific function or workload. Similarly, Partition Placement Groups in AWS let you create logical partitions, ensuring that instances in different partitions do not share underlying hardware. This is beneficial for large distributed applications that need to contain the impact of hardware failures, and a partition group can span multiple AZs in the same Region.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvptab6unxn6apfy0fpmp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvptab6unxn6apfy0fpmp.png" alt="Partition Placement Group" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Partition placement is typical for large distributed and replicated workloads like Kafka, as it distributes the instances evenly across the number of partitions that you specify. You can also launch instances into a specific partition to have more control over where they are placed.&lt;/p&gt;
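&lt;p&gt;In Terraform terms, pinning an instance to a specific partition can be sketched like this; the resource names, AMI variable, and instance type are placeholders:&lt;/p&gt;

```hcl
# Hypothetical example: a broker pinned to partition 2 of a partition group.
resource "aws_placement_group" "kafka" {
  name            = "kafka-partitions"
  strategy        = "partition"
  partition_count = 3
}

resource "aws_instance" "broker" {
  ami                        = var.ami_id # placeholder
  instance_type              = "m5.large"
  placement_group            = aws_placement_group.kafka.name
  placement_partition_number = 2 # omit to let AWS distribute evenly
}
```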

&lt;h2&gt;
  
  
  Spread Placement Group
&lt;/h2&gt;

&lt;p&gt; Imagine creating a Lego structure by distributing each block evenly across multiple plates. This technique helps minimize the impact of hardware failures, ensuring that instances are placed on separate underlying infrastructure. In AWS, Spread Placement Groups follow a similar logic, maximizing availability by spreading instances across distinct hardware.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3n080ych9vkvmv52llf2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3n080ych9vkvmv52llf2.png" alt="Spread Placement Group" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, each EC2 instance is located on distinct underlying hardware. Spreading the group across isolated AZs helps reduce simultaneous failures within the application while also promoting independent scalability across the group.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;At &lt;a href="https://blackthorn.io" rel="noopener noreferrer"&gt;Blackthorn&lt;/a&gt;, we utilize the Cluster Placement Group to enhance the performance and reliability of our Server, Worker, Database, and Caching components. By ensuring close physical proximity and minimising network latency, they optimize communication and coordination among these components, resulting in improved application stability, scalability, and user experience.&lt;/p&gt;

&lt;p&gt;Understanding AWS Placement Groups can be simplified by using the Lego analogy. By visualizing how Lego structures are built, you can grasp the concept of grouping EC2 instances together for better performance and reliability. Just as master builders carefully select and position Lego blocks, AWS users can make informed decisions about their infrastructure design by leveraging the power of AWS Placement Groups.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-limitations-general" rel="noopener noreferrer"&gt;Placement group rules and Limitation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#concepts-placement-groups" rel="noopener noreferrer"&gt;Creating Placement Group &lt;/a&gt;- Follow through&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope this has been interesting for you to read. Let me know what you think, and share some other insightful features too!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>placementgroup</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
