<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Dulanjana Lakmal</title>
    <description>The latest articles on Forem by Dulanjana Lakmal (@lakmalya).</description>
    <link>https://forem.com/lakmalya</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1007500%2Fa6ea8aa0-a43d-41ee-b538-304feff3a3ec.jpeg</url>
      <title>Forem: Dulanjana Lakmal</title>
      <link>https://forem.com/lakmalya</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/lakmalya"/>
    <language>en</language>
    <item>
      <title>Amazon EKS to Deprecate AL2 AMIs: How to Migrate with eksctl</title>
      <dc:creator>Dulanjana Lakmal</dc:creator>
      <pubDate>Tue, 23 Sep 2025 11:48:33 +0000</pubDate>
      <link>https://forem.com/lakmalya/amazon-eks-to-deprecate-al2-amis-how-to-migrate-with-eksctl-24ja</link>
      <guid>https://forem.com/lakmalya/amazon-eks-to-deprecate-al2-amis-how-to-migrate-with-eksctl-24ja</guid>
      <description>&lt;p&gt;Amazon Elastic Kubernetes Service (EKS) has announced a major change:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;After November 26, 2025&lt;/strong&gt;, Amazon EKS will &lt;strong&gt;no longer publish EKS-optimized Amazon Linux 2 (AL2) AMIs&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kubernetes 1.32&lt;/strong&gt; will be the &lt;strong&gt;final version&lt;/strong&gt; with AL2 AMI support.&lt;/li&gt;
&lt;li&gt;From &lt;strong&gt;Kubernetes 1.33 onwards&lt;/strong&gt;, EKS will only release &lt;strong&gt;Amazon Linux 2023 (AL2023)&lt;/strong&gt; and &lt;strong&gt;Bottlerocket&lt;/strong&gt; based AMIs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means that organizations running EKS clusters with &lt;strong&gt;AL2 worker nodes&lt;/strong&gt; must &lt;strong&gt;migrate&lt;/strong&gt; before upgrading beyond Kubernetes 1.32.&lt;/p&gt;
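&lt;p&gt;To check whether your cluster is affected, you can list your managed node groups and inspect each one's AMI type (the cluster and node group names below are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List managed node groups in the cluster
aws eks list-nodegroups --cluster-name my-cluster

# An amiType of AL2_x86_64 (or AL2_ARM_64) means the group needs migration
aws eks describe-nodegroup --cluster-name my-cluster \
  --nodegroup-name my-nodegroup --query 'nodegroup.amiType' --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;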




&lt;h2&gt;
  
  
  Why is Amazon EKS Ending AL2 AMIs?
&lt;/h2&gt;

&lt;p&gt;Amazon Linux 2 has been the default for many workloads on AWS for years. However, AWS is now moving towards more modern operating systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Linux 2023 (AL2023):&lt;/strong&gt; Successor to AL2, providing long-term support, predictable release cycles, and better security patching.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bottlerocket:&lt;/strong&gt; A container-optimized OS with an immutable root filesystem and reduced attack surface.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both offer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Improved security posture&lt;/strong&gt; (predictable updates, hardened defaults).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance optimizations&lt;/strong&gt; for cloud-native workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Future-proofing&lt;/strong&gt; for Kubernetes versions beyond 1.32.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What Does This Mean for EKS Users?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;If your clusters run &lt;strong&gt;Amazon Linux 2 node groups&lt;/strong&gt;, you can continue using them up to &lt;strong&gt;Kubernetes 1.32&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;When you plan to upgrade to &lt;strong&gt;Kubernetes 1.33 or later&lt;/strong&gt;, you must migrate your nodes to &lt;strong&gt;AL2023 or Bottlerocket&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;After &lt;strong&gt;Nov 26, 2025&lt;/strong&gt;, there will be &lt;strong&gt;no new AL2 AMIs or security patches&lt;/strong&gt;, even if you stay on older Kubernetes versions.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Migration Strategy with eksctl
&lt;/h2&gt;

&lt;p&gt;The good news: you don’t need to rebuild your cluster. With &lt;strong&gt;eksctl&lt;/strong&gt;, you can &lt;strong&gt;replace node groups&lt;/strong&gt; or &lt;strong&gt;upgrade them in place&lt;/strong&gt; while keeping your control plane and workloads intact.&lt;/p&gt;




&lt;h3&gt;
  
  
  🔹 Option 1: Create a New AL2023 Node Group
&lt;/h3&gt;

&lt;p&gt;You can add a new node group running AL2023 alongside your existing AL2 group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create nodegroup &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; my-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; al2023-ng &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--node-type&lt;/span&gt; t3.medium &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes-min&lt;/span&gt; 2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes-max&lt;/span&gt; 5 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--managed&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--ami-family&lt;/span&gt; AmazonLinux2023
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Drain old nodes&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   kubectl drain &amp;lt;old-node-name&amp;gt; &lt;span class="nt"&gt;--ignore-daemonsets&lt;/span&gt; &lt;span class="nt"&gt;--delete-local-data&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
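&lt;p&gt;If the old group has many nodes, you can drain them all by their managed node group label (the group name &lt;code&gt;al2-ng&lt;/code&gt; is assumed here):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Drain every node belonging to the old node group, one at a time
for node in $(kubectl get nodes -l eks.amazonaws.com/nodegroup=al2-ng \
    -o jsonpath='{.items[*].metadata.name}'); do
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;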



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Delete old node group&lt;/strong&gt; once workloads are migrated:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   eksctl delete nodegroup &lt;span class="nt"&gt;--cluster&lt;/span&gt; my-cluster &lt;span class="nt"&gt;--name&lt;/span&gt; al2-ng
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  🔹 Option 2: Update Existing Node Group to AL2023
&lt;/h3&gt;

&lt;p&gt;If you want to upgrade in place:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;cluster.yaml (snippet):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eksctl.io/v1alpha5&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterConfig&lt;/span&gt;

&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-cluster&lt;/span&gt;
  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ap-southeast-1&lt;/span&gt;

&lt;span class="na"&gt;nodeGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;al2023-ng&lt;/span&gt;
    &lt;span class="na"&gt;instanceType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;t3.medium&lt;/span&gt;
    &lt;span class="na"&gt;desiredCapacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
    &lt;span class="na"&gt;amiFamily&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AmazonLinux2023&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the upgrade:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl upgrade nodegroup &lt;span class="nt"&gt;--config-file&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cluster.yaml &lt;span class="nt"&gt;--name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;al2023-ng
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will &lt;strong&gt;cordon, drain, and replace nodes&lt;/strong&gt; with AL2023-based ones.&lt;/p&gt;




&lt;h3&gt;
  
  
  🔹 Option 3: Move to Bottlerocket
&lt;/h3&gt;

&lt;p&gt;For security-focused or container-only workloads, Bottlerocket is a strong choice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;eksctl create nodegroup &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; my-cluster &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; bottlerocket-ng &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--node-type&lt;/span&gt; t3.medium &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes&lt;/span&gt; 3 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes-min&lt;/span&gt; 2 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--nodes-max&lt;/span&gt; 5 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--managed&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--ami-family&lt;/span&gt; Bottlerocket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Verifying Migration
&lt;/h2&gt;

&lt;p&gt;After migration, confirm that nodes are running the new OS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl get nodes &lt;span class="nt"&gt;-o&lt;/span&gt; wide
kubectl describe node &amp;lt;node-name&amp;gt; | &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="s2"&gt;"OS Image"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected outputs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AL2023: &lt;code&gt;Amazon Linux 2023&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Bottlerocket: &lt;code&gt;Bottlerocket OS&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
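&lt;p&gt;A quick way to see the OS image for every node at once is a jsonpath query over the &lt;code&gt;status.nodeInfo.osImage&lt;/code&gt; field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Print each node name alongside its OS image
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.osImage}{"\n"}{end}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;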




&lt;h2&gt;
  
  
  Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Test first in staging&lt;/strong&gt; before production.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use rolling upgrades&lt;/strong&gt;: never drain all nodes at once.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automate configs&lt;/strong&gt;: keep &lt;code&gt;cluster.yaml&lt;/code&gt; under version control for repeatable upgrades.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitor workloads&lt;/strong&gt;: watch CloudWatch and Kubernetes metrics after migration.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Amazon EKS ending support for AL2 AMIs is a big change — but also a chance to modernize.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Short term:&lt;/strong&gt; AL2 remains usable through Kubernetes 1.32.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Long term:&lt;/strong&gt; Migrate to &lt;strong&gt;AL2023&lt;/strong&gt; (closest successor) or &lt;strong&gt;Bottlerocket&lt;/strong&gt; (security-focused OS).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By starting your migration early and using &lt;strong&gt;eksctl&lt;/strong&gt; to manage node group upgrades, you can ensure a &lt;strong&gt;smooth transition&lt;/strong&gt; before the November 2025 cutoff.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eks</category>
    </item>
    <item>
      <title>Setting Up Model Context Protocol (MCP) Servers with Amazon Q on Arch Linux</title>
      <dc:creator>Dulanjana Lakmal</dc:creator>
      <pubDate>Wed, 14 May 2025 22:01:55 +0000</pubDate>
      <link>https://forem.com/lakmalya/setting-up-modular-capability-plugins-mcp-with-amazon-q-on-arch-linux-ond</link>
      <guid>https://forem.com/lakmalya/setting-up-modular-capability-plugins-mcp-with-amazon-q-on-arch-linux-ond</guid>
      <description>&lt;p&gt;Hello everyone!&lt;br&gt;
Today I'm writing about one of the most viral topics in the developer world right now: Modular Capability Plugins (MCP).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are MCP servers?&lt;/strong&gt;&lt;br&gt;
MCP (Model Context Protocol) servers are specialized extensions that enhance AI assistants like Amazon Q with specific capabilities. Think of MCP servers as "super-powered tool belts" for AI assistants - they give these assistants specialized knowledge and abilities to perform specific tasks far better than they could with general training alone.&lt;/p&gt;

&lt;p&gt;Think of it this way: if Amazon Q is a smartphone out of the box, MCP servers are the specialized apps you install to transform it from a general-purpose device into a professional-grade tool for specific tasks. Just as you might install Photoshop for image editing or Final Cut Pro for video production, MCP servers add specialized capabilities to your AI assistant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Experience with Amazon Q MCPs&lt;/strong&gt;&lt;br&gt;
For the past month, I've been using Amazon Q Chat service for my day-to-day DevOps and CloudOps tasks. It's been an incredibly helpful tool that streamlines my workflow. Since I'm running Arch Linux on my personal laptop, I decided to set up several specialized MCPs to enhance my productivity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge&lt;/strong&gt;&lt;br&gt;
As a cloud engineer, I frequently encounter several time-consuming tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Creating AWS architecture diagrams&lt;/strong&gt; has always been a critical but manual and time-consuming task for developers and cloud architects&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Searching through AWS documentation&lt;/strong&gt; efficiently can be challenging&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Following Terraform best practices&lt;/strong&gt; and security-first development workflows requires constant reference checking&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Solution: MCP Servers&lt;/strong&gt;&lt;br&gt;
To address these challenges, I've set up the following MCP servers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;awslabs.aws-diagram-mcp-server&lt;/strong&gt; - For generating AWS architecture diagrams automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;awslabs.aws-documentation-mcp-server&lt;/strong&gt; - For searching AWS documentation using the official AWS search API, getting content recommendations, and converting documentation to markdown format&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;awslabs.terraform-mcp-server&lt;/strong&gt; - For AWS Terraform best practices, security-first development workflows, Checkov integration, AWS provider documentation, AWS-IA GenAI modules, and Terraform workflow execution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Setup Guide for Arch Linux&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;br&gt;
Before beginning, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.10&lt;/li&gt;
&lt;li&gt;python-uv (Python packaging tool)&lt;/li&gt;
&lt;li&gt;AWS CLI configured locally with an AWS Builder ID for accessing Amazon Q&lt;/li&gt;
&lt;li&gt;GraphViz installed (&lt;a href="https://www.graphviz.org/" rel="noopener noreferrer"&gt;https://www.graphviz.org/&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
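&lt;p&gt;On Arch Linux, these prerequisites can typically be installed from the official repositories (package names assumed from the list above; verify with &lt;code&gt;pacman -Ss&lt;/code&gt; if one is missing):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo pacman -S python python-uv graphviz
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;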

&lt;p&gt;&lt;strong&gt;Installation Steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install Amazon Q on Arch Linux&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl --proto '=https' --tlsv1.2 -sSf "https://desktop-release.q.us-east-1.amazonaws.com/latest/q-x86_64-linux.zip" -o "q.zip"
unzip q.zip
./q/install.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installation, you should see an &lt;code&gt;amazonq&lt;/code&gt; folder under your &lt;code&gt;~/.aws&lt;/code&gt; folder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ktbbnnh54wvix60hxcn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ktbbnnh54wvix60hxcn.png" alt="Image description" width="482" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create an MCP configuration file:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;vim ~/.aws/amazonq/mcp.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Add the following content to configure all three MCP servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "mcpServers": {
    "awslabs.terraform-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.terraform-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "disabled": false,
      "autoApprove": []
    },
   "awslabs.aws-documentation-mcp-server": {
        "command": "uvx",
        "args": ["awslabs.aws-documentation-mcp-server@latest"],
        "env": {
          "FASTMCP_LOG_LEVEL": "ERROR"
        },
        "disabled": false,
        "autoApprove": []
    },
   "awslabs.aws-diagram-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-diagram-mcp-server"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "autoApprove": [],
      "disabled": false
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note: If you don't need all three MCPs, you can remove the ones you don't want from your mcp.json file.&lt;/em&gt;&lt;/p&gt;
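&lt;p&gt;A malformed config file is a common cause of MCP servers failing to start, so it's worth validating the JSON before launching Amazon Q:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Prints the parsed config on success, or points at the syntax error
python3 -m json.tool ~/.aws/amazonq/mcp.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;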

&lt;p&gt;&lt;strong&gt;Using Your MCPs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open Amazon Q; you should see it initializing the MCP servers.&lt;br&gt;
Enter &lt;code&gt;/tools&lt;/code&gt; in the chat to check which tools you have access to via your MCP servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq04ybxadh4742utz265.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxq04ybxadh4742utz265.png" alt="Image description" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadwe7of0800blmkwuga5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fadwe7of0800blmkwuga5.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you encounter any issues, double-check your Python version, your uv installation, and that GraphViz is properly installed.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Your Setup&lt;/strong&gt;&lt;br&gt;
To verify everything is working correctly, try a sample prompt like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;generate an aws architecture diagram for sample s3 bucket with cloudfront for hosting static website&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This should trigger the diagram MCP to create a visualization of the architecture. It should produce an output similar to the one below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8okqv49hpg3pluh96xj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8okqv49hpg3pluh96xj.png" alt="Image description" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is the end of my post. Thank you for reading through it. If you liked it, please consider sharing it with your colleagues for better reach.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>aws</category>
    </item>
    <item>
      <title>Managing Multiple Bitbucket Repositories with Terraform: A Deep Dive into terraform-bitbucket-multi-repo-manager</title>
      <dc:creator>Dulanjana Lakmal</dc:creator>
      <pubDate>Mon, 14 Oct 2024 16:15:48 +0000</pubDate>
      <link>https://forem.com/lakmalya/managing-multiple-bitbucket-repositories-with-terraform-a-deep-dive-into-terraform-bitbucket-multi-repo-manager-84a</link>
      <guid>https://forem.com/lakmalya/managing-multiple-bitbucket-repositories-with-terraform-a-deep-dive-into-terraform-bitbucket-multi-repo-manager-84a</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;In the fast-paced world of software development, efficiently managing multiple repositories is crucial. This article introduces terraform-bitbucket-multi-repo-manager, a powerful Terraform configuration that streamlines the process of creating and managing multiple Bitbucket repositories. We'll explore how this tool can save time, reduce errors, and improve consistency in your development workflow.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Motivation: A Real-World Problem
&lt;/h1&gt;

&lt;p&gt;Over the past few days, our team faced a significant challenge. We experienced a surge in the number of Bitbucket repositories we needed to create and manage. Each new repository required the same set of repetitive tasks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating the repository&lt;/li&gt;
&lt;li&gt;Configuring environment variables&lt;/li&gt;
&lt;li&gt;Setting up deployment variables&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This manual, repetitive process was not only time-consuming but also prone to errors. We realized we needed a more efficient, automated solution.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Solution: Terraform to the Rescue
&lt;/h1&gt;

&lt;p&gt;To address this challenge, we developed a Terraform module: terraform-bitbucket-multi-repo-manager. This module automates the entire process of repository creation and configuration.&lt;br&gt;
Before we dive into the details, we want to give a big shout-out to Ilia Lazebnik (&lt;a href="https://github.com/DrFaust92" rel="noopener noreferrer"&gt;DrFaust92&lt;/a&gt;) for his excellent Bitbucket Terraform provider. Without his work, our solution wouldn't have been possible. Thank you, Ilia!&lt;/p&gt;
&lt;h1&gt;
  
  
  What terraform-bitbucket-multi-repo-manager Offers
&lt;/h1&gt;

&lt;p&gt;Our Terraform configuration addresses the challenges of managing multiple repositories by automating several key processes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Automated Repository Creation: Create multiple repositories with a single Terraform apply.&lt;/li&gt;
&lt;li&gt;Consistent Configuration: Ensure all repositories adhere to your organization's standards.&lt;/li&gt;
&lt;li&gt;Variable Management: Easily set and manage repository and deployment variables.&lt;/li&gt;
&lt;li&gt;Deployment Configuration: Set up deployment environments for each repository.&lt;/li&gt;
&lt;/ol&gt;
&lt;h1&gt;
  
  
  Benefits and Use Cases
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;DevOps Efficiency: Automate the creation of new repositories for new projects or services, saving hours of manual work.&lt;/li&gt;
&lt;li&gt;Consistency: Ensure all repositories follow the same structure and have the necessary variables and deployments set up, reducing configuration errors.&lt;/li&gt;
&lt;li&gt;Scalability: Easily manage tens or hundreds of repositories without manual intervention, perfect for growing teams and projects.&lt;/li&gt;
&lt;li&gt;Compliance: Enforce organization-wide policies and configurations across all repositories.&lt;/li&gt;
&lt;li&gt;Version Control: Keep your infrastructure as code, allowing for easy tracking of changes and rollbacks if necessary.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  Getting Started
&lt;/h1&gt;
&lt;h2&gt;
  
  
  To use terraform-bitbucket-multi-repo-manager:
&lt;/h2&gt;

&lt;p&gt;Clone the repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/lakmal-ya/terraform-bitbucket-multi-repo-manager.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Configure your Bitbucket credentials and desired repository structure in &lt;code&gt;terraform.tfvars_sample&lt;/code&gt;.&lt;/p&gt;
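&lt;p&gt;As an illustration, a minimal &lt;code&gt;terraform.tfvars&lt;/code&gt; might look like the following (the variable names here are hypothetical; check the module's &lt;code&gt;variables.tf&lt;/code&gt; for the actual schema):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical values - adapt to the schema defined in variables.tf
bitbucket_username = "your-username"
bitbucket_password = "your-app-password"
workspace          = "your-workspace"

repositories = {
  "service-a" = { project_key = "PROJ" }
  "service-b" = { project_key = "PROJ" }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;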

&lt;p&gt;Run the following to initialize the Terraform working directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following to see the planned changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the following to create or update your Bitbucket repositories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;terraform-bitbucket-multi-repo-manager was born out of a real need in our development workflow. It showcases the power of Infrastructure as Code in solving practical, day-to-day challenges in managing complex, multi-repository setups. By automating the creation and configuration of Bitbucket repositories, it allows development teams to focus on what matters: writing great code.&lt;/p&gt;

&lt;p&gt;Whether you're managing a handful of repositories or hundreds, this Terraform configuration can significantly streamline your workflow and ensure consistency across your organization. It's a testament to how the right tools and some automation can transform a tedious, error-prone process into a smooth, efficient operation.&lt;/p&gt;

&lt;p&gt;As we embrace DevOps practices and cloud-native development, tools like terraform-bitbucket-multi-repo-manager will become increasingly valuable. We'd like to encourage you to try it out, contribute to the project, and adapt it to your organization's needs.&lt;br&gt;
Remember, the goal of DevOps is not just to automate tasks, but to improve collaboration, increase efficiency, and deliver better software faster. With terraform-bitbucket-multi-repo-manager, you're taking a significant step in that direction.&lt;/p&gt;

&lt;p&gt;If you found this guide helpful, give it a clap! 👏&lt;br&gt;
Don't forget to follow me for more insightful content on Linux, AWS best practices, cloud security, and technology updates! 🚀&lt;/p&gt;

&lt;p&gt;And if you enjoyed this information and feel like supporting my virtual endeavours, consider buying me a ☕️ coffee! Your support keeps the bytes flowing. Cheers! ☕️😊 &lt;a href="https://www.buymeacoffee.com/lakmalya" rel="noopener noreferrer"&gt;Buy Me a Coffee.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>bitbucket</category>
      <category>cicd</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Securing Linux Filesystems: Best Practices for DevOps Security</title>
      <dc:creator>Dulanjana Lakmal</dc:creator>
      <pubDate>Tue, 03 Sep 2024 14:17:26 +0000</pubDate>
      <link>https://forem.com/lakmalya/securing-linux-filesystems-best-practices-for-devops-security-2860</link>
      <guid>https://forem.com/lakmalya/securing-linux-filesystems-best-practices-for-devops-security-2860</guid>
      <description>&lt;p&gt;As Linux file systems are a fundamental element in maintaining system integrity in this fast-changing world of DevOps and cloud computing, it is necessary to ensure that they are well-secured. Therefore, the article looks into best practices that DevOps persons can apply to fortify their Linux file systems’ security to guarantee data safety and continuity of operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Introduction to Linux Filesystem Security
&lt;/h2&gt;

&lt;p&gt;Linux filesystems form the backbone of data storage and management in most DevOps environments. Securing these filesystems is crucial to protect sensitive information, maintain system stability, and prevent unauthorized access. A comprehensive security strategy involves multiple layers of protection, from basic file permissions to advanced encryption techniques.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Implementing Proper File Permissions and Ownership
&lt;/h2&gt;

&lt;p&gt;The foundation of Linux filesystem security lies in its permission model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the principle of least privilege: Grant only the minimum necessary permissions.&lt;/li&gt;
&lt;li&gt;Regularly audit and update file permissions using commands like &lt;code&gt;chmod&lt;/code&gt; and &lt;code&gt;chown&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Implement umask settings to control default permissions for new files and directories.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Set restrictive permissions on a sensitive file&lt;/span&gt;
&lt;span class="nb"&gt;chmod &lt;/span&gt;600 /path/to/sensitive_file

&lt;span class="c"&gt;# Change ownership to a specific user and group&lt;/span&gt;
&lt;span class="nb"&gt;chown &lt;/span&gt;user:group /path/to/directory
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
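&lt;p&gt;To illustrate the umask point above, a restrictive umask strips group and other permissions from newly created files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# With umask 077, new files are created with mode 600 (owner read/write only)
umask 077
touch /tmp/sensitive_new_file
stat -c '%a' /tmp/sensitive_new_file   # 600
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;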



&lt;h2&gt;
  
  
  3. Using Access Control Lists (ACLs) for Fine-Grained Control
&lt;/h2&gt;

&lt;p&gt;When standard permissions are not enough, ACLs provide more granular control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;setfacl&lt;/code&gt; and &lt;code&gt;getfacl&lt;/code&gt; to manage ACLs.&lt;/li&gt;
&lt;li&gt;ACLs allow you to set permissions for specific users or groups without changing the base permissions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Grant read access to a specific user&lt;/span&gt;
setfacl &lt;span class="nt"&gt;-m&lt;/span&gt; u:username:r /path/to/file

&lt;span class="c"&gt;# View ACLs on a file&lt;/span&gt;
getfacl /path/to/file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Securing Mount Points and Partitions
&lt;/h2&gt;

&lt;p&gt;Properly configuring mount points and partitions enhances security:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the &lt;code&gt;noexec&lt;/code&gt;, &lt;code&gt;nosuid&lt;/code&gt;, and &lt;code&gt;nodev&lt;/code&gt; mount options where appropriate.&lt;/li&gt;
&lt;li&gt;Separate sensitive directories into different partitions.&lt;/li&gt;
&lt;li&gt;Implement disk quotas to prevent resource exhaustion attacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example in /etc/fstab:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/dev/sda2 /tmp ext4 defaults,noexec,nosuid,nodev 0 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  5. Encrypting Sensitive Data and Filesystems
&lt;/h2&gt;

&lt;p&gt;Encryption adds a crucial layer of protection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use LUKS (Linux Unified Key Setup) for full-disk encryption.&lt;/li&gt;
&lt;li&gt;Implement eCryptfs for directory-level encryption (EncFS is no longer recommended due to weaknesses found in its security audit).&lt;/li&gt;
&lt;li&gt;Consider using dm-crypt for block device encryption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create an encrypted container&lt;/span&gt;
cryptsetup luksFormat /dev/sdb1

&lt;span class="c"&gt;# Open the encrypted container&lt;/span&gt;
cryptsetup luksOpen /dev/sdb1 secret_data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  6. Regular Security Audits and Monitoring
&lt;/h2&gt;

&lt;p&gt;Continuous monitoring is essential for maintaining security:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use tools like Auditd to monitor filesystem changes.&lt;/li&gt;
&lt;li&gt;Implement intrusion detection systems (IDS) like AIDE or Tripwire.&lt;/li&gt;
&lt;li&gt;Regularly scan for vulnerabilities using tools like Lynis.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Set up a basic audit rule&lt;/span&gt;
auditctl &lt;span class="nt"&gt;-w&lt;/span&gt; /etc/passwd &lt;span class="nt"&gt;-p&lt;/span&gt; wa &lt;span class="nt"&gt;-k&lt;/span&gt; passwd_changes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
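&lt;p&gt;Rules added with &lt;code&gt;auditctl&lt;/code&gt; are lost on reboot. To persist them, place the same rule in a file under &lt;code&gt;/etc/audit/rules.d/&lt;/code&gt; (the filename here is a placeholder):&lt;br&gt;
&lt;/p&gt;

```
# /etc/audit/rules.d/passwd.rules (loaded by augenrules at boot)
-w /etc/passwd -p wa -k passwd_changes
```

&lt;p&gt;Recorded events can then be retrieved with &lt;code&gt;ausearch -k passwd_changes&lt;/code&gt;.&lt;/p&gt;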



&lt;h2&gt;
  
  
  7. Backup and Recovery Strategies
&lt;/h2&gt;

&lt;p&gt;A robust backup strategy is crucial for data protection and disaster recovery:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement regular, automated backups using tools like rsync or Bacula.&lt;/li&gt;
&lt;li&gt;Store backups in secure, off-site locations.&lt;/li&gt;
&lt;li&gt;Regularly test recovery procedures to ensure data integrity.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example rsync backup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;rsync &lt;span class="nt"&gt;-avz&lt;/span&gt; &lt;span class="nt"&gt;--delete&lt;/span&gt; /source/directory/ user@remote:/backup/directory/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
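&lt;p&gt;To automate the backup, the same command can be scheduled from cron (a sketch; the paths, schedule, and remote host are placeholders, and the job assumes key-based SSH authentication so no password prompt blocks it):&lt;br&gt;
&lt;/p&gt;

```
# crontab entry: run the backup every night at 02:30
30 2 * * * rsync -az --delete /source/directory/ user@remote:/backup/directory/
```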



&lt;h2&gt;
  
  
  8. DevOps-Specific Considerations for Filesystem Security
&lt;/h2&gt;

&lt;p&gt;In a DevOps context, additional considerations include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Implement Infrastructure as Code (IaC) to manage and version control filesystem configurations.&lt;/li&gt;
&lt;li&gt;Use containerization technologies like Docker to isolate applications and their filesystems.&lt;/li&gt;
&lt;li&gt;Employ secrets management tools to handle sensitive data securely.&lt;/li&gt;
&lt;/ul&gt;
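&lt;p&gt;As an illustration of filesystem isolation in containers (a sketch that assumes Docker is installed), a container can run with a read-only root filesystem so that only an explicitly declared tmpfs is writable:&lt;br&gt;
&lt;/p&gt;

```
# Root filesystem is read-only; /tmp is a writable tmpfs mounted noexec,nosuid
docker run --rm --read-only --tmpfs /tmp:rw,noexec,nosuid alpine \
  sh -c 'touch /tmp/ok; echo tmpfs-is-writable'
```

&lt;p&gt;Writes anywhere outside &lt;code&gt;/tmp&lt;/code&gt; fail, which limits what a compromised process inside the container can modify.&lt;/p&gt;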

&lt;h2&gt;
  
  
  9. Automated Security Checks in CI/CD Pipelines
&lt;/h2&gt;

&lt;p&gt;Integrate security checks into your CI/CD pipelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use tools like CIS-CAT or OpenSCAP to automate compliance checks.&lt;/li&gt;
&lt;li&gt;Implement pre-commit hooks to catch security issues before they enter the codebase.&lt;/li&gt;
&lt;li&gt;Regularly scan Docker images and containers for vulnerabilities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example GitLab CI/CD job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;security_scan&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;test&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;lynis audit system&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker scan my-image:latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  10. Conclusion and Best Practices Summary
&lt;/h2&gt;

&lt;p&gt;Securing Linux filesystems in a DevOps environment requires a multi-faceted approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Implement and regularly review file permissions and ACLs.&lt;/li&gt;
&lt;li&gt;Use encryption for sensitive data and filesystems.&lt;/li&gt;
&lt;li&gt;Secure mount points and partitions with appropriate options.&lt;/li&gt;
&lt;li&gt;Conduct regular security audits and monitoring.&lt;/li&gt;
&lt;li&gt;Maintain robust backup and recovery strategies.&lt;/li&gt;
&lt;li&gt;Integrate security checks into CI/CD pipelines.&lt;/li&gt;
&lt;li&gt;Stay informed about the latest security threats and best practices.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By following these best practices, DevOps teams can significantly enhance the security of their Linux filesystems, protecting valuable data and maintaining system integrity in an increasingly complex and threat-prone digital landscape.&lt;/p&gt;

&lt;p&gt;If you found this guide helpful, give it a clap! 👏&lt;br&gt;
Don't forget to follow me for more insightful content on Linux, AWS best practices, cloud security, and technology updates! 🚀&lt;/p&gt;

&lt;p&gt;And if you enjoyed this information and feel like supporting my virtual endeavours, consider buying me a ☕️ coffee! Your support keeps the bytes flowing. Cheers! ☕️😊 &lt;a href="https://www.buymeacoffee.com/lakmalya" rel="noopener noreferrer"&gt;Buy Me a Coffee.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>ubuntu</category>
    </item>
    <item>
      <title>Automating EC2 Instances Start/Stop with Terraform, Lambda, and Python</title>
      <dc:creator>Dulanjana Lakmal</dc:creator>
      <pubDate>Sun, 28 May 2023 14:16:36 +0000</pubDate>
      <link>https://forem.com/lakmalya/automating-ec2-instances-startstop-with-terraform-lambda-and-python-nc8</link>
      <guid>https://forem.com/lakmalya/automating-ec2-instances-startstop-with-terraform-lambda-and-python-nc8</guid>
      <description>&lt;p&gt;AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It allows you to run your code without provisioning or managing servers. With Lambda, you can execute your code in response to events, such as changes to data in an S3 bucket, updates to a DynamoDB table, or even an HTTP request.&lt;/p&gt;

&lt;p&gt;One common use case of AWS Lambda is automating the start and stop of Amazon EC2 instances. EC2 instances are virtual servers in the cloud that provide computing resources for various applications. By automating the start and stop process, you can optimize costs by running instances only when they are needed.&lt;/p&gt;

&lt;p&gt;Here’s how the process works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You write your code logic, such as the snippet shown below, to perform actions like retrieving instance IDs, checking their state, and starting or stopping them.&lt;/li&gt;
&lt;li&gt;You package your code and dependencies into a deployment package, typically a ZIP file.&lt;/li&gt;
&lt;li&gt;You create an AWS Lambda function and upload the deployment package to it.&lt;/li&gt;
&lt;li&gt;You configure the Lambda function’s trigger to determine when it should execute. For example, you can schedule the function to run at specific times using CloudWatch Events or trigger it manually through API Gateway.&lt;/li&gt;
&lt;li&gt;When the Lambda function is triggered, it executes the code within the &lt;code&gt;lambda_handler&lt;/code&gt; function.&lt;/li&gt;
&lt;li&gt;The code uses the AWS SDK to communicate with the EC2 service, retrieve instance information, and perform the start or stop actions on the instances.&lt;/li&gt;
&lt;li&gt;The results of the Lambda function’s execution, such as success or error messages, are logged and can be monitored using CloudWatch Logs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By leveraging AWS Lambda, you can automate the start and stop process, eliminating the need for manual intervention. This helps optimize costs by running instances only when necessary, such as during business hours or during specific time periods.&lt;/p&gt;

&lt;p&gt;It’s worth mentioning that AWS Lambda offers many other capabilities beyond EC2 instance management. It supports various programming languages, provides scalability and fault tolerance out of the box, and integrates with other AWS services for building serverless applications.&lt;/p&gt;

&lt;p&gt;I hope this provides you with a good understanding of AWS Lambda and its usage in automating the start and stop of EC2 instances.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import os
import json

def lambda_handler(event, context):
    try:
        tags = os.environ["Tags_json"]
        dictionary = json.loads(tags) # Convert str to Dictionary using Json
        print(dictionary)
        print(type(dictionary))

        ec2 = boto3.client('ec2')

        instance_ids = [] # Placeholder for instance ids

        for (key, value) in dictionary.items():
            print(f'Key: {key}')
            print(f'Value: {value}')

            NextToken = '' # Placeholder for NextToken

            # Describe instances with the specified tag key, value, and stopped state
            response = ec2.describe_instances(
                Filters=[
                    {'Name': 'tag-key', 'Values': [key]},
                    {'Name': 'tag-value', 'Values': [value]},
                    {'Name': 'instance-state-name', 'Values': ['stopped']}
                ],
                MaxResults=20
            )
            print(f'EC2 InstanceIds Response: {response}')

            # This loop fetches all the instance ids using pagination with NextToken
            while True:
                for reservation in response['Reservations']:
                    for instance in reservation['Instances']:
                        instance_ids.append(instance['InstanceId'])
                        print(f'Number of instances: {len(instance_ids)}')
                        print(f'InstanceIds : {instance_ids}')

                # If NextToken is present, it retrieves the next set of instances
                if 'NextToken' in response:
                    print("Testing NextToken")
                    response = ec2.describe_instances(
                        Filters=[
                            {'Name': 'tag-key', 'Values': [key]},
                            {'Name': 'tag-value', 'Values': [value]},
                            {'Name': 'instance-state-name', 'Values': ['stopped']}
                        ],
                        NextToken=response['NextToken']
                    )
                    print(f'Response: {response}')
                    print(f"NextToken: {response.get('NextToken')}")
                else:
                    break

        if instance_ids:
            # Start the instances
            ec2.start_instances(InstanceIds=instance_ids)
            print("EC2 instances started: {}".format(instance_ids))
        else:
            print("No EC2 instances matching the filter.")

    except Exception as e:
        print("An error occurred:", str(e))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above is designed to be used within an AWS Lambda function. When executed, it performs the following actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It imports the necessary libraries: &lt;code&gt;boto3&lt;/code&gt; for interacting with AWS services, &lt;code&gt;os&lt;/code&gt; for accessing environment variables, and &lt;code&gt;json&lt;/code&gt; for working with JSON data.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;lambda_handler&lt;/code&gt; function is the entry point for the Lambda function. It takes two parameters, &lt;code&gt;event&lt;/code&gt; and &lt;code&gt;context&lt;/code&gt;, which provide information about the triggering event and the execution context.&lt;/li&gt;
&lt;li&gt;Inside &lt;code&gt;lambda_handler&lt;/code&gt;, the code retrieves the environment variable &lt;code&gt;Tags_json&lt;/code&gt; using &lt;code&gt;os.environ&lt;/code&gt;. This variable is expected to contain a JSON string representing a dictionary of tags.&lt;/li&gt;
&lt;li&gt;The code then uses &lt;code&gt;json.loads&lt;/code&gt; to convert the JSON string into a dictionary of the tags and values that will be used to filter the EC2 instances.&lt;/li&gt;
&lt;li&gt;The code initializes the &lt;code&gt;ec2&lt;/code&gt; client from &lt;code&gt;boto3&lt;/code&gt; to interact with the EC2 service.&lt;/li&gt;
&lt;li&gt;A list &lt;code&gt;instance_ids&lt;/code&gt; is created as a placeholder for the IDs of instances that match the specified tags and are in a stopped state.&lt;/li&gt;
&lt;li&gt;The code iterates over the key-value pairs in the dictionary. For each pair, it prints the key and value, sets the &lt;code&gt;NextToken&lt;/code&gt; placeholder, calls &lt;code&gt;describe_instances&lt;/code&gt; with filters based on the tag key, tag value, and instance state (stopped), and prints the response and the number of instances found. It then checks whether a &lt;code&gt;NextToken&lt;/code&gt; is present in the response; if so, it retrieves the next page of instances with another &lt;code&gt;describe_instances&lt;/code&gt; call, repeating until all matching instances have been collected.&lt;/li&gt;
&lt;/ol&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;If instance_ids is not empty, it means there are instances that match the filters. The code calls ec2.start_instances with the InstanceIds parameter set to the list of instance IDs. This starts the instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If instance_ids is empty, it means there are no instances that match the filters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Any exceptions that occur during the execution are caught by the except block, and the error message is printed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
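&lt;p&gt;As a side note, the manual &lt;code&gt;NextToken&lt;/code&gt; loop can be replaced with a boto3 paginator, which handles the token bookkeeping internally (a hedged sketch, not the original deployment code; the client is passed in as a parameter so the filter helper can be exercised without AWS credentials):&lt;br&gt;
&lt;/p&gt;

```python
def build_filters(key, value):
    """Same filter shape the handler uses for one tag key/value pair."""
    return [
        {"Name": "tag-key", "Values": [key]},
        {"Name": "tag-value", "Values": [value]},
        {"Name": "instance-state-name", "Values": ["stopped"]},
    ]

def stopped_instance_ids(ec2, key, value):
    """Collect all matching instance IDs; ec2 is a boto3 EC2 client.

    The paginator follows NextToken across pages automatically.
    """
    ids = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(Filters=build_filters(key, value)):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                ids.append(instance["InstanceId"])
    return ids
```

&lt;p&gt;This keeps the filtering logic identical while removing the hand-written pagination loop.&lt;/p&gt;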

&lt;p&gt;When you deploy this code as an AWS Lambda function, you can configure it to be triggered based on specific events, such as a schedule or manual invocation. Once triggered, the function will execute and perform the desired actions of starting the EC2 instances based on the specified tags and their current state.&lt;/p&gt;

&lt;p&gt;To make it more convenient for you, I have created Terraform code that sets up the infrastructure required for the stop/start Lambda function. You can find the code in the following Git repository:&lt;/p&gt;

&lt;p&gt;GitHub Repository: &lt;a href="https://github.com/lakmal-ya/EC2-Instances-Start-Stop-with-Terraform-Lambda.git" rel="noopener noreferrer"&gt;EC2-Instances-Start-Stop-with-Terraform-Lambda&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Terraform code in this repository will help you create the necessary resources, including the Lambda function, IAM roles, and any other dependencies required for automating the start and stop of EC2 instances.&lt;/p&gt;

&lt;p&gt;Feel free to explore the repository and utilize the Terraform code to set up the environment for your use case.&lt;/p&gt;

</description>
      <category>terraform</category>
      &lt;category&gt;python&lt;/category&gt;
      <category>lambda</category>
      <category>aws</category>
    </item>
    <item>
      <title>Terraform list of object validation</title>
      <dc:creator>Dulanjana Lakmal</dc:creator>
      <pubDate>Fri, 24 Feb 2023 17:36:18 +0000</pubDate>
      <link>https://forem.com/lakmalya/terraform-list-of-object-validation-5aj2</link>
      <guid>https://forem.com/lakmalya/terraform-list-of-object-validation-5aj2</guid>
      <description>&lt;p&gt;Introduction for Terraform&lt;/p&gt;

&lt;p&gt;Terraform is an open-source infrastructure as code (IaC) tool developed by HashiCorp. It allows users to define and provision infrastructure resources such as virtual machines, storage accounts, and networking components in a declarative way using a high-level configuration language.&lt;/p&gt;

&lt;p&gt;With Terraform, infrastructure can be described in code, allowing for version control, collaboration, and automation. Terraform also supports a wide range of cloud providers, including Amazon Web Services, Microsoft Azure, Google Cloud Platform, and others, as well as on-premises solutions.&lt;/p&gt;

&lt;p&gt;Terraform uses a state file to keep track of the current state of the infrastructure, which can be used to plan, apply, and manage changes to the infrastructure over time. This makes it easier to manage and maintain complex infrastructure environments, and allows for easier scaling and modification of infrastructure as needed.&lt;/p&gt;

&lt;p&gt;What is a list of objects?&lt;/p&gt;

&lt;p&gt;In Terraform, a list of objects is a data structure that allows you to define a list of maps, where each map represents an object with a set of key-value pairs. This data structure is often used to represent a collection of similar resources or configurations that need to be created in a single block of code.&lt;/p&gt;

&lt;p&gt;Here’s an example of a list of objects in Terraform, which represents a collection of virtual machines to be created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;virtual_machines = [
  {
    name  = "vm-1"
    size  = "Standard_DS2_v2"
    image = "UbuntuServer:16.04.0-LTS"
  },
  {
    name  = "vm-2"
    size  = "Standard_DS1_v2"
    image = "UbuntuServer:18.04-LTS"
  },
  {
    name  = "vm-3"
    size  = "Standard_DS3_v2"
    image = "WindowsServer:2016-Datacenter"
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, virtual_machines is a list of maps, where each map represents a virtual machine configuration with name, size, and image attributes. This list can then be used in Terraform to create multiple virtual machines with a single module or resource block.&lt;/p&gt;

&lt;p&gt;Now for the validation part:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "virtual_machines" {
  description = "virtual_machines"
  type        = list(object({
    name  = string
    size  = string
    image = string
  }))
  default     = []
  validation {
    condition     = can(regex("^(Standard_DS3_v2|Standard_DS1_v2)$", var.virtual_machines[0].size))
    error_message = "Invalid input, options: \"Standard_DS3_v2\",\"Standard_DS1_v2\"."
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is an example of a Terraform variable definition for a list of objects called virtual_machines. Let's break it down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;description: This is an optional field that allows you to add a description of the variable for documentation purposes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;type: This defines the type of the variable, which in this case is a list of objects. The objects have three attributes, name, size, and image, each of which is a string.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;default: This sets the default value for the variable, which in this case is an empty list.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;validation: This allows you to add a validation condition to the variable. In this example, the condition argument wraps the regex function in can to check that the size attribute of the first object in the list matches either "Standard_DS3_v2" or "Standard_DS1_v2". If the condition is not met, the error_message argument is used to generate an error message.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;var.virtual_machines[0].size is an expression that refers to the size attribute of the first object in the virtual_machines list.&lt;br&gt;
In Terraform, the var prefix is used to reference a variable value. In this case, var.virtual_machines refers to the value of the virtual_machines variable, which is a list of objects.&lt;br&gt;
[0] is used to access the first object in the list. Since Terraform uses zero-based indexing, [0] refers to the first element of the list.&lt;br&gt;
Finally, .size is used to access the size attribute of the first object in the list.&lt;br&gt;
Overall, var.virtual_machines[0].size is used in the validation condition to check the size attribute of the first object in the list against the regular expression ^(Standard_DS3_v2|Standard_DS1_v2)$. If the size does not match either of these values, the validation condition will fail and an error message will be generated.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This variable definition is useful when you want to define a list of virtual machines with specific sizes and ensure that only the correct sizes are used. The validation argument helps to catch errors early by generating an error message if an invalid size is specified.&lt;/p&gt;
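&lt;p&gt;One caveat worth noting: the validation above only inspects the first object in the list. To validate the size of every object, Terraform (0.15 and later) offers &lt;code&gt;alltrue&lt;/code&gt; combined with a &lt;code&gt;for&lt;/code&gt; expression (a hedged sketch, not part of the original example):&lt;br&gt;
&lt;/p&gt;

```
variable "virtual_machines" {
  description = "virtual_machines"
  type        = list(object({
    name  = string
    size  = string
    image = string
  }))
  default     = []
  validation {
    condition = alltrue([
      for vm in var.virtual_machines :
      contains(["Standard_DS3_v2", "Standard_DS1_v2"], vm.size)
    ])
    error_message = "Invalid size, options: \"Standard_DS3_v2\", \"Standard_DS1_v2\"."
  }
}
```

&lt;p&gt;With &lt;code&gt;alltrue&lt;/code&gt;, an empty list also passes the check, whereas indexing &lt;code&gt;[0]&lt;/code&gt; cannot succeed against the empty default.&lt;/p&gt;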

</description>
      <category>terraform</category>
      <category>azure</category>
      <category>tutorial</category>
    </item>
  </channel>
</rss>
