<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Nikitas Gargoulakis</title>
    <description>The latest articles on Forem by Nikitas Gargoulakis (@ngargoulakis).</description>
    <link>https://forem.com/ngargoulakis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1037378%2F39d1529f-a8e5-4fdd-9c17-7782277a66c3.jpeg</url>
      <title>Forem: Nikitas Gargoulakis</title>
      <link>https://forem.com/ngargoulakis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ngargoulakis"/>
    <language>en</language>
    <item>
      <title>Complete Guide to AWS Monitoring and Observability for DevOps Teams</title>
      <dc:creator>Nikitas Gargoulakis</dc:creator>
      <pubDate>Thu, 22 Jan 2026 20:03:18 +0000</pubDate>
      <link>https://forem.com/aws-builders/complete-guide-to-aws-monitoring-and-observability-for-devops-teams-1e2f</link>
      <guid>https://forem.com/aws-builders/complete-guide-to-aws-monitoring-and-observability-for-devops-teams-1e2f</guid>
      <description>&lt;p&gt;In today’s cloud-first world, many organisations find themselves wrestling with a common challenge: &lt;strong&gt;monitoring fragmentation&lt;/strong&gt;. If you’re migrating to AWS from on-premises infrastructure, you’ve likely accumulated a collection of monitoring tools, Grafana here, Zabbix there, maybe some Prometheus, Scrutinizer, and a dash of CloudWatch. Each tool serves a purpose, but together they create operational chaos.&lt;/p&gt;

&lt;p&gt;This article walks through a real-world architecture for consolidating multiple monitoring tools into a unified, AWS-native observability platform. Whether you’re monitoring EKS clusters, Active Directory, firewalls, or a hybrid infrastructure, this guide will help you build a &lt;strong&gt;single pane of glass&lt;/strong&gt; for your entire estate.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Death by a Thousand Dashboards
&lt;/h2&gt;

&lt;p&gt;Let’s paint a familiar picture:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;3 AM&lt;/strong&gt; : Your phone rings. Production is down.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3:02 AM&lt;/strong&gt; : You check CloudWatch. Nothing obvious.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3:05 AM&lt;/strong&gt; : Switch to Grafana. Some weird metrics.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3:10 AM&lt;/strong&gt; : Check Zabbix. Server CPU is spiking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3:15 AM&lt;/strong&gt; : But why? Check the logs. Wait, where are those logs again?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3:25 AM&lt;/strong&gt; : Finally correlate the issue across four different systems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MTTR&lt;/strong&gt; : 45 minutes (30 of which were spent context-switching between tools)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sound familiar? You’re not alone.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Core Requirements
&lt;/h3&gt;

&lt;p&gt;When consolidating monitoring infrastructure, we need to solve for:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Unified Visibility&lt;/strong&gt; : One place to see everything&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proactive Detection&lt;/strong&gt; : Catch issues before users do&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast Root Cause Analysis&lt;/strong&gt; : Correlate events across layers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance Ready&lt;/strong&gt; : Query data for audits without panic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational Efficiency&lt;/strong&gt; : Stop paying for five tools when one will do&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  The Solution: AWS-Native Observability Stack
&lt;/h2&gt;

&lt;p&gt;After extensive research and real-world implementation, here’s the architecture that actually works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
┌─────────────────────────────────────────────────────────┐
│ Visualization Layer │
│ CloudWatch Dashboards | Managed Grafana | QuickSight │
└─────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────┐
│ Analytics &amp;amp; Investigation Layer │
│ CloudWatch Insights | Athena | OpenSearch Service │
└─────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────┐
│ Centralized Data Lake (Optional) │
│ AWS Security Lake (OCSF) │
└─────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────┐
│ Monitoring &amp;amp; Security Services │
│ CloudWatch | Security Hub | GuardDuty | Config │
└─────────────────────────────────────────────────────────┘
                          ↓
┌─────────────────────────────────────────────────────────┐
│ Your Infrastructure │
│ EKS | EC2 | Lambda | RDS | On-Prem (Logs Only) │
└─────────────────────────────────────────────────────────┘

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Core AWS Services
&lt;/h3&gt;

&lt;p&gt;Let’s break down each component:&lt;/p&gt;




&lt;h2&gt;
  
  
  1. Amazon CloudWatch: Your Foundation
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;CloudWatch is unavoidable&lt;/strong&gt; when working with AWS. Instead of fighting it, embrace it as your foundation.&lt;/p&gt;

&lt;h3&gt;
  
  
  What You Get:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Metrics&lt;/strong&gt; : CPU, memory, disk, network, custom application metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logs&lt;/strong&gt; : Centralized log aggregation with retention policies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alarms&lt;/strong&gt; : Threshold-based and anomaly detection alerting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dashboards&lt;/strong&gt; : Pre-built and custom operational views&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Insights&lt;/strong&gt; : SQL-like queries for log analysis&lt;/li&gt;
&lt;/ul&gt;
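
&lt;p&gt;To make the alarm behaviour concrete: CloudWatch only transitions to ALARM after a configurable number of consecutive breaching datapoints, which is what stops a single spike from paging someone. The helper below is an illustrative model of that evaluation logic, not the CloudWatch API itself; the parameter names are assumptions:&lt;/p&gt;

```python
def alarm_state(datapoints, threshold, periods_to_alarm=3):
    """Illustrative model of CloudWatch's default alarm evaluation:
    transition to ALARM only when the last `periods_to_alarm`
    datapoints all breach the threshold. Names are illustrative."""
    recent = datapoints[-periods_to_alarm:]
    if len(recent) == periods_to_alarm and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"
```

&lt;p&gt;A single spike stays OK; only sustained breaches alert. Real CloudWatch alarms layer “M out of N” evaluation and anomaly-detection bands on top of this idea.&lt;/p&gt;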

&lt;h3&gt;
  
  
  Real-World Setup:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "agent": {
    "metrics_collection_interval": 60
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/application/*.log",
            "log_group_name": "/aws/application/myapp",
            "log_stream_name": "{instance_id}",
            "retention_in_days": 30
          }
        ]
      }
    }
  },
  "metrics": {
    "namespace": "CustomApp/Metrics",
    "metrics_collected": {
      "cpu": {
        "measurement": [
          {"name": "cpu_usage_idle", "unit": "Percent"}
        ]
      },
      "mem": {
        "measurement": [
          {"name": "mem_used_percent", "unit": "Percent"}
        ]
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. Container Insights for EKS
&lt;/h2&gt;

&lt;p&gt;If you’re running Kubernetes on AWS, Container Insights is a game-changer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deployment:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Enable control plane logging
aws eks update-cluster-config \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'

# Deploy FluentBit DaemonSet
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What You See:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Cluster-level metrics (CPU, memory, network)&lt;/li&gt;
&lt;li&gt;Namespace and pod-level breakdowns&lt;/li&gt;
&lt;li&gt;Node performance and capacity&lt;/li&gt;
&lt;li&gt;Application logs automatically collected from stdout/stderr&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;This replaces your standalone Prometheus + Grafana setup for most use cases.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  3. AWS Security Hub: Your Security Command Center
&lt;/h2&gt;

&lt;p&gt;Think of Security Hub as your security findings aggregator. It’s like having a security operations assistant that never sleeps.&lt;/p&gt;

&lt;h3&gt;
  
  
  What It Aggregates:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GuardDuty&lt;/strong&gt; : AI-powered threat detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Config&lt;/strong&gt; : Configuration compliance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IAM Access Analyzer&lt;/strong&gt; : Permission issues&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Macie&lt;/strong&gt; : Sensitive data discovery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inspector&lt;/strong&gt; : Vulnerability scanning&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Compliance Made Easy:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Enable Security Hub with CIS AWS Foundations Benchmark
aws securityhub enable-security-hub \
  --enable-default-standards

# Get compliance summary
aws securityhub get-findings \
  --filters '{"ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}]}'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  4. Amazon OpenSearch: Your SIEM Replacement
&lt;/h2&gt;

&lt;p&gt;Replacing Microsoft Sentinel? OpenSearch Service is your answer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why OpenSearch Over Sentinel?
&lt;/h3&gt;

&lt;p&gt;Use OpenSearch’s anomaly detection feature. It’s surprisingly good at catching unusual patterns you’d miss manually.&lt;/p&gt;
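
&lt;p&gt;To show the principle, anomaly detection boils down to flagging values that deviate far from recent behaviour. The toy z-score check below illustrates the idea only; OpenSearch’s detector uses a far more sophisticated model at scale, and the threshold here is an arbitrary assumption:&lt;/p&gt;

```python
import statistics

def is_anomaly(history, value, z_threshold=3.0):
    """Flag a value whose z-score against recent history exceeds a
    threshold. Toy sketch of the principle behind anomaly detection;
    the z_threshold of 3.0 is an arbitrary choice."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return abs(value - mean) / stdev > z_threshold
```

&lt;p&gt;The payoff of the managed feature is that you never pick the threshold yourself: the model learns each metric’s normal band over time.&lt;/p&gt;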




&lt;h2&gt;
  
  
  5. AWS Security Lake: The Long-Term Play
&lt;/h2&gt;

&lt;p&gt;Here’s where things get interesting. Security Lake is AWS’s answer to the question: “Where do I store petabytes of security data without going bankrupt?”&lt;/p&gt;

&lt;h3&gt;
  
  
  The OCSF Advantage
&lt;/h3&gt;

&lt;p&gt;Security Lake automatically normalizes logs to the Open Cybersecurity Schema Framework (OCSF). This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Standardized queries&lt;/strong&gt; across all log sources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-cloud ready&lt;/strong&gt; (Azure, GCP logs can be normalized too)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Future-proof&lt;/strong&gt; (vendor-agnostic format)&lt;/li&gt;
&lt;/ul&gt;
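
&lt;p&gt;As a rough sketch of what normalization buys you, the function below maps a raw CloudTrail-style event onto a flat, OCSF-inspired shape. The field names are simplified illustrations, not the full OCSF schema:&lt;/p&gt;

```python
def to_ocsf_like(raw_event):
    """Map a raw CloudTrail-style event onto a simplified,
    OCSF-inspired shape. Field names are illustrative only,
    not the real OCSF schema."""
    return {
        "class_name": "API Activity",
        "time": raw_event["eventTime"],
        "actor": raw_event.get("userIdentity", {}).get("arn", "unknown"),
        "api_operation": raw_event["eventName"],
        "cloud_region": raw_event.get("awsRegion"),
    }
```

&lt;p&gt;Once every source lands in one shape, a single Athena query can cover CloudTrail, VPC Flow Logs, and third-party sources alike.&lt;/p&gt;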

&lt;h3&gt;
  
  
  When to Use Security Lake:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;YES&lt;/strong&gt; if you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&amp;gt;1 year log retention&lt;/li&gt;
&lt;li&gt;Compliance with strict audit requirements&lt;/li&gt;
&lt;li&gt;Multi-cloud strategy&lt;/li&gt;
&lt;li&gt;Cost-effective long-term storage (S3 is cheap!)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;NO&lt;/strong&gt; if you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time alerting (use CloudWatch + OpenSearch instead)&lt;/li&gt;
&lt;li&gt;Simple single-account setup&lt;/li&gt;
&lt;li&gt;Quick implementation (&amp;lt;4 weeks)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Security Lake for retention, OpenSearch for hot analytics (last 30 days).&lt;/p&gt;
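
&lt;p&gt;That split can be expressed as a simple age-based routing rule. The boundaries below are illustrative, matching the 30-day hot window above plus an assumed one-year lake window before deep archive:&lt;/p&gt;

```python
import bisect

# Illustrative tier boundaries in days: hot analytics, then lake, then archive
TIER_BOUNDARIES = [30, 365]
TIER_NAMES = ["opensearch-hot", "security-lake", "s3-glacier"]

def storage_tier(age_days):
    """Route a log record to a storage tier based on its age in days."""
    return TIER_NAMES[bisect.bisect_left(TIER_BOUNDARIES, age_days)]
```

&lt;p&gt;Encoding the rule once keeps ingestion pipelines and lifecycle policies agreeing on where any given day’s logs live.&lt;/p&gt;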

&lt;h2&gt;
  
  
  6. The On-Premises Challenge
&lt;/h2&gt;

&lt;p&gt;Let’s address the elephant in the room: &lt;strong&gt;on-premises monitoring in a cloud-native world&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  What’s Realistic:
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;You CAN&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Forward logs via CloudWatch Agent&lt;/li&gt;
&lt;li&gt;Send syslogs via Kinesis Firehose&lt;/li&gt;
&lt;li&gt;Store and search on-prem logs in AWS&lt;/li&gt;
&lt;li&gt;Create basic alerts on log patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;You CANNOT (easily)&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get real-time metrics dashboards&lt;/li&gt;
&lt;li&gt;Automate remediation for on-prem resources&lt;/li&gt;
&lt;li&gt;Achieve full observability parity with AWS resources&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Pragmatic Approach:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# On-premises server → CloudWatch Logs
# Install agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/amazon-cloudwatch-agent.deb
sudo dpkg -i amazon-cloudwatch-agent.deb

# Configure to send logs only (no metrics)
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config \
  -m onPremise \
  -s \
  -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;For true on-premises monitoring, you might need to keep Zabbix or Prometheus for a while.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Decision: Two Approaches
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Approach 1: With Security Lake (Compliance-First)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Best For&lt;/strong&gt; : Healthcare, finance, government, or anyone with &amp;gt;1 year log retention requirements&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
AWS Services → Security Lake (S3/OCSF) → Athena (SQL queries)
                    ↓
              OpenSearch (Last 30 days hot analytics)
                    ↓
              CloudWatch Dashboards + Managed Grafana

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt; :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost-effective long-term retention&lt;/li&gt;
&lt;li&gt;OCSF standardization&lt;/li&gt;
&lt;li&gt;Multi-cloud ready&lt;/li&gt;
&lt;li&gt;Compliance-friendly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt; :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More complex setup&lt;/li&gt;
&lt;li&gt;Longer implementation (16-20 weeks)&lt;/li&gt;
&lt;li&gt;Requires OCSF knowledge&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Approach 2: Direct CloudWatch/OpenSearch (Speed-First)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Best For&lt;/strong&gt; : Startups, lower compliance reqs, quick wins&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
AWS Services → CloudWatch Logs → OpenSearch (direct)
                    ↓
              CloudWatch Dashboards + Managed Grafana
                    ↓
              S3 (archived logs via export)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Pros&lt;/strong&gt; :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Faster implementation &lt;/li&gt;
&lt;li&gt;Simpler architecture&lt;/li&gt;
&lt;li&gt;Real-time everything&lt;/li&gt;
&lt;li&gt;Lower learning curve&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cons&lt;/strong&gt; :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Higher CloudWatch Logs costs at scale&lt;/li&gt;
&lt;li&gt;No OCSF normalization&lt;/li&gt;
&lt;li&gt;OpenSearch storage costs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Real-World Implementation: Step-by-Step
&lt;/h2&gt;

&lt;p&gt;Let’s build this thing. Here’s the actual deployment sequence:&lt;/p&gt;

&lt;h3&gt;
  
  
  Week 1-2: Foundation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# 1. Enable AWS Organizations (if not already)
aws organizations create-organization

# 2. Enable CloudTrail (all regions, all accounts)
aws cloudtrail create-trail \
  --name organization-trail \
  --s3-bucket-name my-cloudtrail-bucket \
  --is-organization-trail \
  --is-multi-region-trail

# 3. Enable GuardDuty
aws guardduty create-detector --enable

# 4. Enable Security Hub
aws securityhub enable-security-hub

# 5. Enable AWS Config
aws configservice put-configuration-recorder \
  --configuration-recorder name=default,roleARN=arn:aws:iam::ACCOUNT:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig
aws configservice start-configuration-recorder \
  --configuration-recorder-name default

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Week 3-4: EKS Monitoring
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# fluent-bit-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: amazon-cloudwatch
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush 5
        Log_Level info

    [INPUT]
        Name tail
        Path /var/log/containers/*.log
        Parser docker
        Tag kube.*

    [FILTER]
        Name kubernetes
        Match kube.*
        Kube_URL https://kubernetes.default.svc:443
        Merge_Log On

    [OUTPUT]
        Name cloudwatch_logs
        Match kube.*
        region us-east-1
        log_group_name /aws/eks/my-cluster
        log_stream_prefix app-
        auto_create_group true



kubectl apply -f fluent-bit-config.yaml

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Week 5-6: OpenSearch SIEM
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Create OpenSearch domain
aws opensearch create-domain \
  --domain-name security-analytics \
  --engine-version "OpenSearch_2.11" \
  --cluster-config InstanceType=r6g.large.search,InstanceCount=3 \
  --ebs-options EBSEnabled=true,VolumeType=gp3,VolumeSize=100 \
  --encryption-at-rest-options Enabled=true \
  --node-to-node-encryption-options Enabled=true \
  --advanced-security-options Enabled=true,InternalUserDatabaseEnabled=false

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Week 7-8: Dashboards and Alerts
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
// cloudwatch-dashboard.json
{
  "widgets": [
    {
      "type": "metric",
      "properties": {
        "metrics": [
          ["AWS/EC2", "CPUUtilization", {"stat": "Average"}]
        ],
        "period": 300,
        "stat": "Average",
        "region": "us-east-1",
        "title": "EC2 CPU Overview"
      }
    },
    {
      "type": "log",
      "properties": {
        "query": "SOURCE '/aws/eks/my-cluster' | fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 20",
        "region": "us-east-1",
        "title": "Recent Errors"
      }
    }
  ]
}



aws cloudwatch put-dashboard \
  --dashboard-name "Production-Overview" \
  --dashboard-body file://cloudwatch-dashboard.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Automated Incident Response
&lt;/h2&gt;

&lt;p&gt;Here’s where it gets interesting. Let’s automate security responses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# lambda/security_response.py
import boto3

ec2 = boto3.client('ec2')
sns = boto3.client('sns')

def lambda_handler(event, context):
    """
    Responds to GuardDuty findings automatically
    """
    finding = event['detail']
    finding_type = finding['type']

    # SSH Brute Force detected
    if 'SSHBruteForce' in finding_type:
        instance_id = finding['resource']['instanceDetails']['instanceId']

        # Quarantine instance
        ec2.modify_instance_attribute(
            InstanceId=instance_id,
            Groups=['sg-quarantine'] # Pre-created quarantine security group
        )

        # Notify team
        sns.publish(
            TopicArn='arn:aws:sns:us-east-1:ACCOUNT:security-alerts',
            Subject=f'CRITICAL: Instance {instance_id} Quarantined',
            Message=f'Detected SSH brute force attack. Instance automatically isolated.\n\nFinding: {finding}'
        )

        return {'status': 'quarantined', 'instance': instance_id}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;EventBridge Rule&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"],
  "detail": {
    "severity": [{ "numeric": ["&amp;gt;=", 7] }],
    "type": ["UnauthorizedAccess:EC2/SSHBruteForce"]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Result&lt;/strong&gt; : Threat detected → Instance isolated → Team notified. All in &amp;lt;30 seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Optimization Tips
&lt;/h2&gt;

&lt;p&gt;Let’s talk money. Here’s how to keep costs reasonable:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. CloudWatch Logs: The Cost That Sneaks Up on You
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Set appropriate retention periods
import boto3

logs = boto3.client('logs')

# Development logs: 7 days
logs.put_retention_policy(
    logGroupName='/aws/lambda/dev-functions',
    retentionInDays=7
)

# Production logs: 30 days
logs.put_retention_policy(
    logGroupName='/aws/lambda/prod-functions',
    retentionInDays=30
)

# Compliance logs: Export to S3, then delete
logs.put_retention_policy(
    logGroupName='/aws/cloudtrail',
    retentionInDays=90
)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
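
&lt;p&gt;A back-of-the-envelope estimate makes the trade-off tangible before you touch retention settings. The per-GB rates below are illustrative placeholders, not official pricing; check the AWS pricing page for your region:&lt;/p&gt;

```python
def monthly_log_cost_usd(gb_ingested, gb_stored,
                         ingest_rate=0.50, storage_rate=0.03):
    """Rough CloudWatch Logs cost estimate in USD per month.
    The per-GB rates are illustrative placeholders, not official
    AWS pricing."""
    return gb_ingested * ingest_rate + gb_stored * storage_rate
```

&lt;p&gt;Note that ingestion, not storage, usually dominates, which is why filtering at the source matters more than trimming retention.&lt;/p&gt;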



&lt;h3&gt;
  
  
  2. Use Log Sampling
&lt;/h3&gt;

&lt;p&gt;Not every log line needs immediate indexing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Sample 10% of high-volume logs
import random

def lambda_handler(event, context):
    if random.random() &amp;lt; 0.1: # 10% sampling
        # Send to OpenSearch
        pass

    # Always send to S3 (cheap storage)
    # Send everything

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. OpenSearch Reserved Instances
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Save 30-40% with 1-year reserved capacity
aws opensearch purchase-reserved-instance-offering \
  --reserved-instance-offering-id offering-id \
  --instance-count 3

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. S3 Intelligent-Tiering
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Automatic cost optimization for Security Lake
aws s3api put-bucket-intelligent-tiering-configuration \
  --bucket security-lake-bucket \
  --id intelligent-tiering \
  --intelligent-tiering-configuration '{
    "Id": "intelligent-tiering",
    "Status": "Enabled",
    "Tierings": [
      {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
      {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"}
    ]
  }'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Migration Strategy: The Practical Path
&lt;/h2&gt;

&lt;p&gt;Don’t try to do everything at once. Here’s the battle-tested sequence:&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: AWS Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Start with EKS (highest ROI)&lt;/li&gt;
&lt;li&gt;Add EC2 instances&lt;/li&gt;
&lt;li&gt;Enable RDS Enhanced Monitoring&lt;/li&gt;
&lt;li&gt;Configure Lambda logging&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Win&lt;/strong&gt; : 60% of your monitoring consolidated&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 2: Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Enable Security Hub&lt;/li&gt;
&lt;li&gt;Deploy GuardDuty&lt;/li&gt;
&lt;li&gt;Set up OpenSearch SIEM&lt;/li&gt;
&lt;li&gt;Migrate from Sentinel&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Win&lt;/strong&gt; : Security team has single console&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 3: Dashboards
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Build CloudWatch operational dashboards&lt;/li&gt;
&lt;li&gt;Deploy Managed Grafana&lt;/li&gt;
&lt;li&gt;Recreate critical legacy dashboards&lt;/li&gt;
&lt;li&gt;Train operations team&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Win&lt;/strong&gt; : Ops team stops using old tools&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 4: On-Premises
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Deploy CloudWatch Agent to servers&lt;/li&gt;
&lt;li&gt;Configure syslog forwarding&lt;/li&gt;
&lt;li&gt;Archive on-prem logs in S3&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 5: Decommission
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Parallel run validation (2 weeks)&lt;/li&gt;
&lt;li&gt;Export historical data&lt;/li&gt;
&lt;li&gt;Turn off Zabbix, Prometheus&lt;/li&gt;
&lt;li&gt;Reclaim licenses and infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Common Issues (And How to Avoid Them)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Issue #1: CloudWatch Logs Cost Explosion
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt; : Someone enables debug logging in production&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Implement log sampling and filtering at source
import logging
import watchtower

# Only send WARNING and above to CloudWatch
handler = watchtower.CloudWatchLogHandler(log_group='/aws/app')
handler.setLevel(logging.WARNING)

logger = logging.getLogger(__name__)
logger.addHandler(handler)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Issue #2: Alert Fatigue
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt; : 500 alerts per day, all marked “critical”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Implement alert prioritization
def calculate_severity(metric_value, threshold):
    if metric_value &amp;gt; threshold * 1.5:
        return 'CRITICAL' # Page on-call
    elif metric_value &amp;gt; threshold * 1.2:
        return 'WARNING' # Slack notification
    else:
        return 'INFO' # Log only

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Issue #3: The “We’ll Monitor Everything” Trap
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt; : Monitoring 10,000 metrics per instance&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt; : Start with the &lt;strong&gt;Golden Signals&lt;/strong&gt; :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Latency&lt;/strong&gt; : How long requests take&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traffic&lt;/strong&gt; : Request volume&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Errors&lt;/strong&gt; : Failure rate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Saturation&lt;/strong&gt; : Resource utilization
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Focused metric collection
CRITICAL_METRICS = [
    'CPUUtilization',
    'MemoryUtilization',
    'NetworkIn',
    'NetworkOut',
    'DiskReadOps',
    'DiskWriteOps'
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
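
&lt;p&gt;Tying the four signals together, a per-minute summary can be computed from raw counters. The function and field names here are illustrative, not a CloudWatch API:&lt;/p&gt;

```python
def golden_signals(latencies_ms, request_count, error_count, cpu_percent):
    """Summarize the four golden signals for one minute of traffic.
    Function and field names are illustrative."""
    ordered = sorted(latencies_ms)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]  # nearest-rank p95
    return {
        "latency_p95_ms": p95,
        "traffic_rps": request_count / 60,
        "error_rate": error_count / max(request_count, 1),
        "saturation": cpu_percent / 100,
    }
```

&lt;p&gt;Four numbers per service per minute will catch most incidents; everything else is detail you drill into afterwards.&lt;/p&gt;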



&lt;h3&gt;
  
  
  Issue #4: Forgetting About Cardinality
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt; : OpenSearch cluster dies from high-cardinality fields&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Don't index user IDs, session IDs, or timestamps as keywords!
# - "text" avoids keyword doc values for high-cardinality fields
# - "index": false skips indexing entirely for fields you won't search
PUT /logs/_mapping
{
  "properties": {
    "user_id": {
      "type": "text",
      "index": false
    },
    "timestamp": {
      "type": "date"
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
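
&lt;p&gt;Before onboarding a new log source, it is worth measuring cardinality on a sample rather than discovering it in production. This helper flags risky fields; the threshold of 1000 is an arbitrary assumption:&lt;/p&gt;

```python
from collections import defaultdict

def high_cardinality_fields(docs, threshold=1000):
    """Return fields whose distinct-value count in a sample meets or
    exceeds the threshold -- candidates to exclude from keyword
    indexing. The threshold default is an arbitrary choice."""
    distinct = defaultdict(set)
    for doc in docs:
        for field, value in doc.items():
            distinct[field].add(value)
    return sorted(f for f, vals in distinct.items() if len(vals) >= threshold)
```

&lt;p&gt;Running this over a day’s sample before defining the index mapping turns a cluster-killing surprise into a one-line exclusion.&lt;/p&gt;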






&lt;h2&gt;
  
  
  Success Metrics: Measuring Your Win
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Old Way: "Let me check 5 systems..."
Time to answer: 15-30 minutes

New Way: "Here's the CloudWatch dashboard..."
Time to answer: 30 seconds

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Troubleshooting Guide
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Issue: CloudWatch Agent Not Sending Logs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Check agent status
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a query -m ec2 -c default

# Check agent logs
sudo tail -f /opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log

# Common fix: IAM permissions
# Ensure instance role has CloudWatchAgentServerPolicy

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Issue: OpenSearch “Cluster Red” Status
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Check cluster health
curl -XGET 'https://your-domain.region.es.amazonaws.com/_cluster/health?pretty'

# Common causes:
# 1. Unassigned shards (need more nodes)
# 2. Disk space &amp;gt;85% used (scale storage)
# 3. JVM pressure (scale instance type)

# Quick fix: Delete old indices
curl -XDELETE 'https://your-domain.region.es.amazonaws.com/old-index-*'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Issue: High CloudWatch Costs
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Find expensive log groups
aws logs describe-log-groups \
  --query 'logGroups[*].[logGroupName,storedBytes]' \
  --output text | sort -k2 -rn

# Check for debug logs in production
aws logs filter-log-events \
  --log-group-name /aws/lambda/my-function \
  --filter-pattern "DEBUG" \
  --limit 10

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Best Practices Checklist
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Document current log volumes (GB/day)&lt;/li&gt;
&lt;li&gt;List all alert rules from legacy systems&lt;/li&gt;
&lt;li&gt;Identify compliance retention requirements&lt;/li&gt;
&lt;li&gt;Get buy-in from security and ops teams&lt;/li&gt;
&lt;li&gt;Set realistic budget expectations&lt;/li&gt;
&lt;li&gt;Start with a non-production environment&lt;/li&gt;
&lt;li&gt;Run legacy and new systems in parallel (2+ weeks)&lt;/li&gt;
&lt;li&gt;Train the ops team before cutover&lt;/li&gt;
&lt;li&gt;Have a rollback plan ready&lt;/li&gt;
&lt;li&gt;Document everything (future you will thank you)&lt;/li&gt;
&lt;li&gt;Monitor CloudWatch costs daily (first month)&lt;/li&gt;
&lt;li&gt;Review alert effectiveness weekly&lt;/li&gt;
&lt;li&gt;Gather user feedback from the ops team&lt;/li&gt;
&lt;li&gt;Optimise based on actual usage patterns&lt;/li&gt;
&lt;li&gt;Schedule quarterly reviews&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;Consolidating from multiple monitoring tools to a unified AWS-native stack isn’t just about reducing complexity; it’s about &lt;strong&gt;operational excellence&lt;/strong&gt; :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Faster incident response&lt;/strong&gt; : 15 minutes instead of 45&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better security posture&lt;/strong&gt; : Automated threat response&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance confidence&lt;/strong&gt; : Query any log in seconds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost savings&lt;/strong&gt; : £5-10k+/year in eliminated tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Happier ops team&lt;/strong&gt; : One system to master, not five&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Getting Started
&lt;/h3&gt;

&lt;p&gt;If you’re ready to begin:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Week 1&lt;/strong&gt;: Audit current tools and costs&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Week 2&lt;/strong&gt;: Estimate AWS costs with &lt;a href="https://calculator.aws/" rel="noopener noreferrer"&gt;AWS Pricing Calculator&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Week 3&lt;/strong&gt;: POC with non-prod EKS cluster&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Week 4&lt;/strong&gt;: Build business case&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Week 5+&lt;/strong&gt;: Execute phased migration&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;h3&gt;
  
  
  AWS Documentation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/cloudwatch/" rel="noopener noreferrer"&gt;CloudWatch User Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ContainerInsights.html" rel="noopener noreferrer"&gt;Container Insights&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/securityhub/" rel="noopener noreferrer"&gt;Security Hub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/security-lake/" rel="noopener noreferrer"&gt;Security Lake&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Tools
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://observability.workshop.aws/" rel="noopener noreferrer"&gt;AWS Observability Workshop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-cloudwatch-agent-configuration-file-wizard.html" rel="noopener noreferrer"&gt;CloudWatch Agent Configuration Wizard&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://schema.ocsf.io/" rel="noopener noreferrer"&gt;OCSF Schema&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Community
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.youtube.com/playlist?list=PL2yQDdvlhXf8yz0eVxJZZKq8BcqJOZJKK" rel="noopener noreferrer"&gt;AWS re:Invent Observability Sessions&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws-observability" rel="noopener noreferrer"&gt;AWS Observability GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Building a unified AWS monitoring solution is a journey, not a destination. Start small, prove value quickly, and iterate based on real-world usage.&lt;/p&gt;

&lt;p&gt;The goal isn’t monitoring perfection; it’s &lt;strong&gt;operational sanity&lt;/strong&gt;. When your phone rings at 3 AM, you want answers in minutes, not a hunt across five different tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; #aws #monitoring #observability #cloudwatch #devops #sre #kubernetes #eks #security #siem&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.allaboutcloud.co.uk/complete-guide-to-aws-monitoring-and-observability-for-devops-teams/" rel="noopener noreferrer"&gt;Complete Guide to AWS Monitoring and Observability for DevOps Teams&lt;/a&gt; first appeared on &lt;a href="https://www.allaboutcloud.co.uk" rel="noopener noreferrer"&gt;Allaboutcloud.co.uk&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudwatch</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Kiro CLI – Transform Your Terminal into an AI-Powered AWS Architecture Studio</title>
      <dc:creator>Nikitas Gargoulakis</dc:creator>
      <pubDate>Tue, 06 Jan 2026 19:55:34 +0000</pubDate>
      <link>https://forem.com/aws-builders/kiro-cli-transform-your-terminal-into-an-ai-powered-aws-architecture-studio-1m1i</link>
      <guid>https://forem.com/aws-builders/kiro-cli-transform-your-terminal-into-an-ai-powered-aws-architecture-studio-1m1i</guid>
      <description>&lt;p&gt;Creating AWS architecture diagrams has traditionally been one of those tasks that developers and solutions architects love to procrastinate on. You know the drill: dragging and dropping icons in tools like Lucidchart or draw.io, hunting for the latest AWS service icons, spending hours on layout and alignment, and then starting all over again when requirements change. What if I told you there’s a better way?&lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;Kiro CLI&lt;/strong&gt; with Model Context Protocol (MCP) support: a game-changing approach that lets you generate professional AWS architecture diagrams from natural-language prompts, right in your terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Kiro CLI?
&lt;/h2&gt;

&lt;p&gt;Kiro CLI is a command-line interface that brings AI-powered development capabilities directly to your terminal. Built on Claude’s frontier models, it’s designed to help developers write code, debug issues, automate workflows, and yes, create architecture diagrams – all through natural conversation.&lt;/p&gt;

&lt;p&gt;Unlike traditional CLI tools that require memorizing specific commands and syntax, Kiro CLI understands what you want to accomplish and helps you get there through an interactive dialogue. It’s like having a senior developer sitting next to you, ready to help with any task.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of Kiro CLI
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Custom Agents for Specialized Tasks&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can create task-specific agents optimized for your workflows. Want a DevOps agent that knows your infrastructure patterns? Or a diagram specialist that follows your company’s architecture standards? Kiro CLI lets you build and deploy these specialized agents with pre-defined tool permissions, context, and prompts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Advanced Context Management&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kiro CLI maintains project-specific conversation history and understands your codebase through directory-based persistence. It automatically associates chat sessions with your working directories, ensuring relevant context is always available.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Native MCP Support&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is where the magic happens for diagram generation. The Model Context Protocol allows Kiro CLI to connect to external tools and data sources, including AWS documentation and diagram generation libraries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Seamless Cross-Platform Experience&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you’re already using Kiro IDE, your configurations transfer seamlessly. Your MCP servers, steering files, and project documentation work in both environments – configure once, use everywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Model Context Protocol (MCP)
&lt;/h2&gt;

&lt;p&gt;Before diving into diagram creation, let’s understand what makes this possible. The Model Context Protocol, developed by Anthropic, is an open standard that enables AI tools to securely connect to external data sources, tools, and custom servers.&lt;/p&gt;

&lt;p&gt;Think of MCP as a universal adapter for your AI assistant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;MCP Client&lt;/strong&gt;: The host application (Kiro CLI in our case) that communicates with MCP servers&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MCP Server&lt;/strong&gt;: Lightweight programs that expose specific tools or resources&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;MCP Tools&lt;/strong&gt;: Model-controlled functions that the AI can automatically discover and invoke&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For AWS diagram generation, we’ll use two critical MCP servers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;AWS Diagram MCP Server&lt;/strong&gt;: Provides tools to create diagrams using Python’s diagrams library&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AWS Documentation MCP Server&lt;/strong&gt;: Searches and fetches AWS documentation for best practices&lt;/li&gt;
&lt;/ol&gt;
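Conceptually, the client side of this is simple: Kiro CLI reads a JSON configuration file and launches each enabled server as a subprocess it can route tool calls through. A rough stdlib-only sketch of that discovery step (the field names mirror the `mcp.json` format shown later in this article; the parsing logic is illustrative, not Kiro's actual implementation):

```python
import json

# Illustrative config in the same shape as ~/.kiro/settings/mcp.json
# (the server names are real awslabs packages; the structure below is
# a trimmed example, not a full Kiro config).
MCP_JSON = """
{
  "mcpServers": {
    "awslabs.aws-diagram-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-diagram-mcp-server"],
      "disabled": false
    },
    "awslabs.aws-documentation-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-documentation-mcp-server@latest"],
      "disabled": true
    }
  }
}
"""

def enabled_servers(raw):
    """Return (name, launch command) pairs for servers not marked disabled."""
    config = json.loads(raw)
    return [
        (name, [spec["command"], *spec.get("args", [])])
        for name, spec in config.get("mcpServers", {}).items()
        if not spec.get("disabled", False)
    ]

for name, cmd in enabled_servers(MCP_JSON):
    print(f"{name}: {' '.join(cmd)}")
# awslabs.aws-diagram-mcp-server: uvx awslabs.aws-diagram-mcp-server
```

The `disabled` flag is why you can park a server in the config without uninstalling it, which is handy when debugging which server a tool call came from.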

&lt;h2&gt;
  
  
  Setting Up Your Environment
&lt;/h2&gt;

&lt;p&gt;Let’s get everything configured so you can start generating diagrams.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install Kiro CLI
&lt;/h3&gt;

&lt;p&gt;Installation is straightforward and takes less than a minute:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
curl -fsSL https://cli.kiro.dev/install | bash

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the installation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
kiro --version

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see output like &lt;code&gt;kiro-cli x.x.x&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Configure Authentication
&lt;/h3&gt;

&lt;p&gt;Kiro CLI supports multiple authentication methods. For quick experimentation, AWS Builder ID is recommended:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
kiro login

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Follow the prompts to complete authentication.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Install Required Dependencies
&lt;/h3&gt;

&lt;p&gt;The diagram generation relies on Python tooling. First, install &lt;code&gt;uv&lt;/code&gt; (a fast Python package installer):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
pip install uv

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you’re on macOS, you’ll also need Graphviz:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
brew install graphviz

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Debian/Ubuntu-based Linux:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
sudo apt-get install graphviz

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Configure MCP Servers
&lt;/h3&gt;

&lt;p&gt;Kiro CLI automatically discovers MCP servers from the configuration file. Create or edit &lt;code&gt;~/.kiro/settings/mcp.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
{
  "mcpServers": {
    "awslabs.aws-diagram-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-diagram-mcp-server"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "autoApprove": [],
      "disabled": false
    },
    "awslabs.aws-documentation-mcp-server": {
      "command": "uvx",
      "args": ["awslabs.aws-documentation-mcp-server@latest"],
      "env": {
        "FASTMCP_LOG_LEVEL": "ERROR"
      },
      "autoApprove": [],
      "disabled": false
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Verify MCP Configuration
&lt;/h3&gt;

&lt;p&gt;Start a Kiro session:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
kiro-cli

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ay44ylg1lgifjl45rdb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ay44ylg1lgifjl45rdb.png" alt="kiro-cli" width="714" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check your configured MCP servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
/mcp

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see both AWS Diagram and AWS Documentation MCP servers listed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fisdjczegmyjlpylb3mhj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fisdjczegmyjlpylb3mhj.png" width="610" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating Your First AWS Diagram
&lt;/h2&gt;

&lt;p&gt;Let’s start with a simple three-tier web application architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic Three-Tier Architecture
&lt;/h3&gt;

&lt;p&gt;In your Kiro chat session, simply describe what you want:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Create a three-tier web application architecture diagram with:
1. Application Load Balancer in a public subnet
2. Auto Scaling group with EC2 instances in private subnets
3. RDS MySQL database in private subnets
4. S3 bucket for static assets
Include VPC, availability zones, and security groups

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kiro will use the AWS Diagram MCP server to generate Python code with the &lt;code&gt;diagrams&lt;/code&gt; library and create a visual representation. The diagram is saved as an image file (typically PNG or SVG).&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding the Generated Code
&lt;/h3&gt;

&lt;p&gt;Behind the scenes, Kiro generates code similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
from diagrams import Diagram, Cluster
from diagrams.aws.compute import EC2, AutoScaling
from diagrams.aws.network import ELB, VPC
from diagrams.aws.database import RDS
from diagrams.aws.storage import S3

with Diagram("Three-Tier Web Application", show=False, direction="TB"):
    with Cluster("VPC"):
        with Cluster("Public Subnet"):
            alb = ELB("Application\nLoad Balancer")

        with Cluster("Private Subnet - App Tier"):
            app_group = [EC2("Web Server 1"),
                        EC2("Web Server 2"),
                        EC2("Web Server 3")]
            asg = AutoScaling("Auto Scaling")

        with Cluster("Private Subnet - Data Tier"):
            db = RDS("MySQL\nDatabase")

        static = S3("Static Assets")

    alb &amp;gt;&amp;gt; app_group &amp;gt;&amp;gt; db
    alb &amp;gt;&amp;gt; static

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key point is that you don’t need to know this syntax – just describe what you want, and the AI handles the rest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced Diagram Patterns
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Data Processing Pipeline
&lt;/h3&gt;

&lt;p&gt;For more complex architectures, Kiro CLI really shines. Here’s how to create a sophisticated data processing pipeline:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Create a data processing pipeline diagram using AWS services:

1. Data Ingestion cluster:
   - Kinesis Data Streams for real-time ingestion
   - API Gateway for REST endpoints

2. Data Processing cluster:
   - Lambda functions for transformation
   - AWS Glue for ETL jobs
   - Step Functions for orchestration

3. Data Storage cluster:
   - S3 buckets (Raw, Processed, Curated)
   - DynamoDB for metadata

4. Analytics cluster:
   - Athena for querying
   - QuickSight for visualization

Include data flow arrows, IAM role boundaries, and CloudWatch monitoring.
Ensure AWS best practice visualization.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f35whjj6t73y8kor2u1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f35whjj6t73y8kor2u1.png" alt="kiro cli" width="800" height="784"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Serverless Microservices Architecture
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Design a serverless microservices architecture with:
- API Gateway as the entry point
- Multiple Lambda functions for different services (auth, orders, inventory)
- DynamoDB tables for each service
- EventBridge for inter-service communication
- Cognito for user authentication
- CloudFront for global distribution
- S3 for frontend hosting

Show clear service boundaries and event flows.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymki057cujpv2ivto985.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymki057cujpv2ivto985.png" width="800" height="1082"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Container-Based Architecture
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Create an EKS-based microservices diagram showing:
- VPC with public and private subnets across 3 AZs
- Application Load Balancer
- EKS cluster with multiple node groups
- ECR for container images
- RDS Aurora for database
- ElastiCache for caching
- CloudWatch for logging and monitoring

Include networking flow and security group relationships.


 Perfect! I've created a comprehensive EKS microservices architecture diagram
that follows AWS best practices. Here's what the diagram includes:

## Architecture Components:

VPC Design (Multi-AZ):
- VPC with CIDR 10.0.0.0/16 across 3 Availability Zones
- Public subnets (10.0.1-3.0/24) for internet-facing resources
- Private subnets (10.0.11-13.0/24) for EKS worker nodes
- Database subnets (10.0.21-23.0/24) for RDS and ElastiCache

Networking &amp;amp; Load Balancing:
- Internet Gateway for public internet access
- NAT Gateways in each AZ for outbound internet access from private subnets
- Application Load Balancer in public subnets for incoming traffic

EKS Cluster:
- Managed EKS control plane
- Two node groups with different instance types:
  - General purpose (t3.medium) for standard workloads
  - Compute optimized (c5.large) for CPU-intensive tasks
- Microservices deployed as containers (Frontend, API Gateway, User Service,
Order Service, Payment Service)

Data Layer:
- RDS Aurora with primary (writer) and replica (reader) instances
- ElastiCache Redis cluster for caching

Supporting Services:
- ECR for container image storage
- CloudWatch for logging and monitoring

Security:
- Security groups with appropriate port restrictions:
  - ALB SG: Ports 80/443 (HTTP/HTTPS)
  - EKS SG: Ports 443, 10250 (Kubernetes API, kubelet)
  - DB SG: Ports 3306/5432 (MySQL/PostgreSQL)
  - Cache SG: Port 6379 (Redis)

## AWS Best Practices Implemented:

1. High Availability: Multi-AZ deployment across 3 availability zones
2. Security: Private subnets for compute and database tiers, security groups with
least privilege
3. Scalability: Multiple node groups, auto-scaling capabilities
4. Monitoring: CloudWatch integration for logs and metrics
5. Container Management: ECR for secure image storage
6. Network Segmentation: Separate subnets for different tiers
7. Managed Services: Using managed EKS control plane, RDS Aurora, and ElastiCache

The diagram shows the complete traffic flow from users through the internet
gateway, load balancer, to the microservices, and their connections to the
database and cache layers. The orange dashed lines represent security group
associations for each component.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.allaboutcloud.co.uk%2Fwp-content%2Fuploads%2F2026%2F01%2Feks-microservices-architecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.allaboutcloud.co.uk%2Fwp-content%2Fuploads%2F2026%2F01%2Feks-microservices-architecture.png" width="800" height="1568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices for Diagram Generation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Be Specific About Your Requirements
&lt;/h3&gt;

&lt;p&gt;The more detailed your prompt, the better the result. Instead of “create an AWS diagram,” specify:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which AWS services to include&lt;/li&gt;
&lt;li&gt;How they connect to each other&lt;/li&gt;
&lt;li&gt;Clustering and organisation preferences&lt;/li&gt;
&lt;li&gt;Network boundaries and security groups&lt;/li&gt;
&lt;li&gt;Data flow directions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Iterate on Your Diagrams
&lt;/h3&gt;

&lt;p&gt;Kiro CLI excels at iteration. After generating an initial diagram, you can refine it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Add a CloudFront distribution in front of the ALB
Include an AWS WAF for security
Show the connection to Route 53 for DNS
Don't over engineer it. Keep it simple

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Leverage AWS Documentation MCP
&lt;/h3&gt;

&lt;p&gt;When you’re unsure about best practices, ask Kiro to consult AWS documentation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Search AWS documentation for best practices on securing RDS databases,
then update the diagram to reflect those recommendations.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Real-World Use Cases
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Documentation for Stakeholders
&lt;/h3&gt;

&lt;p&gt;When you need to explain your infrastructure to non-technical stakeholders, generate clean, professional diagrams that focus on business flows rather than technical details.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Compliance and Audit Requirements
&lt;/h3&gt;

&lt;p&gt;Many compliance frameworks require architecture documentation. Kiro CLI can quickly generate diagrams showing security controls, data flows, and network segmentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Infrastructure Planning
&lt;/h3&gt;

&lt;p&gt;Before implementing new features, use Kiro to explore different architectural approaches visually. Generate multiple diagram variations to compare trade-offs.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Onboarding New Team Members
&lt;/h3&gt;

&lt;p&gt;Create comprehensive architecture diagrams as part of your onboarding documentation. New developers can quickly understand system design and component relationships.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Disaster Recovery Planning
&lt;/h3&gt;

&lt;p&gt;Generate diagrams showing your DR setup, including cross-region replication, backup strategies, and failover processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Comparing Traditional vs. AI-Powered Diagram Tools
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Traditional Approach
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Time&lt;/strong&gt;: A few hours for a complex architecture&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Learning Curve&lt;/strong&gt;: Days to weeks to master diagramming tools&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Maintenance&lt;/strong&gt;: Manual updates for every change&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Consistency&lt;/strong&gt;: Difficult to maintain across multiple diagrams&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost&lt;/strong&gt;: Subscription fees for professional tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Kiro CLI Approach
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Time&lt;/strong&gt;: 5-15 minutes for a complex architecture&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Learning Curve&lt;/strong&gt;: Minutes – just describe what you want&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Maintenance&lt;/strong&gt;: Natural language updates&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Consistency&lt;/strong&gt;: AI ensures standard practices&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost&lt;/strong&gt;: Included in Kiro subscription&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Advanced Tips and Tricks
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Multi-Region Architectures
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Create a multi-region architecture showing:
- Primary region in us-east-1 with full stack
- DR region in eu-west-1 with warm standby
- Route 53 health checks and failover routing
- Cross-region replication for S3 and DynamoDB


## Architecture Components

Primary Region (us-east-1) - Active:
- Application Load Balancer distributing traffic
- Multiple EC2 web servers for high availability
- RDS with Multi-AZ deployment for database resilience
- DynamoDB primary table
- S3 primary bucket for object storage
- CloudWatch for monitoring and alerting

DR Region (eu-west-1) - Warm Standby:
- ALB in standby mode (can be activated quickly)
- EC2 instance in warm standby (minimal capacity, can scale up)
- RDS Read Replica for database failover
- DynamoDB Global Tables for automatic replication
- S3 replica bucket with cross-region replication
- CloudWatch for monitoring the DR environment

## Key AWS Best Practices Implemented

Route 53 Health Checks &amp;amp; Failover:
- DNS-based failover routing
- Health checks monitor primary region endpoints
- Automatic failover to DR region when primary fails
- Green arrows show normal traffic flow, red dashed shows failover

Cross-Region Replication:
- **RDS:** Read replicas provide near real-time data replication
- **DynamoDB:** Global Tables enable automatic multi-region replication
- **S3:** Cross-region replication ensures data durability across regions

Warm Standby Strategy:
- Cost-effective approach with minimal resources in DR region
- Can be scaled up quickly during failover
- Balances cost with recovery time objectives (RTO)

This architecture provides robust disaster recovery capabilities while following
AWS Well-Architected principles for reliability and cost optimisation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj8zf1u42rtj1qwy6i2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvj8zf1u42rtj1qwy6i2a.png" width="800" height="1183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Hybrid Cloud Scenarios
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Design a hybrid architecture connecting on-premises data center to AWS:
- Direct Connect connection
- VPN as backup
- AWS Transit Gateway
- On-premises resources shown separately

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Future of Infrastructure Documentation
&lt;/h2&gt;

&lt;p&gt;Kiro CLI represents a shift in how we can create and maintain technical documentation. By combining the power of large language models with specialized tools through MCP, we’re moving toward a future where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Documentation stays in sync with code automatically&lt;/li&gt;
&lt;li&gt;Architecture diagrams update as infrastructure evolves&lt;/li&gt;
&lt;li&gt;Best practices are enforced consistently&lt;/li&gt;
&lt;li&gt;Knowledge sharing becomes frictionless&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Traditional diagramming tools still have their place, but for developers and architects, Kiro CLI with MCP support offers an unprecedented productivity boost. What used to take hours now takes minutes. What required specialised tool knowledge now just requires clear communication.&lt;/p&gt;

&lt;p&gt;The real power isn’t just in generating diagrams faster; it’s in lowering the barrier to creating and maintaining quality documentation. When documentation becomes this easy, teams are more likely to keep it current, and that benefits everyone.&lt;/p&gt;

&lt;p&gt;Whether you’re building a simple three-tier application or a complex microservices architecture spanning multiple regions, Kiro CLI can help you create professional AWS architecture diagrams that communicate your vision clearly and accurately.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://kiro.dev/docs/cli" rel="noopener noreferrer"&gt;https://kiro.dev/docs/cli&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This article reflects my personal experience using AWS Kiro for side projects; the diagrams it produces can be used as a starting point that follows AWS best practices.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.allaboutcloud.co.uk/kiro-cli-transform-your-terminal-into-an-ai-powered-aws-architecture-studio/" rel="noopener noreferrer"&gt;Kiro CLI – Transform Your Terminal into an AI-Powered AWS Architecture Studio&lt;/a&gt; first appeared on &lt;a href="https://www.allaboutcloud.co.uk" rel="noopener noreferrer"&gt;Allaboutcloud.co.uk&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>kiro</category>
      <category>kirocli</category>
    </item>
    <item>
      <title>How to Protect VMware VMs from Ransomware with AWS Backup</title>
      <dc:creator>Nikitas Gargoulakis</dc:creator>
      <pubDate>Wed, 03 Sep 2025 17:43:00 +0000</pubDate>
      <link>https://forem.com/ngargoulakis/how-to-protect-vmware-vms-from-ransomware-with-aws-backup-2hh1</link>
      <guid>https://forem.com/ngargoulakis/how-to-protect-vmware-vms-from-ransomware-with-aws-backup-2hh1</guid>
      <description>&lt;p&gt;AWS Backup ransomware protection helps secure VMware workloads against modern ransomware attacks by using immutable backups, logical air-gapped vaults, and cross-account isolation.&lt;/p&gt;

&lt;p&gt;This guide details the entire process of deploying an enterprise-grade backup solution that safeguards against ransomware attacks using three security layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Immutable Storage&lt;/strong&gt; – Backups cannot be deleted by anyone, including attackers with administrative access&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Logical Air-Gapped Vault&lt;/strong&gt; – Securely isolated storage environment that uses AWS controls (like separate accounts, restrictive IAM, and immutability) to prevent access or tampering&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;External Approval Authority&lt;/strong&gt; – Approval team located in a separate AWS account outside your organisation&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Security Model
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Production Environment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your production AWS accounts and VMware environment&lt;/li&gt;
&lt;li&gt;Daily automated backups with Changed Block Tracking&lt;/li&gt;
&lt;li&gt;Standard backup vault for operational restores&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;External Approval Authority&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Completely separate AWS account (not part of your organisation)&lt;/li&gt;
&lt;li&gt;Contains approval team of 3+ trusted senior executives&lt;/li&gt;
&lt;li&gt;MFA enforcement for all team members&lt;/li&gt;
&lt;li&gt;Only function: approve emergency backup access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Recovery Environment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used exclusively during disaster recovery&lt;/li&gt;
&lt;li&gt;Can request access to protected backups&lt;/li&gt;
&lt;li&gt;Access granted only after multi-party approval from external team&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How This Stops Ransomware Attacks
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Traditional Attack Pattern:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Attacker gains access to production systems&lt;/li&gt;
&lt;li&gt;Escalates privileges to administrator level&lt;/li&gt;
&lt;li&gt;Deletes or encrypts all backups&lt;/li&gt;
&lt;li&gt;Encrypts production data&lt;/li&gt;
&lt;li&gt;Demands ransom&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How Our Architecture Prevents This:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Step 3 Fails:&lt;/strong&gt; Compliance-mode locks prevent backup deletion by anyone&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Air-Gapping:&lt;/strong&gt; Backups logically isolated, not directly accessible&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;External Approval:&lt;/strong&gt; Attacker cannot compromise approval team (separate organization)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Party Requirement:&lt;/strong&gt; At least two trusted approvers must sign off on any access request&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Result:&lt;/strong&gt; Backups remain protected; organization can recover without paying ransom&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Production Environment Requirements
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS Infrastructure:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Organisations with minimum 2 accounts&lt;/li&gt;
&lt;li&gt;One account designated as backup delegate administrator&lt;/li&gt;
&lt;li&gt;Site-to-Site VPN between your on-premises datacenter and AWS, or direct internet connectivity from the VMware environment to AWS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;VMware Environment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VMware vSphere 6.7 or later&lt;/li&gt;
&lt;li&gt;vCenter Server operational&lt;/li&gt;
&lt;li&gt;Service account with backup permissions&lt;/li&gt;
&lt;li&gt;Available resources: 4 vCPUs, 8GB RAM, 80GB disk for gateway VM&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Network Requirements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPN bandwidth: 100 Mbps minimum (1 Gbps recommended)&lt;/li&gt;
&lt;li&gt;Network latency: Under 50ms recommended&lt;/li&gt;
&lt;li&gt;Firewall permits: HTTPS (port 443) to AWS endpoints&lt;/li&gt;
&lt;li&gt;DNS resolution for AWS service endpoints&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  External Approval Account Requirements
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;New AWS Account:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Must be completely separate from your organisation&lt;/li&gt;
&lt;li&gt;Cannot be member of any AWS Organisations&lt;/li&gt;
&lt;li&gt;Dedicated email address (example: &lt;a href="mailto:backup-approvals@yourcompany.com"&gt;backup-approvals@yourcompany.com&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Root account secured with hardware MFA token&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Trusted Approvers (3 minimum):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Senior executives with authority to approve emergencies&lt;/li&gt;
&lt;li&gt;Examples: CTO, CISO, Infrastructure Director&lt;/li&gt;
&lt;li&gt;Must be available 24/7 for emergency response&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  VMware Backup Gateway Setup
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Configure AWS Organisations Delegation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; Designates your backup account as the central backup administrator for the entire organisation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instructions:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to your AWS Organizations management account&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Organizations&lt;/strong&gt; service&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;Services&lt;/strong&gt; → &lt;strong&gt;AWS service access&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enable access for &lt;strong&gt;AWS Backup&lt;/strong&gt; service&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Delegated administrators&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Register your backup account as delegated administrator for AWS Backup service&lt;/li&gt;
&lt;li&gt;Verify delegation appears as “Active”&lt;/li&gt;
&lt;/ol&gt;
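&lt;p&gt;The console steps above can also be scripted. A minimal sketch of the same delegation via the AWS Organizations API is shown below; the payload is built locally and the actual boto3 calls are left commented, since they must run from the management account with its credentials (the account ID is a placeholder):&lt;/p&gt;

```python
# Sketch: enable AWS Backup service access in the organisation and
# register the backup account as delegated administrator.
BACKUP_ACCOUNT_ID = "111122223333"  # placeholder: your backup account ID
SERVICE_PRINCIPAL = "backup.amazonaws.com"

delegation_request = {
    "AccountId": BACKUP_ACCOUNT_ID,
    "ServicePrincipal": SERVICE_PRINCIPAL,
}

# With management-account credentials configured, the real calls would be:
# import boto3
# org = boto3.client("organizations")
# org.enable_aws_service_access(ServicePrincipal=SERVICE_PRINCIPAL)
# org.register_delegated_administrator(**delegation_request)

print(delegation_request)
```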

&lt;p&gt;&lt;strong&gt;What This Means:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backup account can manage backups across all organization accounts&lt;/li&gt;
&lt;li&gt;Doesn’t require management account access for daily operations&lt;/li&gt;
&lt;li&gt;Centralizes backup management and policies&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Download and Deploy Backup Gateway
&lt;/h3&gt;

&lt;p&gt;This step installs the virtual appliance that connects AWS Backup to your VMware environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In AWS Backup Console (Backup Account):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select your region (example: eu-west-2 for London)&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Backup&lt;/strong&gt; → &lt;strong&gt;External resources&lt;/strong&gt; → &lt;strong&gt;Gateways&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create gateway&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Download the OVF template file (approximately 1.2 GB)&lt;/li&gt;
&lt;li&gt;Save as: &lt;code&gt;aws-appliance-latest.ova&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;In VMware vSphere Client:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connect to vCenter Server&lt;/li&gt;
&lt;li&gt;Right-click parent object (datacenter or cluster)&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Deploy OVF Template&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Local file&lt;/strong&gt; and select downloaded OVA file&lt;/li&gt;
&lt;li&gt;Provide gateway name: &lt;code&gt;Backup-Gateway-Production&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Select compute resource (cluster or host)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Critical:&lt;/strong&gt; Select storage disk format: &lt;strong&gt;Thick Provision Lazy Zeroed&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select management network (must have internet access)&lt;/li&gt;
&lt;li&gt;Complete deployment wizard&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Configure VM Settings Before Power-On:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Right-click deployed gateway VM → &lt;strong&gt;Edit Settings&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Verify configuration:

&lt;ul&gt;
&lt;li&gt;CPU: 4 vCPUs&lt;/li&gt;
&lt;li&gt;Memory: 8 GB (set memory reservation to 8192 MB)&lt;/li&gt;
&lt;li&gt;Hard Disk: 80 GB&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;VM Options&lt;/strong&gt; → &lt;strong&gt;VMware Tools&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enable: &lt;strong&gt;Synchronize Time with Host&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enable: &lt;strong&gt;Synchronize at startup and resume&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Save settings&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Configure Gateway Network Settings
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Power On Gateway VM:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Right-click gateway VM → &lt;strong&gt;Power&lt;/strong&gt; → &lt;strong&gt;Power On&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Open VM console&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Initial Login:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Default username: &lt;code&gt;admin&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Default password: &lt;code&gt;password&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;You’ll be prompted to change password immediately&lt;/li&gt;
&lt;li&gt;Create strong password (minimum 12 characters, mixed case, numbers, symbols)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configure Static IP Address:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;At main menu, select &lt;strong&gt;Configure Network&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Static IP&lt;/strong&gt; configuration&lt;/li&gt;
&lt;li&gt;Enter network details:

&lt;ul&gt;
&lt;li&gt;IP Address: (assign from your management network range)&lt;/li&gt;
&lt;li&gt;Subnet Mask: (example: 255.255.255.0)&lt;/li&gt;
&lt;li&gt;Default Gateway: (your network gateway)&lt;/li&gt;
&lt;li&gt;Primary DNS: (your internal DNS server)&lt;/li&gt;
&lt;li&gt;Secondary DNS: (backup DNS, can use 8.8.8.8)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Save configuration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Test Network Connectivity:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;At main menu, select &lt;strong&gt;Test Network Connectivity&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Gateway tests:

&lt;ul&gt;
&lt;li&gt;Basic network connectivity&lt;/li&gt;
&lt;li&gt;DNS resolution&lt;/li&gt;
&lt;li&gt;Internet access&lt;/li&gt;
&lt;li&gt;AWS endpoint reachability&lt;/li&gt;
&lt;li&gt;Time synchronization&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;All tests should show “OK” or “PASS”&lt;/li&gt;
&lt;li&gt;Record the gateway IP address for next step&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Firewall Requirements:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you have firewalls between gateway and internet, allow outbound traffic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Destination: *.backup.[your-region].amazonaws.com (port 443)&lt;/li&gt;
&lt;li&gt;Destination: *.s3.[your-region].amazonaws.com (port 443)&lt;/li&gt;
&lt;li&gt;Destination: time.aws.com (port 123 UDP)&lt;/li&gt;
&lt;li&gt;No inbound rules required (all connections are outbound)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Register Gateway with AWS
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In AWS Backup Console:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;External resources&lt;/strong&gt; → &lt;strong&gt;Gateways&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Register gateway&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enter gateway details:

&lt;ul&gt;
&lt;li&gt;Gateway IP Address: (IP from previous step)&lt;/li&gt;
&lt;li&gt;Gateway Name: &lt;code&gt;Production-VMware-Gateway&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Gateway Timezone: (select your timezone)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Add tags for organization:

&lt;ul&gt;
&lt;li&gt;Environment: Production&lt;/li&gt;
&lt;li&gt;Purpose: VMware-Backup&lt;/li&gt;
&lt;li&gt;Location: On-Premises&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Register gateway&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Verify Connection:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Wait 2-5 minutes for connection&lt;/li&gt;
&lt;li&gt;Status should change from “Registering” to “Connected”&lt;/li&gt;
&lt;li&gt;Green indicator shows healthy connection&lt;/li&gt;
&lt;li&gt;If connection fails, verify firewall rules and network connectivity&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Integrate VMware vCenter
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Create vCenter Service Account:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In vCenter Server, create a service account for AWS Backup with these permissions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Required Permissions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Virtual Machine: All inventory, configuration, state, and provisioning operations&lt;/li&gt;
&lt;li&gt;Datastore: Browse datastore, allocate space&lt;/li&gt;
&lt;li&gt;Network: Assign network&lt;/li&gt;
&lt;li&gt;Apply at: Datacenter or Cluster level&lt;/li&gt;
&lt;li&gt;Propagate to child objects: Yes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Add Hypervisor in AWS Backup:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;External resources&lt;/strong&gt; → &lt;strong&gt;Hypervisors&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add hypervisor&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select your registered gateway&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Hypervisor Type:&lt;/strong&gt; VMware vCenter&lt;/li&gt;
&lt;li&gt;Enter connection details:

&lt;ul&gt;
&lt;li&gt;Host: (vCenter IP address or hostname)&lt;/li&gt;
&lt;li&gt;Port: 443 (default)&lt;/li&gt;
&lt;li&gt;Username: (service account created above)&lt;/li&gt;
&lt;li&gt;Password: (service account password)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Provide hypervisor name: &lt;code&gt;Production-vCenter&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Test Connection&lt;/strong&gt; to verify&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add hypervisor&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
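&lt;p&gt;For automation, the ‘Add hypervisor’ step maps to the backup-gateway API’s ImportHypervisorConfiguration call. The sketch below builds the request locally; treat the method and field names as assumptions to verify against current boto3 documentation, and all values as placeholders:&lt;/p&gt;

```python
# Sketch: registering vCenter with the backup gateway programmatically.
# All values are placeholders; store the real password in Secrets Manager
# rather than in code.
hypervisor = {
    "Host": "vcenter.example.internal",  # vCenter IP address or hostname
    "Name": "Production-vCenter",
    "Username": "svc-awsbackup@vsphere.local",  # service account created above
    "Password": "REPLACE_ME",
}

# With credentials configured, the real call would be (name assumed):
# import boto3
# bgw = boto3.client("backup-gateway", region_name="eu-west-2")
# response = bgw.import_hypervisor_configuration(**hypervisor)
# hypervisor_arn = response["HypervisorArn"]

print(hypervisor["Name"])
```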

&lt;p&gt;&lt;strong&gt;Wait for VM Discovery:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Backup automatically discovers all VMs (5-10 minutes)&lt;/li&gt;
&lt;li&gt;Progress shown in console&lt;/li&gt;
&lt;li&gt;After completion, view discovered VMs under &lt;strong&gt;External resources&lt;/strong&gt; → &lt;strong&gt;Virtual machines&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create VMware Tags for Backup Selection
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In vSphere Client:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Tags &amp;amp; Custom Attributes&lt;/strong&gt; → &lt;strong&gt;Tags&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;New Tag Category&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Create Tag Category:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Category Name: &lt;code&gt;backup&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Description: &lt;code&gt;Backup schedule&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Cardinality: Single value per object&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Create Tags Under ‘backup’ Category:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Tag: daily&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;For VMs requiring daily backups&lt;/li&gt;
&lt;li&gt;Example: Production databases, critical applications&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tag: weekly&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;For VMs requiring weekly backups&lt;/li&gt;
&lt;li&gt;Example: Development servers, secondary systems&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tag: monthly&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;For VMs requiring monthly backups only&lt;/li&gt;
&lt;li&gt;Example: Archive systems, long-term storage&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tag: none&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;For VMs excluded from backups&lt;/li&gt;
&lt;li&gt;Example: Temporary VMs, easily recreated systems&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Apply Tags to VMs:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Right-click each VM in vSphere inventory&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Tags &amp;amp; Custom Attributes&lt;/strong&gt; → &lt;strong&gt;Assign Tag&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Choose appropriate backup tag&lt;/li&gt;
&lt;li&gt;VM will now be automatically included in matching backup plan&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Tagging Strategy Example:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mission-critical database servers: &lt;code&gt;backup:daily&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Application servers: &lt;code&gt;backup:daily&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;File servers: &lt;code&gt;backup:daily&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Development servers: &lt;code&gt;backup:weekly&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test environments: &lt;code&gt;backup:none&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
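&lt;p&gt;The tag values above correspond to backup frequencies. One way to make that mapping explicit is to express each tag as the cron expression AWS Backup accepts in a rule’s ScheduleExpression (cron is evaluated in UTC); the 03:00 times below mirror the schedule used later in this guide:&lt;/p&gt;

```python
# Map each 'backup' tag value to an AWS Backup ScheduleExpression.
# A value of None means the VM is excluded from backup plans.
SCHEDULES = {
    "daily":   "cron(0 3 * * ? *)",    # every day at 03:00 UTC
    "weekly":  "cron(0 3 ? * SUN *)",  # Sundays at 03:00 UTC
    "monthly": "cron(0 3 1 * ? *)",    # 1st of each month at 03:00 UTC
    "none":    None,                   # excluded from backups
}

def schedule_for(tag_value: str):
    """Return the cron expression for a VM's backup tag (None = excluded)."""
    return SCHEDULES.get(tag_value)

print(schedule_for("weekly"))
```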

&lt;h3&gt;
  
  
  Create Backup Plan
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In AWS Backup Console:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Backup plans&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create backup plan&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Build a new plan&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Backup Plan Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plan Name:&lt;/strong&gt; &lt;code&gt;VMware-Production-Daily-Backup&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backup Rule Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rule Name: &lt;code&gt;DailyBackupRule&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Backup Vault: &lt;code&gt;Default&lt;/code&gt; (temporary; will add air-gapped vault in Phase 2)&lt;/li&gt;
&lt;li&gt;Schedule:

&lt;ul&gt;
&lt;li&gt;Frequency: Daily&lt;/li&gt;
&lt;li&gt;Time: 3:00 AM (choose off-peak time for your organization)&lt;/li&gt;
&lt;li&gt;Timezone: Your local timezone&lt;/li&gt;
&lt;li&gt;Backup window start: Within 1 hour&lt;/li&gt;
&lt;li&gt;Completion window: Within 3 hours&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Lifecycle Settings:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transition to cold storage: 30 days&lt;/li&gt;
&lt;li&gt;Expire/Delete: 120 days (AWS requires recovery points to remain in cold storage for at least 90 days after transition, so expiration must be at least 30 + 90 days)&lt;/li&gt;
&lt;li&gt;(Air-gapped vault will have longer retention)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tags for Recovery Points:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;BackupType: Daily&lt;/li&gt;
&lt;li&gt;Environment: Production&lt;/li&gt;
&lt;li&gt;Automated: True&lt;/li&gt;
&lt;/ul&gt;
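&lt;p&gt;The plan configured above can be expressed as the payload for boto3’s backup.create_backup_plan call. This is a sketch of that payload, not the article’s console flow: the schedule, windows, lifecycle, and tags mirror the settings listed, and the actual API call is left commented for when credentials are configured:&lt;/p&gt;

```python
# Sketch: the backup plan above as a create_backup_plan payload.
plan = {
    "BackupPlanName": "VMware-Production-Daily-Backup",
    "Rules": [
        {
            "RuleName": "DailyBackupRule",
            "TargetBackupVaultName": "Default",  # temporary; air-gapped copy added later
            # 3:00 AM daily -- cron is evaluated in UTC, so shift for your timezone
            "ScheduleExpression": "cron(0 3 * * ? *)",
            "StartWindowMinutes": 60,        # start within 1 hour
            "CompletionWindowMinutes": 180,  # complete within 3 hours
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 30,
                # AWS requires at least 90 days in cold storage,
                # so DeleteAfterDays must be at least 30 + 90.
                "DeleteAfterDays": 120,
            },
            "RecoveryPointTags": {
                "BackupType": "Daily",
                "Environment": "Production",
                "Automated": "True",
            },
        }
    ],
}

# With credentials configured, the real call would be:
# import boto3
# backup = boto3.client("backup", region_name="eu-west-2")
# response = backup.create_backup_plan(BackupPlan=plan)
# plan_id = response["BackupPlanId"]  # needed for the backup selection

rule = plan["Rules"][0]
assert rule["Lifecycle"]["DeleteAfterDays"] >= rule["Lifecycle"]["MoveToColdStorageAfterDays"] + 90
print(plan["BackupPlanName"])
```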

&lt;p&gt;&lt;strong&gt;Create Backup Selection:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After creating plan, immediately create backup selection:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Selection Name: &lt;code&gt;Tagged-VMs-Daily-Production&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;IAM Role: Select &lt;strong&gt;Default role&lt;/strong&gt; (AWS creates automatically)&lt;/li&gt;
&lt;li&gt;Resource Selection: &lt;strong&gt;Include specific resource types&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Resource Type: Select &lt;code&gt;VM&lt;/code&gt; (Virtual Machine)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Define Selection by Tags:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tag Key: &lt;code&gt;backup&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Tag Value: &lt;code&gt;daily&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Condition: Equals&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Optional Additional Filter:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tag Key: &lt;code&gt;environment&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Tag Value: &lt;code&gt;production&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;This ensures only production VMs with daily tag are backed up&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Assign resources&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What Happens Now:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Backup automatically discovers all VMs with &lt;code&gt;backup:daily&lt;/code&gt; tag&lt;/li&gt;
&lt;li&gt;First backup runs at next scheduled time (3:00 AM)&lt;/li&gt;
&lt;li&gt;You can also trigger a manual backup immediately for testing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Execute First Backup
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Trigger Manual Backup (Don’t Wait for Schedule):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Protected resources&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Locate a test VM (non-production, with sample data)&lt;/li&gt;
&lt;li&gt;Click the VM name&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create on-demand backup&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select backup vault: Default&lt;/li&gt;
&lt;li&gt;Use default IAM role&lt;/li&gt;
&lt;li&gt;Start backup immediately&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create on-demand backup&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
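&lt;p&gt;The on-demand backup above corresponds to backup.start_backup_job. A sketch of the request follows; the resource ARN shape for a discovered VMware VM and the account ID are placeholders to replace with the values shown under External resources → Virtual machines:&lt;/p&gt;

```python
# Sketch: trigger an on-demand backup of one discovered VM.
job_request = {
    "BackupVaultName": "Default",
    # Placeholder ARN of the VM as discovered by the backup gateway:
    "ResourceArn": "arn:aws:backup-gateway:eu-west-2:111122223333:vm/vm-0000EXAMPLE",
    "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
}

# With credentials configured:
# import boto3
# backup = boto3.client("backup", region_name="eu-west-2")
# job = backup.start_backup_job(**job_request)
# print(job["BackupJobId"])  # track under Jobs -> Backup jobs

print(job_request["BackupVaultName"])
```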

&lt;p&gt;&lt;strong&gt;Monitor Backup Progress:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Jobs&lt;/strong&gt; → &lt;strong&gt;Backup jobs&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Find your job at top of list&lt;/li&gt;
&lt;li&gt;Watch status progression:

&lt;ul&gt;
&lt;li&gt;Created → Job queued&lt;/li&gt;
&lt;li&gt;Running → Backup in progress (shows percentage)&lt;/li&gt;
&lt;li&gt;Completed → Backup successful&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;First Backup Timing:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full backup typically takes 1-3 hours depending on VM size&lt;/li&gt;
&lt;li&gt;Shows progress percentage throughout&lt;/li&gt;
&lt;li&gt;Backup size approximately equals VM disk usage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Verify Backup Completed:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Backup vaults&lt;/strong&gt; → &lt;strong&gt;Default&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Recovery points&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Find your VM’s recovery point&lt;/li&gt;
&lt;li&gt;Verify:

&lt;ul&gt;
&lt;li&gt;Status: Completed (green)&lt;/li&gt;
&lt;li&gt;Backup size: Reasonable for your VM&lt;/li&gt;
&lt;li&gt;Creation date: Today&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Important: Incremental Backups&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First backup is always full snapshot&lt;/li&gt;
&lt;li&gt;Second and subsequent backups use Changed Block Tracking (CBT)&lt;/li&gt;
&lt;li&gt;Incremental backups are 90-95% smaller&lt;/li&gt;
&lt;li&gt;Complete in minutes instead of hours&lt;/li&gt;
&lt;li&gt;Automatic – no configuration needed&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Test Restore
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Initiate Test Restore:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Backup vaults&lt;/strong&gt; → &lt;strong&gt;Default&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Recovery points&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Select your test VM’s recovery point&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Restore&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Restore Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VMware Destination Settings:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Target Hypervisor: Select your vCenter&lt;/li&gt;
&lt;li&gt;Resource Pool: Select appropriate pool&lt;/li&gt;
&lt;li&gt;Datastore: Select storage location&lt;/li&gt;
&lt;li&gt;VM Folder: Create &lt;code&gt;RestoredVMs&lt;/code&gt; folder for test restores&lt;/li&gt;
&lt;li&gt;Network: Map networks appropriately&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;VM Settings:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VM Name: &lt;code&gt;TestVM-Restored-Validation&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Power On: Yes (to immediately test functionality)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;IAM Role:&lt;/strong&gt; Select default role&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Restore&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Monitor Restore:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Jobs&lt;/strong&gt; → &lt;strong&gt;Restore jobs&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Watch status: Running → Completed&lt;/li&gt;
&lt;li&gt;Typical restore time: 30 minutes – 2 hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Validate Restored VM:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In vSphere Client:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;code&gt;RestoredVMs&lt;/code&gt; folder&lt;/li&gt;
&lt;li&gt;Verify VM exists: &lt;code&gt;TestVM-Restored-Validation&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Confirm VM is powered on&lt;/li&gt;
&lt;li&gt;Open console and verify:

&lt;ul&gt;
&lt;li&gt;Guest operating system boots normally&lt;/li&gt;
&lt;li&gt;All disks present and accessible&lt;/li&gt;
&lt;li&gt;Applications start correctly&lt;/li&gt;
&lt;li&gt;Data integrity is intact (compare sample files)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Document Recovery Metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recovery Point Objective (RPO): Maximum acceptable data loss, measured as the time between the last backup and the incident&lt;/li&gt;
&lt;li&gt;Recovery Time Objective (RTO): Time from restore initiation to the VM being fully operational&lt;/li&gt;
&lt;li&gt;These metrics are critical for disaster recovery planning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Delete Test Restore:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After validation, delete restored test VM&lt;/li&gt;
&lt;li&gt;Prevents confusion and saves storage&lt;/li&gt;
&lt;li&gt;Keep documented results for reference&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your organisation now has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Operational backup gateway connected to AWS&lt;/li&gt;
&lt;li&gt;VMware vCenter fully integrated&lt;/li&gt;
&lt;li&gt;Tag-based backup policies configured&lt;/li&gt;
&lt;li&gt;First full backup completed successfully&lt;/li&gt;
&lt;li&gt;Incremental backup capability verified&lt;/li&gt;
&lt;li&gt;Restore process tested and validated&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Air-Gapped Vault with Compliance Locks
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Compliance-Mode Locks
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What is Compliance-Mode Lock:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Makes backup vault permanently immutable&lt;/li&gt;
&lt;li&gt;Backups cannot be deleted before retention period expires&lt;/li&gt;
&lt;li&gt;Not even the root account owner can bypass it&lt;/li&gt;
&lt;li&gt;Not even AWS Support can override it&lt;/li&gt;
&lt;li&gt;Once grace period expires, lock is irreversible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;During grace period you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test backup and restore operations&lt;/li&gt;
&lt;li&gt;Verify retention policies work correctly&lt;/li&gt;
&lt;li&gt;Delete vault if you change your mind (last chance)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After grace period expires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lock becomes permanent&lt;/li&gt;
&lt;li&gt;No changes possible&lt;/li&gt;
&lt;li&gt;Vault exists until all backups expire naturally&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;*\ &lt;em&gt;**Warning**:&lt;/em&gt;*&lt;/p&gt;

&lt;p&gt;This is a &lt;strong&gt;point of no return&lt;/strong&gt; decision. Before proceeding:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get approval from senior management&lt;/li&gt;
&lt;li&gt;Understand financial commitment (vault accumulates costs for entire retention period)&lt;/li&gt;
&lt;li&gt;Test thoroughly during grace period&lt;/li&gt;
&lt;li&gt;Document retention requirements clearly&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create Dedicated KMS Encryption Key
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In AWS Key Management Service (KMS):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;KMS&lt;/strong&gt; service&lt;/li&gt;
&lt;li&gt;Select same region as backup vault (example: eu-west-2)&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Customer managed keys&lt;/strong&gt; → &lt;strong&gt;Create key&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Key Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1 – Configure Key:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key Type: Symmetric&lt;/li&gt;
&lt;li&gt;Key Usage: Encrypt and decrypt&lt;/li&gt;
&lt;li&gt;Key Material Origin: KMS&lt;/li&gt;
&lt;li&gt;Regionality: Single-Region key&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2 – Add Labels:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Alias: &lt;code&gt;backup-airgapped-vault-encryption&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Description: &lt;code&gt;Encryption key for air-gapped backup vault - Production environment&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Tags:

&lt;ul&gt;
&lt;li&gt;Purpose: Backup-Encryption&lt;/li&gt;
&lt;li&gt;Environment: Production&lt;/li&gt;
&lt;li&gt;VaultType: AirGapped&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3 – Define Key Administrators:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select your IAM user or role as key administrator&lt;/li&gt;
&lt;li&gt;This allows you to manage key policies&lt;/li&gt;
&lt;li&gt;Key administrators cannot use key to encrypt/decrypt&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 4 – Define Key Usage Permissions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select: &lt;strong&gt;AWS Backup service&lt;/strong&gt; (allows AWS Backup to use key)&lt;/li&gt;
&lt;li&gt;Select: Your backup administrator IAM role&lt;/li&gt;
&lt;li&gt;This grants permission to encrypt and decrypt backup data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5 – Review and Create:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review all settings carefully&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Finish&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Document Key Information:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy Key ID (format: a1b2c3d4-…)&lt;/li&gt;
&lt;li&gt;Copy Key ARN (format: arn:aws:kms:eu-west-2:account:key/…)&lt;/li&gt;
&lt;li&gt;Store in secure location&lt;/li&gt;
&lt;li&gt;You’ll need this later for vault creation&lt;/li&gt;
&lt;/ul&gt;
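&lt;p&gt;The key creation above maps to two KMS API calls, create_key and create_alias. This sketch builds the create_key payload mirroring the walkthrough’s settings; the real calls are left commented for when credentials are configured:&lt;/p&gt;

```python
# Sketch: create the dedicated symmetric encryption key for the vault.
key_request = {
    "Description": "Encryption key for air-gapped backup vault - Production environment",
    "KeyUsage": "ENCRYPT_DECRYPT",
    "KeySpec": "SYMMETRIC_DEFAULT",  # symmetric key; single-region by default
    "Tags": [
        {"TagKey": "Purpose", "TagValue": "Backup-Encryption"},
        {"TagKey": "Environment", "TagValue": "Production"},
        {"TagKey": "VaultType", "TagValue": "AirGapped"},
    ],
}

# With credentials configured:
# import boto3
# kms = boto3.client("kms", region_name="eu-west-2")
# key = kms.create_key(**key_request)
# key_id = key["KeyMetadata"]["KeyId"]  # document this ID and the ARN
# kms.create_alias(
#     AliasName="alias/backup-airgapped-vault-encryption", TargetKeyId=key_id
# )

print(key_request["KeySpec"])
```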

&lt;h3&gt;
  
  
  Create Air-Gapped Backup Vault
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In AWS Backup Console:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Backup vaults&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create backup vault&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Important:&lt;/strong&gt; Select &lt;strong&gt;Create logically air-gapped vault&lt;/strong&gt; option&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Vault Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Basic Information:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vault Name: &lt;code&gt;Production-AirGapped-Vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Vault Type: Logically air-gapped vault&lt;/li&gt;
&lt;li&gt;Description: &lt;code&gt;Immutable backup vault for ransomware protection&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Encryption:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select: &lt;strong&gt;Choose a custom encryption key&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select the KMS key you created earlier&lt;/li&gt;
&lt;li&gt;Key alias: &lt;code&gt;backup-airgapped-vault-encryption&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Retention Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimum Retention Days: &lt;code&gt;30&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Maximum Retention Days: &lt;code&gt;365&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Adjust these based on your compliance requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Environment: Production&lt;/li&gt;
&lt;li&gt;Purpose: Ransomware-Protection&lt;/li&gt;
&lt;li&gt;ComplianceMode: True&lt;/li&gt;
&lt;li&gt;CreatedDate: (today’s date)&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Create vault&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Document Vault ARN&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After creation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy the full Vault ARN (format: arn:aws:backup:eu-west-2:account:backup-vault:Production-AirGapped-Vault)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Print this ARN and store in physical safe&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Store digital copy in password manager&lt;/li&gt;
&lt;li&gt;You will need this ARN for disaster recovery&lt;/li&gt;
&lt;li&gt;Without this ARN, you cannot request access during emergency&lt;/li&gt;
&lt;/ul&gt;
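&lt;p&gt;Programmatically, a logically air-gapped vault is created with backup.create_logically_air_gapped_backup_vault. The sketch below mirrors the retention bounds and tags configured above; treat the exact parameter names as assumptions to verify against current boto3 documentation:&lt;/p&gt;

```python
# Sketch: create the logically air-gapped vault with enforced retention bounds.
vault_request = {
    "BackupVaultName": "Production-AirGapped-Vault",
    "MinRetentionDays": 30,
    "MaxRetentionDays": 365,
    "BackupVaultTags": {
        "Environment": "Production",
        "Purpose": "Ransomware-Protection",
    },
}

# With credentials configured (API name assumed):
# import boto3
# backup = boto3.client("backup", region_name="eu-west-2")
# vault = backup.create_logically_air_gapped_backup_vault(**vault_request)
# print(vault["BackupVaultArn"])  # document this ARN offline -- see above

assert vault_request["MinRetentionDays"] <= vault_request["MaxRetentionDays"]
print(vault_request["BackupVaultName"])
```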

&lt;h3&gt;
  
  
  Apply Compliance-Mode Vault Lock
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;FINAL WARNING – READ CAREFULLY:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
╔════════════════════════════════════════════════════════╗
║                  POINT OF NO RETURN                    ║
╠════════════════════════════════════════════════════════╣
║                                                        ║
║ You are about to enable COMPLIANCE MODE VAULT LOCK     ║
║                                                        ║
║ After grace period (default 3 days) expires:           ║
║ • Lock becomes PERMANENT and IMMUTABLE                 ║
║ • Vault CANNOT be deleted by anyone                    ║
║ • Settings CANNOT be changed or modified               ║
║ • Even AWS support CANNOT bypass this lock             ║
║ • Vault exists until retention period expires          ║
║                                                        ║
║ Financial Commitment:                                  ║
║ • Estimated monthly cost: $325                         ║
║ • Commitment period: Retention period (365 days)       ║
║ • Cannot be canceled or refunded                       ║
║                                                        ║
║ Required Approvals:                                    ║
║ □ Management approval obtained                         ║
║ □ Backup/restore tested successfully                   ║
║ □ Retention requirements verified                      ║
║ □ Budget approval secured                              ║
║ □ Vault ARN documented offline                         ║
║ □ All implications understood                          ║
║                                                        ║
╚════════════════════════════════════════════════════════╝

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;If you have all approvals and understand implications:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In AWS Backup Console:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Backup vaults&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;Production-AirGapped-Vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Actions&lt;/strong&gt; → &lt;strong&gt;Configure vault lock&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Vault Lock Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lock Mode: &lt;strong&gt;Compliance mode&lt;/strong&gt; (recommended for ransomware protection)&lt;/li&gt;
&lt;li&gt;Minimum Retention Days: &lt;code&gt;30&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Maximum Retention Days: &lt;code&gt;365&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Grace Period: &lt;code&gt;3&lt;/code&gt; days (72 hours to test)&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Review warning dialog carefully&lt;/li&gt;
&lt;li&gt;Type &lt;code&gt;confirm&lt;/code&gt; to acknowledge&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Apply vault lock&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Grace Period Begins:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You now have 3 days to test thoroughly&lt;/li&gt;
&lt;li&gt;Mark calendar for when lock becomes permanent&lt;/li&gt;
&lt;li&gt;Use this time to validate restore operations&lt;/li&gt;
&lt;li&gt;Last chance to delete vault if needed&lt;/li&gt;
&lt;/ul&gt;
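&lt;p&gt;The grace period is a hard deadline, so it helps to compute and calendar the exact instant the lock becomes permanent. A small sketch, assuming the settings above (the timestamp is illustrative):&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def validate_lock_config(min_retention_days: int, max_retention_days: int) -> None:
    """Mirror the console's constraint: minimum retention cannot exceed maximum."""
    if not 0 < min_retention_days <= max_retention_days:
        raise ValueError("minimum retention must be positive and at most the maximum")

def lock_becomes_permanent(lock_applied: datetime, grace_period_days: int = 3) -> datetime:
    """Instant after which the compliance-mode lock can no longer be changed."""
    return lock_applied + timedelta(days=grace_period_days)

validate_lock_config(30, 365)  # the values configured above
applied = datetime(2025, 6, 1, 10, 0, tzinfo=timezone.utc)  # illustrative date
deadline = lock_becomes_permanent(applied)
```

&lt;p&gt;Put the computed deadline in the team calendar; all restore testing must finish before it.&lt;/p&gt;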

&lt;h3&gt;
  
  
  Update Backup Plan for Air-Gapped Copy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Modify Existing Backup Plan:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Backup plans&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;VMware-Production-Daily-Backup&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Edit&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Find &lt;code&gt;DailyBackupRule&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Edit rule&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Add Copy Destination:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Scroll to &lt;strong&gt;Copy to destination&lt;/strong&gt; section:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable: &lt;strong&gt;Yes, copy backups to another vault&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Destination Vault: Select &lt;code&gt;Production-AirGapped-Vault&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Lifecycle for Copied Backups:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transition to cold storage: &lt;code&gt;90 days&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Expire: &lt;code&gt;365 days&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why Different Lifecycle:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Primary vault: Short retention (90 days) for quick operational restores&lt;/li&gt;
&lt;li&gt;Air-gapped vault: Long retention (365 days) for ransomware recovery&lt;/li&gt;
&lt;li&gt;Cold storage after 90 days saves approximately 90% on storage costs&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Save changes&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
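&lt;p&gt;The warm-versus-cold split above can be roughly quantified. A sketch with illustrative per-GB-month rates (placeholders, not quoted prices; check current AWS Backup pricing before budgeting):&lt;/p&gt;

```python
def retention_cost(size_gb: float, warm_days: int, total_days: int,
                   warm_rate: float, cold_rate: float) -> float:
    """Approximate storage cost (USD) of one recovery point over its lifetime,
    billed warm for the first warm_days and cold for the remainder."""
    cold_days = total_days - warm_days
    return size_gb * (warm_days / 30 * warm_rate + cold_days / 30 * cold_rate)

# 500 GB backup, warm for 90 days then cold until day 365 (rates are illustrative)
cost = retention_cost(500, warm_days=90, total_days=365,
                      warm_rate=0.05, cold_rate=0.01)
```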

&lt;p&gt;&lt;strong&gt;How Copy Jobs Work:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Primary backup runs at scheduled time (3:00 AM) to Default vault&lt;/li&gt;
&lt;li&gt;After primary backup completes, copy job starts automatically&lt;/li&gt;
&lt;li&gt;Backup copied to air-gapped vault (typically 30 minutes – 2 hours)&lt;/li&gt;
&lt;li&gt;Both copies exist independently:

&lt;ul&gt;
&lt;li&gt;Primary can be deleted after 90 days (operational use)&lt;/li&gt;
&lt;li&gt;Air-gapped copy protected for 365 days (ransomware protection)&lt;/li&gt;
&lt;li&gt;If primary corrupted, air-gapped copy remains safe&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
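&lt;p&gt;In the AWS Backup API, the copy destination configured above is expressed as a &lt;code&gt;CopyActions&lt;/code&gt; entry on the backup rule. A sketch that builds and validates that fragment (the ARN is a placeholder):&lt;/p&gt;

```python
def copy_action(dest_vault_arn: str, cold_after_days: int, delete_after_days: int) -> dict:
    """Build a CopyActions entry as used in an AWS Backup plan rule."""
    if delete_after_days < cold_after_days + 90:
        # AWS requires backups to remain in cold storage for at least 90 days
        raise ValueError("DeleteAfterDays must be >= MoveToColdStorageAfterDays + 90")
    return {
        "DestinationBackupVaultArn": dest_vault_arn,
        "Lifecycle": {
            "MoveToColdStorageAfterDays": cold_after_days,
            "DeleteAfterDays": delete_after_days,
        },
    }

action = copy_action(
    "arn:aws:backup:eu-west-2:123456789012:backup-vault:Production-AirGapped-Vault",
    cold_after_days=90,
    delete_after_days=365,
)
```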

&lt;h3&gt;
  
  
  Monitor Copy Job Execution
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Wait for Next Scheduled Backup or Trigger Manual Backup:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After next backup completes, copy job automatically starts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor Copy Jobs:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Jobs&lt;/strong&gt; → &lt;strong&gt;Copy jobs&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Locate most recent copy job&lt;/li&gt;
&lt;li&gt;Watch status progression:

&lt;ul&gt;
&lt;li&gt;Created: Job queued&lt;/li&gt;
&lt;li&gt;Running: Copy in progress (shows percentage)&lt;/li&gt;
&lt;li&gt;Completed: Copy successful&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Typical Timeline:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Source backup size: 500 GB&lt;/li&gt;
&lt;li&gt;Copy duration: 45-90 minutes&lt;/li&gt;
&lt;li&gt;Network: Internal AWS (no egress charges)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Verify Copy in Air-Gapped Vault:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Backup vaults&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;Production-AirGapped-Vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Recovery points&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Verify:

&lt;ul&gt;
&lt;li&gt;Recovery point from test VM exists&lt;/li&gt;
&lt;li&gt;Status: Completed&lt;/li&gt;
&lt;li&gt;Size: Matches primary backup&lt;/li&gt;
&lt;li&gt;Retention: 365 days&lt;/li&gt;
&lt;li&gt;Locked: Yes&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Check Vault Statistics:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;View vault summary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Number of recovery points: Should match expected backup count&lt;/li&gt;
&lt;li&gt;Total storage: Sum of all backup sizes&lt;/li&gt;
&lt;li&gt;Locked status: Yes (with grace period countdown or “Locked” if expired)&lt;/li&gt;
&lt;li&gt;Lock date: When lock becomes or became permanent&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Test Restore from Air-Gapped Vault
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Test during the grace period, while you can still delete the vault if problems occur.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Post-Lock Verification
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;After Grace Period Expires:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify Permanent Lock Status:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Backup vaults&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;Production-AirGapped-Vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Verify vault details:

&lt;ul&gt;
&lt;li&gt;Locked: Yes (no grace period remaining)&lt;/li&gt;
&lt;li&gt;Lock Mode: Compliance&lt;/li&gt;
&lt;li&gt;Lock Date: (date when lock became permanent)&lt;/li&gt;
&lt;li&gt;Immutable: True&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Test Lock Protection (Should Fail – Proves It Works):&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Attempt 1: Try to Delete Vault&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select vault&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Actions&lt;/strong&gt; → &lt;strong&gt;Delete&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Expected: Error message “Cannot delete vault – protected by compliance-mode lock”&lt;/li&gt;
&lt;li&gt;This proves protection is working correctly&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Attempt 2: Try to Modify Lock Settings&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select vault&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Actions&lt;/strong&gt; → &lt;strong&gt;Configure vault lock&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Expected: All options greyed out / disabled&lt;/li&gt;
&lt;li&gt;This proves lock is truly immutable&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Attempt 3: Try to Delete Individual Backup&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select a recovery point&lt;/li&gt;
&lt;li&gt;Try to delete&lt;/li&gt;
&lt;li&gt;Expected: Deletion blocked by retention policy&lt;/li&gt;
&lt;li&gt;Backup can only be deleted after retention period expires naturally&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Document Vault Lock Status:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Record in your documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lock Status: Permanent / Immutable&lt;/li&gt;
&lt;li&gt;Lock Applied Date: [date]&lt;/li&gt;
&lt;li&gt;Earliest Possible Deletion: [date + 30 days minimum retention]&lt;/li&gt;
&lt;li&gt;Verified By: [your name]&lt;/li&gt;
&lt;li&gt;Next Review: [quarterly review date]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your organisation now has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dedicated KMS encryption key for air-gapped vault&lt;/li&gt;
&lt;li&gt;Logically air-gapped vault with compliance-mode lock&lt;/li&gt;
&lt;li&gt;Automated copy jobs from primary to air-gapped vault&lt;/li&gt;
&lt;li&gt;Tested restore capability from air-gapped vault&lt;/li&gt;
&lt;li&gt;Permanent immutable protection active&lt;/li&gt;
&lt;li&gt;Vault ARN documented in secure offline location&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  External Approval Team Configuration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Understanding External Approval Teams
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Why External Account is Critical:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vulnerable Setup (What NOT To Do):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Approval Team → Located in your AWS Organization
↓
Attacker compromises organization
↓
Attacker can compromise approval team
↓
Result: Backups accessible to attacker

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Secure Setup (What We’re Building):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Approval Team → Separate AWS account (outside organization)
↓
Attacker compromises organization
↓
Approval team remains isolated and secure
↓
Result: Backups protected, attacker cannot approve access

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Security Principle:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even if ransomware attackers gain root access to every account in your organization, they &lt;strong&gt;cannot&lt;/strong&gt; access the air-gapped vault without approval from external team members who sit outside the compromised environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create External Approval Account
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Create Completely Separate AWS Account:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical Requirements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Must be standalone AWS account&lt;/li&gt;
&lt;li&gt;Cannot be member of your AWS Organization&lt;/li&gt;
&lt;li&gt;Cannot be part of any organizational structure&lt;/li&gt;
&lt;li&gt;Managed by separate administrators&lt;/li&gt;
&lt;li&gt;Dedicated email address (not shared with production accounts)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Account Creation Process:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;https://aws.amazon.com/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create an AWS Account&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Use dedicated email address

&lt;ul&gt;
&lt;li&gt;Example: &lt;code&gt;backup-approvals@yourcompany.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create new mailbox if needed (don’t reuse existing)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Account name: &lt;code&gt;Backup-Approval-Authority&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Complete registration process&lt;/li&gt;
&lt;li&gt;Provide payment method (monthly cost will be ~$0)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Immediate Security Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Secure Root Account:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign in as root user immediately&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Security credentials&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enable MFA using hardware token (strongly recommended) or authenticator app&lt;/li&gt;
&lt;li&gt;Create strong root password (20+ characters)&lt;/li&gt;
&lt;li&gt;Store credentials in password manager&lt;/li&gt;
&lt;li&gt;Record recovery codes securely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Set Account Alias:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;IAM&lt;/strong&gt; → &lt;strong&gt;Dashboard&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Create account alias: &lt;code&gt;backup-approval-authority&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;This creates friendly URL: &lt;code&gt;https://backup-approval-authority.signin.aws.amazon.com&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Enable CloudTrail Logging:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to &lt;strong&gt;CloudTrail&lt;/strong&gt; service&lt;/li&gt;
&lt;li&gt;Create trail: &lt;code&gt;approval-team-audit-trail&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Apply to all regions: Yes&lt;/li&gt;
&lt;li&gt;Log file validation: Enabled&lt;/li&gt;
&lt;li&gt;Create new S3 bucket for logs&lt;/li&gt;
&lt;li&gt;Enable log encryption (optional but recommended)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why CloudTrail is Critical:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logs every approval action&lt;/li&gt;
&lt;li&gt;Provides complete audit trail&lt;/li&gt;
&lt;li&gt;Required for compliance&lt;/li&gt;
&lt;li&gt;Cannot be disabled (ransomware protection)&lt;/li&gt;
&lt;li&gt;Helps forensics if incident occurs&lt;/li&gt;
&lt;/ul&gt;
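&lt;p&gt;When CloudTrail creates its log bucket, it attaches a bucket policy along these lines, which is worth reviewing so you know what the trail needs. A sketch with placeholder bucket and account names:&lt;/p&gt;

```python
# Standard CloudTrail S3 bucket policy shape: the service checks the bucket
# ACL, then writes log objects under AWSLogs/<account-id>/ with full owner
# control. Bucket name and account ID below are placeholders.
ACCOUNT_ID = "123456789012"
BUCKET = "approval-team-audit-logs"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ACCOUNT_ID}/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}
```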

&lt;h3&gt;
  
  
  Enable IAM Identity Center
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;What is IAM Identity Center:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralised user management for AWS&lt;/li&gt;
&lt;li&gt;Built-in MFA enforcement&lt;/li&gt;
&lt;li&gt;Required for approval team functionality&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enable Identity Center&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configure Multi-Factor Authentication:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In IAM Identity Center, click &lt;strong&gt;Settings&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Authentication&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Under &lt;strong&gt;Multi-factor authentication&lt;/strong&gt; section:

&lt;ul&gt;
&lt;li&gt;Enable MFA: Yes&lt;/li&gt;
&lt;li&gt;Prompt users for MFA: Every time (most secure)&lt;/li&gt;
&lt;li&gt;Allow these MFA types:&lt;/li&gt;
&lt;li&gt;Authenticator apps (Google Authenticator, Authy, 1Password)&lt;/li&gt;
&lt;li&gt;Security keys (YubiKey, other FIDO2 devices)&lt;/li&gt;
&lt;li&gt;Built-in authenticators&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Save changes&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Configure Password Policy:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Still in &lt;strong&gt;Settings&lt;/strong&gt; → &lt;strong&gt;Authentication&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Password requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimum length: 14 characters&lt;/li&gt;
&lt;li&gt;Require uppercase letters: Yes&lt;/li&gt;
&lt;li&gt;Require lowercase letters: Yes&lt;/li&gt;
&lt;li&gt;Require numbers: Yes&lt;/li&gt;
&lt;li&gt;Require symbols: Yes&lt;/li&gt;
&lt;li&gt;Password expiration: 90 days&lt;/li&gt;
&lt;li&gt;Prevent password reuse: Last 24 passwords&lt;/li&gt;
&lt;li&gt;Account lockout: 5 failed attempts&lt;/li&gt;
&lt;li&gt;Lockout duration: 15 minutes&lt;/li&gt;
&lt;/ul&gt;
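&lt;p&gt;The password policy above is easy to encode as a pre-flight check, for example in onboarding tooling. A minimal sketch (the sample passwords are illustrative):&lt;/p&gt;

```python
import re

def meets_policy(password: str, min_length: int = 14) -> bool:
    """Check a candidate password against the policy configured above:
    minimum length plus upper, lower, digit, and symbol classes."""
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    return all(checks)

strong = meets_policy("MySecure$Backup#Approval#2025!")  # illustrative
weak = meets_policy("password123")                       # too short, no symbol
```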

&lt;p&gt;&lt;strong&gt;Configure Session Duration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;Settings&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Session duration: 8 hours&lt;/li&gt;
&lt;li&gt;Idle timeout: 1 hour&lt;/li&gt;
&lt;li&gt;This balances security with usability&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create Approval Team Members
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Identify Trusted Approvers:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selection Criteria:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Senior executive level (C-suite)&lt;/li&gt;
&lt;li&gt;Technical understanding of disaster recovery&lt;/li&gt;
&lt;li&gt;Authority to approve emergency actions&lt;/li&gt;
&lt;li&gt;Available 24/7 for emergency response&lt;/li&gt;
&lt;li&gt;Trusted with company-critical decisions&lt;/li&gt;
&lt;li&gt;Ideally not IT administrators (separation of duties)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example Approval Team Composition:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Approver 1: Chief Technology Officer (CTO)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Role: Technical authority and infrastructure oversight&lt;/li&gt;
&lt;li&gt;Responsibility: Verify technical legitimacy of requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Approver 2: Chief Information Security Officer (CISO)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Role: Security authority and incident response&lt;/li&gt;
&lt;li&gt;Responsibility: Verify security implications and threats&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Approver 3: Chief Financial Officer (CFO) or IT Manager&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Role: Business continuity and operational authority&lt;/li&gt;
&lt;li&gt;Responsibility: Authorize business impact decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Create Users in IAM Identity Center:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;In IAM Identity Center, navigate to &lt;strong&gt;Users&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add user&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;For Each Approver, Configure:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User Details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Username: (first.last format, example: john.smith)&lt;/li&gt;
&lt;li&gt;Email address: (work email, must be valid and monitored)&lt;/li&gt;
&lt;li&gt;First name: (example: John)&lt;/li&gt;
&lt;li&gt;Last name: (example: Smith)&lt;/li&gt;
&lt;li&gt;Display name: (example: John Smith)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Optional but Recommended:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Job title: (example: Chief Technology Officer)&lt;/li&gt;
&lt;li&gt;Department: (example: C-Level Leadership)&lt;/li&gt;
&lt;li&gt;Phone number: (for verification)&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Next&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Skip group assignment (for now)&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Next&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Review details&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Add user&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Repeat for all approval team members (minimum 3 recommended)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Users Receive Setup Emails:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each user receives invitation email:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Subject: Set up your AWS IAM Identity Center account

You've been invited to join the Backup-Approval-Authority 
AWS account.

Click here to complete setup: [Link expires in 7 days]

Setup Requirements:
1. Create password (minimum 14 characters)
2. Configure MFA device (required)
3. Save recovery codes
4. Complete profile

Important: Complete setup within 7 days or invitation expires.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  User Setup Process
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Each Approval Team Member Must Complete:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create Password&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click invitation link received via email&lt;/li&gt;
&lt;li&gt;Create strong password meeting requirements:

&lt;ul&gt;
&lt;li&gt;Minimum 14 characters&lt;/li&gt;
&lt;li&gt;Mix of uppercase, lowercase, numbers, symbols&lt;/li&gt;
&lt;li&gt;Example: &lt;code&gt;MySecure$Backup#Approval#2025!&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Confirm password&lt;/li&gt;

&lt;li&gt;Click &lt;strong&gt;Continue&lt;/strong&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Register MFA Device&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Choose MFA device type:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Recommended:&lt;/strong&gt; Authenticator app (Google Authenticator, 1Password)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alternative:&lt;/strong&gt; Hardware security key (YubiKey or similar)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For Authenticator App:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open authenticator app on smartphone&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;Add account&lt;/strong&gt; or scan QR code option&lt;/li&gt;
&lt;li&gt;Scan QR code displayed in AWS console&lt;/li&gt;
&lt;li&gt;App generates 6-digit codes every 30 seconds&lt;/li&gt;
&lt;li&gt;Enter two consecutive codes to verify&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Assign MFA device&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
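&lt;p&gt;The 6-digit codes these apps generate follow RFC 6238 (TOTP). A self-contained sketch using the RFC's published test secret, useful for understanding what the authenticator is doing under the hood:&lt;/p&gt;

```python
import base64
import hashlib
import hmac
import struct

def totp(secret_b32: str, unix_time: int, digits: int = 6, period: int = 30) -> str:
    """Compute an RFC 6238 TOTP code (SHA-1 variant, as most authenticator apps use)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", unix_time // period)       # 30-second time step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    number = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32) at a fixed test time
code = totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", unix_time=59)
```

&lt;p&gt;Because the code depends only on the shared secret and the clock, any device with the QR-code secret produces the same 6 digits, which is why the secret and recovery codes must be stored securely.&lt;/p&gt;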

&lt;p&gt;&lt;strong&gt;Important MFA Setup Notes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save backup codes in secure location&lt;/li&gt;
&lt;li&gt;Test MFA before closing setup&lt;/li&gt;
&lt;li&gt;If smartphone lost, recovery codes allow access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Complete User Profile&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify display name is correct&lt;/li&gt;
&lt;li&gt;Verify email address&lt;/li&gt;
&lt;li&gt;Add phone number (used for out-of-band verification)&lt;/li&gt;
&lt;li&gt;Review profile details&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Complete setup&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Test Initial Sign-In&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign out from setup session&lt;/li&gt;
&lt;li&gt;Navigate to: &lt;code&gt;https://backup-approval-authority.signin.aws.amazon.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Enter username&lt;/li&gt;
&lt;li&gt;Enter password&lt;/li&gt;
&lt;li&gt;Enter current MFA code from authenticator app&lt;/li&gt;
&lt;li&gt;Should successfully sign in and see approval portal&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Enable Multi-Party Approval in Backup Account
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Switch Context to Production Backup Account:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now configure your production backup account to accept external approval teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In Production Backup Account:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to backup administrator account&lt;/li&gt;
&lt;li&gt;Select your backup region (example: eu-west-2)&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Backup&lt;/strong&gt; service&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Settings&lt;/strong&gt; in left navigation menu&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Enable Required Features:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enable all three cross-account features:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Account Backup:&lt;/strong&gt; Enable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Account Monitoring:&lt;/strong&gt; Enable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Party Approval:&lt;/strong&gt; Enable&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;What These Settings Enable:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Account Backup:&lt;/strong&gt; Allows backup sharing between AWS accounts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Account Monitoring:&lt;/strong&gt; View backup status across multiple accounts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Party Approval:&lt;/strong&gt; Required for external approval team integration&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Save changes&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Verification:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All three settings should show “Enabled” status&lt;/li&gt;
&lt;li&gt;Green checkmarks next to each feature&lt;/li&gt;
&lt;li&gt;If any setting fails to enable, check IAM permissions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Create Approval Team
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Return to External Approval Account:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In External Approval Account (us-east-1 region):&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Backup&lt;/strong&gt; service&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Approval teams&lt;/strong&gt; in left menu&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create approval team&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Approval Team Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Team Details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Team Name: &lt;code&gt;Production-Recovery-Team&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Description: &lt;code&gt;External approval team for emergency backup access.&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Add Team Members:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For each IAM Identity Center user created earlier:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Member 1:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User: Select first user from dropdown&lt;/li&gt;
&lt;li&gt;Email: (auto-populated from Identity Center)&lt;/li&gt;
&lt;li&gt;Role: Approver&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Member 2:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User: Select second user&lt;/li&gt;
&lt;li&gt;Email: (auto-populated)&lt;/li&gt;
&lt;li&gt;Role: Approver&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Member 3:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User: Select third user&lt;/li&gt;
&lt;li&gt;Email: (auto-populated)&lt;/li&gt;
&lt;li&gt;Role: Approver&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Approval Threshold:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Minimum approvals required: &lt;code&gt;2&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;This means 2 out of 3 members must approve any access request&lt;/li&gt;
&lt;li&gt;Prevents single point of failure&lt;/li&gt;
&lt;li&gt;Balances security with availability (if 1 member unavailable, can still proceed)&lt;/li&gt;
&lt;/ul&gt;
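&lt;p&gt;The 2-of-3 threshold is a simple quorum rule. A sketch of the check, conceptually (member IDs are illustrative; this is not the AWS implementation):&lt;/p&gt;

```python
def quorum_met(approvals: set, team: set, threshold: int = 2) -> bool:
    """True when enough distinct team members have approved the request.
    Approvals from non-members are ignored."""
    if threshold > len(team):
        raise ValueError("threshold cannot exceed team size")
    return len(approvals.intersection(team)) >= threshold

team = {"cto", "ciso", "cfo"}                 # illustrative member IDs
granted = quorum_met({"cto", "ciso"}, team)   # 2 of 3: request proceeds
blocked = quorum_met({"cto"}, team)           # 1 of 3: request stays pending
```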

&lt;p&gt;&lt;strong&gt;Tags for Organization:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Purpose: Emergency-Recovery&lt;/li&gt;
&lt;li&gt;Organisation: External-Authority&lt;/li&gt;
&lt;li&gt;CriticalityLevel: High&lt;/li&gt;
&lt;li&gt;Environment: Production&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Create approval team&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Document Approval Team ARN&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After creation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Copy Approval Team ARN (format: arn:aws:backup:us-east-1:account:approval-team/Production-Recovery-Team)&lt;/li&gt;
&lt;li&gt;Store in password manager&lt;/li&gt;
&lt;li&gt;You will need this ARN for disaster recovery&lt;/li&gt;
&lt;li&gt;Without this ARN, you cannot request or grant access during an emergency&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Team Activation Process
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Automatic Invitation Emails Sent:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Immediately after approval team creation, each member receives:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
From: AWS Backup Multi-Party Approval
Subject: [ACTION REQUIRED] Join Approval Team: Production-Recovery-Team

You've been invited to join an approval team for emergency 
backup access.

Team: Production-Recovery-Team
Your Role: Approver
Approval Threshold: 2 of 3 members must approve

CRITICAL REQUIREMENTS:
• ALL team members must accept within 24 hours
• If ANY member declines, team becomes inactive
• MFA is REQUIRED (already configured during setup)
• You must be available 24/7 for emergency approvals

To Accept Invitation:
1. Click invitation link: [Link expires in 24 hours]
2. Sign in with IAM Identity Center credentials
3. Complete MFA verification
4. Review team details
5. Accept membership

Approval Portal: https://backup-approvals.aws.amazon.com/

Questions? Contact your security team immediately.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Each Team Member Must:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click invitation link (within 24 hours)&lt;/li&gt;
&lt;li&gt;Sign in to IAM Identity Center:

&lt;ul&gt;
&lt;li&gt;Username: (their username)&lt;/li&gt;
&lt;li&gt;Password: (their password)&lt;/li&gt;
&lt;li&gt;MFA Code: (6-digit code from authenticator app)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Review approval team invitation details:

&lt;ul&gt;
&lt;li&gt;Team name: Production-Recovery-Team&lt;/li&gt;
&lt;li&gt;Team purpose: Emergency backup access approval&lt;/li&gt;
&lt;li&gt;Minimum approvals: 2 of 3&lt;/li&gt;
&lt;li&gt;Responsibilities: 24/7 availability, out-of-band verification&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Accept invitation&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Confirm understanding of role and responsibilities&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Test Approval Portal Access
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Each Member Should Independently Test:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access Approval Portal:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to: &lt;code&gt;https://backup-approvals.aws.amazon.com/&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Sign in with IAM Identity Center credentials&lt;/li&gt;
&lt;li&gt;Complete MFA verification&lt;/li&gt;
&lt;li&gt;Should see: &lt;strong&gt;Approval Portal Dashboard&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Document Portal Access:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Record in documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Portal URL: &lt;a href="https://backup-approvals.aws.amazon.com/" rel="noopener noreferrer"&gt;https://backup-approvals.aws.amazon.com/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;All 3 members tested access successfully: Yes&lt;/li&gt;
&lt;li&gt;Date tested: [today’s date]&lt;/li&gt;
&lt;li&gt;Next access test: [quarterly drill date]&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your organisation now has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;External approval account (completely separate from production organization)&lt;/li&gt;
&lt;li&gt;IAM Identity Center configured with strong MFA enforcement&lt;/li&gt;
&lt;li&gt;Three trusted approval team members with active accounts and MFA&lt;/li&gt;
&lt;li&gt;Approval team created and fully activated&lt;/li&gt;
&lt;li&gt;All members can access approval portal&lt;/li&gt;
&lt;li&gt;Approval team ARN documented in secure offline location&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Cross-Account Integration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cross-Account Resource Sharing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS Resource Access Manager (RAM)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS service for secure resource sharing between accounts&lt;/li&gt;
&lt;li&gt;Works across AWS Organizations and external accounts&lt;/li&gt;
&lt;li&gt;Provides audited, secure sharing&lt;/li&gt;
&lt;li&gt;Required for external approval team integration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Integration Flow:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
External Approval Account (has approval team)
↓
[Share via AWS RAM]
↓
Production Backup Account (has air-gapped vault)
↓
[Associate approval team with vault]
↓
Vault requires approval team authorization for access

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Share Approval Team via AWS RAM
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In External Approval Account:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to external approval account&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Resource Access Manager&lt;/strong&gt; (RAM) service&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Create Resource Share:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Create resource share&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Step 1 – Resource Share Details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: &lt;code&gt;Backup-Approval-Team-Share&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Description: &lt;code&gt;Shares Production-Recovery-Team with production backup account for emergency vault access authorization&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2 – Select Resources to Share:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource type: Select &lt;strong&gt;Backup: Approval Team&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select resources: Check &lt;code&gt;Production-Recovery-Team&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;This is the approval team created in Phase 3&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3 – Grant Access to Principals:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical Setting:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Principal type: Select &lt;strong&gt;AWS account&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Enter AWS account ID: (your production backup account ID)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Allow external principals:&lt;/strong&gt; ✓ &lt;strong&gt;CHECK THIS BOX&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;This is critical for cross-organization sharing&lt;/li&gt;
&lt;li&gt;Without this, sharing will fail&lt;/li&gt;
&lt;li&gt;Required because approval account is outside your organization&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 4 – Add Tags:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Purpose: Emergency-Recovery-Authorisation&lt;/li&gt;
&lt;li&gt;TargetAccount: (backup account ID)&lt;/li&gt;
&lt;li&gt;SecurityLevel: Critical&lt;/li&gt;
&lt;li&gt;SharedResource: ApprovalTeam&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Review all settings&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Create resource share&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Resource Share Status:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Status: &lt;strong&gt;Pending&lt;/strong&gt; (waiting for acceptance by backup account)&lt;/li&gt;
&lt;li&gt;Share ARN: (document this for reference)&lt;/li&gt;
&lt;li&gt;Shared resources: 1 (approval team)&lt;/li&gt;
&lt;li&gt;Principals: 1 (backup account)&lt;/li&gt;
&lt;/ul&gt;
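&lt;p&gt;The console steps above can also be scripted with the AWS CLI. A minimal sketch, run in the external approval account; the account IDs and the approval team ARN are illustrative placeholders:&lt;/p&gt;

```shell
# Create the cross-account resource share.
# --allow-external-principals corresponds to the "Allow external
# principals" checkbox and is required because the backup account
# is outside this account's organization.
aws ram create-resource-share \
  --name Backup-Approval-Team-Share \
  --resource-arns "arn:aws:backup:us-east-1:111111111111:approval-team/Production-Recovery-Team" \
  --principals "222222222222" \
  --allow-external-principals \
  --tags key=Purpose,value=Emergency-Recovery-Authorisation key=SecurityLevel,value=Critical
```

&lt;p&gt;The command returns the share ARN, which you should document as noted above.&lt;/p&gt;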

&lt;h3&gt;
  
  
  Accept Resource Share in Backup Account
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Switch to Production Backup Account:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Sign in to production backup account&lt;/li&gt;
&lt;li&gt;Region: &lt;strong&gt;eu-west-2&lt;/strong&gt; (or your backup region)&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Resource Access Manager&lt;/strong&gt; (RAM)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;View Pending Invitations:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click &lt;strong&gt;Shared with me&lt;/strong&gt; in left navigation&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Resource shares&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;You should see:

&lt;ul&gt;
&lt;li&gt;Resource share name: &lt;code&gt;Backup-Approval-Team-Share&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Status: &lt;strong&gt;Pending&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Shared from: (external approval account ID)&lt;/li&gt;
&lt;li&gt;Resources: 1 Backup approval team&lt;/li&gt;
&lt;li&gt;Received: (timestamp)&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Accept the Resource Share:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select the resource share&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Accept resource share&lt;/strong&gt; button&lt;/li&gt;
&lt;li&gt;Confirmation dialog appears&lt;/li&gt;
&lt;li&gt;Read the confirmation message&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Accept&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Verification After Acceptance:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Status changes from &lt;strong&gt;Pending&lt;/strong&gt; to &lt;strong&gt;Active&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Resource share now appears under “Accepted” section&lt;/li&gt;
&lt;li&gt;Shared resources become available to use&lt;/li&gt;
&lt;/ul&gt;
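&lt;p&gt;Acceptance can likewise be done from the CLI in the production backup account. A sketch with a placeholder invitation ARN:&lt;/p&gt;

```shell
# List pending invitations to find the invitation ARN.
aws ram get-resource-share-invitations \
  --query "resourceShareInvitations[?status=='PENDING']"

# Accept using the invitation ARN returned above (placeholder shown).
aws ram accept-resource-share-invitation \
  --resource-share-invitation-arn "arn:aws:ram:eu-west-2:111111111111:resource-share-invitation/EXAMPLE"
```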

&lt;h3&gt;
  
  
  Verify Approval Team Accessibility
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In Production Backup Account:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;AWS Backup&lt;/strong&gt; service&lt;/li&gt;
&lt;li&gt;Region: &lt;strong&gt;eu-west-2&lt;/strong&gt; (your backup region)&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Approval teams&lt;/strong&gt; in left navigation menu&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;You Should Now See:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Approval Teams

Name: Production-Recovery-Team
Shared From: [External Account ID]
Status: Active
Members: 3
Minimum Approvals: 2 of 3
Type: External (Cross-Account)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;This Confirms:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cross-account sharing successful&lt;/li&gt;
&lt;li&gt;Approval team accessible from production account&lt;/li&gt;
&lt;li&gt;Ready to be assigned to air-gapped vault&lt;/li&gt;
&lt;li&gt;External approval team will protect your backups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;If Approval Team Doesn’t Appear:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify resource share was accepted (check AWS RAM)&lt;/li&gt;
&lt;li&gt;Confirm you’re in correct region (eu-west-2 or your backup region)&lt;/li&gt;
&lt;li&gt;Wait 2-3 minutes for propagation&lt;/li&gt;
&lt;li&gt;Check AWS Service Health Dashboard for any issues&lt;/li&gt;
&lt;/ul&gt;
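&lt;p&gt;If the team still doesn't appear, the RAM side can be double-checked from the CLI in the backup account:&lt;/p&gt;

```shell
# Shares accepted from other accounts; the approval-team share
# should be listed with status ACTIVE.
aws ram get-resource-shares \
  --resource-owner OTHER-ACCOUNTS \
  --name Backup-Approval-Team-Share
```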

&lt;h3&gt;
  
  
  Create and Apply Vault Access Policy
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Purpose:&lt;/strong&gt; Configure vault to require external approval for all restore operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In AWS Backup Console:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Backup vaults&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Select &lt;code&gt;Production-AirGapped-Vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Access policy&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Edit policy&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Select Policy Builder Option:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create policy with three statements:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Statement 1 – Allow Backup Operations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Effect: &lt;strong&gt;Allow&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Principal: &lt;strong&gt;Service&lt;/strong&gt; → &lt;code&gt;backup.amazonaws.com&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Actions:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;backup:CopyIntoBackupVault&lt;/code&gt; (allows copy jobs)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Resources: &lt;code&gt;*&lt;/code&gt; (all)&lt;/li&gt;

&lt;li&gt;Conditions: None&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Statement 2 – Require Approval for Restore:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Effect: &lt;strong&gt;Allow&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Principal: &lt;code&gt;*&lt;/code&gt; (anyone)&lt;/li&gt;
&lt;li&gt;Actions:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;backup:StartRestoreJob&lt;/code&gt; (restore operations)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;backup:GetRecoveryPointRestoreMetadata&lt;/code&gt; (restore metadata)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Resources: &lt;code&gt;*&lt;/code&gt; (all)&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Condition&lt;/strong&gt; (Critical):

&lt;ul&gt;
&lt;li&gt;Condition key: &lt;code&gt;backup:ApprovalTeamArn&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Operator: &lt;strong&gt;String equals&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Value: (paste your approval team ARN from Phase 3)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Statement 3 – Deny Dangerous Operations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Effect: &lt;strong&gt;Deny&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Principal: &lt;code&gt;*&lt;/code&gt; (anyone, including root)&lt;/li&gt;
&lt;li&gt;Actions:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;backup:DeleteRecoveryPoint&lt;/code&gt; (prevent backup deletion)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;backup:DeleteBackupVault&lt;/code&gt; (prevent vault deletion)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;backup:PutBackupVaultAccessPolicy&lt;/code&gt; (prevent policy modification)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Resources: &lt;code&gt;*&lt;/code&gt; (all)&lt;/li&gt;

&lt;li&gt;Conditions: None&lt;/li&gt;

&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Review all three statements&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Save policy&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
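&lt;p&gt;Assembled as a single JSON document, the three statements look roughly like the sketch below. The account IDs and approval team ARN are placeholders; substitute the ARN you recorded in Phase 3:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBackupCopies",
      "Effect": "Allow",
      "Principal": { "Service": "backup.amazonaws.com" },
      "Action": "backup:CopyIntoBackupVault",
      "Resource": "*"
    },
    {
      "Sid": "RestoreRequiresApprovalTeam",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "backup:StartRestoreJob",
        "backup:GetRecoveryPointRestoreMetadata"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "backup:ApprovalTeamArn": "arn:aws:backup:us-east-1:111111111111:approval-team/Production-Recovery-Team"
        }
      }
    },
    {
      "Sid": "DenyDestructiveActions",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "backup:DeleteRecoveryPoint",
        "backup:DeleteBackupVault",
        "backup:PutBackupVaultAccessPolicy"
      ],
      "Resource": "*"
    }
  ]
}
```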

&lt;p&gt;&lt;strong&gt;What This Policy Enforces:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✓ &lt;strong&gt;Allows:&lt;/strong&gt; the AWS Backup service to copy backups into the vault (normal operations)&lt;/li&gt;
&lt;li&gt;✓ &lt;strong&gt;Requires:&lt;/strong&gt; external approval team authorisation for any restore operation&lt;/li&gt;
&lt;li&gt;✗ &lt;strong&gt;Denies:&lt;/strong&gt; anyone (including root) from deleting backups or the vault&lt;/li&gt;
&lt;li&gt;✗ &lt;strong&gt;Denies:&lt;/strong&gt; anyone from modifying this vault access policy (prevents tampering)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Policy Protection:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once applied, this policy cannot be modified without:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deleting and recreating the vault (impossible due to the compliance lock)&lt;/li&gt;
&lt;li&gt;Or having specific IAM permissions (which should be tightly controlled)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Associate Approval Team with Vault
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In Vault Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;While viewing &lt;code&gt;Production-AirGapped-Vault&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Approval team&lt;/strong&gt; tab&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Assign approval team&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Association Settings:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select approval team: &lt;code&gt;Production-Recovery-Team&lt;/code&gt; (from external account)&lt;/li&gt;
&lt;li&gt;Review association notice:

&lt;ul&gt;
&lt;li&gt;“This approval team is from an external account”&lt;/li&gt;
&lt;li&gt;“All vault access will require approval from this team”&lt;/li&gt;
&lt;li&gt;“Minimum 2 of 3 approvals required”&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Confirm you understand the implications&lt;/li&gt;
&lt;li&gt;Click &lt;strong&gt;Assign approval team&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Confirmation After Assignment:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Approval Team Assignment

Team Name: Production-Recovery-Team
Team ARN: arn:aws:backup:us-east-1:[EXTERNAL-ACCOUNT]:approval-team/...
Status: Assigned
Shared From: [External Account ID]
Members: 3
Minimum Approvals: 2 of 3
Type: External (Cross-Organization)
Assignment Date: [Today's Date]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Document Complete Configuration:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Record all details in secure documentation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Production Air-Gapped Vault - Complete Configuration
═══════════════════════════════════════════════════

Vault Information:
─────────────────
Name: Production-AirGapped-Vault
ARN: arn:aws:backup:eu-west-2:[BACKUP-ACCOUNT]:backup-vault:Production-AirGapped-Vault
Region: eu-west-2
Account: [Backup Account ID]

Security Configuration:
──────────────────────
Compliance Lock: Active (Permanent)
Lock Date: [Date]
Minimum Retention: 30 days
Maximum Retention: 365 days
Encryption: Custom KMS Key

Approval Team:
─────────────
Name: Production-Recovery-Team
ARN: arn:aws:backup:us-east-1:[EXTERNAL-ACCOUNT]:approval-team/Production-Recovery-Team
Location: External Account (Outside Organization)
Account: [External Approval Account ID]

Team Members:
────────────
1. [Name] ([Title]) - [Email] - [Phone]
2. [Name] ([Title]) - [Email] - [Phone]
3. [Name] ([Title]) - [Email] - [Phone]

Approval Requirements:
─────────────────────
Minimum Approvals: 2 of 3
MFA Required: Yes (all members)
Out-of-Band Verification: Required before approval

Access Details:
──────────────
Approval Portal: https://backup-approvals.aws.amazon.com/
Emergency Contact Card: [Physical Location]
Vault ARN Document: [Physical Safe Location]

Integration Status:
──────────────────
RAM Resource Share: Active
Approval Team Assignment: Complete
Access Policy: Applied and Enforced
Testing: Completed Successfully

Verification:
────────────
Configuration Date: [Date]
Verified By: [Your Name]
Next Review: [Quarterly Review Date]
Last Tested: [DR Drill Date]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your organisation now has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Approval team shared from external account via AWS RAM&lt;/li&gt;
&lt;li&gt;Production backup account accepted and integrated shared team&lt;/li&gt;
&lt;li&gt;Air-gapped vault configured with comprehensive access policy&lt;/li&gt;
&lt;li&gt;Policy enforces multi-party approval requirement from external team&lt;/li&gt;
&lt;li&gt;Approval team assigned to vault&lt;/li&gt;
&lt;li&gt;Integration verified through comprehensive testing&lt;/li&gt;
&lt;li&gt;Complete configuration documented&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Your backups are now protected by multiple security layers:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Immutable compliance-mode locks (cannot be deleted by anyone)&lt;/li&gt;
&lt;li&gt;Logical air-gapping (isolated from your production environment)&lt;/li&gt;
&lt;li&gt;External approval authority (outside your organisation)&lt;/li&gt;
&lt;li&gt;Multi-party approval requirement (2+ trusted individuals)&lt;/li&gt;
&lt;li&gt;MFA enforcement (all approvers)&lt;/li&gt;
&lt;li&gt;Complete CloudTrail audit trail&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Even if ransomware attackers compromise your entire AWS organisation with root access, they can’t access or delete your immutable backups without approval from external team members who are completely outside the compromised environment.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You have now successfully implemented enterprise-grade ransomware protection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your organisation is now protected against:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ransomware attacks targeting backups&lt;/li&gt;
&lt;li&gt;Insider threats attempting backup deletion&lt;/li&gt;
&lt;li&gt;Administrative account compromise&lt;/li&gt;
&lt;li&gt;Organisational-level security breaches&lt;/li&gt;
&lt;li&gt;Accidental deletion of critical backups&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This implementation guide is designed to be used by any organisation implementing AWS Backup for VMware with air-gapped vaults and external approval teams. All examples should be adapted to your specific environment, account IDs, and organisational requirements.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://www.allaboutcloud.co.uk/how-to-protect-vmware-vms-from-ransomware-with-aws-backup/" rel="noopener noreferrer"&gt;How to Protect VMware VMs from Ransomware with AWS Backup&lt;/a&gt; first appeared on &lt;a href="https://www.allaboutcloud.co.uk" rel="noopener noreferrer"&gt;Allaboutcloud.co.uk&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsbackup</category>
      <category>vmware</category>
      <category>immutable</category>
    </item>
    <item>
      <title>How to detect Forest fires using Kinesis Video Streams and Amazon Rekognition</title>
      <dc:creator>Nikitas Gargoulakis</dc:creator>
      <pubDate>Wed, 05 Jun 2024 22:01:55 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-to-detect-forest-fires-using-kinesis-video-streams-and-rekognition-4he8</link>
      <guid>https://forem.com/aws-builders/how-to-detect-forest-fires-using-kinesis-video-streams-and-rekognition-4he8</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;On a hot summer night, while we were enjoying our food and drinks, the dogs suddenly began barking and staring in a certain direction. We went outside to have a better look and noticed that the sky had started to turn orange. We immediately knew what was happening: there was a huge fire in a beautiful forest a few miles away. This happened almost every summer, in different places, wiping out forests and destroying homes, with a massive impact on the environment and people's lives. &lt;/p&gt;

&lt;p&gt;Having seen the aftermath and the years it took for the burnt areas and people to recover, I decided to build something to detect smoke and fire and help reduce the destructive impact. After all, early detection plays a crucial role when it comes to forest fires.&lt;/p&gt;

&lt;h1&gt;
  
  
  Challenges
&lt;/h1&gt;

&lt;p&gt;Waiting for a real-life scenario like the one described above was neither practical nor desirable for testing my solution. To overcome this challenge, I decided to simulate the required conditions. &lt;/p&gt;

&lt;p&gt;I used my laptop and played YouTube videos of forest fires as the source. This allowed me to consistently recreate the visual characteristics of forest fires and to use specific scenes, ensuring that my solution was tested thoroughly under different conditions. This approach provided a reliable and efficient way to validate the solution and demonstrate how it could handle similar real-time scenarios.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1xmombl8la6f0ozl3ca.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1xmombl8la6f0ozl3ca.jpg" alt=" " width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Prerequisites
&lt;/h1&gt;

&lt;p&gt;Here is a brief overview of the AWS services and components used in the solution:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RTSP Camera&lt;/strong&gt; &lt;br&gt;
An IP/CCTV camera that provides a Real-Time Streaming Protocol (RTSP) video feed &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Raspberry Pi&lt;/strong&gt; &lt;br&gt;
This acts as a local gateway that connects the camera and manages the video stream up to Amazon Kinesis Video Streams. It uses certificates generated by AWS IoT Core to authenticate itself securely to AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS IoT&lt;/strong&gt;&lt;br&gt;
Set up an IoT Thing to represent my IP camera. This involved configuring the certificates and policies for secure communication between the IP camera and AWS IoT. It is an important component in creating a secure and manageable architecture for streaming video from an RTSP camera through a Raspberry Pi to Kinesis Video Streams. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvp7994v49wb4hyeanmd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvp7994v49wb4hyeanmd.png" alt=" " width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kinesis Video Streams (KVS)&lt;/strong&gt;&lt;br&gt;
A Kinesis Video Stream ingests live video from the RTSP camera (its name must match the IoT Thing).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubp1aukl4vbq49ks645s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubp1aukl4vbq49ks645s.png" alt=" " width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Rekognition&lt;/strong&gt;&lt;br&gt;
Trained a Rekognition Custom Labels model to detect smoke and fire in images. Training takes some time, depending on the size of the dataset. (The ARN is used in Lambda functions).&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7a4b7hk7obyzz6rcznsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7a4b7hk7obyzz6rcznsy.png" alt=" " width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3&lt;/strong&gt;&lt;br&gt;
Created an S3 bucket to store the extracted images from the IP camera, with the appropriate bucket policies to allow read/write access from the AWS services used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe28f30pnh0mhbv837qfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe28f30pnh0mhbv837qfj.png" alt=" " width="800" height="37"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lambda&lt;/strong&gt;&lt;br&gt;
Wrote a Lambda function to process images stored in S3, detect smoke and fire using Rekognition, and trigger an SNS notification.&lt;/p&gt;
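&lt;p&gt;The article doesn't include the function's source, but a minimal sketch of such a handler might look like the following. The environment variable names, ARNs, and the confidence threshold are illustrative assumptions, not the original implementation:&lt;/p&gt;

```python
import os

# Hypothetical configuration, normally supplied via Lambda environment variables.
MODEL_ARN = os.environ.get(
    "MODEL_ARN",
    "arn:aws:rekognition:eu-west-1:111111111111:project/FireDetection/version/1",
)
TOPIC_ARN = os.environ.get("TOPIC_ARN", "arn:aws:sns:eu-west-1:111111111111:fire-alerts")
MIN_CONFIDENCE = 70.0  # assumed threshold


def alert_labels(labels, threshold=MIN_CONFIDENCE):
    """Return smoke/fire label names that meet the confidence threshold."""
    return [
        label["Name"]
        for label in labels
        if label["Name"].lower() in ("smoke", "fire")
        and label.get("Confidence", 0) >= threshold
    ]


def handler(event, context):
    # boto3 is imported lazily so the pure helper above can be
    # tested without AWS dependencies installed.
    import boto3

    rekognition = boto3.client("rekognition")
    sns = boto3.client("sns")
    for record in event["Records"]:  # S3 "object created" event records
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        resp = rekognition.detect_custom_labels(
            ProjectVersionArn=MODEL_ARN,
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MinConfidence=MIN_CONFIDENCE,
        )
        detected = alert_labels(resp.get("CustomLabels", []))
        if detected:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="Possible forest fire detected",
                Message=f"Detected {', '.join(detected)} in s3://{bucket}/{key}",
            )
```

&lt;p&gt;The helper keeps the filtering logic separate from the AWS calls, which makes the threshold easy to tune and test in isolation.&lt;/p&gt;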

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmq6v9zc0iilaoswgc6ye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmq6v9zc0iilaoswgc6ye.png" alt=" " width="800" height="144"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SNS&lt;/strong&gt;&lt;br&gt;
If smoke or fire is detected by the Rekognition Custom Labels model, the Lambda function triggers a notification using Amazon Simple Notification Service (SNS). SNS can then deliver the notification to subscribed endpoints, such as email, SMS, or mobile push notifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ihszhyt9rqe19q3biy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ihszhyt9rqe19q3biy9.png" alt=" " width="800" height="129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IAM Roles&lt;/strong&gt;&lt;br&gt;
Created the required IAM roles and policies for Kinesis Video Streams, Rekognition, Lambda, IoT, S3, and SNS. As per best practices, least privilege principles were applied.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Producer SDK - GStreamer plugin&lt;/strong&gt;&lt;br&gt;
The GStreamer plugin for Kinesis Video Streams is a component that integrates GStreamer with Amazon Kinesis Video Streams.&lt;/p&gt;
&lt;h1&gt;
  
  
  Solution Overview and walkthrough
&lt;/h1&gt;

&lt;p&gt;Here is a brief overview of how the solution works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmiibnj16jcy69qvjsac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmiibnj16jcy69qvjsac.png" alt=" " width="800" height="461"&gt;&lt;/a&gt;&lt;br&gt;
The first thing to do is to start the Amazon Rekognition model that we trained.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66mk8bqay8p0c3eb2gq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66mk8bqay8p0c3eb2gq1.png" alt=" " width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to set up the RTSP camera and test the stream using VLC. Then we move on and configure the GStreamer plugin on the Raspberry Pi.&lt;/p&gt;

&lt;p&gt;We have to transfer the certificates to the Raspberry Pi and place them in a specific directory.&lt;/p&gt;

&lt;p&gt;Obtain the IoT credential endpoint using AWS CloudShell or the AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iot describe-endpoint --endpoint-type iot:CredentialProvider
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72ekcyq9mcgetzdk9kmc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F72ekcyq9mcgetzdk9kmc.png" alt=" " width="800" height="86"&gt;&lt;/a&gt;&lt;br&gt;
The next step is to set the environment variables for the region, certificate paths, and role alias:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_DEFAULT_REGION=eu-west-1
export CERT_PATH=certs/certificate.pem.crt
export PRIVATE_KEY_PATH=certs/private.pem.key
export CA_CERT_PATH=certs/AmazonRootCA1.pem
export ROLE_ALIAS=CameraIoTRoleAlias
export IOT_GET_CREDENTIAL_ENDPOINT=cxxxxxxxxxxs.credentials.iot.eu-west-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can execute the GStreamer command and start streaming to Kinesis Video Streams:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./kvs_gstreamer_sample FireDetection rtsp://username:password@192.168.1.100/stream1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the video feed successfully streaming to Kinesis Video Streams, it's time to start extracting the images from the stream.&lt;/p&gt;

&lt;p&gt;Kinesis Video Streams simplifies this process by automatically transcoding and delivering images. It extracts images from video data in real-time based on tags and delivers them to a specified S3 bucket.&lt;br&gt;
To use that feature, we need to create a JSON file named &lt;strong&gt;&lt;em&gt;update-image-generation-input.json&lt;/em&gt;&lt;/strong&gt; with the required config.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
 "StreamName": "FireDetection",
 "ImageGenerationConfiguration":
 {
  "Status": "ENABLED",
  "DestinationConfig":
  {
   "DestinationRegion": "eu-west-1",
   "Uri": "s3://images-bucket-name"
  },
  "SamplingInterval": 200,
  "ImageSelectorType": "PRODUCER_TIMESTAMP",
  "Format": "JPEG",
  "FormatConfig": {
                "JPEGQuality": "80"
       },
  "WidthPixels": 1080,
  "HeightPixels": 720
 }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and run the following command with the AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kinesisvideo update-image-generation-configuration \
--cli-input-json file://./update-image-generation-input.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we check our S3 bucket, we can see the extracted images:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cp9fxg8umpx0j5vq3l7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cp9fxg8umpx0j5vq3l7.png" alt=" " width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our Lambda function is then triggered and starts processing the new images with Amazon Rekognition, identifying smoke and fire within them and triggering notifications based on the detected objects.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F727rm0nd5ovf8wfm9xqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F727rm0nd5ovf8wfm9xqt.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;We now have a solution where our IP camera streams video to a Kinesis Video Stream. AWS Lambda processes frames from this stream, using Amazon Rekognition Custom Labels to detect smoke and fire, and detected events trigger an SNS notification. &lt;br&gt;
By integrating Amazon Rekognition with custom labels, Kinesis Video Streams, S3, and AWS IoT, we can create a powerful image recognition system for many use cases.&lt;/p&gt;

&lt;p&gt;For a more detailed walkthrough, feel free to contact me. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>kinesis</category>
      <category>rekognition</category>
      <category>globallogic</category>
    </item>
    <item>
      <title>Amazon Forecast - Empowering Accurate and Data-Driven Predictions</title>
      <dc:creator>Nikitas Gargoulakis</dc:creator>
      <pubDate>Mon, 03 Jul 2023 18:37:20 +0000</pubDate>
      <link>https://forem.com/aws-builders/amazon-forecast-empowering-accurate-and-data-driven-predictions-4kl5</link>
      <guid>https://forem.com/aws-builders/amazon-forecast-empowering-accurate-and-data-driven-predictions-4kl5</guid>
      <description>&lt;p&gt;Amazon Forecast is a fully managed service that utilizes machine learning algorithms to generate highly accurate forecasts. It helps businesses to make informed decisions by predicting future outcomes based on historical data. Whether you are planning inventory levels, optimizing resource allocation, or predicting customer demand, Amazon Forecast provides the tools to derive actionable insights and improve decision-making.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9dpjppdp3zoscvzgyess.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9dpjppdp3zoscvzgyess.png" alt="Amazon Forecast" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring Amazon Forecast
&lt;/h2&gt;

&lt;p&gt;To effectively configure Amazon Forecast, you need to follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Preparation&lt;/strong&gt;: Prepare your historical data, ensuring it meets the required format for Amazon Forecast. This includes having a time series dataset with timestamped data points and associated values. Amazon Forecast supports CSV, JSON, and Parquet file formats.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create a Dataset&lt;/strong&gt;: In the AWS Management Console, create a dataset in Amazon Forecast. Specify the dataset type and schema, and import the historical data prepared in the previous step.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Choose a Predictor&lt;/strong&gt;: Select an appropriate predictor from the available algorithms in Amazon Forecast. AWS offers a range of algorithms, including ARIMA, Prophet, DeepAR+, and others, each suited for specific use cases. Consider the nature of your data and the desired forecast horizon when choosing a predictor.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Train the Model&lt;/strong&gt;: Initiate the training process by providing the dataset and predictor configuration to Amazon Forecast. The service automatically trains the model using machine learning techniques to learn patterns and relationships within the data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Evaluate and Fine-tune&lt;/strong&gt;: After training the model, evaluate its performance using statistical metrics such as Mean Absolute Error (MAE) or Root Mean Squared Error (RMSE). Fine-tune the model by adjusting hyperparameters, choosing different algorithms, or modifying the dataset to improve forecast accuracy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Generate Forecasts&lt;/strong&gt;: When you are satisfied with the model's performance, use Amazon Forecast to generate forecasts for future time periods. These forecasts provide valuable insights for planning, decision-making, and resource allocation.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
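Steps 1 and 5 above can be illustrated with a small, self-contained sketch: a hypothetical target time series in the item_id/timestamp/target_value shape used for CSV imports, and the MAE/RMSE metrics used to evaluate a trained predictor. The item names and numbers are invented for illustration.

```python
import csv
import io
import math

# Step 1 (data preparation): a target time series in the item_id, timestamp,
# target_value shape used for Amazon Forecast CSV imports. Values are invented.
rows = [
    ("widget-1", "2023-01-01 00:00:00", 120),
    ("widget-1", "2023-01-02 00:00:00", 135),
    ("widget-1", "2023-01-03 00:00:00", 128),
]
buf = io.StringIO()
csv.writer(buf).writerows(rows)  # this text would be uploaded to S3 for import

# Step 5 (evaluate): MAE and RMSE between actual values and model forecasts.
def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [120, 135, 128]
predicted = [118, 140, 126]
print(round(mae(actual, predicted), 2))   # 3.0
print(round(rmse(actual, predicted), 2))  # 3.32
```

RMSE penalizes large individual errors more heavily than MAE, which is why comparing both gives a fuller picture of forecast quality.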

&lt;h2&gt;
  
  
  Use Cases and Advantages of Amazon Forecast
&lt;/h2&gt;

&lt;p&gt;Amazon Forecast finds application in a wide range of industries and use cases. Let's explore some common scenarios where Amazon Forecast can be leveraged and the advantages it brings:&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case 1: Retail and Demand Planning
&lt;/h3&gt;

&lt;p&gt;In the retail industry, accurately predicting customer demand is crucial for optimizing inventory levels, managing supply chains, and reducing costs. With Amazon Forecast, retailers can analyze historical sales data, account for seasonality and trends, and generate accurate demand forecasts. This enables proactive inventory management, reduces stockouts, and improves overall operational efficiency.&lt;/p&gt;

&lt;p&gt;For instance, a large e-commerce platform can leverage Amazon Forecast to predict demand for specific products during peak shopping seasons like Black Friday. By using historical sales data, customer behavior patterns, and external factors, such as marketing campaigns and economic indicators, the platform can optimize inventory, allocate resources efficiently, and ensure a seamless shopping experience for customers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case 2: Energy and Capacity Planning
&lt;/h3&gt;

&lt;p&gt;Energy providers face the challenge of accurately forecasting energy demand to optimize capacity planning, resource allocation, and pricing strategies. Amazon Forecast assists in analyzing historical energy consumption patterns, weather data, and market conditions to predict future energy demand accurately. This enables energy companies to optimize their generation, transmission, and distribution operations, ensuring efficient utilization of resources and reducing costs.&lt;/p&gt;

&lt;p&gt;For example, a renewable energy company can leverage Amazon Forecast to predict electricity demand in a particular region. By considering historical energy consumption, weather forecasts, and upcoming events, the company can optimize the generation mix, schedule maintenance activities, and manage energy contracts effectively, leading to cost savings and improved reliability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use Case 3: Financial Planning and Revenue Forecasting
&lt;/h3&gt;

&lt;p&gt;In the financial industry, accurate revenue forecasting and financial planning are critical for strategic decision-making and investment strategies. Amazon Forecast allows financial institutions to analyze historical revenue data, market trends, economic indicators, and customer behavior to predict future revenue accurately. This helps organizations make data-driven decisions, allocate resources effectively, and adapt to market changes.&lt;/p&gt;

&lt;p&gt;For instance, a fintech startup can utilize Amazon Forecast to forecast monthly revenue based on historical transaction data and user activity. By accurately predicting revenue streams, the startup can make informed decisions regarding marketing budgets, product development, and expansion plans, ensuring sustainable growth and profitability.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom Case
&lt;/h3&gt;

&lt;p&gt;A few years ago, during the COVID-19 outbreak, I spent some time testing the service. First, I created the relevant historical datasets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuecsxef7d66ksn6mxrul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuecsxef7d66ksn6mxrul.png" alt="Amazon Foecast guide" width="800" height="350"&gt;&lt;/a&gt;&lt;br&gt;
Then started testing all the algorithms and training the models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlkgeanxmtdrf6chvbvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlkgeanxmtdrf6chvbvj.png" alt="Amazon Forecast model training" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the end, the predictions closely matched those published by the health authorities.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2qovq4c1elxrcq1y6x2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh2qovq4c1elxrcq1y6x2.png" alt="Amazon Forecast 2" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Forecast Algorithms
&lt;/h2&gt;

&lt;p&gt;Amazon Forecast provides a range of advanced algorithms to cater to different use cases and data characteristics. Let's briefly explore some of the key algorithms available:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;ARIMA (AutoRegressive Integrated Moving Average)&lt;/strong&gt;: A widely used algorithm that models the time series data by considering the auto-regressive, integrated, and moving average components. ARIMA is effective for data with linear trends and seasonal patterns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prophet&lt;/strong&gt;: Developed by Facebook, Prophet is a powerful algorithm that incorporates seasonality, holidays, and trend components in its forecasts. It handles missing data and outliers effectively, making it suitable for datasets with irregularities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;DeepAR+&lt;/strong&gt;: DeepAR+ is a deep learning algorithm that utilizes recurrent neural networks (RNNs) to capture complex patterns in time series data. It can handle long-term dependencies and non-linear relationships, making it suitable for datasets with significant variations and multiple seasonalities.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each algorithm has unique strengths and is tailored to specific use cases. As a result, it is recommended to experiment with multiple algorithms and fine-tune them based on the characteristics of your data to achieve the most accurate forecasts.&lt;/p&gt;
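As a toy illustration of that advice, here is a hedged sketch of backtesting two very simple forecasters on a held-out window and keeping whichever scores the lower MAE. This is a stand-in for comparing Forecast predictors, not the service's own mechanism, and the series is invented.

```python
# Toy backtest: compare two simple forecasters on a hold-out window by MAE.
# A stand-in for comparing Amazon Forecast predictors; the series is invented.

def naive(history, horizon):
    """Repeat the last observed value."""
    return [history[-1]] * horizon

def seasonal_naive(history, horizon, season=7):
    """Repeat the values from one season ago."""
    return [history[-season + (i % season)] for i in range(horizon)]

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Three weeks of history with a weekly pattern; the last week is held out.
series = [10, 12, 14, 13, 11, 20, 22] * 3
history, holdout = series[:-7], series[-7:]

scores = {
    "naive": mae(holdout, naive(history, 7)),
    "seasonal_naive": mae(holdout, seasonal_naive(history, 7)),
}
best = min(scores, key=scores.get)
print(best)  # seasonal_naive — the weekly pattern makes it the better fit
```

The same pick-the-lowest-error loop applies when comparing ARIMA, Prophet, and DeepAR+ predictors trained on your real data.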

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By configuring Amazon Forecast and leveraging its range of algorithms, organizations can make accurate predictions, optimize operations, and make data-driven decisions.&lt;/p&gt;

&lt;p&gt;Whether it's demand planning in retail, capacity optimization in energy, or revenue forecasting in finance, Amazon Forecast empowers businesses to harness the power of predictive analytics. The step-by-step configuration process, along with a variety of algorithms, ensures flexibility and accuracy in generating forecasts tailored to your specific use cases.&lt;/p&gt;

&lt;p&gt;If you require additional guidance on configuring and utilising Amazon Forecast, feel free to reach out. I can assist you with unlocking the potential of Amazon Forecast and driving informed decision-making through accurate predictions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Useful Resources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/forecast/" rel="noopener noreferrer"&gt;Amazon Forecast&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/aws-samples/amazon-forecast-samples" rel="noopener noreferrer"&gt;Amazon Forecast samples on GitHub&lt;/a&gt;: workshops, useful code, and samples to get you started.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>amazon</category>
    </item>
    <item>
      <title>Secure Remote Access - EC2 Instance Connect Endpoint</title>
      <dc:creator>Nikitas Gargoulakis</dc:creator>
      <pubDate>Mon, 19 Jun 2023 20:13:21 +0000</pubDate>
      <link>https://forem.com/aws-builders/secure-remote-access-ec2-instance-connect-endpoint-5h3n</link>
      <guid>https://forem.com/aws-builders/secure-remote-access-ec2-instance-connect-endpoint-5h3n</guid>
      <description>&lt;p&gt;AWS recently launched a new feature called Amazon &lt;a href="https://aws.amazon.com/about-aws/whats-new/2023/06/amazon-ec2-instance-connect-ssh-rdp-public-ip-address/" rel="noopener noreferrer"&gt;EC2 Instance Connect (EIC) Endpoint&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EIC Endpoint provides a secure solution to connect to your instances via SSH or RDP in private subnets without IGWs, public IPs, agents, and bastion hosts. By configuring an EIC Endpoint for your VPC, you can securely connect using your existing client tools or the Console/AWS CLI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yhxct3pyxzvzg23gfve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yhxct3pyxzvzg23gfve.png" alt="EIC Endpoint" width="800" height="679"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Connect to private EC2 instances through an EIC Endpoint - Image Copyright AWS&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this post I am going to show you how to create an EIC Endpoint and connect to an instance in a private subnet, using the AWS Console and the AWS CLI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the EC2 Instance Connect (EIC) Endpoint
&lt;/h2&gt;

&lt;p&gt;Log in to the AWS Console and click on VPC. Then, in the menu on the left, click &lt;strong&gt;Endpoints&lt;/strong&gt; and then &lt;strong&gt;Create Endpoint&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc5g1l0m7ub5ohegmi1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwc5g1l0m7ub5ohegmi1x.png" alt="Create EIC Endpoint" width="800" height="264"&gt;&lt;/a&gt;&lt;br&gt;
In the next screen select the &lt;strong&gt;&lt;em&gt;Instance Connect Endpoint&lt;/em&gt;&lt;/strong&gt; option, your &lt;strong&gt;&lt;em&gt;VPC&lt;/em&gt;&lt;/strong&gt;, &lt;strong&gt;&lt;em&gt;Security Group&lt;/em&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;em&gt;Subnet&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn8vkd3gzdo0gq0jfz8q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn8vkd3gzdo0gq0jfz8q.png" alt="Create EIC Endpoint 1" width="800" height="644"&gt;&lt;/a&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj79q6mka3pygh0nei6i4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj79q6mka3pygh0nei6i4.png" alt="Create EIC Endpoint 11" width="800" height="637"&gt;&lt;/a&gt;&lt;br&gt;
When done click on &lt;strong&gt;Create Endpoint&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluc2qc6l43b4i9s3nziq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluc2qc6l43b4i9s3nziq.png" alt="Create EIC Endpoint 2" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait a few minutes, then hit refresh on the next screen. Your Endpoint should now be shown as &lt;strong&gt;Available&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n5wvon292v7aonp6fb1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n5wvon292v7aonp6fb1.png" alt="Create EIC Endpoint 3" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you prefer to create it using the AWS CLI, run the following command and replace &lt;em&gt;SUBNET&lt;/em&gt; and &lt;em&gt;SG-ID&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-instance-connect-endpoint \
    --subnet-id [_SUBNET_] \
    --security-group-id [_SG-ID_]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Connect to your instance through AWS Console
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;For the purpose of this tutorial, I have created an EC2 instance in a private subnet&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Click on &lt;strong&gt;Connect&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38vrbexcktyqhzf44ief.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38vrbexcktyqhzf44ief.png" alt="ec2" width="800" height="90"&gt;&lt;/a&gt;&lt;br&gt;
Select &lt;strong&gt;Connect using EC2 Instance Connect Endpoint&lt;/strong&gt; and then pick your &lt;strong&gt;Endpoint&lt;/strong&gt; from the list. Next, click Connect&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko5m9m2y802pfskec80h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fko5m9m2y802pfskec80h.png" alt="ec2 connect" width="800" height="668"&gt;&lt;/a&gt;&lt;br&gt;
You have now successfully connected to your instance.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6qz1zuxk9f1fc4i6uk2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6qz1zuxk9f1fc4i6uk2.png" alt="ec2 endpoing connection" width="800" height="666"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Connect using the AWS CLI
&lt;/h2&gt;

&lt;p&gt;This option requires some extra steps. First, you need to attach a policy to your user. To start and test the service, you can use an AWS managed one.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flre7gm3zd72i2iscy1iz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flre7gm3zd72i2iscy1iz.png" alt="EIC endpoint policy" width="800" height="307"&gt;&lt;/a&gt;&lt;br&gt;
For best practices and security, you can refer to this &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/permissions-for-ec2-instance-connect-endpoint.html#iam-OpenTunnel" rel="noopener noreferrer"&gt;link&lt;/a&gt; on how to create a custom policy.&lt;br&gt;
Once done, you can proceed.&lt;/p&gt;

&lt;p&gt;To connect to your instance from the AWS CLI, you can run the following command where [&lt;em&gt;INSTANCE&lt;/em&gt;] is the instance ID of your EC2 instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2-instance-connect ssh --instance-id [INSTANCE]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;The new &lt;em&gt;&lt;strong&gt;EC2 Instance Connect Endpoint&lt;/strong&gt;&lt;/em&gt; feature was added in AWS CLI v2.12.0. If you are having issues, update your AWS CLI to the latest version.&lt;/p&gt;
&lt;/blockquote&gt;
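If you would rather use your own SSH client than the wrapped `ssh` subcommand, the CLI can also open a local tunnel through the endpoint. A sketch, assuming a Linux instance with a key pair; the instance ID, ports, and key path are placeholders:

```shell
# Open a local tunnel to the instance's SSH port through the EIC Endpoint
# (instance ID, ports, and key path below are placeholders)
aws ec2-instance-connect open-tunnel \
    --instance-id [INSTANCE] \
    --remote-port 22 \
    --local-port 2222 &

# Then connect with a standard SSH client through the tunnel
ssh -i my-key.pem -p 2222 ec2-user@localhost
```

This is also how other tools that expect a plain TCP endpoint (SCP, RDP clients) can reach the private instance.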

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5syze72bq9nhs0tbx3ak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5syze72bq9nhs0tbx3ak.png" alt="EIC Connect" width="800" height="129"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvfq18swfc82rbxc89or.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvfq18swfc82rbxc89or.png" alt="EIC Ebdpoint connect" width="800" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;EC2 Instance Connect Endpoint&lt;/strong&gt; offers several significant benefits to remote access management. As we can see, it eliminates the need to manage SSH key pairs manually, reducing the chances of key exposure or unauthorised access. Additionally, it allows you to grant temporary access to users by specifying the duration of their access, adding an extra layer of security. With EC2 Instance Connect Endpoint, you can also audit and track all remote access requests for compliance and governance purposes in CloudTrail.&lt;/p&gt;

&lt;p&gt;You can read more about this great feature at this AWS post: &lt;a href="https://aws.amazon.com/blogs/compute/secure-connectivity-from-public-to-private-introducing-ec2-instance-connect-endpoint-june-13-2023/" rel="noopener noreferrer"&gt;Secure Connectivity from Public to Private: Introducing EC2 Instance Connect Endpoint&lt;/a&gt; &lt;/p&gt;

</description>
      <category>aws</category>
      <category>networking</category>
      <category>ec2</category>
    </item>
    <item>
      <title>Deploy Amazon Workspaces using Service Catalog</title>
      <dc:creator>Nikitas Gargoulakis</dc:creator>
      <pubDate>Thu, 15 Jun 2023 20:29:22 +0000</pubDate>
      <link>https://forem.com/aws-builders/deploy-amazon-workspaces-using-service-catalog-2c9e</link>
      <guid>https://forem.com/aws-builders/deploy-amazon-workspaces-using-service-catalog-2c9e</guid>
      <description>&lt;p&gt;In this post we are going to discuss about Amazon Workspaces and how you can automate the deployment.&lt;/p&gt;

&lt;p&gt;But first let's have a brief introduction about that service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon WorkSpaces&lt;/strong&gt; is a cloud-based virtual desktop service that allows you to provision virtual desktops in the cloud and access them from anywhere. It provides a fully managed, secure, and scalable desktop computing environment without the need for you to manage any hardware or software and you can access the desktop from any supported device.&lt;/p&gt;

&lt;h2&gt;
  
  
  WorkSpaces requirements
&lt;/h2&gt;

&lt;p&gt;To deploy Amazon WorkSpaces, a few things need to be in place.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Active Directory&lt;/strong&gt; to authenticate users and provide access to their WorkSpace. This can be AWS Managed Microsoft AD or an on-premises AD. Alternatively, you can use AWS AD Connector, which acts as a proxy service for an existing Active Directory. If you're using AWS Managed Microsoft AD or Simple AD, your directory can be in a dedicated private subnet, as long as the directory has access to the VPC where the WorkSpaces are located.
(To allow WorkSpaces to use an existing AWS Directory Service directory, you must first register it with WorkSpaces. After you register a directory, you can start launching WorkSpaces.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VPC&lt;/strong&gt;: You’ll need a minimum of two subnets for an Amazon WorkSpaces deployment, because each AWS Directory Service construct requires two subnets in a multi-AZ deployment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more details about the requirements and deployment scenarios, you may refer to this link:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/best-practices-deploying-amazon-workspaces/best-practices-deploying-amazon-workspaces.html" rel="noopener noreferrer"&gt;Best Practices for Deploying Amazon WorkSpaces&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Assumptions
&lt;/h2&gt;

&lt;p&gt;In this guide we are going to focus on automating the WorkSpaces deployment; AD configuration is out of scope. We will assume that AD and users are already configured.&lt;/p&gt;

&lt;h2&gt;
  
  
  Directory Registration
&lt;/h2&gt;

&lt;p&gt;The first step is to register the directory in Amazon WorkSpaces.&lt;br&gt;
In the AWS Console, click on &lt;strong&gt;WorkSpaces&lt;/strong&gt; and then Directories, on the left.&lt;br&gt;
Select your directory, click on Actions and then Register&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5p3cuj0qelx0tppjstu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5p3cuj0qelx0tppjstu.png" alt=" " width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, select two subnets in your WorkSpaces VPC and click on Register again.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxj3hzgz14lonbnym46ai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxj3hzgz14lonbnym46ai.png" alt=" " width="800" height="373"&gt;&lt;/a&gt;&lt;br&gt;
The directory registration process has begun, and a few minutes later the Registered status will be shown as True.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foii0bb5nt7ywiqbjj3pt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foii0bb5nt7ywiqbjj3pt.png" alt=" " width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Service Catalog Configuration
&lt;/h2&gt;

&lt;p&gt;Clone the following GitHub repo to your PC:&lt;br&gt;
&lt;a href="https://github.com/nikitasg/Amazon-workspaces" rel="noopener noreferrer"&gt;Amazon Workspaces&lt;/a&gt;.&lt;br&gt;
It contains two files:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;workspaces.yaml&lt;/li&gt;
&lt;li&gt;sc-workspaces.yaml&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Update the required values in workspaces.yaml &lt;em&gt;(pDirectory, pUsername, pEncryptionKey, pWorkstationType)&lt;/em&gt;
and then upload it to your artefacts bucket (or an S3 bucket of your choice)&lt;/li&gt;
&lt;li&gt;Update sc-workspaces.yaml with the S3 URL of that file&lt;/li&gt;
&lt;li&gt;In the AWS Console, navigate to CloudFormation and deploy sc-workspaces.yaml&lt;/li&gt;
&lt;/ul&gt;
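Launching the product can itself be automated with the CLI instead of the console. A hedged sketch: the product, artifact, and provisioned-product names below are placeholders, and the parameter keys must match those actually defined in workspaces.yaml:

```shell
# Provision the WorkSpaces product from Service Catalog via the CLI.
# Names and parameter keys are placeholders; they must match what the
# workspaces.yaml template defines.
aws servicecatalog provision-product \
    --product-name "Workspaces" \
    --provisioning-artifact-name "v1" \
    --provisioned-product-name "workspace-jdoe" \
    --provisioning-parameters Key=pUsername,Value=jdoe
```

This is handy once the portfolio is shared with end users who request WorkSpaces through pipelines rather than the console.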

&lt;p&gt;When deployment is complete, you are going to have a new Portfolio and Product in the Service Catalog.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmsaiqjbqgopkdgcufkn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmsaiqjbqgopkdgcufkn.png" alt=" " width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9p8iyoh3bmqca9kqhil6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9p8iyoh3bmqca9kqhil6.png" alt="Service Catalog Portfolio" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Workspaces Deployment
&lt;/h2&gt;

&lt;p&gt;Now you are ready to deploy your first WorkSpace using Service Catalog.&lt;br&gt;
Under Products, select Workspaces and click on Launch Product&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxl4fqsyebf0vwhznc2wu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxl4fqsyebf0vwhznc2wu.png" alt="Service Catalog Workspaces Product" width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select your product version &lt;em&gt;(There will be just one. More will be visible if you update the CF template in the future)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq75rl70ohp841j0k4e0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq75rl70ohp841j0k4e0.png" alt=" " width="800" height="654"&gt;&lt;/a&gt;&lt;br&gt;
Fill in any required values and click Launch Product&lt;br&gt;
(In the &lt;strong&gt;WorkSpace User&lt;/strong&gt; field, enter the AD username of the WorkSpace owner. That user must exist in AD)&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eli9vbmgel1wfoqrjy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7eli9vbmgel1wfoqrjy9.png" alt=" " width="800" height="468"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wapzymvac9bmuz8ui4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wapzymvac9bmuz8ui4u.png" alt=" " width="797" height="821"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the next screen you can see that Service Catalog has started provisioning your WorkSpace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgiesakl3m6sva4k03fw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgiesakl3m6sva4k03fw.png" alt=" " width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also check the progress in CloudFormation&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5ufiz87qe5wm1107zp2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx5ufiz87qe5wm1107zp2.png" alt=" " width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait a few minutes, then click on WorkSpaces in the AWS Console. Your newly provisioned WorkSpace will now be visible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferz6op4iayu1zy2rytp5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferz6op4iayu1zy2rytp5.png" alt="Amazon Workspaces" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the WorkSpace to view its details and take note of the Registration Code, as you are going to need it in the next step&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1q2og5ltjs8oxeziam5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1q2og5ltjs8oxeziam5c.png" alt=" " width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Connect to your Workspace
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Download the &lt;a href="https://clients.amazonworkspaces.com/" rel="noopener noreferrer"&gt;Amazon Workspaces Client&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Run the client, enter the Registration Code and click Continue
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv8oxelstx1lql3d6cee.png" alt=" " width="662" height="527"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now enter the AD username and password and click Sign In&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcal0wyz0mtub4tddgfve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcal0wyz0mtub4tddgfve.png" alt=" " width="699" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You have now successfully logged in to your Amazon WorkSpace&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjy9kvlj1c5ltdyxh5ci.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjy9kvlj1c5ltdyxh5ci.png" alt=" " width="625" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Terminate your Workspace
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;In Service Catalog, click on Provisioned Products.&lt;/li&gt;
&lt;li&gt;Select the WorkSpace that you want to terminate.&lt;/li&gt;
&lt;li&gt;Click on Actions and select Terminate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwoee60lf0y83rt7f7h3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwoee60lf0y83rt7f7h3l.png" alt="Terminate Amazon Workspaces" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>automation</category>
      <category>workspaces</category>
    </item>
    <item>
      <title>How to implement a Mesh Network on AWS</title>
      <dc:creator>Nikitas Gargoulakis</dc:creator>
      <pubDate>Tue, 11 Apr 2023 20:07:20 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-to-implement-a-mesh-network-on-aws-569p</link>
      <guid>https://forem.com/aws-builders/how-to-implement-a-mesh-network-on-aws-569p</guid>
      <description>&lt;p&gt;It was sometime ago, that i was working in a complex Greenfield project.&lt;/p&gt;

&lt;p&gt;We had to design a secure infrastructure (in many respects), make sure that all traffic was encrypted at rest and in transit, and deploy a large number of services on AWS. While the Dev teams were working on building the applications, I was focusing on those requirements.&lt;/p&gt;

&lt;p&gt;The main requirement was to design and implement a flat Mesh Network on AWS (with encrypted traffic). Every server deployed should have a point-to-point connection to every other peer in the network. On top of that, some servers hosted on Azure and GCP should be able to join the Mesh. And to add more complexity, external clients like laptops or mobile phones should be able to securely access specific servers/services in the Mesh.&lt;/p&gt;

&lt;p&gt;After gathering all the information and speaking with a number of people to make sure that all requirements were properly documented and added to the backlog, it was time to start working on the PoC (Proof of Concept).&lt;/p&gt;

&lt;p&gt;One of the tools that seemed like the right candidate was WireGuard.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Wireguard
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;WireGuard is a secure network tunnel, operating at layer 3, implemented as a kernel virtual network interface for Linux. It utilizes state-of-the-art cryptography and it aims to be faster, simpler, leaner, and more useful than IPsec. WireGuard is currently under heavy development, but already it might be regarded as the most secure, easiest to use, and simplest VPN solution in the industry. That’s why a lot of VPN service providers have started using it.&lt;br&gt;
Source: &lt;em&gt;&lt;a href="https://www.wireguard.com/" rel="noopener noreferrer"&gt;wireguard.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After reading the documentation and running some tests, I decided to proceed with it. The process of setting up WireGuard as a VPN is straightforward: install it, generate the required keys, create a &lt;em&gt;wg0.conf&lt;/em&gt; file for each server, configure the relevant Security Groups on AWS, and your VPN is up and running quickly.&lt;/p&gt;

&lt;p&gt;But in our case, we had to build a Mesh consisting of hundreds of servers, most of them part of Auto Scaling groups. As a result, we didn’t have the option of configuring &lt;em&gt;wg0.conf&lt;/em&gt; manually every time a server had to join the mesh.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Wireguard works
&lt;/h2&gt;

&lt;p&gt;WireGuard works by encrypting the connection using a pair of cryptographic keys: each server needs to have its own private and public key, and then exchange public keys with the rest.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;wg0.conf&lt;/em&gt; file contains all the necessary configuration parameters for the WireGuard interface.&lt;/p&gt;

&lt;p&gt;Here are some of the main parameters that can be configured in &lt;em&gt;wg0.conf&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PrivateKey:&lt;/strong&gt; This parameter defines the private key for the WireGuard interface. It is used to authenticate and encrypt traffic between peers.&lt;br&gt;
&lt;strong&gt;ListenPort:&lt;/strong&gt; This parameter defines the port that WireGuard will listen on for incoming connections (default is UDP 51820).&lt;br&gt;
&lt;strong&gt;Address:&lt;/strong&gt; This parameter defines the IP address and subnet mask for the WireGuard interface.&lt;br&gt;
&lt;strong&gt;Peer:&lt;/strong&gt; This parameter defines the configuration for a peer on the WireGuard network. It includes the public key of the peer, its IP address, allowed IPs (the IP ranges that the peer can access), and other options such as endpoint configurations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fps0vf14db78lfi2bz8e7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fps0vf14db78lfi2bz8e7.png" alt=" " width="588" height="344"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Mesh Solution Overview
&lt;/h2&gt;

&lt;p&gt;I started working on my spike, and it was a challenge to find the best way to implement such a Mesh topology. First, I deployed a number of EC2 instances with Terraform in multiple AWS regions. After that, I went through my list and started building and trying things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform and Ansible:&lt;/strong&gt; Successfully created a Mesh, but it was really difficult to manage any new peers and auto-update the wg0.conf when they joined. I came to the conclusion that it was fine for static setups but not for dynamic ones.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform, HashiCorp Vault and a ton of bash scripts:&lt;/strong&gt; This looked promising. When connecting nodes via WireGuard, each node has to know the public key and endpoint IP of all peers. In this scenario, nodes with proper authentication in Vault were allowed to publish their own data and also to read connection data from other peers. They could all read the meeting-point data for our mesh (a data structure containing basic information about our mesh network), publish their own configuration to Vault, query Vault for other nodes known to the meeting point, and add a WireGuard peer for each of them. Although it worked, it was really complex to support and troubleshoot, especially after the handover.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then I came across a tool called Netmaker. It was in the early stages of development but looked really promising. (Since then, I have tested all versions, including the current one, 0.18.5, which was released a few days ago with big improvements and fixes.)&lt;/p&gt;
&lt;h2&gt;
  
  
  What is Netmaker
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Netmaker is a platform for creating fast and secure virtual networks with WireGuard. It is a tool for creating and managing virtual overlay networks. If you have at least two machines with internet access that you need to connect with a secure tunnel or thousands of servers spread across multiple locations or cloud providers, Netmaker is the perfect “tool”. It connects machines securely, wherever they are.&lt;br&gt;
Source: &lt;em&gt;&lt;a href="https://docs.netmaker.org/about.html" rel="noopener noreferrer"&gt;Netmaker.org&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, after this intro, let’s see how we can create a secure Mesh Network on AWS using Netmaker and Wireguard.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to Install Netmaker
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Start by launching a VM with Ubuntu 20.04 or later with a public IP. (Ubuntu is currently the supported distribution.)&lt;/li&gt;
&lt;li&gt;Open ports 443, 80, and 51821-51830 (UDP) on the security group. You can make this range smaller, but keep in mind that you need to have a port for each network you create. (I am going to explain more about Networks later.)&lt;/li&gt;
&lt;li&gt;Run the following script:
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo wget -qO /root/nm-quick-interactive.sh https://raw.githubusercontent.com/gravitl/netmaker/master/scripts/nm-quick-interactive.sh &amp;amp;&amp;amp; sudo chmod +x /root/nm-quick-interactive.sh &amp;amp;&amp;amp; sudo /root/nm-quick-interactive.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You need to answer a number of simple questions, and at the end you are going to be presented with the login URL.&lt;/p&gt;

&lt;p&gt;After opening the URL, you are going to be asked to create a username and password, and when you log in this is what you are going to see.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpkspbshlzwi46vse5mg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffpkspbshlzwi46vse5mg.png" alt=" " width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Create a network
&lt;/h2&gt;

&lt;p&gt;The first thing we have to do afterwards is to create a Network and enter the IP range that our servers will use for secure cross-communication. (The WireGuard interface wg0 is going to use an IP address from that range.)&lt;/p&gt;

&lt;p&gt;Click the ‘Networks’ tile on the dashboard, or in the left navigation panel click ‘Networks’.&lt;/p&gt;

&lt;p&gt;On the Networks screen, click on the ‘Create Network’ button.&lt;/p&gt;

&lt;p&gt;Give your network a name, and then enter your preferred CIDR. Alternatively, click on the ‘Autofill’ button and then change the name and the CIDR generated by the autofill option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxmqsmghmm3urdu5yupk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffxmqsmghmm3urdu5yupk.png" alt=" " width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g0i5mry45jy4mvxku18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g0i5mry45jy4mvxku18.png" alt=" " width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Create the Keys
&lt;/h2&gt;

&lt;p&gt;Then proceed by creating the required keys. When done, we can see that there are multiple ways to add a peer to our Mesh Network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i4zwcrs51nh662qytbm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7i4zwcrs51nh662qytbm.png" alt=" " width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Create and configure the Nodes
&lt;/h2&gt;

&lt;p&gt;Most of the hard work is done. Now it’s time to launch a few instances in AWS across multiple regions and spread them across public and private subnets. In our case, almost all instances are in private subnets, with the exception of the Netmaker server and the Azure instance.&lt;/p&gt;

&lt;p&gt;I like to use Terraform with GitLab Runners for my test deployments, and for this demo I had about 10 EC2 instances up and running really fast (I was using Spot Instances to minimise costs). Just remember that you need to deploy a standalone (on-demand) EC2 instance for Netmaker.&lt;/p&gt;

&lt;p&gt;All the Security Groups for the Nodes were configured to allow incoming UDP traffic to ports 51820–51830.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb51jgxgjop68fcpvkzda.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb51jgxgjop68fcpvkzda.png" alt=" " width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the help of User Data and the commands shown below, we can configure the nodes to join the Mesh during the launch process.&lt;br&gt;
&lt;em&gt;(Replace eyJzZXJxxxyxxxxxxxxxxxxxxxxcccccccccccvvvvvvv0000000== with your token)&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

sudo curl -Lo /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo
sudo yum install epel-release
sudo amazon-linux-extras install -y epel &amp;amp;&amp;amp; yum install -y wireguard-dkms wireguard-tools
curl -sL 'https://rpm.netmaker.org/gpg.key' | sudo tee /tmp/gpg.key
curl -sL 'https://rpm.netmaker.org/netclient-repo' | sudo tee /etc/yum.repos.d/netclient.repo
sudo rpm --import /tmp/gpg.key
sudo yum check-update
sudo yum install -y netclient
netclient register -t eyJzZXJxxxyxxxxxxxxxxxxxxxxcccccccccccvvvvvvv0000000==
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few minutes, we have our instances up and running, fully configured with WireGuard and Netclient (all of them have automatically joined our Mesh network).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojkv2uelg2cdaaeh20n4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojkv2uelg2cdaaeh20n4.png" alt=" " width="549" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let’s launch one more server but this time in… Azure&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde2wauq3w719ydz6r4nu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fde2wauq3w719ydz6r4nu.png" alt=" " width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Time to check our Netmaker GUI and make sure that all nodes have joined. If they don’t show up immediately, there is no need to worry; it can take up to 5 minutes for them to appear. In our case, all Nodes are now visible with a Healthy status.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx4e7gepcmsve0rj1bhn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzx4e7gepcmsve0rj1bhn.png" alt=" " width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point we have successfully deployed and configured a flat Mesh network, not only between AWS instances but also with a server in a different cloud provider. All traffic between them is encrypted in transit, by using Wireguard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mesh Graph / Visualisation
&lt;/h2&gt;

&lt;p&gt;Let’s see what our Mesh Network looks like at this stage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn4yl2yxtag4ft72xnpb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn4yl2yxtag4ft72xnpb.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;em&gt;Wireguard Mesh&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetu9t4xay4mcxmy9wads.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetu9t4xay4mcxmy9wads.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;em&gt;Netmaker server used as Ingress Node&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Test our Mesh Network
&lt;/h2&gt;

&lt;p&gt;How about running some tests to confirm that everything is working as expected?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F171o8i7todaywi5bfszf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F171o8i7todaywi5bfszf.png" alt=" " width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9xijgmbdhwzyg2oxol8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo9xijgmbdhwzyg2oxol8.png" alt=" " width="800" height="701"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Access Control Lists
&lt;/h2&gt;

&lt;p&gt;By default, Netmaker creates a “full mesh,” meaning every node in our network can talk to every other node. But there is a nice feature that lets you enable or disable any peer-to-peer connection in the network.&lt;/p&gt;

&lt;p&gt;The ACL feature can be accessed by either clicking on “ACLs” in the sidebar, or by clicking on a Node in the Node List.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2bdfikbnsnj3qv23d7z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu2bdfikbnsnj3qv23d7z.png" alt=" " width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Add External Clients
&lt;/h2&gt;

&lt;p&gt;There are cases where external clients need to access services running on the nodes. These can be a mobile phone, a laptop/tablet or an IoT device.&lt;/p&gt;

&lt;p&gt;We can achieve that by creating an Ingress. &lt;em&gt;(And once connected to the Ingress, we can reach all servers in the network.)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgzigggz0kopfnt9i0lb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcgzigggz0kopfnt9i0lb.png" alt=" " width="800" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next step is to generate the client configs. Clients can then join our mesh either by scanning a QR code or by importing the WireGuard config. (Please note that the WireGuard client must be installed on the mobile, laptop, etc.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nvgl7h4u5nghyu7p55f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nvgl7h4u5nghyu7p55f.png" alt=" " width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In our case, I downloaded the config to my laptop and connected using the WireGuard client.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7yxjbx1478vetn50vlr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7yxjbx1478vetn50vlr.png" alt=" " width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this demo, I installed Apache on an AWS EC2 instance and on an Azure VM. As you can see, I can access both from my laptop, through a secure tunnel, using the 10.141.x.x IPs (the Mesh network CIDR).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchherzfwzptfup6anza5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchherzfwzptfup6anza5.png" alt=" " width="800" height="248"&gt;&lt;/a&gt;Apache running on AWS EC2&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff75a9eozcf7gl4o6r26u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff75a9eozcf7gl4o6r26u.png" alt=" " width="800" height="397"&gt;&lt;/a&gt;Apache running on Azure instance&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This is just one use case of &lt;strong&gt;Netmaker&lt;/strong&gt; and &lt;strong&gt;WireGuard&lt;/strong&gt;: creating a secure Mesh Network on AWS. There are more, as you can see below, and we are going to discuss some of them in future posts.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automate the creation of a large WireGuard-based (Mesh) network&lt;/li&gt;
&lt;li&gt;Secure access to a home or office network&lt;/li&gt;
&lt;li&gt;Provide remote access to resources like an AWS VPC, or K8S cluster&lt;/li&gt;
&lt;li&gt;Create clusters that span environments&lt;/li&gt;
&lt;li&gt;Remotely access a cluster from an external source&lt;/li&gt;
&lt;li&gt;Remotely access an external source from a cluster&lt;/li&gt;
&lt;li&gt;Manage a secure mesh of IoT devices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hope you found this post useful. Feel free to reach out to me with any questions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Useful links:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.wireguard.com/" rel="noopener noreferrer"&gt;Wireguard&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/gravitl/netmaker" rel="noopener noreferrer"&gt;Netmaker&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>network</category>
      <category>security</category>
    </item>
  </channel>
</rss>
