<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: vikasbanage</title>
    <description>The latest articles on Forem by vikasbanage (@vikasbanage).</description>
    <link>https://forem.com/vikasbanage</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1029092%2F41e7d370-3c69-4751-a0bf-0cc3a5884129.png</url>
      <title>Forem: vikasbanage</title>
      <link>https://forem.com/vikasbanage</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/vikasbanage"/>
    <language>en</language>
    <item>
      <title>AWS Resource Control Policies (RCPs) Explained: A Practical Guide to Resource-Level Security</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Wed, 14 Jan 2026 08:54:12 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-scp-vs-rcp-explained-when-to-use-service-control-policies-vs-resource-control-policies-1hj0</link>
      <guid>https://forem.com/aws-builders/aws-scp-vs-rcp-explained-when-to-use-service-control-policies-vs-resource-control-policies-1hj0</guid>
      <description>&lt;p&gt;Modern AWS environments are built for scale — multiple accounts, shared teams, and resources that need to operate across boundaries. While this flexibility enables agility, it also introduces a critical governance challenge: even when IAM policies and SCPs are correctly configured, resource policies can still be misconfigured, resulting in overly broad access or unintended exposure.&lt;/p&gt;

&lt;p&gt;That gap is exactly what Resource Control Policies (RCPs) were created to address.&lt;/p&gt;

&lt;p&gt;RCPs let you enforce &lt;strong&gt;non-negotiable rules directly on resources&lt;/strong&gt;, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This resource can never be shared outside the organization&lt;/li&gt;
&lt;li&gt;Only approved identities can assume roles here&lt;/li&gt;
&lt;li&gt;Even admins can’t break these rules&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They don’t replace IAM or SCPs — they &lt;strong&gt;fill the last missing governance layer&lt;/strong&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;What it controls&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;IAM&lt;/td&gt;
&lt;td&gt;Which principals can perform which actions on which resources, within organizational guardrails.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SCP&lt;/td&gt;
&lt;td&gt;The maximum permissions IAM principals in an account or OU can ever have; they restrict but do not grant access.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RCP&lt;/td&gt;
&lt;td&gt;The maximum permissions that can ever apply to specific resources in accounts or OUs, regardless of which principal calls them.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
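&lt;p&gt;The way these layers combine can be sketched as a simple rule: a request succeeds only if every layer permits it. The helper below is a hypothetical illustration, not the full AWS policy evaluation logic (which also covers permission boundaries, session policies, and resource policies):&lt;/p&gt;

```python
# Hypothetical sketch of how the guardrail layers combine. The real AWS
# policy evaluation logic has more inputs (permission boundaries, session
# policies, resource policies); this only illustrates the layering idea.

def request_allowed(iam_allows: bool, scp_allows: bool, rcp_allows: bool) -> bool:
    """A request succeeds only when IAM grants it and no guardrail caps it.

    SCPs cap what principals in an account may ever do; RCPs cap what may
    ever be done to a resource, regardless of who the caller is.
    """
    return iam_allows and scp_allows and rcp_allows

# An IAM Allow alone is not enough once a guardrail says no:
assert request_allowed(True, True, True) is True
assert request_allowed(True, True, False) is False  # RCP blocks the resource
assert request_allowed(True, False, True) is False  # SCP blocks the principal
assert request_allowed(False, True, True) is False  # no IAM grant at all
```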

&lt;p&gt;Let's now jump into a hands-on look at how RCPs work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note: Always test an RCP in a dev/test account before rolling it out. Don't apply it directly at the root level.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Hands-On
&lt;/h2&gt;

&lt;p&gt;In this demo, we will be covering two scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enforce HTTPS-only across all S3 buckets&lt;/li&gt;
&lt;li&gt;Prevent external accounts from decrypting KMS keys (i.e., disable cross-account use of KMS keys)&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Scenario #1 : Enforce HTTPS-only across all S3 buckets
&lt;/h4&gt;

&lt;p&gt;I have created an S3 bucket and intentionally attached the following overly permissive bucket policy. I have also uploaded a simple file containing the text &lt;code&gt;RCP HTTPS test&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3rcppolicy",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::XXXX-XXXXX-demo-XXXXXXX/*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now if you access the file over both HTTP and HTTPS, both requests succeed:&lt;/p&gt;

&lt;p&gt;HTTP Test:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpmywyfqm6nre5nwf5ve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpmywyfqm6nre5nwf5ve.png" alt="s3-htttp" width="800" height="94"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;HTTPS test:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzac389i2ngx9hnk56l4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzac389i2ngx9hnk56l4.png" alt=" " width="800" height="41"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, let's apply an RCP at the account level to make sure only HTTPS access is allowed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpet01vc5hkq5sgvebdqq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpet01vc5hkq5sgvebdqq.png" alt="rcp_s3" width="800" height="970"&gt;&lt;/a&gt;&lt;/p&gt;
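&lt;p&gt;For reference, the RCP in the screenshot follows the standard deny-on-&lt;code&gt;aws:SecureTransport&lt;/code&gt; pattern. Below is a sketch of that policy document, built and round-tripped in Python to confirm it is valid JSON before attaching (the Sid is an illustrative name):&lt;/p&gt;

```python
import json

# Sketch of an HTTPS-only RCP for S3, matching the pattern in the
# screenshot above: deny any S3 action made over plain HTTP, i.e. when
# aws:SecureTransport is "false". The Sid is an illustrative name.
enforce_https_rcp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnforceS3TLS",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:SecureTransport": "false"}
            },
        }
    ],
}

# Round-trip through JSON to catch typos before attaching the policy.
policy_json = json.dumps(enforce_https_rcp)
assert json.loads(policy_json)["Statement"][0]["Effect"] == "Deny"
```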

&lt;p&gt;After attaching the RCP, HTTP requests get AccessDenied (highlighted part) while HTTPS continues to work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcy6bwa79o20w1h6otp82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcy6bwa79o20w1h6otp82.png" alt="http-error" width="800" height="90"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that once an RCP is applied, it governs both current and future resource policies. So before applying one, make sure you know which resources are intentionally shared across the organization.&lt;/p&gt;
&lt;h4&gt;
  
  
  Scenario #2 : Prevent external accounts from decrypting KMS keys.
&lt;/h4&gt;

&lt;p&gt;This example blocks decryption with AWS KMS (Key Management Service) keys for every principal except those explicitly allow-listed in the policy.&lt;/p&gt;

&lt;p&gt;As part of this demo, I have created a KMS key and granted decrypt access to another account in the key policy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
      "Sid": "Enable IAM User Permissions",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::714XXXXXXXXX:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow use of the key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::599XXXXXXXXX:root"
      },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": "*"
    },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'm encrypting a file with the KMS key in AccountA (714XXXXXXXXX), then attempting to decrypt it by assuming an IAM role from AccountA (714XXXXXXXXX) and from AccountB (599XXXXXXXXX).&lt;/p&gt;

&lt;p&gt;Without the RCP applied, both accounts are able to decrypt it.&lt;/p&gt;

&lt;p&gt;AccountA(714XXXXXXXXX)&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnaj3gbavyifyezwn9nmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnaj3gbavyifyezwn9nmq.png" alt="kms" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AccountB(599XXXXXXXXX)&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F096luzq47q5h466fu06g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F096luzq47q5h466fu06g.png" alt="kms-b" width="800" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we apply the RCP directly to Account A (714XXXXXXXXX). With this policy in place, access is restricted so that only users or roles from Account A can perform the allowed actions, regardless of how individual resource or IAM policies are configured.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Effect": "Deny",
  "Principal": "*",
  "Action": [
    "kms:Decrypt",
    "kms:Encrypt",
    "kms:GenerateDataKey"
  ],
  "Resource": "*",
  "Condition": {
    "StringNotEquals": {
      "aws:PrincipalAccount": [
        "714XXXXXXXXX"
      ]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
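&lt;p&gt;The same RCP can be created and attached programmatically through the AWS Organizations API. The sketch below assumes a boto3 Organizations client created elsewhere in the management account (with RCPs enabled); the policy name is illustrative and the account ID is the masked one from this demo:&lt;/p&gt;

```python
import json

# The RCP from above as a full policy document (account ID masked as in
# the article; substitute your own).
KMS_RCP = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": ["kms:Decrypt", "kms:Encrypt", "kms:GenerateDataKey"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:PrincipalAccount": ["714XXXXXXXXX"]}
            },
        }
    ],
}

def attach_kms_rcp(org_client, target_account_id: str) -> str:
    """Create the RCP and attach it to one account; returns the policy id.

    org_client is a boto3 Organizations client created elsewhere; this
    must run in the org's management (or delegated admin) account.
    """
    created = org_client.create_policy(
        Name="deny-external-kms-use",  # illustrative name
        Description="Only the approved account may use KMS keys",
        Type="RESOURCE_CONTROL_POLICY",
        Content=json.dumps(KMS_RCP),
    )
    policy_id = created["Policy"]["PolicySummary"]["Id"]
    org_client.attach_policy(PolicyId=policy_id, TargetId=target_account_id)
    return policy_id

# Validate the document locally before any API call is made.
assert "kms:Decrypt" in KMS_RCP["Statement"][0]["Action"]
```

&lt;p&gt;Attaching to an OU instead of an account only changes the &lt;code&gt;TargetId&lt;/code&gt;; start with a sandbox account, as noted earlier.&lt;/p&gt;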



&lt;p&gt;If I perform the same decryption operation, I now get an error for &lt;strong&gt;AccountB (599XXXXXXXXX)&lt;/strong&gt; due to the explicit Deny in the RCP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkel7c6ulkcsnnvxg7in1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkel7c6ulkcsnnvxg7in1.png" alt="kms-deny" width="800" height="141"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Before implementing Resource Control Policies - RCPs
&lt;/h2&gt;

&lt;p&gt;RCPs are a useful AWS feature for designing a data perimeter: they protect data, block breaches, and enforce real cloud boundaries. But if deployed carelessly, they can also break production workloads, so always create and test RCPs before applying them to critical workloads.&lt;/p&gt;

&lt;p&gt;Before you deploy any RCP, make sure it passes through a formal test cycle:&lt;/p&gt;

&lt;h3&gt;
  
  
  What to test
&lt;/h3&gt;

&lt;p&gt;Use &lt;strong&gt;AWS IAM Access Analyzer&lt;/strong&gt; to understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which resources are currently public&lt;/li&gt;
&lt;li&gt;Which are shared externally&lt;/li&gt;
&lt;li&gt;Which identities depend on cross-account access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This helps you avoid breaking legitimate access paths.&lt;/p&gt;

&lt;p&gt;Then validate policy logic using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM Policy Simulator&lt;/li&gt;
&lt;li&gt;Controlled sandbox accounts&lt;/li&gt;
&lt;li&gt;Dev OUs before prod&lt;/li&gt;
&lt;/ul&gt;
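&lt;p&gt;Before reaching for the simulator, you can also sanity-check a policy's condition logic offline. The toy evaluator below mirrors only the &lt;code&gt;StringNotEquals&lt;/code&gt; operator used in the KMS example; real policy evaluation is far richer, so always follow up with the Policy Simulator and a sandbox account:&lt;/p&gt;

```python
# Toy offline check of the StringNotEquals condition used in the KMS RCP
# earlier. This mirrors one operator only; it is not a policy engine.

def deny_matches(principal_account: str, allowed_accounts: list) -> bool:
    """The Deny statement matches when the caller's account is NOT listed."""
    return principal_account not in allowed_accounts

allowed = ["714XXXXXXXXX"]  # masked account ID from the demo

assert deny_matches("599XXXXXXXXX", allowed) is True   # external account blocked
assert deny_matches("714XXXXXXXXX", allowed) is False  # owning account unaffected
```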

&lt;p&gt;Also test &lt;strong&gt;interactions between SCPs and RCPs&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember:&lt;/strong&gt; a deny from either one will block access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other High-Impact RCP Use Cases to Explore
&lt;/h3&gt;

&lt;p&gt;AWS maintains a fantastic &lt;a href="https://github.com/aws-samples/resource-control-policy-examples" rel="noopener noreferrer"&gt;repository&lt;/a&gt; of real-world RCP patterns.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🔐 Enforce Org-Only STS Access&lt;/li&gt;
&lt;li&gt;🔑 Lock Down OIDC Providers (GitHub Actions, etc.)&lt;/li&gt;
&lt;li&gt;🗄️ Block External Sharing of Data Stores&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When used thoughtfully, RCPs close the last remaining gaps around data access, cross-account trust, and misconfigurations. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Good security isn’t about trusting people — it’s about designing systems that remain safe even when mistakes happen. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Thanks for reading — hope this helps you build safer, more resilient AWS environments.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>cloud</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>CloudWatch Investigations: Your AI-Powered Troubleshooting Sidekick</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Sun, 04 Jan 2026 20:04:30 +0000</pubDate>
      <link>https://forem.com/aws-builders/cloudwatch-investigations-your-ai-powered-troubleshooting-sidekick-1p8j</link>
      <guid>https://forem.com/aws-builders/cloudwatch-investigations-your-ai-powered-troubleshooting-sidekick-1p8j</guid>
      <description>&lt;p&gt;Remember those 3 AM incidents when you’re frantically switching between dashboards, digging through logs, and wondering if you should just restart everything? We all have been through the situation where we worked in non-business hours, weekends, midnights to troubleshoot production issues and its quite energy draining task. What if in this GenAI world we get AI assistant that works 24*7 and guide us through the chaos. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Enter CloudWatch Investigations – a generative AI-powered feature that’s changing how we handle incidents in AWS environments.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When something breaks, instead of you jumping between CloudWatch metrics, logs, deployment history, CloudTrail, X-Ray, and health dashboards, CloudWatch Investigations does the first round of detective work for you.&lt;/p&gt;

&lt;p&gt;It uses &lt;strong&gt;generative AI&lt;/strong&gt; to scan your system’s telemetry and quickly surface:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the &lt;strong&gt;metrics that look suspicious&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;the &lt;strong&gt;logs that matter&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;recent deployments or config changes&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;and even &lt;strong&gt;possible root-cause hypotheses&lt;/strong&gt;, especially when multiple resources are involved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this is presented visually, so you can &lt;em&gt;see&lt;/em&gt; how things are connected instead of guessing.&lt;/p&gt;

&lt;p&gt;It's like having an extra team member who's been staring at your system architecture 24/7 .&lt;/p&gt;

&lt;p&gt;Let's get started to check how to configure CloudWatch investigation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Getting Started&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;In the AWS Console, go to CloudWatch → AI Operations (left pane). If you are configuring the account for the first time, you need to complete the setup:

&lt;ul&gt;
&lt;li&gt;Creating Investigation Group

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Configure Retention days:&lt;/strong&gt; How long you would like to keep investigations. Note that the retention period cannot be changed once configured.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customise Encryption:&lt;/strong&gt; You can use a customer managed key for encryption, but make sure the service is granted permission to access the key.&lt;/li&gt;
&lt;li&gt;Next, it creates a new role with the read-only permissions required for investigations. You can also create a new role here. &lt;em&gt;By default, it uses AIOpsAssistantPolicy, AmazonRDSPerformanceInsightsFullAccess, and AIOpsAssistantIncidentReportPolicy.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Once the investigation group is created, you will see the optional enhanced configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1spvmmiwunrwkco9yqve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1spvmmiwunrwkco9yqve.png" alt="aiops" width="800" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under Enhanced integration, you can include tags related to your application. This helps CloudWatch narrow down the investigation and is well worth configuring.&lt;/li&gt;
&lt;li&gt;Access to CloudTrail events helps CloudWatch Investigations better discover relevant change events.&lt;/li&gt;
&lt;li&gt;Optionally, you can enable X-Ray, Application Signals, and EKS access entries.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;DEMO&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now that the configuration is ready, let's start the demo. For this blog, I have a simple event booking app, shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwn5ffadjdzcia5303ysc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwn5ffadjdzcia5303ysc.png" alt="aws-aiops" width="506" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Users book appointments by providing details and selecting available slots; an admin approves or rejects the requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkozxyvq9iznoglkgcmtm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkozxyvq9iznoglkgcmtm.png" alt="eventapp" width="800" height="787"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllimbivmjxhl0q1bjjj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllimbivmjxhl0q1bjjj6.png" alt="eventapp-admin" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Disruption in Application&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;As part of this demo, I modified the Lambda role's permissions and removed its KMS permission.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiin3jm8mwuqlgg8mpn5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftiin3jm8mwuqlgg8mpn5.png" alt="aiopsiam" width="800" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now imagine the scenario: suddenly users start reporting that they cannot see the slots and the app throws errors. The admin cannot see any appointments at all.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi52n3j48s5hi55bhayh0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi52n3j48s5hi55bhayh0.png" alt="sloterrors" width="800" height="1263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Since we know the application design, the entry point is CloudFront, so we start there and see an increase in 5xx errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwqx1wv2nvg980u3g8kt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiwqx1wv2nvg980u3g8kt.png" alt="5xx" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Starting Investigation&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;From the CloudWatch 5xx metric, you can directly start an investigation into why it is throwing 5xx errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fise08cnr2e8i2s4rgcw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fise08cnr2e8i2s4rgcw4.png" alt="cwistart" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It picks the timestamp automatically, or you can adjust the time from which the investigation should start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtblnvhl8rdyut2wxzq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtblnvhl8rdyut2wxzq1.png" alt="aiopos" width="800" height="739"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once the investigation is started, it takes 10-15 minutes to finish, and we can view its progress. In the meantime, we can start communicating with users and the business, or kick off other parallel activities. &lt;/li&gt;
&lt;li&gt;On completing the investigation, it correctly pointed out what went wrong and why we were getting 5xx errors 🥳🥳🥳 &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As we can see under the Root Cause Summary, an IAM configuration issue was the cause:&lt;/p&gt;

&lt;p&gt;ANALYSIS: This failure pattern represents an IAM configuration issue rather than a service degradation, as evidenced by the specific KMS permission errors and the NEW occurrence pattern indicating a recent permission change affecting the eventap staging service components.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxef2h5slenp9q6n25g8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxef2h5slenp9q6n25g8.png" alt="aiopsrca" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3zby2mefosswpyxmrzq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3zby2mefosswpyxmrzq.png" alt="aiopsrca" width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We had the root cause in 15 minutes. This is a huge advantage for anyone who runs production systems and needs to keep them available.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Going one step further, instead of checking metrics and starting an investigation manually each time, we can put a CloudWatch alarm in place so that an investigation starts automatically as soon as a resource metric enters the ALARM state. That is exactly what I did.&lt;/li&gt;
&lt;/ul&gt;
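&lt;p&gt;The alarm behind that setup can also be defined in code. The parameters below are a hedged sketch for a CloudFront 5xx alarm; the distribution ID and threshold are placeholders, and in my setup the "start investigation" alarm action itself was attached from the CloudWatch console:&lt;/p&gt;

```python
# Sketch of a CloudFront 5xx alarm that could drive an automatic
# investigation. Distribution ID and threshold are placeholders; the
# "start investigation" action is attached via the CloudWatch console.
alarm_params = {
    "AlarmName": "cloudfront-5xx-spike",
    "Namespace": "AWS/CloudFront",
    "MetricName": "5xxErrorRate",
    "Dimensions": [
        {"Name": "DistributionId", "Value": "E1EXAMPLE"},  # placeholder
        {"Name": "Region", "Value": "Global"},  # CloudFront metrics are global
    ],
    "Statistic": "Average",
    "Period": 60,
    "EvaluationPeriods": 3,
    "Threshold": 5.0,  # percent of requests; tune to your traffic
    "ComparisonOperator": "GreaterThanThreshold",
    "TreatMissingData": "notBreaching",
}

# A boto3 CloudWatch client created elsewhere would apply this with:
#   cloudwatch.put_metric_alarm(**alarm_params)
assert alarm_params["MetricName"] == "5xxErrorRate"
```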

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrf9rlg7rqf6i0v0i83e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdrf9rlg7rqf6i0v0i83e.png" alt="aiops" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobsot5prjtyo0wu80iv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobsot5prjtyo0wu80iv4.png" alt="aiopsaction" width="800" height="215"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;I hope this blog gives you a good idea of how you can get started with CloudWatch Investigations. There may be moments where you don’t fully agree with the AI’s suggestions — and that’s perfectly fine. You’re always in control. You can accept what makes sense, discard what doesn’t, and guide the investigation in the right direction.&lt;/p&gt;

&lt;p&gt;The beauty is that you can start small with zero setup, and then gradually level up by adding richer telemetry, cross-account visibility, and automation runbooks. Over time, this leads to fewer guesswork-driven fixes, faster MTTR, and much calmer incident calls — even at 3 AM.&lt;/p&gt;

&lt;p&gt;Instead of panic-driven troubleshooting and endless tab-hopping across metrics, logs, and dashboards, you get context first: what changed, what’s related, and what’s most likely broken.&lt;/p&gt;

&lt;p&gt;Thanks for reading, and happy troubleshooting 🚀&lt;/p&gt;

</description>
      <category>aws</category>
      <category>genai</category>
      <category>cloud</category>
      <category>sre</category>
    </item>
    <item>
      <title>How to Use Amazon SNS Data Protection Policies to Prevent Sensitive Data Leakage</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Sat, 29 Nov 2025 19:05:06 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-to-use-amazon-sns-data-protection-policies-to-prevent-sensitive-data-leakage-1dgn</link>
      <guid>https://forem.com/aws-builders/how-to-use-amazon-sns-data-protection-policies-to-prevent-sensitive-data-leakage-1dgn</guid>
      <description>&lt;p&gt;When we build things using &lt;strong&gt;event-driven architecture&lt;/strong&gt;, we almost always run into &lt;strong&gt;Amazon SNS&lt;/strong&gt; and for good reason. It’s simple, scalable, and makes it incredibly easy to fan out messages to multiple subscribers.&lt;/p&gt;

&lt;p&gt;Imagine a fintech or healthcare application that sends transaction alerts or patient updates via SMS or email. These messages may accidentally include sensitive information such as account details, names, or dates of birth. Encrypting SNS topics and applying strict access controls helps ensure compliance, but it’s equally important to prevent sensitive data from leaking into messages themselves.&lt;br&gt;
That’s where &lt;strong&gt;SNS Message Data Protection&lt;/strong&gt; comes in — because your architecture isn’t just about fast delivery, it’s also about secure delivery.&lt;/p&gt;

&lt;p&gt;In this blog, I will walk through how we can protect personal and sensitive information while sending notifications through SNS using Data Protection Policies.&lt;/p&gt;
&lt;h2&gt;
  
  
  Data Protection in SNS
&lt;/h2&gt;

&lt;p&gt;Amazon SNS uses &lt;strong&gt;data protection policies&lt;/strong&gt; to identify and manage sensitive data (like PII and PHI) in message payloads, using &lt;strong&gt;predefined or custom data identifiers powered by machine learning and pattern matching&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Each policy allows you to define operations based on detection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Audit – Log findings without interrupting delivery&lt;/li&gt;
&lt;li&gt;De-identify – Mask or redact sensitive data&lt;/li&gt;
&lt;li&gt;Deny – Block messages containing sensitive data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A policy is defined in JSON format and includes elements like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DataDirection (Inbound/Outbound)&lt;/li&gt;
&lt;li&gt;Principal (IAM identity publishing/subscribing)&lt;/li&gt;
&lt;li&gt;DataIdentifier (e.g., name, phone number)&lt;/li&gt;
&lt;li&gt;Operation (Audit, De-identify, Deny)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only one data protection policy per SNS topic is allowed, but it can have multiple statements. This helps organizations enforce privacy controls and reduce compliance risks.&lt;/p&gt;
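&lt;p&gt;Because a topic holds exactly one policy, it is applied (or replaced) in a single call. Below is a sketch using the SNS &lt;code&gt;PutDataProtectionPolicy&lt;/code&gt; API via boto3; the topic ARN is a placeholder and the client is assumed to be created elsewhere:&lt;/p&gt;

```python
import json

# Minimal audit-only policy document (same shape as the full examples in
# this post); statement contents trimmed for brevity.
audit_policy = {
    "Name": "sns-audit-policy",
    "Description": "Audit sensitive data without blocking delivery",
    "Version": "2021-06-01",
    "Statement": [
        {
            "Sid": "AuditSensitiveData",
            "DataDirection": "Inbound",
            "Principal": ["*"],
            "DataIdentifier": [
                "arn:aws:dataprotection::aws:data-identifier/EmailAddress"
            ],
            "Operation": {
                "Audit": {
                    "SampleRate": "99",
                    "FindingsDestination": {},  # add CloudWatchLogs here to log findings
                }
            },
        }
    ],
}

def apply_policy(sns_client, topic_arn: str) -> None:
    """Attach (or replace) the topic's single data protection policy."""
    sns_client.put_data_protection_policy(
        ResourceArn=topic_arn,
        DataProtectionPolicy=json.dumps(audit_policy),
    )

# The policy is a plain JSON document; validate it locally first.
assert audit_policy["Statement"][0]["DataDirection"] == "Inbound"
```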
&lt;h2&gt;
  
  
  Why should I use message data protection?
&lt;/h2&gt;

&lt;p&gt;Introducing SNS Data Protection into your governance, risk, and compliance programs helps you automatically detect, prevent, and control data leakage. It safeguards regulated data (PII/PHI) and reduces the overhead of building your own detection or masking pipeline.&lt;/p&gt;
&lt;h2&gt;
  
  
  Defining SNS topics with policy
&lt;/h2&gt;

&lt;p&gt;We define three SNS topics — each configured with its own data protection policy: Audit, De-identify, and Deny. These sample policies demonstrate how SNS handles sensitive data under different rules.&lt;/p&gt;

&lt;p&gt;Below are the three sample data protection policies used in this blog—Deny to block sensitive data, Audit to log sensitive content without stopping delivery, and De-identify to automatically mask regulated fields before the message is published.&lt;/p&gt;
&lt;h3&gt;
  
  
  Audit - Data protection policy
&lt;/h3&gt;

&lt;p&gt;This policy detects email, date of birth, and credit card numbers.&lt;br&gt;
If found, the finding is logged in CloudWatch but delivery continues.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One important thing to note, CloudWatch log group name in this case must have prefix &lt;code&gt;/aws/vendedlogs/&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;


&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Description": "Audit sensitive data without blocking delivery",
  "Version": "2021-06-01",
  "Statement": [
    {
      "DataDirection": "Inbound",
      "DataIdentifier": [
        "arn:aws:dataprotection::aws:data-identifier/EmailAddress",
        "arn:aws:dataprotection::aws:data-identifier/DateOfBirth",
        "arn:aws:dataprotection::aws:data-identifier/CreditCardNumber"
      ],
      "Operation": {
        "Audit": {
          "FindingsDestination": {
            "CloudWatchLogs": {
              "LogGroup": "/aws/vendedlogs/sns-audit/"
            }
          },
          "SampleRate": "99"
        }
      },
      "Principal": [
        "*"
      ],
      "Sid": "AuditSensitiveData"
    }
  ],
  "Name": "sns-audit-policy"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  De-Identify : Data protection policy
&lt;/h3&gt;

&lt;p&gt;This policy masks sensitive fields using &lt;code&gt;#&lt;/code&gt; characters. Subscribers never see actual sensitive data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Description": "Mask or redact sensitive data",
  "Version": "2021-06-01",
  "Statement": [
    {
      "DataDirection": "Inbound",
      "DataIdentifier": [
        "arn:aws:dataprotection::aws:data-identifier/EmailAddress",
        "arn:aws:dataprotection::aws:data-identifier/DateOfBirth",
        "arn:aws:dataprotection::aws:data-identifier/CreditCardNumber"
      ],
      "Operation": {
        "Deidentify": {
          "MaskConfig": {
            "MaskWithCharacter": "#"
          }
        }
      },
      "Principal": [
        "*"
      ],
      "Sid": "DeidentifySensitiveData"
    }
  ],
  "Name": "sns-deidentify-policy"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
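&lt;p&gt;To make the masking concrete, here is a local sketch of the behaviour (a simplification: the real service locates sensitive values via managed data identifiers, not a hard-coded key list):&lt;/p&gt;

```python
import json

def mask_fields(message: dict, sensitive_keys, mask_char: str = "#") -> dict:
    """Replace each sensitive field's value with mask characters,
    mimicking the De-identify MaskConfig behaviour."""
    masked = dict(message)
    for key in sensitive_keys:
        if key in masked:
            masked[key] = mask_char * len(str(masked[key]))
    return masked

original = {"patientId": "PAT123456", "dob": "12-01-2012", "diagnosis": "Flu"}
print(json.dumps(mask_fields(original, ["dob"])))
# {"patientId": "PAT123456", "dob": "##########", "diagnosis": "Flu"}
```

&lt;p&gt;Subscribers of the De-identify topic receive the message in this masked form.&lt;/p&gt;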



&lt;h3&gt;
  
  
  Deny : Data protection policy
&lt;/h3&gt;

&lt;p&gt;This policy blocks the publish request entirely if sensitive data is detected.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Description": "Block messages containing sensitive data",
  "Version": "2021-06-01",
  "Statement": [
    {
      "DataDirection": "Inbound",
      "DataIdentifier": [
        "arn:aws:dataprotection::aws:data-identifier/EmailAddress",
        "arn:aws:dataprotection::aws:data-identifier/DateOfBirth",
        "arn:aws:dataprotection::aws:data-identifier/CreditCardNumber"
      ],
      "Operation": {
        "Deny": {}
      },
      "Principal": [
        "*"
      ],
      "Sid": "DenySensitiveData"
    }
  ],
  "Name": "sns-deny-policy"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Demo
&lt;/h3&gt;

&lt;p&gt;I have created a simple Lambda function to test these topics:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import os
import json
import logging

sns = boto3.client('sns')
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    message = {
        "patientId": "PAT123456",
        "name": "John Doe",
        "dob": "12-01-2012",
        "diagnosis": "Flu"
    }

    topics = ["AUDIT_TOPIC_ARN", "DEIDENTIFY_TOPIC_ARN", "DENY_TOPIC_ARN"]
    results = {}

    for topic_env in topics:
        topic_arn = os.environ.get(topic_env)

        try:
            response = sns.publish(
                TopicArn=topic_arn,
                Message=json.dumps(message)
            )
            results[topic_env] = {
                "status": "success",
                "messageId": response.get("MessageId")
            }
            logger.info(f"Published to {topic_env}: {response.get('MessageId')}")

        except sns.exceptions.InvalidParameterException as e:
            # Common case: DENY_TOPIC_ARN rejects sensitive fields
            logger.error(f"[{topic_env}] Sensitive data detected or invalid parameter: {str(e)}")
            results[topic_env] = {"status": "failed", "error": "Sensitive data not allowed"}

        except sns.exceptions.AuthorizationErrorException as e:
            logger.error(f"[{topic_env}] Access denied: {str(e)}")
            results[topic_env] = {"status": "failed", "error": "Access denied"}

        except Exception as e:
            # Catch any unexpected exception
            logger.error(f"[{topic_env}] Unexpected error: {str(e)}")
            results[topic_env] = {"status": "failed", "error": str(e)}

    return {
        "status": "completed",
        "results": results
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I'm sending the JSON below, which includes a date of birth as personal information.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  {
        "patientId": "PAT123456",
        "name": "John Doe",
        "dob": "12-01-2012",
        "diagnosis": "Flu"
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On testing, I get the results below.&lt;/p&gt;

&lt;p&gt;For Audit, the message is still delivered, and the finding is logged in the CloudWatch log group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hsjyms7ulub7qw6lahy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hsjyms7ulub7qw6lahy.png" alt="sns-audit" width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlw1txtrfze6540y7aj0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwlw1txtrfze6540y7aj0.png" alt="sns-audit-cw" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For De-identify, we receive the message with the date of birth masked.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuve8o2mnwt4q1uzijya.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpuve8o2mnwt4q1uzijya.png" alt="sns-de-identify" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And for Deny, we get an Access Denied error.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;**_&amp;gt; Access denied: An error occurred (AuthorizationError) when calling the Publish operation: One or more data identifiers were found_**

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is a summary of the three data protection policy types:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Policy Type&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Purpose&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;What Happens When Sensitive Data Is Detected?&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Impact on Message Delivery&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Ideal Use Case&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Audit&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Monitor sensitive data exposure&lt;/td&gt;
&lt;td&gt;Logs findings to CloudWatch using &lt;code&gt;/aws/vendedlogs/&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;✔️ Delivered&lt;/td&gt;
&lt;td&gt;Compliance monitoring, security insights, debugging sensitive data flow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;De-Identify&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Mask or redact sensitive data before delivery&lt;/td&gt;
&lt;td&gt;Sensitive fields are replaced with &lt;code&gt;#&lt;/code&gt; or a chosen mask&lt;/td&gt;
&lt;td&gt;✔️ Delivered (masked)&lt;/td&gt;
&lt;td&gt;Sending events to analytics systems, external subscribers, or downstream apps that shouldn't see PII&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Deny&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Prevent sensitive data from being published&lt;/td&gt;
&lt;td&gt;Publish request is blocked with an &lt;code&gt;AuthorizationError&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;❌ Not delivered&lt;/td&gt;
&lt;td&gt;Strict compliance environments (PCI/HIPAA), preventing accidental PII leakage&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I hope this walkthrough gives you a clear understanding of how SNS Data Protection policies work and where they can be applied across real-world scenarios. By using these capabilities, teams can significantly reduce compliance risks, strengthen data governance, and build more secure, trustworthy systems—without sacrificing the speed or scalability of their event-driven architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Stay secure. Stay responsible. Keep building.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>data</category>
      <category>privacy</category>
    </item>
    <item>
      <title>How to Automate IAM Best Practices in CI/CD with IAM Access Analyzer</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Sat, 19 Apr 2025 05:15:09 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-to-automate-iam-best-practices-in-cicd-with-iam-access-analyzer-1keo</link>
      <guid>https://forem.com/aws-builders/how-to-automate-iam-best-practices-in-cicd-with-iam-access-analyzer-1keo</guid>
      <description>&lt;p&gt;Managing IAM (Identity and Access Management) policies securely is one of the most important parts of working with AWS. Developers may accidentally create overly-permissive policies that grant more access than necessary — for example, allowing &lt;code&gt;iam:PassRole&lt;/code&gt; to all roles, or opening up &lt;code&gt;sts:AssumeRole&lt;/code&gt; without restriction. Without proper checks in place, these risky permissions can silently make their way into your production environment.&lt;/p&gt;

&lt;p&gt;In organizations with multiple accounts, the impact of such mistakes can multiply. That’s why having strong guardrails like &lt;strong&gt;IAM Access Analyzer&lt;/strong&gt; becomes critical — ensuring that only safe and intentional access is allowed.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;🛡️ Before we go further, it's important to understand that AWS Service Control Policies (SCPs) and Resource Control Policies (RCPs) are the first line of defence in any AWS multi-account setup. They define the maximum permissions any user or role can have, regardless of their IAM policies. This blog will not cover SCPs and RCPs; instead, we’ll focus on validating IAM policies during development to ensure we follow least privilege and catch access issues early.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this blog, we’ll explore how to bring IAM policy validation into a CI/CD pipeline using AWS IAM Access Analyzer. This is an example of shift-left security — catching misconfigurations as early as possible, before anything gets merged or deployed.&lt;/p&gt;

&lt;p&gt;We'll walk through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up a GitHub Actions workflow that validates IAM policies.&lt;/li&gt;
&lt;li&gt;Using IAM Access Analyzer checks.&lt;/li&gt;
&lt;li&gt;Building CloudFormation templates that intentionally pass and fail to understand how policy validation works in practice.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  IAM Access Analyzer
&lt;/h2&gt;

&lt;p&gt;IAM Access Analyzer is a security feature in AWS that helps ensure your IAM policies follow best practices and don’t grant unintended access. It supports several capabilities like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validating IAM policies against AWS best practices or security standards.&lt;/li&gt;
&lt;li&gt;Identifying unused permissions&lt;/li&gt;
&lt;li&gt;Analyzing external sharing of AWS resources&lt;/li&gt;
&lt;li&gt;Detecting public access to resources.&lt;/li&gt;
&lt;li&gt;Generating policies based on CloudTrail access logs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this blog, we’ll focus specifically on custom policy checks, which help validate IAM policies against your organization’s security standards — like least privilege or restricted actions. IAM Access Analyzer provides custom policy check APIs to validate IAM policies:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- CheckNoNewAccess&lt;/strong&gt; – Detects if a policy introduces new permissions compared to a baseline.&lt;br&gt;
&lt;strong&gt;- CheckAccessNotGranted&lt;/strong&gt; – Ensures specific actions or access are not allowed.&lt;br&gt;
&lt;strong&gt;- CheckNoPublicAccess&lt;/strong&gt; – Identifies if a resource policy allows public access.&lt;/p&gt;
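&lt;p&gt;These checks are plain APIs, so they can also be scripted outside a pipeline. Below is a sketch of a &lt;code&gt;CheckNoNewAccess&lt;/code&gt; call; the analyzer client is passed in as a parameter so the call shape is visible (and testable with a stub) without AWS credentials:&lt;/p&gt;

```python
import json

def check_no_new_access(analyzer_client, reference_policy, candidate_policy):
    """Return 'PASS' or 'FAIL' depending on whether candidate_policy grants
    access beyond reference_policy.

    analyzer_client should behave like boto3.client('accessanalyzer');
    both policies are plain IAM policy documents as Python dicts.
    """
    response = analyzer_client.check_no_new_access(
        existingPolicyDocument=json.dumps(reference_policy),
        newPolicyDocument=json.dumps(candidate_policy),
        policyType="IDENTITY_POLICY",  # or RESOURCE_POLICY for trust policies
    )
    return response["result"]
```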

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;👉&lt;/strong&gt; IAM Access Analyzer custom policy checks are a paid feature. As of now, AWS charges $0.0020 per API call for these checks. For example, if you make 10,000 calls each month to the IAM Access Analyzer APIs to run custom policy checks across 5 accounts signed up for consolidated billing with AWS Organizations, the cost will be $0.0020 × 10,000 API calls = $20 per month.&lt;/p&gt;
&lt;/blockquote&gt;
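&lt;p&gt;The pricing arithmetic above is simply calls × unit price; a tiny helper makes it explicit (the unit price is the figure quoted in the note; verify current pricing before relying on it):&lt;/p&gt;

```python
PRICE_PER_CHECK_USD = 0.0020  # quoted price per custom policy check API call

def monthly_cost(api_calls_per_month: int) -> float:
    """Estimated monthly cost of custom policy checks, in USD."""
    return round(api_calls_per_month * PRICE_PER_CHECK_USD, 2)

print(monthly_cost(10_000))  # 20.0
```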

&lt;p&gt;Now let's start playing around with IAM Access Analyzer. The &lt;a href="https://github.com/SakivV/aws-iam-policy-validation-automation" rel="noopener noreferrer"&gt;GitHub code&lt;/a&gt; for this blog is available.&lt;/p&gt;
&lt;h2&gt;
  
  
  Pre-Requisite
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Account&lt;/li&gt;
&lt;li&gt;Github Account&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  CI-CD Flow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsipbvy75kayo8scz5ve5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsipbvy75kayo8scz5ve5.png" alt="github_action_workflow" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The developer pushes changes to the repo.&lt;/li&gt;
&lt;li&gt;The GitHub Action performs IAM Access Analyzer validation.&lt;/li&gt;
&lt;li&gt;If the policy created by the developer violates the reference policy, the workflow fails.&lt;/li&gt;
&lt;li&gt;If the policy complies with the security standard, the workflow proceeds with further checks and deployment.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Hands-On
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Step 1 - Configure AWS and Github
&lt;/h3&gt;

&lt;p&gt;To configure the GitHub Action, we first need to integrate the AWS account with GitHub; essentially, we need to establish trust. To do that, follow the steps below:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you already have GitHub configured with AWS, you can skip this step.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;Login into AWS account, go to IAM → Identity providers →Add provider.

&lt;ul&gt;
&lt;li&gt;For the provider URL: Use &lt;a href="https://token.actions.githubusercontent.com" rel="noopener noreferrer"&gt;https://token.actions.githubusercontent.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;For the "Audience": Use sts.amazonaws.com&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Once done, create a role for GitHub and set its trust policy as shown below, replacing the values to match your account.

&lt;ul&gt;
&lt;li&gt;The principal should be the ARN of the identity provider you configured above.&lt;/li&gt;
&lt;li&gt;For the condition, use your GitHub org and repository name.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::012345678910:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:&amp;lt;githuborg&amp;gt;/&amp;lt;github-repo&amp;gt;:*"
        }
      }
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For this role, I have assigned the following permissions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AmazonS3ReadOnlyAccess&lt;/li&gt;
&lt;li&gt;IAMAccessAnalyzerReadOnlyAccess&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Depending on your scenario, you can add more permissions, such as creating IAM roles and policies.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 2 - Configure Github Secrets
&lt;/h3&gt;

&lt;p&gt;While running the GitHub Action, we will use a role that gets assumed to perform the necessary actions. Instead of hardcoding the role ARN, we will configure it as a secret. To do that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to your repo → Settings → Secrets and variables → Actions → New repository secret. Enter a name for the secret and paste the role ARN.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfl7bxmzy2xskxzgp87l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfl7bxmzy2xskxzgp87l.png" alt="github_secrets" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will be creating two secrets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IAM Role&lt;/strong&gt; : This will be used by the GitHub Action.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reference policy Object S3 URL&lt;/strong&gt; : This is the reference policy we will use for validation. As a good practice, I store this reference policy in a restricted S3 bucket, and the workflow downloads it as part of the run. You can create a dedicated S3 bucket for this or use an existing one; just make sure the GitHub role has permission to access it.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Step 3 - Reference Policy
&lt;/h3&gt;

&lt;p&gt;I will be using the reference policy below. It allows all actions by default but explicitly denies a list of CloudFormation actions (like CreateStack, UpdateStack, DeleteStack, etc.) on stacks with names under &lt;code&gt;platformteam/*&lt;/code&gt;. It’s designed to protect critical infrastructure owned by the platform team from unauthorized changes, even if general CloudFormation access is permitted elsewhere.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": [
                "cloudformation:CancelUpdateStack",
                "cloudformation:ContinueUpdateRollback",
                "cloudformation:CreateChangeSet",
                "cloudformation:CreateStack",
                "cloudformation:DeleteChangeSet",
                "cloudformation:DeleteStack",
                "cloudformation:DescribeChangeSet",
                "cloudformation:DescribeChangeSetHooks",
                "cloudformation:DescribeStackEvents",
                "cloudformation:DescribeStackResource",
                "cloudformation:DescribeStackResourceDrifts",
                "cloudformation:DescribeStackResources",
                "cloudformation:DescribeStacks",
                "cloudformation:DetectStackDrift",
                "cloudformation:DetectStackResourceDrift",
                "cloudformation:ExecuteChangeSet",
                "cloudformation:GetStackPolicy",
                "cloudformation:GetTemplate",
                "cloudformation:GetTemplateSummary",
                "cloudformation:ListChangeSets",
                "cloudformation:ListStackResources",
                "cloudformation:RecordHandlerProgress",
                "cloudformation:RollbackStack",
                "cloudformation:SetStackPolicy",
                "cloudformation:SignalResource",
                "cloudformation:TagResource",
                "cloudformation:UntagResource",
                "cloudformation:UpdateStack",
                "cloudformation:UpdateTerminationProtection"
            ],
            "Resource": "arn:aws:cloudformation:*:*:stack/platformteam/*"
        }
    ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4 - Github Workflow
&lt;/h3&gt;

&lt;p&gt;The GitHub workflow below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs on pull requests and pushes to the main branch&lt;/li&gt;
&lt;li&gt;Assumes AWS IAM role using OIDC authentication&lt;/li&gt;
&lt;li&gt;Downloads reference policies from S3 using GitHub secrets&lt;/li&gt;
&lt;li&gt;Performs two IAM Access Analyzer checks:

&lt;ul&gt;
&lt;li&gt;✅ CloudFormation Access Check – Ensures no new permissions are 
introduced (CHECK_NO_NEW_ACCESS)&lt;/li&gt;
&lt;li&gt;✅ Principal Check – Verifies only approved principals can 
assume the role (CHECK_NO_NEW_ACCESS on trust policy)
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: cfn-policy-validator-workflow
on:
  pull_request:
    types: [opened, review_requested]

  push:
    branches:
      - 'main'

permissions:
  id-token: write
  contents: read
  issues: write

jobs: 
  cfn-iam-policy-validation: 
    name: iam-policy-validation
    runs-on: ubuntu-latest
    permissions: write-all
    steps:
      - name: Checkout code
        id: checkOut
        uses: actions/checkout@v4

      - name: Configure AWS Credentials
        id: configureCreds
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.OIDC_IAM_ROLE }}
          aws-region: us-east-1
          role-session-name: GitHubSessionName

      - name: Fetch reference policy from s3
        id: getReferencePolicy
        run: |
          aws s3 cp ${{ secrets.REFERENCE_IDENTITY_POLICY_CLOUDFORMATION }} ./cloudformation-policy-reference.json
          aws s3 cp ${{ secrets.REFERENCE_IDENTITY_POLICY_PRINCIPAL }} ./allowlist-account-principal.json
        shell: bash

      - name: CloudFormation-Access-Checks
        id: run-aws-check-no-new-access
        uses: aws-actions/cloudformation-aws-iam-policy-validator@v1.0.1
        with:
          policy-check-type: 'CHECK_NO_NEW_ACCESS'
          template-path: './cloudformation-sample/cfn-iam-pass-role-sample.yaml'
          reference-policy: './cloudformation-policy-reference.json'
          reference-policy-type: "IDENTITY"
          region: us-east-1
      - name: New-Principal-Checks
        id: run-aws-check-no-new-access-assumerole
        uses: aws-actions/cloudformation-aws-iam-policy-validator@v1.0.1
        with:
          policy-check-type: 'CHECK_NO_NEW_ACCESS'
          template-path: './cloudformation-sample/cfn-iam-role-principal-sample.yaml'
          reference-policy: './allowlist-account-principal.json'
          reference-policy-type: "RESOURCE"
          region: us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  CloudFormation Template - Failed
&lt;/h3&gt;

&lt;p&gt;Now, the template below grants CloudFormation permissions on all stacks (Resource: '*'), violating the reference policy by potentially allowing changes to &lt;code&gt;platformteam/*&lt;/code&gt; stacks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# lambda-pass-template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LambdaFriendlyRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: LambdaSafeCFRole
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: CFNonPlatformTeamAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - cloudformation:CreateStack
                  - cloudformation:UpdateStack
                  - cloudformation:DeleteStack
                Resource: '*'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After pushing changes to main (pushing directly to main is not best practice, but I'm doing it here only for this blog), we will see the GitHub workflow fail because the policy violates our reference policy.&lt;br&gt;
When a custom policy check fails, IAM Access Analyzer returns the statement ID (Sid) and statement index of the specific policy statement that caused the failure. The index is zero-based (starting from 0).&lt;/p&gt;
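&lt;p&gt;To act on this in automation, the check response can be parsed; this sketch pulls out the offending statements (the field names follow the custom policy check APIs' reason summaries, and the sample values are illustrative):&lt;/p&gt;

```python
def failing_statements(check_response):
    """Extract (statementId, statementIndex) pairs from a FAIL result
    of an IAM Access Analyzer custom policy check."""
    if check_response.get("result") != "FAIL":
        return []
    return [
        (reason.get("statementId"), reason.get("statementIndex"))
        for reason in check_response.get("reasons", [])
    ]

# Illustrative sample modeled on a failing CheckNoNewAccess response.
sample = {
    "result": "FAIL",
    "message": "The modified permissions grant new access.",
    "reasons": [
        {
            "description": "New access in statement with index: 0.",
            "statementId": "CFNonPlatformTeamAccess",
            "statementIndex": 0,
        }
    ],
}
print(failing_statements(sample))  # [('CFNonPlatformTeamAccess', 0)]
```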

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4jncx7wd5a8yhodcsal.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4jncx7wd5a8yhodcsal.png" alt="cfn_check_fail" width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vpp9smk5vf8lpewrlk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4vpp9smk5vf8lpewrlk5.png" alt="cfn_check_fail" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  CloudFormation Template - Pass
&lt;/h3&gt;

&lt;p&gt;With the template below, a Lambda function gets CloudFormation permissions scoped to &lt;code&gt;devteam/*&lt;/code&gt; stacks, staying compliant with the reference policy by avoiding access to the protected &lt;code&gt;platformteam/*&lt;/code&gt; stacks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# lambda-pass-template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  LambdaFriendlyRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: LambdaSafeCFRole
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: CFNonPlatformTeamAccess
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - cloudformation:CreateStack
                  - cloudformation:UpdateStack
                  - cloudformation:DeleteStack
                Resource: arn:aws:cloudformation:us-east-1:*:stack/devteam/*

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tdq6yqot2uzpugjnqvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tdq6yqot2uzpugjnqvg.png" alt="cfn_check_pass" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope this example gave you an idea of the implementation and how it works. If you would like to play with this further, you can find a &lt;a href="https://github.com/aws-samples/iam-access-analyzer-custom-policy-check-samples" rel="noopener noreferrer"&gt;reference repo&lt;/a&gt; here.&lt;/p&gt;




&lt;p&gt;Validating IAM policies early in the development process helps prevent security risks before they reach production. By integrating IAM Access Analyzer with GitHub Actions, you can enforce least privilege, block overly-permissive changes, and shift security left — all without slowing down development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thanks, cloud builder! Now go forth and validate those IAM policies like a pro. 🔐🛠️&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iam</category>
      <category>security</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>How to monitor Unused Amazon EBS Volumes</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Thu, 30 Jan 2025 05:05:17 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-to-monitor-unused-amazon-ebs-volumes-317f</link>
      <guid>https://forem.com/aws-builders/how-to-monitor-unused-amazon-ebs-volumes-317f</guid>
      <description>&lt;p&gt;EBS storage is a fundamental component of cloud storage, often used as the primary storage attached to EC2 instances. Even though an EBS volume is attached to an instance, it has a separate lifecycle. If we don’t monitor EBS storage for unused volumes, cloud costs can escalate significantly.&lt;/p&gt;

&lt;p&gt;Unless the &lt;strong&gt;Delete on Termination&lt;/strong&gt; option is selected during instance creation, terminating an EC2 instance detaches the EBS volume but doesn’t delete it. Especially in development and testing environments, where EC2 instances are frequently launched and terminated, this often results in a large number of unused EBS volumes.&lt;/p&gt;

&lt;p&gt;These unused EBS volumes continue to accrue charges in your AWS account, regardless of whether they are being used.&lt;/p&gt;

&lt;h4&gt;
  
  
  So why Delete Unused EBS Volumes:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EBS volumes incur charges based on storage usage. Removing unused volumes reduces unnecessary costs.&lt;/li&gt;
&lt;li&gt;Regularly cleaning up unused resources simplifies account management and reduces clutter.&lt;/li&gt;
&lt;li&gt;Unused volumes may contain sensitive or outdated data. Deleting these volumes can prevent accidental exposure.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this blog, we will explore how to configure an AWS Config rule and set up an automatic remediation action using AWS Systems Manager Automation to delete unused EBS volumes. The main goal of this blog is to raise awareness of unused EBS volumes in an AWS environment; deleting the EBS volume is an optional step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; It is not advisable to use this solution directly in production. Always test the solution first, and decide on a data strategy with your team/business, such as a retention period.&lt;/p&gt;

&lt;h3&gt;
  
  
  Solution Overview
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb7adhs0a2mj0kcjwmhr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmb7adhs0a2mj0kcjwmhr.png" alt="aws_ebs_monitor" width="791" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Detects unused EBS volumes using the AWS-managed Config Rule.&lt;/li&gt;
&lt;li&gt;Automatically creates a snapshot of the volume. I recommend this step if you are going to delete unused volumes.&lt;/li&gt;
&lt;li&gt;Deletes the unused volume.&lt;/li&gt;
&lt;/ol&gt;
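&lt;p&gt;Conceptually, the detection step boils down to finding volumes in the &lt;code&gt;available&lt;/code&gt; state, which is exactly what the Config rule automates for us. A minimal sketch of that logic, assuming input shaped like the EC2 DescribeVolumes response:&lt;/p&gt;

```python
def find_unused_volumes(volumes):
    """Return the IDs of EBS volumes not attached to any instance.

    `volumes` follows the shape of the EC2 DescribeVolumes response:
    a volume whose State is "available" has no attachments.
    """
    return [v["VolumeId"] for v in volumes if v.get("State") == "available"]

# Example: one attached volume, one orphaned volume
volumes = [
    {"VolumeId": "vol-attached", "State": "in-use"},
    {"VolumeId": "vol-orphaned", "State": "available"},
]
print(find_unused_volumes(volumes))  # ['vol-orphaned']
```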

&lt;h3&gt;
  
  
  Implementation Steps
&lt;/h3&gt;

&lt;p&gt;You can refer to this &lt;a href="https://github.com/SakivV/aws-ebs-volume-monitor" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; repo and deploy the required resources using CloudFormation. &lt;/p&gt;

&lt;p&gt;After cloning the repo, you can deploy the CloudFormation template via the console. It takes two parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Config rule name - String&lt;/li&gt;
&lt;li&gt;IsSnapshotRequired - Boolean.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This CloudFormation template will deploy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An IAM role that will be assumed during detection and remediation.&lt;/li&gt;
&lt;li&gt;An AWS Config rule with a remediation action.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The remediation action uses an AWS-managed automation document, so we don't need to build our own.&lt;/p&gt;

&lt;p&gt;Once you have deployed the CloudFormation template, confirm the rule was created by going to the AWS Config console (AWS Config → Rules).&lt;/p&gt;

&lt;h3&gt;
  
  
  Test the Solution
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To test the solution, I'm creating an EC2 instance with two EBS volumes attached. While creating the instance, make sure you set &lt;strong&gt;Delete on Termination&lt;/strong&gt; to &lt;strong&gt;No&lt;/strong&gt; for each EBS volume. To see this option, click Advanced under the Storage (volumes) section while creating the instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzvxjdcmmxspv7e184si.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flzvxjdcmmxspv7e184si.png" alt="ec2_ebs_option" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the EC2 instance is created, the AWS Config rule is evaluated automatically within a few minutes. You should see that the volumes are in the Compliant state, as they are attached to the EC2 instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx49kz1rqdb21p8zacbs0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx49kz1rqdb21p8zacbs0.png" alt="config_compliant" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now terminate the EC2 instance. Once it is terminated, the EBS volume status changes to Available. Since the volumes are no longer attached to any instance, the Config rule marks them Noncompliant, and the remediation takes a snapshot of each volume and then deletes it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynjy42b8nqx3xy10o0cl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fynjy42b8nqx3xy10o0cl.png" alt="config_noncompliant" width="800" height="95"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can check the status of the remediation by going to Systems Manager -&amp;gt; Automation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma8wmjhmduzoau5ecrls.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fma8wmjhmduzoau5ecrls.jpeg" alt="ssm_automation" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Below you can see that an EBS snapshot has also been created.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwu28068bf7xr7g4uqhbj.png" alt="ebs_snap" width="800" height="117"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;I hope this blog gives you an idea of how to leverage AWS Config and AWS Systems Manager to manage unattached EBS volumes. If you don't want to delete the volumes, that is also possible: just let the rule mark them Noncompliant, fetch the list, and send it to the respective teams. Feel free to modify the solution to suit your needs.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Stay secure and optimised in the Cloud!&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>monitoring</category>
      <category>cloudstorage</category>
    </item>
    <item>
      <title>How to Set Up Cross-Account EventBridge</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Mon, 06 Jan 2025 15:43:27 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-to-set-up-cross-account-eventbridge-5d4c</link>
      <guid>https://forem.com/aws-builders/how-to-set-up-cross-account-eventbridge-5d4c</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Amazon EventBridge is a powerful event bus service that makes it easier to build event-driven architectures. It allows you to connect different AWS services or even external SaaS applications through a simple and scalable setup. &lt;/p&gt;

&lt;p&gt;While EventBridge is incredibly versatile, its ability to target endpoints or consumers is typically restricted to the same AWS account.&lt;br&gt;
&lt;em&gt;&lt;strong&gt;The exception is an event bus in a different account, which can be a valid target.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To achieve this, events must be pushed from the source account's event bus to the destination account's event bus. This cross-account communication is essential for managing critical events and ensuring centralized visibility and control.&lt;/p&gt;

&lt;p&gt;In this blog, we'll demonstrate this with an example: when a security service in AWS is disabled, it generates an event that is forwarded to a central monitoring account event bus, triggering an alarm. This approach helps maintain a robust, scalable, and secure event-driven architecture across AWS accounts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture and Workflow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjx5bmb192ipnhxlugxjp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjx5bmb192ipnhxlugxjp.png" alt="cross-account-event-bridge" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the given architecture:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We have two accounts: a workload account (the source of the event) and a central-monitoring account (the destination).&lt;/li&gt;
&lt;li&gt;If a security service like Security Hub is disabled in the workload account, an event rule on the default event bus is triggered.&lt;/li&gt;
&lt;li&gt;This event rule has a target, and the corresponding permission, to send the event to the custom event bus in the central-monitoring account. You can target the default event bus as well.&lt;/li&gt;
&lt;li&gt;The custom event bus in the central-monitoring account has a similar event rule. This rule triggers a Lambda function, which can enrich the event and send it to SNS.&lt;/li&gt;
&lt;li&gt;If you have a subscription configured on the SNS topic, you will receive a notification.&lt;/li&gt;
&lt;/ol&gt;
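&lt;p&gt;Conceptually, the event rules on both buses do a field-by-field match of each incoming event against a pattern. A minimal sketch of that matching idea (EventBridge's real pattern language is much richer, with prefix, numeric, and nested matching; the field names here are illustrative):&lt;/p&gt;

```python
def matches(event, pattern):
    """Return True when, for every field in `pattern`, the event's value
    for that field is one of the allowed values (top-level matching only,
    not the full EventBridge pattern language)."""
    return all(event.get(field) in allowed for field, allowed in pattern.items())

rule_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["AWS API Call via CloudTrail"],
}
event = {"source": "aws.securityhub", "detail-type": "AWS API Call via CloudTrail"}
print(matches(event, rule_pattern))  # True
```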

&lt;h2&gt;
  
  
  Deployment
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;To create the resources via CloudFormation, you can refer to this &lt;a href="https://github.com/SakivV/aws-cross-account-event-bridge.git" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As part of this stack, the below resources get created.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Central Account - Receiver&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A custom event bus with a resource policy that allows the workload (source) account to put events.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowSourceAccountPutEvents",
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::999999999999:root"
    },
    "Action": "events:PutEvents",
    "Resource": "arn:aws:events:us-east-1:111111111111:event-bus/CentralMonitoringBus"
  }]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; In the principal here, I have specified the root of the workload account, which means any entity in that account can be allowed to publish to the bus. My advice would be to scope this down to a specific role.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An event rule that matches events related to Security Hub. For example, a pattern along these lines would match the &lt;code&gt;DisableSecurityHub&lt;/code&gt; API call recorded via CloudTrail:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "source": ["aws.securityhub"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["securityhub.amazonaws.com"],
    "eventName": ["DisableSecurityHub"]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;A Lambda function that is the target of the above rule. It extracts details from the event and sends them to SNS.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Workload Account - Source&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;An event rule on the default event bus. It is triggered when Security Hub is disabled, and its target is the custom event bus in the central account.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the workload account, I disabled Security Hub, which triggered the event rule in that account.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbip9dgjxktur02z8sf8n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbip9dgjxktur02z8sf8n.png" alt="event_rule" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The event was then sent to the central account, where the event rule triggered the Lambda function, which extracts details from the event such as the account ID and the service being disabled. These details are sent to SNS. For this demo I created an email subscription on the SNS topic, so I receive the notification by email. You can set up whichever communication channel suits your needs.&lt;/li&gt;
&lt;/ol&gt;
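&lt;p&gt;The Lambda function's core job is pulling a few fields out of the event before publishing to SNS. A minimal sketch of that extraction, assuming a CloudTrail-style EventBridge event (the actual function in the repo may extract different fields):&lt;/p&gt;

```python
def extract_event_details(event):
    """Pull the fields worth alerting on out of a CloudTrail-style
    EventBridge event: the account, region, and the API call that fired."""
    return {
        "account": event.get("account"),
        "region": event.get("region"),
        "event_name": event.get("detail", {}).get("eventName"),
    }

event = {
    "account": "111111111111",
    "region": "us-east-1",
    "detail": {"eventName": "DisableSecurityHub"},
}
print(extract_event_details(event))
```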

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lt610770fqw0odg9mc2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lt610770fqw0odg9mc2.png" alt="event_rule" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhi3ttfbh9vktdr3rafx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhi3ttfbh9vktdr3rafx.png" alt="lambda_trigger" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0i72n5g0azd8t9sdkqz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0i72n5g0azd8t9sdkqz1.png" alt="email" width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I hope this blog gives you an idea of how to set up cross-account communication between AWS accounts via EventBridge. EventBridge is one of the most important services in AWS, and we can use it effectively to build scalable applications.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eventdriven</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Guide to AWS Certifications: Choosing the Right Path for Your Role</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Sun, 29 Dec 2024 17:13:31 +0000</pubDate>
      <link>https://forem.com/aws-builders/guide-to-aws-certifications-choosing-the-right-path-for-your-role-3b3h</link>
      <guid>https://forem.com/aws-builders/guide-to-aws-certifications-choosing-the-right-path-for-your-role-3b3h</guid>
      <description>&lt;p&gt;As cloud computing continues to grow, AWS certification is one of the important step for IT/Non-IT professionals who want to advance their careers. AWS provides a range of certifications designed for various roles.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll explore the AWS certification paths for several key positions, helping you decide which certifications to pursue based on your career goals.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If you are from a non-IT background, for example working in sales or marketing, it is always good to start with the AWS Certified Cloud Practitioner certification, which validates foundational knowledge of the AWS Cloud and its terminology.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you are from an IT background with 1–3 years of IT/STEM experience, I would suggest skipping AWS Certified Cloud Practitioner and starting with the Solutions Architect Associate or Developer Associate certification. In the diagrams below, I have denoted Cloud Practitioner as optional with a dotted border.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also, if you would like to understand the ecosystem of AI services in AWS, the AWS Certified AI Practitioner is a good way to start your AI journey. &lt;/p&gt;

&lt;p&gt;Below I have listed each role and its corresponding certification path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solutions Architect
&lt;/h2&gt;

&lt;p&gt;As a solutions architect, you design, develop, and manage cloud infrastructure and resources, and collaborate with DevOps teams to migrate applications to the cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3xlf1pwcuyogjndhv07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3xlf1pwcuyogjndhv07.png" alt="aws_solution_Architect" width="800" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Application Architect
&lt;/h2&gt;

&lt;p&gt;As an application architect, you make cloud-native applications scalable, reliable, and manageable across the entire enterprise.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uljev12gvgo61crjvso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uljev12gvgo61crjvso.png" alt="aws_app_Architect" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Data Engineer
&lt;/h2&gt;

&lt;p&gt;As a cloud data engineer, you use cloud-native services to automate the collection and processing of structured and semi-structured data, and monitor the performance of data pipelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvthst2tqz12grbpsfny8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvthst2tqz12grbpsfny8.png" alt="Cloud_Data_Engineer" width="800" height="673"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud-Native-Software Development Engineer
&lt;/h2&gt;

&lt;p&gt;As a software development engineer, you develop, build, and maintain software across various platforms and devices. You need to understand the serverless development options and which cloud-native services can be used effectively in your development and release process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tnhpd2zvc94jfy1nqry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tnhpd2zvc94jfy1nqry.png" alt="Development_Engineer" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Systems Administrator
&lt;/h2&gt;

&lt;p&gt;As a systems administrator, you install, upgrade, and maintain computer components and software on cloud VMs, and integrate automation processes. You should understand the various cloud options for automating patching and ensuring systems remain compliant. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flafkrp52m0z4vsvkmcm9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flafkrp52m0z4vsvkmcm9.png" alt="Systems_Administrator" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud DevSecOps Engineer
&lt;/h2&gt;

&lt;p&gt;As a DevSecOps engineer, you design, deploy, and operate large-scale global hybrid cloud environments, and advocate for end-to-end automated CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpsn9ceap3lu89qm60lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnpsn9ceap3lu89qm60lw.png" alt="Cloud_DevSecOps" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud engineer
&lt;/h2&gt;

&lt;p&gt;As a cloud engineer, you implement and operate an organization’s networked computing infrastructure, and set up security systems to ensure data safety.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexh8jvvbmisyo60fjkno.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexh8jvvbmisyo60fjkno.png" alt="Cloud_Engineer" width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Security Architect
&lt;/h2&gt;

&lt;p&gt;As a cloud security architect, you design and implement enterprise cloud solutions and apply governance to identify, communicate, and minimize business and technical risks. Like the cloud engineer, you should be well versed in AWS security services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2e6h8i577jdxcdsfnjp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2e6h8i577jdxcdsfnjp.png" alt="Cloud_Security_Architect" width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Network Engineer
&lt;/h2&gt;

&lt;p&gt;Design and implement computer and information networks, including local area networks (LAN), wide area networks (WAN), intranets, and extranets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz4pcamwp6436u46ofdp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbz4pcamwp6436u46ofdp.png" alt="Network_Engineer" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  MLOps Engineer
&lt;/h2&gt;

&lt;p&gt;Build and maintain AI and ML platforms and infrastructure, ensuring scalability, reliability, and efficiency across diverse environments to support data scientists and engineers. Design, implement, and operationally support AI/ML model activity and deployment pipelines, including continuous integration/continuous deployment (CI/CD) workflows, monitoring, and performance optimization to streamline production-grade machine learning solutions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4acvoo2isflqqrfnc6j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4acvoo2isflqqrfnc6j.png" alt="MLOps_Engineer" width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Machine Learning Engineer
&lt;/h2&gt;

&lt;p&gt;As an ML engineer, you design machine learning systems, models, and frameworks, and build artificial intelligence (AI) systems to automate predictive models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhb1ruphf3npnathvaq2c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhb1ruphf3npnathvaq2c.png" alt="Machine_Learning_Engineer" width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data scientist
&lt;/h2&gt;

&lt;p&gt;Develop and implement AI/ML models to solve complex business problems. Train and fine-tune models using large datasets to optimize accuracy and efficiency, ensuring alignment with business objectives. Evaluate model performance through rigorous testing and validation, and deploy models into production environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxc1rm9seto8dvt8z38i0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxc1rm9seto8dvt8z38i0.png" alt="Data_scientist" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope this blog guides you in deciding which certification path to choose for your desired role.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
    </item>
    <item>
      <title>AWS EBS Encryption Simplified : Protecting Your Cloud Data Effectively</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Thu, 12 Dec 2024 20:31:05 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-ebs-encryption-simplified-protecting-your-cloud-data-effectively-3pn7</link>
      <guid>https://forem.com/aws-builders/aws-ebs-encryption-simplified-protecting-your-cloud-data-effectively-3pn7</guid>
      <description>&lt;p&gt;When it comes to data storage on the AWS Cloud, AWS offers a variety of services tailored to meet different needs. Two of the most widely used are Amazon S3 (for object storage) and Amazon Elastic Block Store (EBS) (for block storage). If you need a block device for mounting on instances, with fast data access and long-term persistence, EBS is the go-to choice. &lt;/p&gt;

&lt;p&gt;Amazon EBS is tightly integrated with services like EC2 and RDS, making it a reliable and versatile option for many workloads. &lt;/p&gt;

&lt;p&gt;But here’s the question we all need to ask ourselves: What Should We Care About When Storing Data on AWS? &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The simple answer: Security.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AWS follows a shared responsibility model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS’s Responsibility: "Security of the Cloud"&lt;/li&gt;
&lt;li&gt;Customer’s Responsibility: "Security in the Cloud"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means that any data we store on AWS services, including EBS, is our responsibility to protect. If an attacker gains access to your AWS environment, unencrypted data can be an easy target.&lt;/p&gt;

&lt;p&gt;So, how do we secure data stored on EBS? &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;EBS Encryption.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Encrypting your EBS volumes ensures that your data is protected at rest. Encryption also secures all backups created from the volume and snapshots copied from it.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll explore two key methods for encrypting EBS volumes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Default Encryption: Encrypt new volumes automatically during creation.&lt;/li&gt;
&lt;li&gt;Encrypt Existing Non-Encrypted Volumes: Add encryption to volumes that were initially created without it.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Enable Default encryption
&lt;/h2&gt;

&lt;p&gt;By default, this setting is disabled when an account is created. We can enable it easily from the EC2 Dashboard.&lt;/p&gt;

&lt;p&gt;Go to EC2 Dashboard → Under Account Attribute — Data protection and security → Manage&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0k5h7j43s7e51tkz9fy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0k5h7j43s7e51tkz9fy.png" alt="default_ebs_encryption" width="800" height="364"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you click Manage, you can enable encryption by simply selecting the checkbox. One important thing to note here is which KMS key you will be using. I have selected the default key, but I would suggest creating a customer-managed KMS key and making sure its key policy grants permission to the role/user that will be used by the EC2 instance or application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5nm5kc9fds6ioihot4q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5nm5kc9fds6ioihot4q.png" alt="default_ebs_encryption" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you enable this setting, whenever you create an EC2 instance, its corresponding EBS volumes will be encrypted with the above key.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;BUT, what about the volumes that were created without encryption?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Encrypting existing Non-Encrypted Volume
&lt;/h2&gt;

&lt;p&gt;It may happen that while creating an EC2 instance or EBS volume we didn’t enable encryption. This should not be a problem, as long as we discover it before an attacker or auditor does ;)&lt;/p&gt;

&lt;p&gt;Encrypting a non-encrypted volume is a five-step process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Take a snapshot of the non-encrypted volume.&lt;/li&gt;
&lt;li&gt;Copy &amp;amp; encrypt the snapshot.&lt;/li&gt;
&lt;li&gt;Create a volume from the encrypted snapshot.&lt;/li&gt;
&lt;li&gt;Stop the EC2 instance &amp;amp; detach the non-encrypted volume.&lt;/li&gt;
&lt;li&gt;Attach the encrypted volume &amp;amp; start the EC2 instance.&lt;/li&gt;
&lt;/ol&gt;
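&lt;p&gt;These five steps map one-to-one onto EC2 API calls. As a sketch, here is the call sequence expressed as data (the call and parameter names follow the EC2 API; the resource IDs are placeholders):&lt;/p&gt;

```python
def build_encryption_plan(volume_id, instance_id, az, device, kms_key, region):
    """Return the ordered EC2 API calls that replace a non-encrypted
    volume with an encrypted copy of its data."""
    return [
        ("CreateSnapshot", {"VolumeId": volume_id}),
        ("CopySnapshot", {"SourceRegion": region, "Encrypted": True, "KmsKeyId": kms_key}),
        ("CreateVolume", {"AvailabilityZone": az, "Encrypted": True}),  # from the encrypted snapshot
        ("StopInstances", {"InstanceIds": [instance_id]}),
        ("DetachVolume", {"VolumeId": volume_id}),
        ("AttachVolume", {"InstanceId": instance_id, "Device": device}),  # the new encrypted volume
        ("StartInstances", {"InstanceIds": [instance_id]}),
    ]

plan = build_encryption_plan("vol-0abc", "i-0abc", "us-east-1a", "/dev/xvda",
                             "alias/aws/ebs", "us-east-1")
print([call for call, _ in plan])
```

Note that the new volume must be created in the same Availability Zone as the instance, which is why noting the AZ beforehand matters.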

&lt;p&gt;&lt;strong&gt;As part of this blog, I have performed this process in my test environment. I would highly recommend performing these steps in your own test environment first and testing your application. Only if everything works in test should you proceed to production.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let’s start encrypting a non-encrypted volume.&lt;/p&gt;

&lt;p&gt;As part of this blog, I have spun up an EC2 instance, installed an Apache server on it, and added a simple HTML page. Also note the Availability Zone in which you created the EC2 instance: EBS volumes are zone specific, so you cannot attach an EBS volume in AZ-1 to an EC2 instance in AZ-2.&lt;/p&gt;

&lt;p&gt;So make sure to note down the Availability Zones of the EC2 instance and the EBS volume; it will make the later steps easier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp9ywemlpaa126nn59yo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp9ywemlpaa126nn59yo.png" alt="aws_ec2_volume" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fity4ue6e4mojoehr1vub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fity4ue6e4mojoehr1vub.png" alt="aws_ec2_unencrypted" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the screenshot above, the volume attached to the EC2 instance is not encrypted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;1. Take a snapshot of the non-encrypted volume&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
As part of this step, go to the EBS volume attached to the EC2 instance. You can do this by selecting the EC2 instance → Storage tab → clicking the corresponding EBS volume, whose ID starts with vol-&lt;/p&gt;

&lt;p&gt;In this step it is also good to note down the device name of the volume; in this case it is &lt;code&gt;/dev/xvda&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You will be taken to the page below, where you can select Actions → Create Snapshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk9da5jly2soef81t0fu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk9da5jly2soef81t0fu.png" alt="aws_volume" width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find the Snapshots section in the left pane of the AWS console. Go to the Snapshots console and sort by creation date (this makes it easy to spot the recent snapshot if you have many). You will see the new snapshot being created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3nbj3g35thq7o6x5ik3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3nbj3g35thq7o6x5ik3.png" alt="aws_ebs_snapshot" width="800" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After some time, depending on the volume size, you should see the status Completed and the state Available. This means the snapshot was created successfully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;2. Copy &amp;amp; encrypt the snapshot&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the snapshot from step one is created, it’s time to copy it and encrypt the copy. In the Snapshots console, select the snapshot we created earlier → Actions → Copy Snapshot.&lt;/p&gt;

&lt;p&gt;During the copy you can also encrypt the snapshot. For encryption you can use the default KMS key or a customer managed KMS key. Make sure the KMS key you select has a key policy that grants access to the EC2 role/user that will use the volume.&lt;/p&gt;
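&lt;p&gt;For reference, a key policy statement for this scenario typically grants the role both the standard key-usage actions and &lt;code&gt;kms:CreateGrant&lt;/code&gt;, since EC2 uses grants to attach encrypted volumes. A minimal sketch with a hypothetical role ARN (adapt the account ID and role name to yours):&lt;/p&gt;

```python
def ebs_key_policy_statement(principal_arn):
    """A minimal KMS key policy statement letting a role use the key for EBS.

    The role ARN is a placeholder; production policies usually also add
    conditions such as kms:GrantIsForAWSResource on CreateGrant.
    """
    return {
        "Sid": "AllowUseOfKeyForEBS",
        "Effect": "Allow",
        "Principal": {"AWS": principal_arn},
        "Action": [
            "kms:Encrypt",
            "kms:Decrypt",
            "kms:ReEncrypt*",
            "kms:GenerateDataKey*",
            "kms:DescribeKey",
            "kms:CreateGrant",  # EC2 needs grants to attach encrypted volumes
        ],
        "Resource": "*",
    }
```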

&lt;p&gt;Here I have selected the default KMS key, which grants permission only to users and roles belonging to this account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7xecwxjdb69kxti00ij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr7xecwxjdb69kxti00ij.png" alt="aws_ebs_snapshot" width="800" height="78"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdm10jkthp3loa2rx2lf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdm10jkthp3loa2rx2lf.png" alt="aws_ebs_snapshot" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Copy Snapshot; this operation takes some time to complete, and the time varies with size. After about 2 minutes (in my case), the encrypted snapshot was available.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6y8rzwuwpe3hiksurox.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj6y8rzwuwpe3hiksurox.png" alt="aws_ebs_snapshot" width="800" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;3. Create a volume from the encrypted snapshot&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Now that the encrypted snapshot is available, we can create an encrypted volume. Select the encrypted snapshot → Actions → Create Volume from Snapshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqq6sd9e1uzjmkdesz31b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqq6sd9e1uzjmkdesz31b.png" alt="aws_ebs_snapshot" width="800" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A dialog will appear after you click Create Volume from Snapshot.&lt;/p&gt;

&lt;p&gt;In the first option you can select the volume type: gp2, gp3, io1/io2, etc. I would recommend not changing the volume type in this case; keep it the same as before (if the current volume is gp2, keep gp2; if it is gp3, keep gp3). Our goal here is only to encrypt the volume, not to play with its performance or other parameters. Likewise keep the size unchanged: if it was 8 GiB, keep 8 GiB; if yours is 100 GiB, keep 100 GiB.&lt;/p&gt;

&lt;p&gt;Moving on to the Availability Zone: make sure to select the correct one. If your EC2 instance is in us-east-1a, select us-east-1a. In my case the instance is in us-east-1f, so I selected us-east-1f.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr45ve4654adwxegsi6qv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr45ve4654adwxegsi6qv.png" alt="aws_copy_snapshot" width="800" height="981"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Create Volume. The volume should appear quickly in the Volumes console. As you can see, the new volume's state is Available, meaning it is not attached to any instance, while the one above it is In-use (attached to the EC2 instance) but not encrypted.&lt;/p&gt;

&lt;p&gt;So our next step is to attach this encrypted volume to the EC2 instance. But before that, we need to detach the non-encrypted volume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl37fq18z7318u89f5b8m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl37fq18z7318u89f5b8m.png" alt="aws_ebs_volume" width="800" height="97"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;4. Stop the EC2 instance &amp;amp; detach the non-encrypted volume&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This step causes application downtime, so I would suggest notifying the users who access the application. Better still, run a highly available application behind a load balancer and migrate one server at a time.&lt;/p&gt;

&lt;p&gt;In this step, we are going to detach the non-encrypted volume. Before that, it is recommended to stop the EC2 instance. Go to the corresponding EC2 instance → Instance State → Stop Instance.&lt;/p&gt;

&lt;p&gt;Once the instance has stopped, go to the volume attached to it. In the Volumes console, select the volume → Actions → Detach Volume, and confirm in the dialog that appears.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fac0fy5yurf08wdm8mpwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fac0fy5yurf08wdm8mpwu.png" alt="aws_ebs_detach" width="800" height="90"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once done, the volume state should change to Available; you may need to refresh the page for the state to update.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;5. Attach the encrypted volume &amp;amp; start the EC2 instance&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
In this step we will attach the encrypted volume to the EC2 instance.&lt;/p&gt;

&lt;p&gt;Select encrypted volume → Actions → Attach volume.&lt;/p&gt;

&lt;p&gt;You will get below pop-up.&lt;/p&gt;

&lt;p&gt;In the Instance drop-down, select the instance you stopped in the previous step; you can copy the instance ID from the EC2 console and search for it if there are too many instances in the list. If you cannot see the instance, check the volume's Availability Zone; it may be that the EC2 instance and the EBS volume you created from the snapshot are in different AZs.&lt;/p&gt;

&lt;p&gt;In the Device name field, make sure to enter the same name you noted down in step 1. In my case it is &lt;code&gt;/dev/xvda&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Click Attach volume.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uk0hhisrvojyoli87vo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uk0hhisrvojyoli87vo.png" alt="aws_ebs_attached" width="800" height="733"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the volume is attached, its state will change to In-use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n19vjtuwuujjy0wcxfn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8n19vjtuwuujjy0wcxfn.png" alt="aws_ebs_attached" width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the EC2 console and start the instance. If the instance does not start, check whether you entered the correct device name as before.&lt;/p&gt;

&lt;p&gt;Wait for the system checks to finish.&lt;/p&gt;

&lt;p&gt;Try accessing your application. In my case I was able to access the application successfully 🚀🚀🚀&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymkp0a2gi2o9wf3eu4fw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymkp0a2gi2o9wf3eu4fw.png" alt="aws_ebs_encrypted" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;I hope you found this blog useful. Happy Cloud Computing 🚀&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>data</category>
      <category>security</category>
    </item>
    <item>
      <title>Protect Sensitive Data on AWS: A Beginner’s Guide to Amazon Macie</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Wed, 11 Dec 2024 20:41:49 +0000</pubDate>
      <link>https://forem.com/aws-builders/protect-sensitive-data-on-aws-a-beginners-guide-to-amazon-macie-1h3</link>
      <guid>https://forem.com/aws-builders/protect-sensitive-data-on-aws-a-beginners-guide-to-amazon-macie-1h3</guid>
      <description>&lt;p&gt;The exponential growth of data has enabled businesses to innovate more in their products and services, making them more personalized. As organizations adopt Cloud technology like Amazon Web Services (AWS) to modernize their data capabilities and innovate around it, they face the complexity of managing vast information spread across multiple AWS accounts. Each account corresponds to distinct business units and use cases, adding layers of complexity not only in volume but also in the sensitivity and diversity of the data involved.&lt;/p&gt;

&lt;p&gt;This data ranges from protected health information (PHI) and payment card industry (PCI) data to personally identifiable information (PII) and proprietary organizational intellectual property. A data breach involving this information can cause significant financial and reputational losses. Therefore, identifying and safeguarding this sensitive data scattered across accounts becomes one of the critical challenges.&lt;/p&gt;

&lt;p&gt;In AWS, Amazon Macie emerges as a critical solution to this challenge. Macie plays a pivotal role in answering the fundamental questions below, which are essential for robust data security and compliance:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Understanding Data Storage:&lt;/strong&gt; What data do I have in my S3 buckets, and where is it located?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Assessing Data Exposure:&lt;/strong&gt; How is my data being shared and stored, and is it publicly or privately accessible?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Near Real-Time Data Classification:&lt;/strong&gt; How can I classify my data in near real-time to ensure constant vigilance?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Identifying Sensitive Information:&lt;/strong&gt; What PII or PHI might be exposed publicly, and how can I mitigate this risk?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Automating Compliance and Security:&lt;/strong&gt; How do I build and implement workflow remediation to meet my specific security and compliance needs?&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;em&gt;What is Amazon Macie?&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;Amazon Macie is a data security service that uses machine learning and pattern matching to identify sensitive data in Amazon S3, offering insights into security risks and automated protection. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It continuously evaluates S3 buckets using built-in &amp;amp; custom criteria for potential data security or privacy concerns providing findings and detailed statistics for informed decision-making. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Macie integrates with Amazon EventBridge and AWS Security Hub, enhancing its capabilities for monitoring and remediation of data security issues.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Use Cases of Amazon Macie
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;- Compliance Monitoring and Reporting:&lt;/strong&gt; Organisations subject to regulations like GDPR, HIPAA, or PCI-DSS can use Macie to automatically discover and classify sensitive data, ensuring compliance by identifying where this data resides and how it’s being used or accessed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Intellectual Property Protection:&lt;/strong&gt; Companies can leverage Macie to detect and protect intellectual property stored in S3 buckets, ensuring that proprietary information is not inadvertently exposed or accessed by unauthorised users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Mergers and Acquisitions (M&amp;amp;A) Data Security Assessments:&lt;/strong&gt; During M&amp;amp;A activities, Macie can be used to quickly assess the data security posture of acquired or merging entities, identifying sensitive data and ensuring that it complies with corporate policies and regulations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Educational Institutions Protecting Student Information:&lt;/strong&gt; Schools and universities can use Macie to safeguard student records and sensitive information, ensuring compliance with education-related privacy regulations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Healthcare Data Management:&lt;/strong&gt; Healthcare organisations can employ Macie to secure patient data, classifying and protecting health information in accordance with HIPAA and other health data protection standards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with Amazon Macie
&lt;/h2&gt;

&lt;p&gt;Enabling Amazon Macie is a very easy task. Amazon Macie is a regional service, so make sure you select the region you need from the top right corner of the AWS Console.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To get started with Amazon Macie, in the AWS Console go to Macie → Getting started → Enable Macie.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In multi-account environments, you can monitor Macie’s usage across your organisation in AWS Organizations through the usage page of the delegated administrator account. For this blog, I’m using a single-account setup.&lt;/p&gt;

&lt;p&gt;As a prerequisite, we need to create an S3 bucket to store results. For every object analysed, Macie logs details in ‘sensitive data discovery results,’ including objects it couldn’t analyse due to errors. These results are retained for 90 days; for longer retention, you can configure Macie to save them to an S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptaj1eukj6r98hhpo75k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptaj1eukj6r98hhpo75k.png" alt="aws_s3_macie" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;To understand how Macie presents findings, you can generate sample findings. Go to the Macie console → Settings → Generate sample findings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp9d9bu0j3okpxmu9a6p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkp9d9bu0j3okpxmu9a6p.png" alt="amazon_macie_sample_findings" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To view findings, go to the Findings page in the same console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rn2dmobryk0wz36y89y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rn2dmobryk0wz36y89y.png" alt="amazon_macie_sample_findings" width="800" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the screenshots above, Macie generates sample findings related to financial and personal data. You can drill down by selecting a specific finding; it shows the bucket, the total count of sensitive data, and the type of information.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let’s create job now.
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;You can create a custom job where you define specific buckets and criteria. Go to Jobs → Create job.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi32wirzj29mdkcatozay.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi32wirzj29mdkcatozay.png" alt="macie_job" width="800" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the bucket to scan → Refine the scope; here you can select the job frequency and the types of files to scan. For demo purposes, I chose a one-time job, but depending on your requirements you can schedule a job daily, weekly, or monthly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle45xkn2tbwatjofgpx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fle45xkn2tbwatjofgpx5.png" alt="macie_job" width="800" height="442"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In this job I will be checking for dates of birth, so I’m choosing the managed data identifier “DATE_OF_BIRTH”. This job should be able to detect these keywords in files: bday, b-day, birth date, birthday, date of birth, dob.&lt;/li&gt;
&lt;/ul&gt;
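&lt;p&gt;The same job can also be created programmatically. The sketch below builds the request body that &lt;code&gt;boto3&lt;/code&gt;'s &lt;code&gt;macie2.create_classification_job&lt;/code&gt; accepts; the account ID, bucket name, and job name here are placeholders, not values from this walkthrough.&lt;/p&gt;

```python
def dob_scan_job_request(account_id, bucket_name):
    """Build a one-time Macie classification job request (boto3 macie2 field
    names) that scans one bucket for the DATE_OF_BIRTH managed identifier.

    Account ID and bucket name are placeholders.
    """
    return {
        "name": "dob-one-time-scan",
        "jobType": "ONE_TIME",  # could be SCHEDULED for daily/weekly/monthly
        "s3JobDefinition": {
            "bucketDefinitions": [
                {"accountId": account_id, "buckets": [bucket_name]}
            ]
        },
        # Run only the selected managed data identifiers, not all of them.
        "managedDataIdentifierSelector": "INCLUDE",
        "managedDataIdentifierIds": ["DATE_OF_BIRTH"],
    }
```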

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtl3l912bfjs4x4x1agy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtl3l912bfjs4x4x1agy.png" alt="macie_job" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I created a sample file with the content below and uploaded it to the bucket.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Date of Birth: 1961-04-21
Future Date: 2024-03-04
--------------------
date of birth: 1912-01-20
Future Date: 2024-03-06
--------------------
dob: 1956-05-25
Future Date: 2024-03-16
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This one-time job runs once you finish creating it. However, you can also run it any time after creation.
Within a few minutes, Macie showed the findings below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fartnlg8rthxvz4ugru7m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fartnlg8rthxvz4ugru7m.png" alt="aws_macie_finding" width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd97491bxypywnf6mykjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd97491bxypywnf6mykjd.png" alt="aws_macie_finding" width="800" height="909"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Above we saw an example of a managed data identifier, date of birth. These identifiers are defined by AWS. You can find the full list &lt;a href="https://docs.aws.amazon.com/macie/latest/user/mdis-reference.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But let’s say you want to define custom criteria that are not in the AWS list; you can do so using a custom data identifier. With &lt;a href="https://docs.aws.amazon.com/macie/latest/user/custom-data-identifiers.html" rel="noopener noreferrer"&gt;custom data identifiers&lt;/a&gt;, you can define detection criteria that reflect your organisation’s particular scenarios, intellectual property, or proprietary data: for example, employee IDs, customer account numbers, or internal data classifications.&lt;/p&gt;
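&lt;p&gt;As a sketch, here is the request body that &lt;code&gt;macie2.create_custom_data_identifier&lt;/code&gt; accepts, describing a made-up employee ID format. The name, regex, and keywords are illustrative assumptions, not AWS-defined values.&lt;/p&gt;

```python
def employee_id_identifier_request():
    """Build a Macie custom data identifier request (boto3 macie2 field names)
    for a hypothetical employee ID format: EMP- followed by six digits.
    """
    return {
        "name": "employee-id",
        "description": "Internal employee IDs like EMP-123456",
        "regex": r"EMP-\d{6}",
        # Only report a match when one of these words appears near the pattern.
        "keywords": ["employee id", "emp id", "staff number"],
        "maximumMatchDistance": 50,
    }
```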

&lt;h2&gt;
  
  
  Macie Benefits
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Compliance Assurance: With its capability to discover and classify sensitive data according to various regulatory standards, Macie assists organisations in meeting compliance requirements for regulations such as GDPR, HIPAA, and PCI-DSS, thereby mitigating the risk of compliance-related penalties.&lt;/li&gt;
&lt;li&gt;Automated monitoring and actions: Amazon Macie seamlessly integrates with other AWS services like Amazon EventBridge, AWS Security Hub, and AWS Lambda, facilitating the creation of automated workflows which enables quick response and remediation actions.&lt;/li&gt;
&lt;li&gt;Global Data Visibility: It provides organisations with a unified view of their sensitive data across multiple AWS regions and accounts, enhancing the ability to manage data security on a global scale. This visibility is crucial for multinational companies dealing with data residency and sovereignty issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Macie pricing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Macie uses a usage-based pricing model, charging based on the volume of data processed for sensitive data discovery.&lt;/li&gt;
&lt;li&gt;First-time users receive a 30-day free trial.&lt;/li&gt;
&lt;li&gt;During the free trial, Amazon Macie provides an estimate of monthly costs after the trial ends.&lt;/li&gt;
&lt;li&gt;Costs can be controlled by limiting the amount of data scanned, such as:

&lt;ul&gt;
&lt;li&gt;Excluding CloudTrail logs from scans.&lt;/li&gt;
&lt;li&gt;Focusing on files with specific extensions.&lt;/li&gt;
&lt;li&gt;Scanning files based on tags.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In a world where data breaches can lead to significant financial and reputational losses, the role of Amazon Macie in safeguarding sensitive data is invaluable. Macie brings benefits that enhance an organisation’s security posture. I hope this blog gives you a good understanding of AWS’s data security service and its importance.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>security</category>
      <category>privacy</category>
    </item>
    <item>
      <title>AWS Centralised Root Access Management : Simplifying Operations</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Sun, 01 Dec 2024 06:51:42 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-centralised-root-access-security-simplifying-operations-3gmh</link>
      <guid>https://forem.com/aws-builders/aws-centralised-root-access-security-simplifying-operations-3gmh</guid>
      <description>&lt;p&gt;I’m sure many of us came across managing Root credentials for multiple account and setting up password for those accounts, setting MFA for those accounts. It is one of the manual process that I don’t find efficient and also sometimes can leads human error. And if we don't set up root credentials, we have critical finding in SecurityHub then ;)&lt;/p&gt;

&lt;p&gt;Also, previously each AWS account was provisioned with root user credentials that granted unrestricted access, which is rather contradictory to the AWS principle of least-privilege access ;)&lt;/p&gt;

&lt;p&gt;But recently AWS launched a new IAM capability that lets you centrally manage root access for member accounts in AWS Organizations, which I find really nice. Even for root, you can now use short-term credentials with limited access. Let's explore this feature.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Newly created accounts in AWS Organizations come without root credentials by default, ensuring member accounts cannot sign in as the root user or recover passwords for it. So these accounts are secure by default: no one can log in as root unless the organization admin enables it.&lt;/li&gt;
&lt;li&gt;After centralising root access, you can opt to delete root credentials from member accounts. This includes removing the root user password, access keys, signing certificates, and deactivating or deleting multi-factor authentication (MFA).&lt;/li&gt;
&lt;li&gt;Centralised monitoring of root credential status across all member accounts aids in demonstrating compliance with security policies and regulatory requirements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One good thing is that the new capability does not provide full root access; instead, it allows temporary credentials for five specific actions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Re-enabling Account Recovery:&lt;/strong&gt; Reactivating account recovery without root credentials.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditing Root User Credentials:&lt;/strong&gt; Read-only access to review root user information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deleting Root User Credentials:&lt;/strong&gt; Removing console passwords, access keys, signing certificates, and MFA devices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unlocking an S3 Bucket Policy:&lt;/strong&gt; Editing or deleting an S3 bucket policy that denies all principals.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Unlocking an SQS Queue Policy:&lt;/strong&gt; Editing or deleting an Amazon SQS resource policy that denies all principals.&lt;/li&gt;
&lt;/ol&gt;
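&lt;p&gt;Under the hood, each of these actions is performed by requesting short-lived root credentials scoped to an AWS-managed task policy via the &lt;code&gt;sts:AssumeRoot&lt;/code&gt; API. The sketch below builds such a request using boto3-style field names; the task policy names reflect those documented at launch (verify them against the current IAM docs), and the account ID is a placeholder.&lt;/p&gt;

```python
# AWS-managed task policies, one per privileged root action (names as
# documented at launch; confirm against the current IAM documentation).
ROOT_TASK_POLICIES = {
    "audit_credentials": "IAMAuditRootUserCredentials",
    "create_password": "IAMCreateRootUserPassword",
    "delete_credentials": "IAMDeleteRootUserCredentials",
    "unlock_s3_policy": "S3UnlockBucketPolicy",
    "unlock_sqs_policy": "SQSUnlockQueuePolicy",
}

def assume_root_request(member_account_id, task):
    """Build an sts:AssumeRoot request for one privileged task."""
    policy = ROOT_TASK_POLICIES[task]
    return {
        "TargetPrincipal": member_account_id,
        "TaskPolicyArn": {"arn": f"arn:aws:iam::aws:policy/root-task/{policy}"},
        "DurationSeconds": 900,  # short-lived by design
    }
```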

&lt;p&gt;Let's get hands-on on this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Your AWS accounts must be managed under AWS Organizations.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://dev.toCentralised"&gt;Enable trusted access for AWS Identity and Access Management&lt;/a&gt; (IAM) in AWS Organizations.&lt;/li&gt;
&lt;li&gt;The following permissions are necessary:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;iam:EnableOrganizationsRootCredentialsManagement&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;iam:EnableOrganizationsRootSessions&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;organizations:RegisterDelegatedAdministrator&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;organizations:EnableAwsServiceAccess&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
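&lt;p&gt;The enablement itself boils down to three API calls whose names mirror the permissions listed above, run from the management account. A sketch of the sequence (each tuple is service, API, parameters; in practice each would be a &lt;code&gt;boto3&lt;/code&gt; client call):&lt;/p&gt;

```python
def enablement_calls():
    """Ordered (service, api, params) calls to enable centralised root access.

    API names mirror the permissions required above; they are issued from
    the management account.
    """
    return [
        # Allow IAM to work with AWS Organizations (trusted access).
        ("organizations", "enable_aws_service_access",
         {"ServicePrincipal": "iam.amazonaws.com"}),
        # Turn on central management of member-account root credentials.
        ("iam", "enable_organizations_root_credentials_management", {}),
        # Allow privileged root sessions in member accounts.
        ("iam", "enable_organizations_root_sessions", {}),
    ]
```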

&lt;h2&gt;
  
  
  Hands-On
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Enabling Centralised Root Access
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;In the AWS Management Console, navigate to the IAM section, select "Root access management" from the left pane, and enable the desired features. In this demo I have enabled all features, so I can delete S3 and SQS policies and allow root password recovery.&lt;/li&gt;
&lt;li&gt;Additionally, if you want a delegated admin for this type of activity, assign a dedicated member account as the delegated administrator for IAM to manage root access and perform privileged tasks, ensuring separation of duties and enhanced security.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now you can manage these root credentials from either the delegated admin account or the management (Organizations) account.&lt;/p&gt;

&lt;p&gt;Once enabled, you will see a view like the one below. In the screenshot, root user credentials are present for one account and absent for the other.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy2z9gtkkil3ljqxmqi5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmy2z9gtkkil3ljqxmqi5.png" alt="aws_root_creds_mgmt" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  New Account Creation
&lt;/h3&gt;

&lt;p&gt;Here I have created a new account named &lt;code&gt;cloudgyan45-dataplatform&lt;/code&gt;, and by default it has no root credentials. Even the "Forgot password" wizard we normally use for the root user will not reset the password. I tried it and got the message below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Forgot password output: I received a password-reset email:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbpxz3jmx705hbjokt7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbpxz3jmx705hbjokt7y.png" alt="aws_root_creds_pwd" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;But clicking the link produces this output:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vziuuy19lciqzdotr70.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6vziuuy19lciqzdotr70.png" alt="aws_root_creds_reset" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Existing Account
&lt;/h3&gt;

&lt;p&gt;Even for an existing account, you can delete the root credentials, so you no longer need to keep a root password in your password manager or a file. ;)&lt;/p&gt;

&lt;p&gt;Select an existing account and, in the top-right corner, click &lt;code&gt;Take privileged action&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqedq1c33duo7qs8sgrjg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqedq1c33duo7qs8sgrjg.png" alt="aws_root_take_action" width="538" height="154"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9el12unwmll0j38ffhfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9el12unwmll0j38ffhfz.png" alt="aws_root_take_action" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you delete the root credentials, you no longer need to worry about root logins, because no one can log in as root anymore. Just make sure you haven't configured root credentials or keys in any of your applications. :)&lt;/p&gt;

&lt;h3&gt;
  
  
  Taking Privileged Action
&lt;/h3&gt;

&lt;p&gt;Now, let's try the last part: taking a privileged action. Imagine someone mistakenly sets an S3 bucket policy that denies access to everyone, including admin roles. Normally, in this case, we would log in as root and delete the policy. Consider a policy like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllAccess",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::&amp;lt;bucket-name&amp;gt;",
        "arn:aws:s3:::&amp;lt;bucket-name&amp;gt;/*"
      ]
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
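&lt;p&gt;The deny-all policy above can be rendered for a concrete bucket with a small helper; the bucket name is a placeholder:&lt;/p&gt;

```python
import json

def deny_all_policy(bucket_name):
    """Render the deny-all bucket policy for a concrete bucket.
    Applying it locks out every principal, including administrators --
    exactly the situation the privileged action is meant to recover from."""
    arn = f"arn:aws:s3:::{bucket_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAllAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [arn, f"{arn}/*"],  # bucket and all objects
        }],
    }

print(json.dumps(deny_all_policy("demo-bucket"), indent=2))
```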



&lt;p&gt;I applied this policy to one of my buckets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&amp;gt; Please note that I created this bucket for demo purposes. Don't try this on production or any other important workload.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Apply the S3 bucket policy:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Friofxvc3dmvyauhb5ygw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Friofxvc3dmvyauhb5ygw.png" alt="aws_root_s3" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access to the bucket is blocked:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzv9mc7c6ut5pn4cm544j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzv9mc7c6ut5pn4cm544j.png" alt="aws_root_s3_deny" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now take the privileged action, in this case deleting the bucket policy, by logging into the management account or the delegated admin for root access management.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuk77v7ea2prvwqsia42y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuk77v7ea2prvwqsia42y.png" alt="aws_root_take_action" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After deletion, the policy is gone and we can access the bucket again.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferm6j35qbqfd52s1jo9q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferm6j35qbqfd52s1jo9q.png" alt="aws_root_take_action" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdukl1ecijlxquzr9msx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdukl1ecijlxquzr9msx0.png" alt="aws_root_take_action" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Things to consider while implementing this feature
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Make sure access to the Organizations management or delegated admin account is very well protected.&lt;/li&gt;
&lt;li&gt;Monitor the CloudTrail event &lt;code&gt;AssumeRoot&lt;/code&gt;. This API operation is generated whenever someone takes an action via root access management.&lt;/li&gt;
&lt;/ul&gt;
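&lt;p&gt;That CloudTrail monitoring can be expressed as an EventBridge pattern. The sketch below follows the standard CloudTrail-via-EventBridge event shape; the exact &lt;code&gt;source&lt;/code&gt;/&lt;code&gt;detail&lt;/code&gt; values are assumptions to verify against your own trail.&lt;/p&gt;

```python
# Hedged sketch: an EventBridge pattern that fires on the CloudTrail
# AssumeRoot event, so every privileged root session raises an alert.
ASSUME_ROOT_PATTERN = {
    "source": ["aws.sts"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {"eventSource": ["sts.amazonaws.com"], "eventName": ["AssumeRoot"]},
}

def is_assume_root(event):
    """Local check mirroring the pattern above, useful for quick filtering."""
    d = event.get("detail", {})
    return event.get("source") == "aws.sts" and d.get("eventName") == "AssumeRoot"

sample = {"source": "aws.sts",
          "detail": {"eventSource": "sts.amazonaws.com", "eventName": "AssumeRoot"}}
```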

&lt;p&gt;Centralising root access management in AWS Organisations is a powerful feature that simplifies administration and reduces security risks. Properly applied, this feature can significantly enhance your organisation’s security posture and operational efficiency.&lt;/p&gt;

&lt;p&gt;Thank you for reading this blog; I appreciate your time!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>Amazon Inspector Integrations: Strengthening Cloud Security with Security Hub and EventBridge</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Tue, 12 Nov 2024 20:59:52 +0000</pubDate>
      <link>https://forem.com/aws-builders/amazon-inspector-integrations-strengthening-cloud-security-with-security-hub-and-eventbridge-33g7</link>
      <guid>https://forem.com/aws-builders/amazon-inspector-integrations-strengthening-cloud-security-with-security-hub-and-eventbridge-33g7</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Learn how to leverage Amazon Inspector’s integrations for real-time alerts and automated responses.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this part of our Amazon Inspector series, we’ll explore how to integrate Inspector with AWS Security Hub and EventBridge to enhance your cloud security strategy. By connecting these services, you can centralize vulnerability alerts and automate responses across your AWS environment. If you missed the earlier parts, check out &lt;a href="https://dev.to/aws-builders/amazon-inspector-explained-boosting-cloud-security-for-your-aws-workloads-ole"&gt;Part 1: Introduction to Amazon Inspector&lt;/a&gt; and &lt;a href="https://dev.to/aws-builders/amazon-inspector-deep-dive-cis-benchmark-container-image-and-sbom-39ap"&gt;Part 2: ECR Scanning, CIS Scans, and SBOM&lt;/a&gt; for a complete understanding. Now, let’s dive into the power of integration!&lt;/p&gt;

&lt;p&gt;We are going to see how to integrate Inspector with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Security Hub&lt;/li&gt;
&lt;li&gt;Amazon EventBridge&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  SecurityHub Integration
&lt;/h2&gt;

&lt;p&gt;AWS Security Hub gives you a centralized view of your AWS security posture, allowing you to assess your environment against industry standards and best practices. It gathers security data from multiple AWS accounts, services, and third-party tools, helping you analyze trends and focus on the most critical security issues. It integrates with Amazon GuardDuty, AWS Audit Manager, Amazon Macie, and more. You can view the complete list &lt;a href="https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-internal-providers.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;By default, Inspector sends findings to Security Hub. If you don't have Security Hub enabled, you can enable it easily; it offers a 30-day trial period.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to the Console, search for Security Hub → Activate.&lt;/li&gt;
&lt;li&gt;Under Security Hub, in the navigation pane → Integrations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the screenshot below, you can see that it accepts findings by default.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bmrqzebktex0j5ur9k1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bmrqzebktex0j5ur9k1.png" alt="aws_inspector_sec_hub" width="543" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you don't want findings in Security Hub, you can stop accepting them.&lt;/p&gt;

&lt;p&gt;Since Security Hub can integrate with many services, you can use the filter below to show only Inspector findings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdp65o4bwqlkab8f1nke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdp65o4bwqlkab8f1nke.png" alt="aws_inspector_sec_hub" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Inspector findings in Security Hub, you get a complete security posture of your cloud. 🚀🚀&lt;/p&gt;

&lt;h2&gt;
  
  
  EventBridge Integration
&lt;/h2&gt;

&lt;p&gt;EventBridge is one of my favorite services. It is like the salt every dish needs: whatever the architecture, you can't leave this service out.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We can create an event rule that triggers on an initial scan or when a critical vulnerability is found.&lt;/li&gt;
&lt;li&gt;In this demo, I will create a rule that triggers when the initial scan of an EC2 instance completes and sends a summary over email. If you would like to explore more patterns, &lt;a href="https://docs.aws.amazon.com/inspector/latest/user/eventbridge-integration.html" rel="noopener noreferrer"&gt;explore here.&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The diagram below shows the alerting possibilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtlwvy90lqppjfsowcu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtlwvy90lqppjfsowcu6.png" alt="aws_inspector_eventbridge" width="693" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this blog, let's keep it simple: create a rule for the initial scan and send it to an SNS topic as the target.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sign in to the Console → go to EventBridge → Event bus → Rules → Create rule&lt;/li&gt;
&lt;li&gt;Enter a name and description, keep the default event bus, and choose "Rule with an event pattern".&lt;/li&gt;
&lt;li&gt;Click Next, select Custom pattern, and paste the JSON below:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "source": ["aws.inspector2"],
  "detail-type": ["Inspector2 Scan"],
  "detail": {
    "scan-status": ["INITIAL_SCAN_COMPLETE"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
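&lt;p&gt;As a local sanity check, the pattern above can be matched against a sample Inspector event. The matcher below mirrors EventBridge's exact-match semantics for this simple pattern (no nested operators); the extra &lt;code&gt;instance-id&lt;/code&gt; field in the sample event is an assumption for illustration.&lt;/p&gt;

```python
# Simplified EventBridge matcher: each pattern key must be present in the
# event, with the value either in the allowed list or matching recursively.
PATTERN = {
    "source": ["aws.inspector2"],
    "detail-type": ["Inspector2 Scan"],
    "detail": {"scan-status": ["INITIAL_SCAN_COMPLETE"]},
}

def matches(event, pattern):
    for key, allowed in pattern.items():
        value = event.get(key)
        if isinstance(allowed, dict):
            if not isinstance(value, dict) or not matches(value, allowed):
                return False
        elif value not in allowed:
            return False
    return True

event = {"source": "aws.inspector2", "detail-type": "Inspector2 Scan",
         "detail": {"scan-status": "INITIAL_SCAN_COMPLETE", "instance-id": "i-0abc"}}
```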



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Click Next → select an SNS topic as the target. In my case, I have an SNS topic with an email subscription.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click Next, review the rule settings → finish.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that the rule is created, launch an EC2 instance. Inspector will scan it as soon as it is created, and once the scan is done you should get an alert on your preferred communication channel.&lt;/p&gt;

&lt;p&gt;In my case, with email configured, I get the summary over email.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F596liobh413djqresqvy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F596liobh413djqresqvy.png" alt="aws_inspector_scan" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Depending on your needs, you can also transform the event message with a Lambda function before sending it to your channel.&lt;/p&gt;
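&lt;p&gt;A minimal sketch of such a Lambda transformation: take the raw Inspector scan event and produce a short, human-readable summary. The event fields used here are assumptions modeled on the "Inspector2 Scan" detail shape, and the SNS publish call is only shown as a comment.&lt;/p&gt;

```python
# Lambda handler sketch: summarize an Inspector scan event for a channel.
def handler(event, context=None):
    detail = event.get("detail", {})
    resource = detail.get("instance-id", "unknown resource")  # assumed field
    status = detail.get("scan-status", "UNKNOWN")
    message = f"Inspector scan for {resource}: {status}"
    # In a real deployment you would publish via boto3, e.g.:
    # boto3.client("sns").publish(TopicArn=..., Message=message)
    return {"message": message}

sample = {"source": "aws.inspector2",
          "detail": {"instance-id": "i-0abc123", "scan-status": "INITIAL_SCAN_COMPLETE"}}
```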




&lt;p&gt;Integrating Amazon Inspector with AWS Security Hub and EventBridge strengthens your security by centralizing alerts and automating responses. With these tools working together, you can monitor, prioritize, and act on vulnerabilities efficiently across your AWS environment.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Thank you for reading this blog, appreciate your time and passion.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>security</category>
      <category>securityhub</category>
    </item>
    <item>
      <title>Amazon Inspector Deep-Dive : CIS Benchmark, Container image and SBOM</title>
      <dc:creator>vikasbanage</dc:creator>
      <pubDate>Tue, 12 Nov 2024 16:20:45 +0000</pubDate>
      <link>https://forem.com/aws-builders/amazon-inspector-deep-dive-cis-benchmark-container-image-and-sbom-39ap</link>
      <guid>https://forem.com/aws-builders/amazon-inspector-deep-dive-cis-benchmark-container-image-and-sbom-39ap</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/aws-builders/amazon-inspector-explained-boosting-cloud-security-for-your-aws-workloads-ole"&gt;first part&lt;/a&gt; of our Amazon Inspector series, we covered  basics Amazon Inspector and covered EC2 and Lambda scanning part. &lt;/p&gt;

&lt;p&gt;Now, let’s explore more features within Inspector: ECR scanning, CIS benchmarks, and SBOM generation. These features give you a more thorough view of your security posture, from container image vulnerabilities to best-practice configurations and software transparency. Whether you’re safeguarding your containerized workloads, ensuring compliance, or tracking your software components, Amazon Inspector has the tools to enhance your security strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  ECR Scan - Scanning Docker Images
&lt;/h2&gt;

&lt;p&gt;Amazon Inspector scans container images in Amazon Elastic Container Registry (ECR) for software vulnerabilities, generating findings on package risks. When you enable Amazon Inspector as the preferred scanning service for your private registry, you have two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Basic Scanning&lt;/strong&gt;: Configure repositories to scan images on push or perform manual scans.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced Scanning&lt;/strong&gt;: Perform deeper scans at the registry level, detecting vulnerabilities in operating system and programming language packages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's do a quick demo. I assume here that you have activated the ECR scan type in Amazon Inspector. If not, go to Inspector → In the navigation pane, choose Account management → Select the scan type and activate it.&lt;/p&gt;

&lt;h3&gt;
  
  
  ECR Scan Demo
&lt;/h3&gt;

&lt;p&gt;For this blog, I have dockerised a simple Node.js app with an Express dependency, deliberately using outdated packages and an outdated base image.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;app.js
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express');
const app = express();
const port = 3000;

// Basic route to trigger a response
app.get('/', (req, res) =&amp;gt; {
    res.send('Hello, this is a vulnerable Node.js app!');
});

// Start the server
app.listen(port, () =&amp;gt; {
    console.log(`App listening at http://localhost:${port}`);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;package.json
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "vulnerable-node-app",
  "version": "1.0.0",
  "description": "A vulnerable Node.js app",
  "main": "app.js",
  "dependencies": {
    "express": "3.0.0"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Dockerfile
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Node.js vulnerable Dockerfile

# Use an outdated Node.js base image
FROM node:10

# Set working directory
WORKDIR /usr/src/app

# Copy package.json and install outdated dependencies
COPY package.json ./
RUN npm install

# Copy app source code
COPY . .

# Expose the app port
EXPOSE 3000

# Start the app
CMD ["node", "app.js"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build the Docker image and push it to an ECR repository. Within a few minutes, Inspector will scan the repository and its images and generate findings.&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding Findings
&lt;/h3&gt;

&lt;p&gt;To view findings, go to Inspector → Findings → Container Images.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflurd52tammghsipxccj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflurd52tammghsipxccj.png" alt="aws_inspector_ecr" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Findings will give us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CVEs that are critical to fix.&lt;/li&gt;
&lt;li&gt;Selecting a specific CVE shows which package version is currently installed and which version fixes the vulnerability.&lt;/li&gt;
&lt;li&gt;Under Remediation, it hints at what needs to be done.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here the fix is easy: update to an LTS Node.js image and a current version of Express.&lt;/p&gt;

&lt;h2&gt;
  
  
  CIS Scan
&lt;/h2&gt;

&lt;p&gt;Amazon Inspector’s CIS scans assess your EC2 instance configurations against Center for Internet Security (CIS) benchmarks to ensure they meet security standards. You can run these scans on-demand or on a schedule after enabling EC2 scanning. &lt;br&gt;
To target specific instances, create a scan configuration with instance tags and a CIS Benchmark level, which can be applied across multiple accounts if you’re a delegated administrator.&lt;/p&gt;
&lt;h3&gt;
  
  
  Configuring CIS Scan
&lt;/h3&gt;

&lt;p&gt;To grant permissions to run CIS scans, attach &lt;code&gt;AmazonSSMManagedInstanceCore&lt;/code&gt; and &lt;code&gt;AmazonInspector2ManagedCisPolicy&lt;/code&gt; to the EC2 instance role.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to Inspector → CIS Scan → Create Scan&lt;/li&gt;
&lt;li&gt;Enter the required details. LEVEL_1 corresponds to foundational security, while LEVEL_2 targets more critical workloads with stricter data-security requirements.&lt;/li&gt;
&lt;li&gt;In this demo, I'm targeting EC2 instances that have the tag &lt;code&gt;CISScan=True&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The screenshot below is for an individual account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsw17hpf3glmdh2afkue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsw17hpf3glmdh2afkue.png" alt="aws_inspector_cis" width="800" height="793"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also configure the CIS scan from the delegated admin, i.e. the central account. From the central settings you can specify more than one account and manage the configuration centrally.&lt;/p&gt;
&lt;h3&gt;
  
  
  Understanding Scan
&lt;/h3&gt;

&lt;p&gt;Within a few minutes, the CIS scan should be complete.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjd3pjk9j529xtpa1jgv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjd3pjk9j529xtpa1jgv6.png" alt="aws_inspector_cis" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Under Resource Status, ❌ means CIS checks failed, ➖ means the resource was not evaluated, and ✅ means CIS checks passed.&lt;/li&gt;
&lt;li&gt;Clicking a specific title gives you details about that CIS check; e.g., in my case, journald should be configured to write log files to persistent disk.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  Running CIS on Private EC2 instance
&lt;/h3&gt;

&lt;p&gt;When running CIS scans on private instances, you’ll need &lt;strong&gt;VPC endpoints&lt;/strong&gt; for Systems Manager services. Key endpoints include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ssm.amazonaws.com&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ssmmessages.amazonaws.com&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ec2messages.amazonaws.com&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
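&lt;p&gt;These are created as interface VPC endpoints, whose service names are region-qualified. A small helper to derive them (the naming scheme follows the standard &lt;code&gt;com.amazonaws.&amp;lt;region&amp;gt;.&amp;lt;service&amp;gt;&lt;/code&gt; convention):&lt;/p&gt;

```python
# Derive the interface VPC endpoint service names for the SSM endpoints
# listed above, for a given region.
def ssm_endpoint_service_names(region):
    return [f"com.amazonaws.{region}.{svc}"
            for svc in ("ssm", "ssmmessages", "ec2messages")]
```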

&lt;p&gt;Inspector uses &lt;strong&gt;OVAL&lt;/strong&gt; (Open Vulnerability and Assessment Language) definitions for assessments, stored in Amazon S3. Allow-listing the following endpoints in your VPCs ensures access to the required definitions for CIS scans:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;inspector2.amazonaws.com&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;s3.amazonaws.com&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ssm.amazonaws.com&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ssmmessages.amazonaws.com&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup allows Inspector to benchmark, assess, and secure EC2 instances according to CIS standards.&lt;/p&gt;
&lt;h2&gt;
  
  
  Software Bill of Materials - SBOM
&lt;/h2&gt;

&lt;p&gt;A Software Bill of Materials (SBOM) lists all open-source and third-party components in your codebase. Amazon Inspector generates SBOMs for monitored resources, which can be exported in CycloneDX or SPDX formats to an Amazon S3 bucket. &lt;em&gt;Note that SBOM export is not currently supported for Windows EC2 instances.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why SBOMs are important :&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SBOM provides a detailed inventory of software components, allowing organizations to identify and address vulnerabilities in third-party and open-source components more effectively.&lt;/li&gt;
&lt;li&gt;SBOM ensures transparency by documenting all components within the software, which is vital for regulatory compliance and meeting industry standards.&lt;/li&gt;
&lt;li&gt;In the event of a security incident, an SBOM allows teams to quickly locate and assess affected components, speeding up response and mitigation efforts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, let's start a quick demo.&lt;/p&gt;

&lt;p&gt;Before starting the SBOM export, we need a customer-managed KMS key and an S3 bucket created in advance.&lt;/p&gt;

&lt;p&gt;I have created a customer-managed KMS key with the policy below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Id": "key-consolepolicy-3",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow access for Key Administrators",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:role/&amp;lt;rolename&amp;gt;"
                ]
            },
            "Action": [
                "kms:Create*",
                "kms:Describe*",
                "kms:Enable*",
                "kms:List*",
                "kms:Put*",
                "kms:Update*",
                "kms:Revoke*",
                "kms:Disable*",
                "kms:Get*",
                "kms:Delete*",
                "kms:TagResource",
                "kms:UntagResource",
                "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion",
                "kms:RotateKeyOnDemand"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow Amazon Inspector to use the key",
            "Effect": "Allow",
            "Principal": {
                "Service": "inspector2.amazonaws.com"
            },
            "Action": [
                "kms:Decrypt",
                "kms:GenerateDataKey*"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "111122223333"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:inspector2:us-east-1:111122223333:report/*"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, create an S3 bucket with the policy below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "allow-inspector",
            "Effect": "Allow",
            "Principal": {
                "Service": "inspector2.amazonaws.com"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:AbortMultipartUpload"
            ],
            "Resource": "arn:aws:s3:::&amp;lt;bucketname&amp;gt;/*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceAccount": "111122223333"
                },
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:inspector2:us-east-1:111122223333:report/*"
                }
            }
        },
        {
            "Sid": "allow-role-access",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/&amp;lt;rolename&amp;gt;"
            },
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:GetObjectAcl",
                "s3:PutObjectAcl",
                "s3:PutBucketPolicy"
            ],
            "Resource": [
                "arn:aws:s3:::&amp;lt;bucketname&amp;gt;",
                "arn:aws:s3:::&amp;lt;bucketname&amp;gt;/*"
            ]
        }
    ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the KMS key and S3 bucket are in place, let's export an SBOM for EC2.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to Inspector → In the navigation pane, choose Export SBOMs.&lt;/li&gt;
&lt;li&gt;On the Export SBOMs page, use the Add filter menu to select specific resources for the report. Without filters, Amazon Inspector will export reports for all active resources. For this demo, I'm exporting EC2 instances that have a specific tag.&lt;/li&gt;
&lt;li&gt;Select an export format (either one works).&lt;/li&gt;
&lt;li&gt;Choose the S3 bucket and KMS key created in the steps above, and start the export.&lt;/li&gt;
&lt;/ol&gt;
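
&lt;p&gt;The console steps above can also be driven through the Inspector2 &lt;code&gt;CreateSbomExport&lt;/code&gt; API. A hedged sketch of the request follows; the bucket name, key prefix, KMS key ARN, and tag key/value are placeholders, and the boto3 call itself is commented out since it requires credentials:&lt;/p&gt;

```python
# Sketch of the same SBOM export via the Inspector2 CreateSbomExport API.
# All resource names below are placeholders for this demo.
request = {
    "reportFormat": "CYCLONEDX_1_4",  # or "SPDX_2_3"
    "resourceFilterCriteria": {
        # Mirror the console filter: only EC2 instances with a specific tag.
        "ec2InstanceTags": [
            {"comparison": "EQUALS", "key": "sbom-export", "value": "enabled"}
        ]
    },
    "s3Destination": {
        "bucketName": "my-inspector-sbom-bucket",
        "keyPrefix": "sbom/",
        "kmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    },
}

# With valid AWS credentials, the export would be started like this:
# import boto3
# report_id = boto3.client("inspector2").create_sbom_export(**request)["reportId"]
print(request["reportFormat"])
```

&lt;p&gt;The returned report ID can later be passed to &lt;code&gt;get_sbom_export&lt;/code&gt; to poll the export status instead of watching the console.&lt;/p&gt;
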

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbefp8kookohh9pi446c6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbefp8kookohh9pi446c6.png" alt="aws_inspector_sbom" width="800" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait for the export to finish. Once it's done, go to the S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffb2gh2dg3bvf9skh2mzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffb2gh2dg3bvf9skh2mzu.png" alt="aws_inspector_s3_sbom" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For now, let's just download the file and try to understand it. Below is part of the downloaded file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
   "bomFormat":"CycloneDX",
   "specVersion":"1.4",
   "version":1,
   "metadata":{
      "timestamp":"2024-11-09T14:48:33Z",
      "component":{
         "type":"operating-system",
         "name":"AMAZON_LINUX_2023"
      },
      "properties":[
         {
            "name":"amazon:inspector:ami",
            "value":"ami-063d43db0594b521b"
         },
         {
            "name":"amazon:inspector:arch",
            "value":"x86_64"
         },
         {
            "name":"amazon:inspector:account_id",
            "value":"11112222333"
         },
         {
            "name":"amazon:inspector:resource_type",
            "value":"AWS_EC2_INSTANCE"
         },
         {
            "name":"amazon:inspector:instance_id",
            "value":"i-0a4358565c08e4"
         },
         {
            "name":"amazon:inspector:resource_arn",
            "value":"arn:aws:ec2:us-east-1:11112222333:instance/i-0a4358565c08e4"
         }
      ]
   },
   "components":[
      {
         "type":"application",
         "name":"libedit",
         "purl":"pkg:rpm/libedit@3.1-38.20210714cvs.amzn2023.0.2?arch=X86_64&amp;amp;epoch=0&amp;amp;upstream=libedit-3.1-38.20210714cvs.amzn2023.0.2.src.rpm",
         "version":"3.1",
         "bom-ref":"40853ebb7fa05c9370e08063b4fd6e94"
      },
      {
         "type":"application",
         "name":"python3-libcomps",
         "purl":"pkg:rpm/python3-libcomps@0.1.20-1.amzn2023?arch=X86_64&amp;amp;epoch=0&amp;amp;upstream=python3-libcomps-0.1.20-1.amzn2023.src.rpm",
         "version":"0.1.20",
         "bom-ref":"b85e33c25b9c33135da9c73eb32c429c"
      },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The file contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Properties of the EC2 instance, such as the AMI ID, architecture, account ID, and instance ARN.&lt;/li&gt;
&lt;li&gt;Under components, every package installed on the EC2 instance, along with its version and package URL (purl).&lt;/li&gt;
&lt;/ul&gt;
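
&lt;p&gt;Even with just the downloaded file, a few lines of standard-library Python are enough to search the SBOM for a package. The inline sample below mirrors the CycloneDX excerpt above (trimmed to the fields we need); in practice you would &lt;code&gt;json.load()&lt;/code&gt; the file fetched from S3:&lt;/p&gt;

```python
import json

# Inline sample mirroring the exported CycloneDX structure shown above;
# in practice, load the file downloaded from the S3 bucket instead.
sbom = json.loads("""{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "application", "name": "libedit", "version": "3.1"},
    {"type": "application", "name": "python3-libcomps", "version": "0.1.20"}
  ]
}""")

def find_package(sbom, name):
    """Return (name, version) pairs for components matching a package name."""
    return [(c["name"], c["version"])
            for c in sbom.get("components", [])
            if c["name"] == name]

print(find_package(sbom, "libedit"))  # prints [('libedit', '3.1')]
```

&lt;p&gt;This is exactly the kind of lookup you'd reach for after a CVE announcement: is package X, version Y, anywhere in my fleet?&lt;/p&gt;
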

&lt;p&gt;But there is no fun in just keeping JSON in S3, downloading it, and checking it manually, so here is what we can do instead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect it to Athena to search for a specific package.&lt;/li&gt;
&lt;li&gt;Integrate it with OpenSearch to build a package search engine.&lt;/li&gt;
&lt;li&gt;Analyze the file with Lambda, looking for a specific package, as soon as an SBOM export finishes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But these points are for other blogs or some other day ;)&lt;/p&gt;




&lt;p&gt;In this second part of the Inspector series, we explored how Amazon Inspector's ECR scanning, CIS benchmarks, and SBOM exports strengthen your cloud security. These tools help you detect vulnerabilities, ensure compliance, and gain visibility into your software components. In the next part, we will look at which services can be integrated with Amazon Inspector.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Appreciate your time and passion for reading the blog!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>security</category>
      <category>vulnerabilities</category>
    </item>
  </channel>
</rss>
