<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Salaudeen O. Abdulrasaq</title>
    <description>The latest articles on Forem by Salaudeen O. Abdulrasaq (@sirlawdin).</description>
    <link>https://forem.com/sirlawdin</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1004627%2F74029292-cb24-4000-b940-afb99eed8ebd.jpg</url>
      <title>Forem: Salaudeen O. Abdulrasaq</title>
      <link>https://forem.com/sirlawdin</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/sirlawdin"/>
    <language>en</language>
    <item>
      <title>AWS Security Groups Now Show Related Resources</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Thu, 12 Feb 2026 11:06:52 +0000</pubDate>
      <link>https://forem.com/sirlawdin/aws-security-groups-now-show-related-resources-32gb</link>
      <guid>https://forem.com/sirlawdin/aws-security-groups-now-show-related-resources-32gb</guid>
      <description>&lt;p&gt;AWS has quietly rolled out a useful update to the EC2 Security Groups console, you can now see which resources are associated with a security group directly from the SG details page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrmhind5s9p0g3in8nvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqrmhind5s9p0g3in8nvw.png" alt=" " width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's New?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There's a new &lt;em&gt;&lt;strong&gt;"Related resources"&lt;/strong&gt;&lt;/em&gt; tab that scans your AWS resources and displays which ones are using that specific security group. In the example above, it found 2 resources (an ENI and an EC2 instance) linked to this &lt;em&gt;jump-box-sg&lt;/em&gt; security group out of 69 total resources scanned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you've ever tried to delete or modify a security group and wondered "what's actually using this?", this feature is for you. Previously, you'd need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run CLI commands with filters&lt;/li&gt;
&lt;li&gt;Search through multiple services manually&lt;/li&gt;
&lt;li&gt;Use third-party tools or scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now it's one click away.&lt;/p&gt;
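&lt;p&gt;For comparison, the CLI route the list above describes looked roughly like this. A minimal sketch using the AWS CLI; the security group ID is a placeholder:&lt;/p&gt;

```shell
# List network interfaces attached to a security group -- most attached
# resources (EC2 instances, load balancers, Lambda, etc.) surface as ENIs.
# sg-0123456789abcdef0 is a placeholder ID; substitute your own.
SG_ID="sg-0123456789abcdef0"
if command -v aws >/dev/null; then
  aws ec2 describe-network-interfaces \
    --filters "Name=group-id,Values=${SG_ID}" \
    --query "NetworkInterfaces[].NetworkInterfaceId" \
    --output table || true
fi
```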

&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auditing:&lt;/strong&gt; Quickly identify orphaned or over-permissive security groups&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact analysis:&lt;/strong&gt; Know exactly what you'll affect before modifying rules&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cleanup:&lt;/strong&gt; Confidently delete unused security groups&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance:&lt;/strong&gt; Document which resources share network access policies&lt;/p&gt;

&lt;p&gt;Small quality-of-life improvements like this make a real difference in day-to-day infrastructure management.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>awsupdate</category>
    </item>
    <item>
      <title>Building Highly Available Vault on a Budget: Raft + MinIO for Resilience</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Sun, 05 Oct 2025 07:58:12 +0000</pubDate>
      <link>https://forem.com/sirlawdin/building-highly-available-vault-on-a-budget-raft-minio-for-resilience-29mg</link>
      <guid>https://forem.com/sirlawdin/building-highly-available-vault-on-a-budget-raft-minio-for-resilience-29mg</guid>
      <description>&lt;p&gt;&lt;strong&gt;Event:&lt;/strong&gt; &lt;a href="https://www.youtube.com/live/J3fmtnbao8g?si=-0Zq6BUoLoboff1A&amp;amp;t=2520" rel="noopener noreferrer"&gt;HashiTalks: Africa 2025&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Category:&lt;/strong&gt; DevOps, HashiCorp Vault, Automation&lt;/p&gt;

&lt;p&gt;Modern infrastructure depends on secure secret management, yet achieving high availability (HA) often feels like a luxury reserved for enterprise budgets.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll explore how you can build a highly available, production-grade HashiCorp Vault cluster using Raft Integrated Storage and the community edition of &lt;a href="https://www.min.io/" rel="noopener noreferrer"&gt;MinIO&lt;/a&gt; (an S3-compatible object storage) for resilient backups. Everything was implemented in my homelab.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Approach?
&lt;/h2&gt;

&lt;p&gt;Many organizations, especially those operating on tight budgets or in hybrid environments, require strong security but cannot always afford an enterprise secret management service.&lt;/p&gt;

&lt;p&gt;Instead of depending on Vault Enterprise with external storage solutions such as Consul, etcd, or DynamoDB, my approach leverages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vault OSS (Open Source) for secret management&lt;/li&gt;
&lt;li&gt;Raft storage for native leader election and data replication&lt;/li&gt;
&lt;li&gt;MinIO for S3-compatible snapshot backups&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup provides &lt;strong&gt;HA&lt;/strong&gt;, &lt;strong&gt;fault tolerance&lt;/strong&gt;, and easy &lt;strong&gt;disaster recovery&lt;/strong&gt;, all without extra licensing costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;Here’s what the setup looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2wzi4kd7g2f3gm6qy4h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2wzi4kd7g2f3gm6qy4h.png" alt=" " width="691" height="601"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each Vault node runs in HA mode using Raft. The leader handles writing and replicating data to followers automatically.&lt;br&gt;
Snapshots are taken periodically and pushed to MinIO for offsite/off-cluster recovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; All three servers used in this setup are provisioned within a homelab environment.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Prepare Your Environment
&lt;/h2&gt;

&lt;p&gt;Provision three servers (VMs or bare metal).&lt;br&gt;
Ensure network connectivity between them, then install Vault and create its directories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install vault
sudo mkdir -p /opt/vault/tls /opt/vault/data /opt/vault/backups
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create a Local Certificate Authority (CA)&lt;/strong&gt;&lt;br&gt;
Vault requires TLS for secure communication. I’ll start by generating my own self-signed CA, then use it to issue each Vault server’s certificate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generate a Private Key for the CA&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa -out vault-ca-key.pem 4096

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create a Self-Signed CA Certificate&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl req -x509 -new -nodes -key vault-ca-key.pem \
  -subj "/CN=Vault-CA" \
  -days 3650 -out vault-ca-cert.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Generate a Vault Server Private Key and CSR.&lt;/strong&gt; Repeat this on all the Vault servers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa -out vault-server-key.pem 2048

openssl req -new -key vault-server1.key -out vault-server-1.csr -config vault-server1.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is the content of &lt;code&gt;vault-server1.conf&lt;/code&gt;; update accordingly for other vault instances.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/vault-cert/vault-ca (0.146s)
cat vault-server1.conf
[req]
default_bits = 2048
prompt = no
default_md = sha256
req_extensions = v3_req
distinguished_name = dn

[dn]
C = NG
ST = Abuja
L = Abuja
O = VaultOrg
OU = DevOps
CN = vault-server-1

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = vault-server-1
DNS.2 = vault1.internal.local
DNS.3 = 192-168-64-5.nip.io
IP.1 = 192.168.64.5

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sign the Server Certificate with the CA&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl x509 -req -in vault-server-1.csr \
  -CA vault-ca-cert.pem -CAkey vault-ca-key.pem -CAcreateserial \
  -out vault-server-cert1.pem -days 365 -sha256

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
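&lt;p&gt;Before copying the certificate into place, it is worth confirming that it chains back to the CA. A quick sanity check, assuming the files from the previous steps are in the current directory:&lt;/p&gt;

```shell
# Verify the server certificate against the local CA; expect "... OK".
if [ -f vault-ca-cert.pem ]; then
  openssl verify -CAfile vault-ca-cert.pem vault-server-cert1.pem
  # inspect the subject and validity window Vault clients will see
  openssl x509 -in vault-server-cert1.pem -noout -subject -dates
fi
```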



&lt;p&gt;&lt;strong&gt;Copy Certificates&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Move your generated certs to /opt/vault/tls:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo mkdir -p /opt/vault/tls
sudo cp vault-server-cert1.pem vault-server-key.pem vault-ca-cert.pem /opt/vault/tls

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 2: Configure Vault for HA Using Raft Storage
&lt;/h2&gt;

&lt;p&gt;Configure all three Vault servers accordingly.&lt;br&gt;
Use the same certificate file names on every server, as shown in the vault.hcl file below.&lt;br&gt;
Edit /etc/vault.d/vault.hcl:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@vault-server-1:/opt/vault/backups# cat /etc/vault.d/vault.hcl
1 ui = true
2 disable_mlock = true
3
4 storage "raft" {
5   path = "/opt/vault/data"
6   node_id = "vault-server-1"
7 
8   retry_join {
9     leader_tls_servername = "vault-server-1"
10     leader_api_addr = "https://192.168.64.5:8200"
11     leader_ca_cert_file = "/opt/vault/tls/vault-ca.pem"
12     leader_client_cert_file = "/opt/vault/tls/vault-cert.pem"
13     leader_client_key_file = "/opt/vault/tls/vault-key.pem"
14   }
15 
16   retry_join {
17     leader_tls_servername = "vault-server-2"
18     leader_api_addr = "https://192.168.64.6:8200"
19     leader_ca_cert_file = "/opt/vault/tls/vault-ca.pem"
20     leader_client_cert_file = "/opt/vault/tls/vault-cert.pem"
21     leader_client_key_file = "/opt/vault/tls/vault-key.pem"
22   }
23 
24   retry_join {
25     leader_tls_servername = "vault-server-3"
26     leader_api_addr = "https://192.168.64.7:8200"
27     leader_ca_cert_file = "/opt/vault/tls/vault-ca.pem"
28     leader_client_cert_file = "/opt/vault/tls/vault-cert.pem"
29     leader_client_key_file = "/opt/vault/tls/vault-key.pem"
30   }
31 }
32 
33 listener "tcp" {
34   address = "0.0.0.0:8200"
35   tls_cert_file = "/opt/vault/tls/vault-cert.pem"
36   tls_key_file = "/opt/vault/tls/vault-key.pem"
37 }
38 
39 api_addr = "https://192.168.64.5:8200"
40 cluster_addr = "https://192.168.64.5:8201"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start and enable Vault:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl enable vault
systemctl start vault
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@vault-server-1:/opt/vault/tls# systemctl status vault
● vault.service - "HashiCorp Vault - A tool for managing secrets"
     Loaded: loaded (/usr/lib/systemd/system/vault.service; enabled; preset: enabled)
     Active: active (running) since Sun 2025-09-28 18:53:51 UTC; 23s ago
   Main PID: 8944 (vault)
     Tasks: 8 (limit: 4549)
    Memory: 23.9M (peak: 24.1M)
     CPU: 750ms
  CGroup: /system.slice/vault.service
          └─8944 /usr/bin/vault server -config=/etc/vault.d/vault.hcl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Initialize and unseal Vault, then join other nodes to form a Raft cluster.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Initialize Vault on one of the vault nodes&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ vault operator init -key-shares=5 -key-threshold=3
Get "https://192.168.64.5:8200/v1/sys/seal-status": tls: failed to verify certificate: x509: certificate signed by unknown authority

$ export VAULT_ADDR="https://192.168.64.5:8200"
$ export VAULT_CACERT="/opt/vault/tls/vault-ca.pem"

$ echo 'export VAULT_CACERT="/opt/vault/tls/vault-ca.pem"' &amp;gt;&amp;gt; ~/.bashrc
$ source ~/.bashrc

$ vault operator init -key-shares=5 -key-threshold=3
Unseal Key 1: oh0oYmK6oQkdU9oPQF35nMYKgZoZEhFHS40adcTK13Xh
Unseal Key 2: I7gby7hcwW/njhGiPVLOVBTBgUQGk50C7+FQZKtLKUvq
Unseal Key 3: OzeDrBJJ@epEonEFc69Nctj5uvhW+u9+tVMEo4jQSkeP
Unseal Key 4: ryf5ZXCmaulkseIYmOXIRAavOWEOvDdSbu7sdinUvnfU
Unseal Key 5: 87BDV1ithJhk8Abtb5SIDpeXtdH3/p70nwc650S1eUm+
Initial Root Token: hvs.zK4B84b2f5pyCgeYtv5WG8RR

Vault initialized with 5 key shares and a key threshold of 3. Please securely distribute the key shares printed above. When the Vault is re-sealed, restarted, or stopped, you must supply at least 3 of these keys to unseal it before it can start servicing requests.

Vault does not store the generated root key. Without at least 3 keys to reconstruct the root key, Vault will remain permanently sealed!

It is possible to generate new unseal keys, provided you have a quorum of existing unseal keys shares. See "vault operator rekey" for more information.

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@vault-server-1:/opt/vault/tls# vault operator unseal
Unseal Key (will be hidden): 
Key Value
--- ----
Seal Type shamir
Initialized true
Sealed true
Total Shares 5
Threshold 3
Unseal Progress 2/3
Unseal Nonce d4ab9730-3088-e2b7-33bc-b4e69ec1b568
Version 1.20.4
Build Date 2025-09-23T13:22:38Z
Storage Type raft
Removed From Cluster false
HA Enabled true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the &lt;code&gt;vault status&lt;/code&gt; command to view the current state of the cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@vault-server-1:/opt/vault/tls# vault status
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.20.4
Build Date 2025-09-23T13:22:38Z
Storage Type raft
Cluster Name vault-cluster-edcc5476
Cluster ID 130c35ea-023d-c34e-4d69-c217fa974d04
Removed From Cluster false
HA Enabled true
HA Cluster https://192.168.64.5:8201
HA Mode active
Active Since 2025-09-28T19:11:36.208910265Z
Raft Committed Index 37
Raft Applied Index 37

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verify Cluster
&lt;/h2&gt;

&lt;p&gt;Check Raft peers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault operator raft list-peers

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output identifies the current leader:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@vault-server-1:/opt/vault/tls# vault operator raft list-peers
Node        Address             State  Voter
vault-server-1  192.168.64.5:8201  leader  true
vault-server-2  192.168.64.6:8201  follower  true
vault-server-3  192.168.64.7:8201  follower  true
root@vault-server-1:/opt/vault/tls#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
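&lt;p&gt;You can also poll each node's health endpoint directly. An active leader returns HTTP 200, standbys return 429, and a sealed node returns 503 (a sketch; adjust the IPs and CA path to your environment):&lt;/p&gt;

```shell
# 200 = active (leader), 429 = standby, 503 = sealed or uninitialized
NODES="192.168.64.5 192.168.64.6 192.168.64.7"
for node in $NODES; do
  code=$(curl -s -o /dev/null --connect-timeout 2 -w "%{http_code}" \
    --cacert /opt/vault/tls/vault-ca.pem \
    "https://${node}:8200/v1/sys/health?standbyok=false" || true)
  echo "${node}: HTTP ${code}"
done
```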



&lt;p&gt;Open the Vault UI and log in with the root token (generated alongside the unseal keys during initialization) to further confirm the cluster status.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwq4m3vr1q9o813dmguy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwq4m3vr1q9o813dmguy.png" alt="UI Login page" width="800" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Enable KV Secrets Engine (Version 2)
&lt;/h2&gt;

&lt;p&gt;Run the command below on the leader node:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault secrets enable -path=secret kv-v2

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a key-value pair:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault kv put secret/demo event="HashiTalksAfrica" year="2025" location="Online"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault kv get secret/demo

===== Secret Path =====
secret/data/demo

======= Metadata =======
Key                Value
---                -----
created_time       2025-10-04T07:42:19.123456Z
custom_metadata    &amp;lt;nil&amp;gt;
deletion_time      n/a
destroyed          false
version            1

========== Data ==========
Key         Value
---         -----
event       HashiTalksAfrica
year        2025
location    Online
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46h2dzfmj8puntb990dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46h2dzfmj8puntb990dw.png" alt="Showing Secret Created on Vault UI " width="800" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Automate Vault Snapshot Backups to MinIO
&lt;/h2&gt;

&lt;p&gt;Follow the link to &lt;a href="https://docs.min.io/community/minio-object-store/operations/deployments/baremetal-deploy-minio-on-ubuntu-linux.html" rel="noopener noreferrer"&gt;install MinIO on a server&lt;/a&gt; (as a container or a systemd service), or use AWS S3 if you prefer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;root@vault-server-2:~# ssh root@192.168.64.4
The authenticity of host '192.168.64.4 (192.168.64.4)' can't be established.
ECDSA key fingerprint is SHA256:Ld5Gu7HBMnLLTYhVQzFvuYfPODcUwEZOHLBKXXXXXX.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.64.4' (ECDSA) to the list of known hosts.
root@192.168.64.4's password: 

root@minio-server:~# mkdir /opt/minio
root@minio-server:~# cd /opt/minio

# Download the latest MinIO binary for Linux/amd64
root@minio-server:/opt/minio# wget https://dl.min.io/server/minio/release/linux-amd64/minio
--2025-10-08 06:46:00--  https://dl.min.io/server/minio/release/linux-amd64/minio
Resolving dl.min.io (dl.min.io)... 178.128.69.202, 138.68.11.125
Connecting to dl.min.io (dl.min.io)|178.128.69.202|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 74123264 (71M) [application/octet-stream]
Saving to: 'minio'

minio                   100%[=======================] 70.71M  4.14MB/s    in 17s     

2025-10-08 06:46:17 (4.14 MB/s) - 'minio' saved [74123264/74123264]

# Make the binary executable
root@minio-server:/opt/minio# chmod +x minio

# Create a systemd service file for MinIO
root@minio-server:/opt/minio# cat &amp;gt; /etc/systemd/system/minio.service &amp;lt;&amp;lt;EOF
[Unit]
Description=MinIO
Documentation=https://docs.min.io
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/opt/minio/minio

[Service]
WorkingDirectory=/opt/minio/
ExecStart=/opt/minio/minio server /opt/minio/data
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

# Start and enable the MinIO service
root@minio-server:/opt/minio# systemctl enable minio
Created symlink /etc/systemd/system/multi-user.target.wants/minio.service → /etc/systemd/system/minio.service.
root@minio-server:/opt/minio# systemctl start minio

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Install the MinIO client (&lt;em&gt;mc&lt;/em&gt;) on all Vault servers.&lt;/strong&gt; This client lets each Vault server run &lt;em&gt;mc&lt;/em&gt; commands against the MinIO server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Download the correct ARM64 version
wget https://dl.min.io/client/mc/release/linux-arm64/mc -O /usr/local/bin/mc

# Make it executable
chmod +x /usr/local/bin/mc

# Verify installation
mc --version
-- 2025-10-05 06:40:05 -- https://dl.min.io/client/mc/release/linux-arm64/mc
Resolving dl.min.io (dl.min.io)... 178.128.69.202, 138.68.11.125
Connecting to dl.min.io (dl.min.io)|178.128.69.202|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28704952 (27M) [application/octet-stream]
Saving to: '/usr/local/bin/mc'

/usr/local/bin/mc        100%[=======================]  27.37M  2.71MB/s    in 10s     

2025-10-05 06:40:16 (2.71 MB/s) - '/usr/local/bin/mc' saved [28704952/28704952]

mc version RELEASE.2025-08-13T08-35-41Z (commit-id=7394ce0dd2a80935aded936b09fa12cbb3cb8096)
Runtime: go1.24.6 linux/arm64
Copyright (c) 2015-2025 MinIO, Inc.
License GNU AGPLv3 &amp;lt;https://www.gnu.org/licenses/agpl-3.0.html&amp;gt;
root@vault-server-2:~#

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Add MinIO endpoint alias on all vault servers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg1yc0rj3ykpyb91qte6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsg1yc0rj3ykpyb91qte6.png" alt=" " width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;
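&lt;p&gt;The alias step shown above, as a command. The &lt;em&gt;minioadmin&lt;/em&gt; credentials below are MinIO's defaults and stand in as placeholders; substitute the access and secret keys of your own deployment:&lt;/p&gt;

```shell
# Point the "myminio" alias at the MinIO server so mc commands can reach it.
MINIO_ENDPOINT="http://192.168.64.4:9000"
if command -v mc >/dev/null; then
  if mc alias set myminio "$MINIO_ENDPOINT" minioadmin minioadmin; then
    mc alias list myminio
  fi
fi
```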

&lt;p&gt;Create a MinIO bucket named &lt;em&gt;vault&lt;/em&gt; using the command below.&lt;br&gt;
PS: You can run the command on a Vault node or on the MinIO server; you can also create the bucket from the UI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the bucket
root@vault-server-1:~# mc mb myminio/vault
root@vault-server-1:~# mc mb myminio/vault
[2025-28-08 20:55:07 UTC]   0B vault

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5: Verify Cluster Backup
&lt;/h2&gt;

&lt;p&gt;First, create a backup manually and push it to MinIO. Then automate the process with a shell script (&lt;em&gt;vault-backup-to-minio.sh&lt;/em&gt;), a &lt;em&gt;systemd service&lt;/em&gt;, and a &lt;em&gt;systemd timer&lt;/em&gt; that triggers the service at predefined intervals.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nmpxgn5vzf102zrj40b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5nmpxgn5vzf102zrj40b.png" alt="Create Backup file" width="800" height="108"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjh2q2d2sg6utuh7mv68o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjh2q2d2sg6utuh7mv68o.png" alt="Push to MinIO" width="800" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv924wqzoiis7y0wky60z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv924wqzoiis7y0wky60z.png" alt="Verify from MinIO UI Page" width="800" height="306"&gt;&lt;/a&gt;&lt;/p&gt;
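&lt;p&gt;The manual steps in the screenshots above boil down to a snapshot and a copy, run on the leader node (the snapshot file name here is illustrative):&lt;/p&gt;

```shell
# Take a Raft snapshot and push it to the vault bucket on MinIO.
# Requires a logged-in vault CLI and the mc alias configured earlier.
SNAP="/opt/vault/backups/vault-manual-$(date +%F-%H%M%S).snap"
if command -v vault >/dev/null; then
  if vault operator raft snapshot save "$SNAP"; then
    mc cp "$SNAP" myminio/vault/
    mc ls myminio/vault/
  fi
fi
```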

&lt;p&gt;&lt;strong&gt;Automating Backup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Backup Script — /usr/local/bin/vault-backup-to-minio.sh&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PS:&lt;/strong&gt; Configure the steps below on all the Vault instances.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env bash
set -euo pipefail

#########################CONFIGURATION########################
VAULT_ADDR="${VAULT_ADDR:-https://192.168.64.5:8200}"
VAULT_CACERT="${VAULT_CACERT:-/opt/vault/tls/vault-ca.pem}"
BACKUP_DIR="${BACKUP_DIR:-/opt/vault/backups}"
MINIO_ALIAS="${MINIO_ALIAS:-myminio}"
MINIO_BUCKET="${MINIO_BUCKET:-vault}"
RETENTION_DAYS="${RETENTION_DAYS:-14}"
LOGFILE="${LOGFILE:-/var/log/vault-backup.log}"
MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-$(cat /root/.accesskey)}
MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-$(cat /root/.secretkey)}
###############################################################

export MC_CONFIG_DIR=/opt/vault/mc-config
mkdir -p "$MC_CONFIG_DIR"

mc alias set "${MINIO_ALIAS}" http://192.168.64.4:9000 "${MINIO_ACCESS_KEY}" "${MINIO_SECRET_KEY}"

mkdir -p "$BACKUP_DIR"
touch "$LOGFILE"
# restrict log file permissions
chmod 600 "$LOGFILE"

# append so log history is preserved across runs
echo "$(date -u '+%F %T') INFO: starting vault backup" &amp;gt;&amp;gt; "$LOGFILE"

# 1) Check that this node is active (leader).
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --cacert "$VAULT_CACERT" "${VAULT_ADDR}/v1/sys/health?standbyok=false" || echo "000")

if [ "$HTTP_CODE" -eq 200 ]; then
  echo "$(date) INFO: this node is active (HTTP 200)" &amp;gt;&amp;gt; "$LOGFILE"
elif [ "$HTTP_CODE" -eq 429 ]; then
  echo "$(date) INFO: this node is standby (HTTP 429) - exiting" &amp;gt;&amp;gt; "$LOGFILE"
  exit 1
elif [ "$HTTP_CODE" -eq 503 ]; then
  echo "$(date) ERROR: vault sealed or not initialized (HTTP 503) - exiting" &amp;gt;&amp;gt; "$LOGFILE"
  exit 1
else
  echo "$(date) ERROR: unexpected vault health HTTP code: $HTTP_CODE - exiting" &amp;gt;&amp;gt; "$LOGFILE"
  exit 1
fi

# 2) Create snapshot
TIMESTAMP=$(date +'%F-%H%M%S')
SNAPFILE="${BACKUP_DIR}/vault-${TIMESTAMP}.snap"
echo "$(date) INFO: creating snapshot $SNAPFILE" &amp;gt;&amp;gt; "$LOGFILE"

if vault operator raft snapshot save "$SNAPFILE"; then
  echo "$(date) INFO: snapshot created: $SNAPFILE" &amp;gt;&amp;gt; "$LOGFILE"
else
  echo "$(date) ERROR: snapshot creation failed" &amp;gt;&amp;gt; "$LOGFILE"
  exit 2
fi

# 3) Compress (saves bandwidth)
if gzip -f "$SNAPFILE"; then
  SNAPFILE="${SNAPFILE}.gz"
  echo "$(date) INFO: compressed snapshot to $SNAPFILE" &amp;gt;&amp;gt; "$LOGFILE"
else
  echo "$(date) WARN: gzip failed, continuing with uncompressed snap" &amp;gt;&amp;gt; "$LOGFILE"
fi

# 4) Upload to MinIO
echo "$(date) INFO: uploading $SNAPFILE to ${MINIO_ALIAS}/${MINIO_BUCKET}/" &amp;gt;&amp;gt; "$LOGFILE"
if mc cp "$SNAPFILE" "${MINIO_ALIAS}/${MINIO_BUCKET}/"; then
  echo "$(date) INFO: upload successful" &amp;gt;&amp;gt; "$LOGFILE"
else
  echo "$(date) ERROR: upload to MinIO failed" &amp;gt;&amp;gt; "$LOGFILE"
  exit 3
fi

# 5) Local retention - delete local files older than RETENTION_DAYS
echo "$(date) INFO: removing local snapshots older than ${RETENTION_DAYS} days" &amp;gt;&amp;gt; "$LOGFILE"
find "$BACKUP_DIR" -type f -name 'vault-*.snap*' -mtime +"$RETENTION_DAYS" -print -delete &amp;gt;&amp;gt; "$LOGFILE" 2&amp;gt;&amp;amp;1 || true

echo "$(date) INFO: vault backup completed" &amp;gt;&amp;gt; "$LOGFILE"
exit 0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make it executable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod +x /usr/local/bin/vault-backup-to-minio.sh

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Systemd Unit:&lt;/strong&gt; /etc/systemd/system/vault-backup.service&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Vault Raft Snapshot Backup to MinIO
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/vault-backup-to-minio.sh
User=root
Group=root
# Protect system a bit more
ProtectSystem=full
ProtectHome=true
PrivateTmp=true
NoNewPrivileges=true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Timer:&lt;/strong&gt; /etc/systemd/system/vault-backup.timer&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Run Vault backup script every 6 hours

[Timer]
# every 6 hours on the hour (*:0/6 would mean every 6 minutes;
# systemd also does not allow inline comments after values)
OnCalendar=*-*-* 0/6:00:00
# if a run was missed while the machine was down, run at boot
Persistent=true

[Install]
WantedBy=timers.target

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enable and start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl daemon-reload
systemctl enable --now vault-backup.timer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
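&lt;p&gt;You can confirm the schedule took effect with systemd's own tooling; a quick check, nothing Vault-specific:&lt;/p&gt;

```shell
# Show the next and last trigger times of the backup timer,
# then the most recent log lines from the service it drives.
TIMER_UNIT="vault-backup.timer"
if command -v systemctl >/dev/null; then
  systemctl list-timers "$TIMER_UNIT" --no-pager || true
  journalctl -u vault-backup.service -n 20 --no-pager || true
fi
```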



&lt;h2&gt;
  
  
  Step 6: Delete Secrets and Restore from MinIO
&lt;/h2&gt;

&lt;p&gt;Delete the secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault kv delete -versions=1 secret/demo
vault kv metadata delete secret/demo
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download snapshot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mc cp myminio/vault-backups/vault-snapshots/vault-2025-10-06-064537.snap.gz /opt/vault/backups/

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unzip and restore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gunzip vault-2025-10-06-064537.snap.gz
vault operator raft snapshot restore vault-2025-10-06-064537.snap

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify the secret is restored:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vault kv get secret/demo

===== Secret Path =====
secret/data/demo

======= Metadata =======
Key                Value
---                -----
created_time       2025-10-04T07:42:19.123456Z
custom_metadata    &amp;lt;nil&amp;gt;
deletion_time      n/a
destroyed          false
version            1

========== Data ==========
Key         Value
---         -----
event       HashiTalksAfrica
year        2025
location    Online
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
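&lt;p&gt;If you want to assert the restore in a script rather than by eye, &lt;code&gt;vault kv get -format=json&lt;/code&gt; is the robust route; as a rough sketch, the table output above can also be parsed directly:&lt;/p&gt;

```python
def parse_kv_data(output: str) -> dict:
    """Extract the Data section of `vault kv get` table output."""
    data, in_data = {}, False
    for line in output.splitlines():
        stripped = line.strip()
        if stripped.startswith("=") and "Data" in stripped:
            in_data = True  # everything after this header is secret data
            continue
        if in_data:
            parts = stripped.split(None, 1)
            if len(parts) == 2 and parts[0] not in ("Key", "---"):
                data[parts[0]] = parts[1].strip()
    return data
```

&lt;p&gt;For the Data section above, this yields &lt;code&gt;{'event': 'HashiTalksAfrica', 'year': '2025', 'location': 'Online'}&lt;/code&gt;.&lt;/p&gt;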



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By combining Vault’s integrated storage (Raft) with MinIO, you can build a highly available, resilient Vault setup, entirely with open-source tools and at minimal cost.&lt;/p&gt;

&lt;p&gt;This approach ensures that even if all Vault nodes fail, you can restore your data from a secure snapshot stored in MinIO and return to an operational state in minutes. It’s a practical solution for teams running on-premises or in budget-conscious environments, where Vault Enterprise may not be feasible.&lt;/p&gt;

&lt;p&gt;However, while this setup provides HA and data durability, there’s one more layer to make it truly production-ready:&lt;/p&gt;

&lt;h2&gt;
  
  
  Recommendation
&lt;/h2&gt;

&lt;p&gt;Place a load balancer (such as Nginx, HAProxy, or an L4 balancer like Keepalived) in front of your Vault cluster.&lt;br&gt;
The load balancer should:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuously check the health and leadership status of Vault nodes using the &lt;code&gt;/v1/sys/health&lt;/code&gt; or &lt;code&gt;/v1/sys/leader&lt;/code&gt; endpoint.&lt;/li&gt;
&lt;li&gt;Forward client traffic only to the current leader node, since Vault accepts writes exclusively on the leader.&lt;/li&gt;
&lt;li&gt;Automatically re-route requests to the new leader after failover, ensuring seamless continuity for applications and users.&lt;/li&gt;
&lt;/ul&gt;
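&lt;p&gt;These checks map directly onto an HAProxy backend. A minimal sketch, with placeholder node addresses: by default &lt;code&gt;/v1/sys/health&lt;/code&gt; returns 200 only on the active node and 429 on standbys, so only the leader passes the health check:&lt;/p&gt;

```text
# /etc/haproxy/haproxy.cfg (sketch; node addresses are placeholders)
frontend vault_front
    bind *:8200
    mode http
    default_backend vault_back

backend vault_back
    mode http
    # Active node answers 200; standbys answer 429 and are marked down
    option httpchk GET /v1/sys/health
    http-check expect status 200
    server vault1 10.0.1.11:8200 check
    server vault2 10.0.1.12:8200 check
    server vault3 10.0.1.13:8200 check
```

&lt;p&gt;With TLS-enabled Vault listeners you would add &lt;code&gt;ssl&lt;/code&gt; and CA options to each &lt;code&gt;server&lt;/code&gt; line; after a failover, the health check marks the new leader up automatically.&lt;/p&gt;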

&lt;p&gt;This way, your Vault deployment achieves:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;High Availability&lt;/li&gt;
&lt;li&gt;Data Durability&lt;/li&gt;
&lt;li&gt;Automatic Failover&lt;/li&gt;
&lt;li&gt;Seamless Client Experience&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With this foundation, you now have a &lt;strong&gt;production-grade&lt;/strong&gt;, &lt;strong&gt;open-source&lt;/strong&gt; Vault cluster that’s &lt;strong&gt;cost-effective&lt;/strong&gt;, &lt;strong&gt;recoverable&lt;/strong&gt;, and &lt;strong&gt;resilient&lt;/strong&gt; against both node and data failures.&lt;/p&gt;

</description>
      <category>hashicorpvault</category>
      <category>hashitalkafrica2025</category>
      <category>awss3</category>
    </item>
    <item>
      <title>Using IaC (Terraform) to Deploy a FastAPI Microservice on EKS with Datadog Monitoring</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Mon, 17 Mar 2025 21:42:27 +0000</pubDate>
      <link>https://forem.com/sirlawdin/using-iac-terraform-to-deploy-a-fastapi-microservice-on-eks-with-for-datadog-monitoring-16b</link>
      <guid>https://forem.com/sirlawdin/using-iac-terraform-to-deploy-a-fastapi-microservice-on-eks-with-for-datadog-monitoring-16b</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Modern cloud applications need &lt;strong&gt;scalability, observability, and automation&lt;/strong&gt;. Manually deploying infrastructure is outdated; instead, we use &lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt; for reproducibility and &lt;strong&gt;monitoring tools&lt;/strong&gt; to keep track of application health.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll build and deploy a &lt;strong&gt;FastAPI microservice&lt;/strong&gt; on &lt;strong&gt;Amazon EKS (Elastic Kubernetes Service)&lt;/strong&gt; using &lt;strong&gt;Terraform&lt;/strong&gt;, while integrating &lt;strong&gt;Datadog APM (Application Performance Monitoring)&lt;/strong&gt; for real-time tracing and insights.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Stack?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FastAPI&lt;/strong&gt;  – A lightweight, high-performance web framework for APIs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EKS (Elastic Kubernetes Service)&lt;/strong&gt; ☁️ – Managed Kubernetes for deploying containerized workloads.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; 🏗️ – Automates infrastructure provisioning using declarative code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Datadog&lt;/strong&gt; 🐶 – Provides monitoring, logging, and distributed tracing for better observability.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/Sirlawdin/fastapi-iac-monitoring" rel="noopener noreferrer"&gt;project&lt;/a&gt; is structured as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;eks-datadog-tracing/
│── .gitignore              # Git ignore file
│── .terraform.lock.hcl     # Terraform lock file
│── main.tf                 # Main Terraform configuration
│── outputs.tf              # Terraform outputs
│── providers.tf            # Terraform providers setup
│── README.md               # Project documentation
│── terraform.tfvars        # Terraform variable values
│── variables.tf            # Terraform variables
│
├── app/                    # Application source code
│   ├── Dockerfile          # Docker configuration for the application
│   ├── main.py             # FastAPI application source code
│   ├── requirements.txt    # Dependencies for the application
│
└── modules/                # Modular Terraform configurations
    ├── application/        # Application-specific module
    │   ├── main.tf         # Terraform configuration for the application module
    │   ├── outputs.tf      # Outputs for the application module
    │   ├── ReadMe.md       # Documentation for the module
    │   ├── variables.tf    # Variables for the module
    │
    ├── datadog/            # Datadog monitoring and logging module
    │   ├── datadog.tf      # Datadog configuration
    │   ├── datadog_dashboard.tf  # Datadog dashboards setup
    │   ├── datadog_metric.tf     # Datadog metrics configuration
    │   ├── outputs.tf      # Outputs for the Datadog module
    │   ├── variables.tf    # Variables for the Datadog module
    │
    ├── eks/                # EKS cluster module
    │   ├── main.tf         # Terraform configuration for EKS
    │   ├── outputs.tf      # Outputs for the EKS module
    │   ├── variables.tf    # Variables for the EKS module
    │
    ├── vpc/                # VPC networking module
    │   ├── main.tf         # Terraform configuration for VPC
    │   ├── outputs.tf      # Outputs for the VPC module
    │   ├── variables.tf    # Variables for the VPC module

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;Terraform provisions an EKS cluster and networking infrastructure.&lt;br&gt;
The FastAPI application is containerized with Docker and deployed to EKS.&lt;br&gt;
Datadog APM is integrated to collect real-time traces and metrics.&lt;br&gt;
Observability is improved with Datadog dashboards, metrics, and alerts.&lt;/p&gt;

&lt;p&gt;Building the Infrastructure with Terraform&lt;br&gt;
1️⃣ Provisioning the VPC&lt;br&gt;
The VPC module creates the necessary networking resources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "vpc" {
  source              = "./modules/vpc"
  vpc_name            = var.vpc_name
  vpc_cidr_block      = var.vpc_cidr_block
  vpc_private_subnets = var.vpc_private_subnets
  vpc_public_subnets  = var.vpc_public_subnets
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ Deploying the EKS Cluster&lt;br&gt;
We use the EKS module to create a Kubernetes cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "eks" {
  source = "./modules/eks"

  k8s_name          = var.k8s_name
  vpc_id            = module.vpc.vpc_id
  subnet_ids        = module.vpc.private_subnet_ids
  cluster_version   = "1.24"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run Terraform commands to deploy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once completed, EKS is ready to deploy our FastAPI app.&lt;/p&gt;

&lt;p&gt;Deploying the FastAPI App on EKS&lt;br&gt;
1️⃣ Writing the FastAPI Application&lt;br&gt;
Our FastAPI app (app/main.py) exposes several endpoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from fastapi import FastAPI
import time
import random
from ddtrace import tracer

app = FastAPI()

@tracer.wrap()
@app.get("/")
def hello():
    return {"message": "Hello, World!"}

@tracer.wrap()
@app.get("/slow")
def slow_function():
    time.sleep(2)
    return {"message": "This function is slow!"}

@tracer.wrap()
@app.get("/cpu-intensive")
def cpu_intensive():
    total = sum(i * i for i in range(10**6))
    return {"message": "CPU-intensive task completed!"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ Containerizing the Application&lt;br&gt;
We package our app with a Dockerfile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build and push the image to a container registry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t your-docker-repo/fastapi-app:latest .
docker push your-docker-repo/fastapi-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3️⃣ Deploying to Kubernetes&lt;br&gt;
Create a Kubernetes deployment file (k8s/deployment.yaml):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: fastapi
  template:
    metadata:
      labels:
        app: fastapi
    spec:
      containers:
        - name: fastapi
          image: your-docker-repo/fastapi-app:latest
          ports:
            - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Apply the deployment:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f k8s/deployment.yaml&lt;/code&gt;&lt;/p&gt;
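&lt;p&gt;Note that the Deployment alone is not reachable from outside the cluster. A Service of type LoadBalancer (sketched below, matching the &lt;code&gt;app: fastapi&lt;/code&gt; label from the manifest) is what provides the external &lt;code&gt;your-app-url&lt;/code&gt; used later:&lt;/p&gt;

```yaml
# k8s/service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  type: LoadBalancer      # provisions an AWS load balancer on EKS
  selector:
    app: fastapi          # matches the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 80
```

&lt;p&gt;Apply it with &lt;code&gt;kubectl apply -f k8s/service.yaml&lt;/code&gt; and read the external hostname from &lt;code&gt;kubectl get svc fastapi-service&lt;/code&gt;.&lt;/p&gt;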

&lt;p&gt;Monitoring with Datadog&lt;br&gt;
1️⃣ Setting Up Datadog APM&lt;br&gt;
We configure Datadog in main.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
from ddtrace import tracer

# Point the tracer at the Datadog Agent Service in the "datadog" namespace
tracer.configure(
    hostname="datadog-agent.datadog",
    port=8126,  # default APM trace intake port
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ Enabling Logs and Metrics&lt;br&gt;
We define Datadog metrics in Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "datadog_monitor" "high_latency" {
  name    = "High Latency Alert"
  type    = "query alert"
  query   = "avg(last_5m):avg:trace.http.request.duration{service:fastapi-app} &amp;gt; 500"
  message = "Alert! API response time is too high!"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3️⃣ Viewing Metrics&lt;br&gt;
Once deployed, log in to Datadog and navigate to:&lt;/p&gt;

&lt;p&gt;APM &amp;gt; Services – View real-time traces.&lt;br&gt;
Metrics &amp;gt; Dashboards – Monitor CPU, latency, and traffic.&lt;/p&gt;

&lt;p&gt;Testing the API&lt;br&gt;
To check if everything is running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Basic "Hello World"
curl http://your-app-url/

# Simulate slow responses
curl http://your-app-url/slow
curl http://your-app-url/random-delay

# Heavy load endpoints
curl http://your-app-url/cpu-intensive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also view Swagger UI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://your-app-url/docs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
By following this setup, we successfully: &lt;br&gt;
✅ Deployed FastAPI as a microservice&lt;br&gt;
✅ Used Terraform to provision EKS and infrastructure&lt;br&gt;
✅ Integrated Datadog for tracing, logging, and monitoring&lt;/p&gt;

&lt;p&gt;This approach ensures scalability, observability, and automation, making it ideal for production environments.&lt;/p&gt;

&lt;p&gt;If you're interested in extending this, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add autoscaling policies for Kubernetes pods&lt;/li&gt;
&lt;li&gt;Implement Datadog alerts for anomaly detection&lt;/li&gt;
&lt;li&gt;Enable log aggregation using Fluentd&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Feel free to contribute!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>datadog</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>What's Next After Passing the AWS Cloud Practitioner and Solutions Architect Exams</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Mon, 18 Nov 2024 07:14:52 +0000</pubDate>
      <link>https://forem.com/sirlawdin/whats-next-after-passing-the-aws-cloud-practitioner-and-solution-architect-exam-255m</link>
      <guid>https://forem.com/sirlawdin/whats-next-after-passing-the-aws-cloud-practitioner-and-solution-architect-exam-255m</guid>
      <description>&lt;p&gt;Congratulations if you just passed your AWS Cloud Practitioner (CCP) and the AWS Solutions Architect Associate (SAA) exam.&lt;br&gt;
Here’s a structured path to help you grow further and leverage your new skills effectively:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Share Your Knowledge Through Content Creation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Start creating content based on what you’ve learned in the program, no matter how simple or basic the topic may seem. Remember, there’s always a demand for beginner-friendly "Cloud 101" resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ideas for Content:&lt;/strong&gt; Write blog posts, create YouTube videos, or share threads on social media about AWS concepts, services, or technologies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benefits:&lt;/strong&gt; This not only reinforces your learning but also helps build your online presence and credibility in the cloud space.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Join the AWS Community Builders Program&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;After building a content portfolio, join the &lt;a href="https://pulse.aws/application/BM2AKLSX" rel="noopener noreferrer"&gt;AWS Community Builders Program waitlist&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages of Being an AWS Community Builder:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exclusive access to AWS training and resources.&lt;/li&gt;
&lt;li&gt;Networking opportunities with cloud enthusiasts and industry professionals.&lt;/li&gt;
&lt;li&gt;Recognition in the AWS community, which can boost your career.&lt;/li&gt;
&lt;li&gt;Swags, AWS Credit, and Voucher to take other AWS Certification Exams.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How Your Content Helps:&lt;/strong&gt; Having published content on AWS technologies demonstrates your passion and commitment, increasing your chances of being accepted into the program.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Work on Personal Projects&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Apply your knowledge by building hands-on projects. Personal projects not only deepen your understanding but also make you stand out when applying for jobs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Get Inspiration:&lt;/strong&gt; Check out this &lt;a href="https://www.youtube.com/watch?v=zA8guDqfv40" rel="noopener noreferrer"&gt;YouTube video&lt;/a&gt; on project ideas to kickstart your journey.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Examples:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a simple web application hosted on AWS.&lt;/li&gt;
&lt;li&gt;Implement a serverless architecture with AWS Lambda and API Gateway.&lt;/li&gt;
&lt;li&gt;Set up an automated CI/CD pipeline using AWS CodePipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Start Applying for Jobs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffh881yb8lm5m2uewoigl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffh881yb8lm5m2uewoigl.png" alt=" " width="606" height="765"&gt;&lt;/a&gt;&lt;br&gt;
While working on projects, begin applying for roles such as Cloud Engineer, DevOps Engineer, or Solutions Architect. Highlight:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your certifications.&lt;/li&gt;
&lt;li&gt;Content you've created.&lt;/li&gt;
&lt;li&gt;Hands-on project experience.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>awscommunitybuilder</category>
      <category>awscertification</category>
    </item>
    <item>
      <title>Using CloudFormation to deploy a web app with HA</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Sun, 17 Nov 2024 15:21:29 +0000</pubDate>
      <link>https://forem.com/sirlawdin/using-cloudformation-to-deploy-a-web-app-with-ha-3ed9</link>
      <guid>https://forem.com/sirlawdin/using-cloudformation-to-deploy-a-web-app-with-ha-3ed9</guid>
      <description>&lt;h2&gt;
  
  
  INTRODUCTION
&lt;/h2&gt;

&lt;p&gt;In this post, I am thrilled to share an exciting project I had the opportunity to work on. I embarked on a journey that not only expanded my knowledge but also empowered me to apply cutting-edge cloud computing and DevOps practices. Specifically, this project delved into &lt;em&gt;Infrastructure as Code&lt;/em&gt; with &lt;strong&gt;AWS CloudFormation&lt;/strong&gt;, providing me with invaluable hands-on experience in building scalable cloud infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  STATEMENT OF PROBLEM
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario&lt;/strong&gt;&lt;br&gt;
Your company is creating an Instagram clone.&lt;/p&gt;

&lt;p&gt;Developers want to deploy a new application to the AWS infrastructure.&lt;/p&gt;

&lt;p&gt;You have been tasked with provisioning the required infrastructure and deploying a dummy application, along with the necessary supporting software.&lt;/p&gt;

&lt;p&gt;This needs to be automated so that the infrastructure can be discarded as soon as the testing team finishes their tests and gathers their results.&lt;/p&gt;

&lt;p&gt;Optional: for an added challenge, once the project is complete, try deploying sample website files from a public S3 bucket to the Apache web server running on an EC2 instance.&lt;/p&gt;


&lt;h2&gt;
  
  
  Server specs
&lt;/h2&gt;

&lt;p&gt;A launch configuration was created for the application servers to deploy four servers, two in each of the private subnets, managed by an auto-scaling group. Each server uses two vCPUs and 4&amp;nbsp;GB of RAM, running Ubuntu 18. An instance size and machine image (AMI) that best fit this spec were chosen.&lt;/p&gt;


&lt;h2&gt;
  
  
  MY SOLUTION:
&lt;/h2&gt;
&lt;h3&gt;
  
  
  Architecture Diagram
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqm6kyswz3bbjzhzznkfe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqm6kyswz3bbjzhzznkfe.png" alt="WebApp Diagram" width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is the content of the parameter files and configuration files for the network infrastructure, S3 buckets, and servers (EC2 instances).&lt;/p&gt;
&lt;h4&gt;
  
  
  Network Parameters
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;network.json&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
    {
    "ParameterKey": "EnvironmentName",
    "ParameterValue": "UdacityProject"
    },
    {
    "ParameterKey": "VPCCIDR",
    "ParameterValue": "10.0.0.0/16"
    },
    {
    "ParameterKey": "PubSubnet1CIDR",
    "ParameterValue": "10.0.1.0/24"
    },
    {
    "ParameterKey": "PubSubnet2CIDR",
    "ParameterValue": "10.0.2.0/24"
    },
    {
    "ParameterKey": "PrivSubnet1CIDR",
    "ParameterValue": "10.0.3.0/24"
    },
    {
    "ParameterKey": "PrivSubnet2CIDR",
    "ParameterValue": "10.0.4.0/24"
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
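&lt;p&gt;A quick sanity check worth running before creating the stack: every subnet CIDR must fall inside the VPC CIDR. A small standard-library sketch using the values above:&lt;/p&gt;

```python
import ipaddress

def all_within(vpc_cidr: str, subnet_cidrs) -> bool:
    """True if every subnet block is contained in the VPC block."""
    vpc = ipaddress.ip_network(vpc_cidr)
    return all(ipaddress.ip_network(c).subnet_of(vpc) for c in subnet_cidrs)

subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24", "10.0.4.0/24"]
print(all_within("10.0.0.0/16", subnets))  # True
```

&lt;p&gt;The same check catches a typo like a &lt;code&gt;192.168.x.x&lt;/code&gt; subnet pasted into a &lt;code&gt;10.0.0.0/16&lt;/code&gt; VPC before CloudFormation rejects it.&lt;/p&gt;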



&lt;h4&gt;
  
  
  S3 bucket Parameters
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;s3bucket.json&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[{
    "ParameterKey": "EnvironmentName",
    "ParameterValue": "UdacityProject"
},
{
    "ParameterKey": "S3BucketName",
    "ParameterValue": "udacityprojects3webserverbucket"
}
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Server (EC2 Instance) Parameters
&lt;/h4&gt;

&lt;p&gt;&lt;em&gt;servers.json&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
    {
    "ParameterKey": "EnvironmentName",
    "ParameterValue": "UdacityProject"
    }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Network Configuration
&lt;/h4&gt;

&lt;p&gt;The &lt;em&gt;network.yaml&lt;/em&gt; file below creates a network infrastructure with public and private subnets, routing, and internet access. It includes parameters for customizing the environment, VPC CIDR, and subnet CIDR blocks. The code creates resources such as VPC, internet gateway, subnets (public and private), NAT gateways, and route tables. Outputs are defined to export important values like VPC ID, route table IDs, and subnet IDs. This CloudFormation template enables the creation of a network setup suitable for routing internet traffic to both public and private subnets.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;network.yaml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: "2010-09-09"
Description: Creates the required network infrastructure for public and private routing with internet access
Parameters: 
  EnvironmentName: 
    Description: An Environment name that will be prefixed to resources
    Type: String
  VPCCIDR:
    Type: String
  PrivSubnet1CIDR:
    Type: String
  PrivSubnet2CIDR:
    Type: String
  PubSubnet1CIDR:
    Type: String
  PubSubnet2CIDR:
    Type: String

Resources:
  myVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VPCCIDR
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: "MainVPC"

# Create Internet Gateway
  InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: !Ref EnvironmentName

# Attach the Internet Gateway to myVPC
  InternetGatewayAttached:    
    Type: AWS::EC2::VPCGatewayAttachment
    Properties: 
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref myVPC

# Creating Public and Private Subnets in the same availability zone (us-east-1a).
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: "us-east-1a"
      CidrBlock: !Ref PubSubnet1CIDR
      VpcId:
        Ref: myVPC
      MapPublicIpOnLaunch: true
      Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Public Subnet (AZ1)

  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId:
        Ref: myVPC
      CidrBlock: !Ref PrivSubnet1CIDR
      MapPublicIpOnLaunch: false
      AvailabilityZone: "us-east-1a"
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName} Private Subnet (AZ1)


# Creating Public and Private Subnets in the same availability zone (us-east-1b).
  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: "us-east-1b"
      CidrBlock: !Ref PubSubnet2CIDR
      VpcId:
        Ref: myVPC
      MapPublicIpOnLaunch: true
      Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Public Subnet (AZ2)

  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId:
        Ref: myVPC
      CidrBlock: !Ref PrivSubnet2CIDR
      MapPublicIpOnLaunch: false
      AvailabilityZone: "us-east-1b"
      Tags:
        - Key: Name
          Value: !Sub ${EnvironmentName} Private Subnet (AZ2)

# Elastic IP for the NATGateway in Subnet1
  EIP1:
    Type: AWS::EC2::EIP
    DependsOn: InternetGatewayAttached
    Properties:
        Domain: vpc
        Tags:
        - Key: Name
          Value: "Elastic IP for our NATGateway1"
  EIP2:
    Type: AWS::EC2::EIP
    DependsOn: InternetGatewayAttached
    Properties:
        Domain: vpc
        Tags:
        - Key: Name
          Value: "Elastic IP for our NATGateway2"

# Creating NAT gateway in publicsubnet1
  NAT1:
    Type: AWS::EC2::NatGateway
    Properties:
        AllocationId:
          Fn::GetAtt:
          - EIP1
          - AllocationId
        SubnetId: !Ref PublicSubnet1
        Tags:
        - Key: Name
          Value: "NAT to be used by servers in the private subnet"
  NAT2:
    Type: AWS::EC2::NatGateway
    Properties:
        AllocationId:
          Fn::GetAtt:
          - EIP2
          - AllocationId
        SubnetId: !Ref PublicSubnet2
        Tags:
        - Key: Name
          Value: "NAT to be used by servers in the private subnet"



#_______PUBLIC SUBNET________
# # Route Table for Public subnet
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref myVPC
      Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Public Routes

# Create Route for public Subnet 1 &amp;amp; 2 
  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: InternetGatewayAttached
    Properties:
        RouteTableId: !Ref PublicRouteTable
        DestinationCidrBlock: 0.0.0.0/0
        GatewayId:
          Ref: InternetGateway

# Associate Route Table to Public subnet 1 &amp;amp; 2
  AssociatePublicRoute1:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties: 
      RouteTableId: 
        Ref: PublicRouteTable
      SubnetId: !Ref PublicSubnet1

  AssociatePublicRoute2:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties: 
      RouteTableId: 
        Ref: PublicRouteTable
      SubnetId: !Ref PublicSubnet2


# ______PRIVATE SUBNET 1______
# Route Table for Private subnet1

  PrivateRouteTable1:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref myVPC
      Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Private Routes (AZ1)

# Create Route for Private subnet1
  PrivateRoute1:
    Type: AWS::EC2::Route
    Properties:
        RouteTableId: !Ref PrivateRouteTable1
        DestinationCidrBlock: 0.0.0.0/0
        NatGatewayId:
          Ref: NAT1

# Private Route Table1 Association to Private Subnet1
  AssociatePrivateRoute1:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties: 
      RouteTableId: !Ref PrivateRouteTable1
      SubnetId: !Ref PrivateSubnet1



# _____PRIVATE SUBNET 2_____
# Route Table for Private subnet2
  PrivateRouteTable2:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref myVPC
      Tags:
      - Key: Name
        Value: !Sub ${EnvironmentName} Private Routes (AZ2)

# Create Route for Private subnet2
  PrivateRoute2:
    Type: AWS::EC2::Route
    Properties:
        RouteTableId: !Ref PrivateRouteTable2
        DestinationCidrBlock: 0.0.0.0/0
        NatGatewayId:
          Ref: NAT2

# Private Route Table2 Association to Private Subnet2
  AssociatePrivateRoute2:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties: 
      RouteTableId: !Ref PrivateRouteTable2
      SubnetId: !Ref PrivateSubnet2


Outputs:
  myVPC:
    Description: The VPC created for this project
    Value: !Ref myVPC
    Export:
      Name: !Sub ${EnvironmentName}-VPCID

  PublicRouteTable: 
        Description: Public Route Table
        Value: !Ref PublicRouteTable
        Export:
          Name: !Sub ${EnvironmentName}-PUB-RT

  PrivateRouteTable1: 
        Description: Private route Table1
        Value: !Ref PrivateRouteTable1
        Export:
          Name: !Sub ${EnvironmentName}-PRI-RT1

  PrivateRouteTable2: 
        Description: Private route Table2
        Value: !Ref PrivateRouteTable2
        Export:
          Name: !Sub ${EnvironmentName}-PRI-RT2

  PublicSubnets:
        Description: A list of the public subnets
        Value: !Join [ ",", [ !Ref PublicSubnet1, !Ref PublicSubnet2 ]]
        Export:
          Name: !Sub ${EnvironmentName}-PUB-SUBNETS

  PublicSubnet1:
        Description: public subnet 1 in "us-east-1a"
        Value: !Ref PublicSubnet1
        Export:
          Name: !Sub ${EnvironmentName}-PUB-SUB1

  PublicSubnet2:
        Description: public subnet 2 in us-east-1b
        Value: !Ref PublicSubnet2
        Export:
          Name: !Sub ${EnvironmentName}-PUB-SUB2

  PrivateSubnets:
        Description: A list of the private subnets
        Value: !Join [ ",", [ !Ref PrivateSubnet1, !Ref PrivateSubnet2 ]]
        Export:
          Name: !Sub ${EnvironmentName}-PRIV-SUBNETS

  PrivateSubnet1:
        Description: private subnet 1 in us-east-1a
        Value: !Ref PrivateSubnet1
        Export:
          Name: !Sub ${EnvironmentName}-PRIV-SUB1

  PrivateSubnet2:
        Description: private subnet 2 in us-east-1b
        Value: !Ref PrivateSubnet2
        Export:
          Name: !Sub ${EnvironmentName}-PRIV-SUB2

  VPCdefaultSecurityGroup:
        Description: Returns the default security group of the created VPC
        Value: !GetAtt myVPC.DefaultSecurityGroup
        Export:
          Name: !Sub ${EnvironmentName}-myVPC-SG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  S3 Bucket Configuration
&lt;/h4&gt;

&lt;p&gt;The &lt;em&gt;s3bucket.yaml&lt;/em&gt; file below creates an S3 bucket for deploying a high-availability web app. The bucket is configured with public read access, an index document, and an error document. A bucket policy allows all actions on the bucket, and an IAM role with AmazonS3FullAccess policy is created to enable EC2 instances to manage the web app. Outputs include the IAM role, website URL, and secure website URL. This CloudFormation template facilitates the setup of an S3 bucket for hosting a high-availability web app with appropriate permissions and URL accessibility.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;s3bucket.yaml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Description: &amp;gt;
  Create an S3 bucket for deploying a high-availability web-app.


Parameters:
  EnvironmentName:
    Description: An environment name that will be prefixed to resource names.
    Type: String

  S3BucketName:
    Description: S3 bucket name.
    Type: String


Resources:
  S3WebServer:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref S3BucketName
      AccessControl: PublicRead
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html
      Tags: 
        - Key: Name
          Value: !Sub ${EnvironmentName} s3webserver bucket
    DeletionPolicy: Delete

  S3WebAppPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref S3WebServer
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action: s3:*
            Resource: !Join ['', ['arn:aws:s3:::', !Ref 'S3WebServer', '/*']]
            Principal:
              AWS: '*'

  WebServerIAMRole:
    Type: 'AWS::IAM::Role'
    Properties:
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/AmazonS3FullAccess'
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: 'Allow'
            Principal:
              Service:
                - 'ec2.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      Path: '/'

  MyInstanceProfile: 
    Type: "AWS::IAM::InstanceProfile"
    Properties: 
      Path: "/"
      Roles: 
        - 
          Ref: "WebServerIAMRole"



Outputs:

  WebServerIAMRole:
    Description: 'Allow EC2 instances to manage Web App S3'
    Value: !Ref MyInstanceProfile
    Export:
      Name: !Sub ${EnvironmentName}-IAM-NAME

  # WebServerIAMRole:
  #   Description: Iam Instance Profile Arn
  #   Value: !GetAtt WebServerIAMRole.Arn
  #   Export:
  #     Name: !Sub ${EnvironmentName}-IAM-ARN

  WebsiteURL:
    Value: !GetAtt [S3WebServer, WebsiteURL]
    Description: URL for website hosted on S3
  WebsiteSecureURL:
    Value: !Join ['', ['https://', !GetAtt [S3WebServer, DomainName]]]
    Description: Secure URL for website hosted on S3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Server (EC2 instance) Configuration
&lt;/h4&gt;

&lt;p&gt;The &lt;em&gt;servers.yaml&lt;/em&gt; file below creates the servers for hosting a high-availability web app inside the network infrastructure defined earlier.&lt;br&gt;
It defines security groups, a launch configuration, an Auto Scaling group, a load balancer, a listener, a target group, and scaling policies, which together provide a load-balanced environment for the web app.&lt;br&gt;
The template exports an output called LoadBalancerEndpoint, the URL used to reach the load balancer.&lt;br&gt;
Overall, this CloudFormation template provisions a load-balanced infrastructure with auto scaling capabilities for hosting the web servers.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;servers.yaml&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AWSTemplateFormatVersion: "2010-09-09"
Description: Creates the required servers in the network infrastructure defined by "network.yml"
Parameters: 
  EnvironmentName: 
    Description: An Environment name that will be prefixed to resources
    Type: String

  InstanceType:
    Description: Amazon EC2 instance type for the instances
    Type: String
    AllowedValues:
      - t2.micro
      - t3.micro
      - t3.small
      - t3.medium
    Default: t2.micro

Mappings:
      WebServerRegion:
        us-east-1:
          HVM64: ami-052efd3df9dad4825


Resources:

#Loadbalancer security group.
  LBSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow http request to loadbalancer
      VpcId: 
        Fn::ImportValue:
          !Sub "${EnvironmentName}-VPCID"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0

# Web Server security group.
  InstanceSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow http to Webserver
      VpcId: 
        Fn::ImportValue:
          !Sub "${EnvironmentName}-VPCID"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
      SecurityGroupEgress:
        - IpProtocol: tcp
          FromPort: 0
          ToPort: 65535
          CidrIp: 0.0.0.0/0

  WebServerLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      IamInstanceProfile:
        Fn::ImportValue: !Sub '${EnvironmentName}-IAM-NAME'
      UserData: 
          # wget -P /var/www/html https://project2udacity.s3-us-west-2.amazonaws.com/index.html
        Fn::Base64: !Sub |
          #!/bin/bash
          apt-get update -y
          apt-get install unzip awscli -y
          apt-get install apache2 -y
          systemctl start apache2.service
          sudo rm /var/www/html/index.html
          sudo aws s3 cp s3://udacityprojects3webserverbucket/udagram.zip /var/www/html
          sudo unzip /var/www/html/udagram.zip -d /var/www/html
          sudo rm /var/www/html/udagram.zip 
          systemctl restart apache2.service
      ImageId: !FindInMap [WebServerRegion, !Ref 'AWS::Region', HVM64]
      SecurityGroups: 
        - !Ref InstanceSecurityGroup
      InstanceType: 
        !Ref InstanceType
      BlockDeviceMappings: 
        - DeviceName: /dev/sda1
          Ebs: 
            VolumeSize: '10'
            VolumeType: 'gp2'

  WebServerASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      VPCZoneIdentifier:
      - Fn::ImportValue: !Sub "${EnvironmentName}-PRIV-SUB1"
      - Fn::ImportValue: !Sub "${EnvironmentName}-PRIV-SUB2"
      LaunchConfigurationName: !Ref WebServerLaunchConfig
      MaxSize: '4'
      MinSize: '4'
      DesiredCapacity: '4'
      TargetGroupARNs:
      - Ref: WebServerTargetGroup

  WebServerloadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Subnets:
        - Fn::ImportValue: !Sub "${EnvironmentName}-PUB-SUB1"
        - Fn::ImportValue: !Sub "${EnvironmentName}-PUB-SUB2"
      SecurityGroups:
      - Ref: LBSecurityGroup

  Listener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      DefaultActions:
      - Type: forward
        TargetGroupArn:
          Ref: WebServerTargetGroup
      LoadBalancerArn:
        Ref: WebServerloadBalancer
      Port: '80'
      Protocol: HTTP

  ALBListenerRule:
    Type: AWS::ElasticLoadBalancingV2::ListenerRule
    Properties:
      Actions:
      - Type: forward
        TargetGroupArn:
          Ref: WebServerTargetGroup
      Conditions:
      - Field: path-pattern
        Values: [/]
      ListenerArn:
        Ref: Listener
      Priority: 1

  WebServerTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      HealthCheckIntervalSeconds: 5
      HealthCheckPath: /
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: 4
      HealthyThresholdCount: 3
      Port: 80
      Protocol: HTTP
      UnhealthyThresholdCount: 3
      VpcId:
        Fn::ImportValue:
          Fn::Sub: "${EnvironmentName}-VPCID"

# Scaling Policy Description (Up)
  WebServerScaleUp:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: !Ref WebServerASG
      Cooldown: 300
      ScalingAdjustment: 1

# Scaling Policy Description (Down)
  WebServerScaleDown:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AdjustmentType: ChangeInCapacity
      AutoScalingGroupName: !Ref WebServerASG
      Cooldown: 300
      ScalingAdjustment: -1

Outputs:
    LoadBalancerEndpoint:
        Description: The endpoint used to reach the load balancer.
        Value: !Join [ "", [ 'http://', !GetAtt WebServerloadBalancer.DNSName  ]]
        Export:
          Name: !Sub ${EnvironmentName}-LBENDPOINT


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
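&lt;p&gt;Note that the two scaling policies in &lt;em&gt;servers.yaml&lt;/em&gt; are not attached to any metric; in practice, a CloudWatch alarm is what invokes them. Below is a minimal sketch of such an alarm (threshold and period values are illustrative, and &lt;code&gt;WebServerScaleUp&lt;/code&gt; stands for the scale-up policy's logical ID):&lt;/p&gt;

```yaml
  CPUHighAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Scale up when average CPU exceeds 70% for two periods
      Namespace: AWS/EC2
      MetricName: CPUUtilization
      Statistic: Average
      Period: 60
      EvaluationPeriods: 2
      Threshold: 70
      ComparisonOperator: GreaterThanThreshold
      Dimensions:
        - Name: AutoScalingGroupName
          Value: !Ref WebServerASG
      AlarmActions:
        - !Ref WebServerScaleUp
```

&lt;p&gt;A matching low-CPU alarm would reference the scale-down policy in the same way.&lt;/p&gt;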






&lt;p&gt;&lt;strong&gt;Script Usage&lt;/strong&gt;&lt;br&gt;
The repository contains helper scripts for creating and updating the CloudFormation stacks.&lt;/p&gt;

&lt;p&gt;Usage:&lt;br&gt;
Create:&lt;br&gt;
&lt;code&gt;./create.sh (stackName) (script.yml) (parameters.json) (profile)&lt;/code&gt;&lt;br&gt;
Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./create.sh Udagram infrastructure/network.yaml parameters/network.json udacity_user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is the content of the &lt;code&gt;create.sh&lt;/code&gt; script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation create-stack --stack-name $1 --template-body file://$2 --parameters file://$3 --capabilities "CAPABILITY_IAM" "CAPABILITY_NAMED_IAM" --region=us-east-1 --profile=$4

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;Update:&lt;br&gt;
&lt;code&gt;./update.sh (stackName) (script.yml) (parameters.json) (profile)&lt;/code&gt;&lt;br&gt;
Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./update.sh Udagram infrastructure/network.yaml parameters/network.json udacity_user

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is the content of the &lt;code&gt;update.sh&lt;/code&gt; script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudformation update-stack --stack-name $1 --template-body file://$2 --parameters file://$3 --capabilities "CAPABILITY_IAM" "CAPABILITY_NAMED_IAM" --region=us-east-1 --profile=$4

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>cloudformation</category>
      <category>iac</category>
      <category>aws</category>
    </item>
    <item>
      <title>Nutanix Hyperconverged Infrastructure: Bridging the Gap Between On-Premise and Public Cloud</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Wed, 23 Oct 2024 18:11:02 +0000</pubDate>
      <link>https://forem.com/sirlawdin/nutanix-hyperconverged-infrastructure-bridging-the-gap-between-on-premise-and-public-cloud-27i4</link>
      <guid>https://forem.com/sirlawdin/nutanix-hyperconverged-infrastructure-bridging-the-gap-between-on-premise-and-public-cloud-27i4</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;A concise overview of Nutanix HCI and the idea of Hyperconverged Infrastructure.&lt;/li&gt;
&lt;li&gt;Discussion on the significance of HCI in contemporary enterprise IT, particularly in relation to digital transformation.&lt;/li&gt;
&lt;li&gt;Transition into how Nutanix HCI corresponds with the services and scalability provided by public clouds.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Nutanix Hyperconverged Infrastructure (HCI)?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Definition and description of HCI:&lt;/strong&gt; a unified software-defined platform that combines compute, storage, and networking.&lt;br&gt;
Explanation of how Nutanix HCI streamlines data center management by integrating resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advantages&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Easier management&lt;/li&gt;
&lt;li&gt;Scalability&lt;/li&gt;
&lt;li&gt;Cost-effectiveness&lt;/li&gt;
&lt;li&gt;Adaptability&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Nutanix HCI vs. Public Cloud
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Elasticity &amp;amp; Scalability:&lt;/strong&gt; Nutanix HCI can scale similarly to public clouds (e.g., AWS, Azure, Google Cloud) to accommodate increasing workloads, supporting both horizontal and vertical scaling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deployment Flexibility:&lt;/strong&gt; Nutanix provides hybrid and multi-cloud solutions, enabling organizations to deploy workloads both on-premises and across public clouds seamlessly. By comparison, public cloud services operate on shared infrastructure, while Nutanix offers a private cloud-like level of control.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Management:&lt;/strong&gt; Automated resource allocation and infrastructure oversight on both platforms.&lt;br&gt;
Public cloud services usually charge based on usage, while Nutanix offers cost management through predictable pricing structures.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Nutanix Services Available on HCI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddhuz0tagk84fvljr352.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddhuz0tagk84fvljr352.png" alt=" " width="540" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nutanix Acropolis (AOS):&lt;/strong&gt; The foundational element of Nutanix HCI that delivers VM-centric management, integrated backup, disaster recovery, and data protection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nutanix Prism:&lt;/strong&gt; A centralized management platform for monitoring and managing clusters, akin to cloud dashboards for resource oversight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nutanix Calm:&lt;/strong&gt; Facilitates automation and orchestration of applications across various environments, offering features comparable to cloud automation tools (e.g., AWS CloudFormation or Azure Resource Manager).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nutanix Files:&lt;/strong&gt; A scalable file storage solution that provides capabilities similar to services like AWS EFS or Azure Files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nutanix Objects:&lt;/strong&gt; Object storage within HCI, parallel to offerings like AWS S3 or Google Cloud Storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nutanix Era:&lt;/strong&gt; Database management and provisioning, similar to database services like AWS RDS or Azure SQL Database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nutanix Karbon:&lt;/strong&gt; Kubernetes orchestration for containerized applications, delivering cloud-native capabilities on-premises, akin to EKS or GKE.&lt;/p&gt;

&lt;h2&gt;
  
  
  Seamless Hybrid and Multi-Cloud Integration
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Nutanix's capability to integrate with public clouds via Nutanix Clusters, allowing users to shift workloads between Nutanix HCI and AWS or Azure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Benefits of maintaining both on-premises and cloud environments using Nutanix as a unified infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Recap of how Nutanix HCI connects traditional data centers with public clouds.&lt;br&gt;
Final reflections on the future of Nutanix HCI in hybrid cloud deployments and its potential to simplify cloud operations while providing enterprises with the control and flexibility they require.&lt;/p&gt;

&lt;p&gt;"For further details on integration options, check the &lt;a href="https://www.nutanix.com/support/documentation" rel="noopener noreferrer"&gt;Nutanix Documentation&lt;/a&gt;."&lt;/p&gt;

</description>
      <category>nutanix</category>
      <category>hyperconvergeinfrastructure</category>
    </item>
    <item>
      <title>Automate Uploading Security Scan Results to DefectDojo</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Sun, 15 Sep 2024 18:45:14 +0000</pubDate>
      <link>https://forem.com/sirlawdin/automate-uploading-security-scan-results-to-defectdojo-7e4</link>
      <guid>https://forem.com/sirlawdin/automate-uploading-security-scan-results-to-defectdojo-7e4</guid>
      <description>&lt;p&gt;In my previous blog, I explored &lt;a href="https://dev.to/sirlawdin/secret-scanning-in-ci-pipelines-using-gitleaks-and-pre-commit-hook-1e3f"&gt;secret scanning in CI pipelines using Gitleaks and pre-commit hooks&lt;/a&gt;. Beyond the risk of exposing secrets in repositories and pipelines, vulnerabilities can also be introduced into an application through third-party libraries and dependencies.&lt;/p&gt;

&lt;p&gt;The first step in addressing and managing these vulnerabilities is to accurately collect, categorize, and present them in a readable format.&lt;/p&gt;

&lt;p&gt;A powerful tool for managing security vulnerabilities is DefectDojo, which helps streamline the vulnerability management process.&lt;/p&gt;

&lt;p&gt;"DefectDojo is a security orchestration and vulnerability management platform. DefectDojo allows you to manage your application security program, maintain product and application information, triage vulnerabilities, and push findings to systems like JIRA and Slack. DefectDojo enriches and refines vulnerability data using a number of heuristic algorithms that improve the more you use the platform."&lt;br&gt;
— Source: &lt;a href="https://hub.docker.com/r/defectdojo/defectdojo-django" rel="noopener noreferrer"&gt;DefectDojo DockerHub page&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;The process involves utilizing a Python script (&lt;code&gt;upload.py&lt;/code&gt;) that interfaces with the DefectDojo API to import scan results from various security tools such as Gitleaks, Semgrep, and NJSSCAN. By integrating this script into GitLab CI/CD pipeline from &lt;a href="https://dev.to/sirlawdin/secret-scanning-in-ci-pipelines-using-gitleaks-and-pre-commit-hook-1e3f"&gt;here&lt;/a&gt;, you can automate the process of uploading scan results whenever a new scan is performed.&lt;/p&gt;
&lt;h3&gt;
  
  
  Demo Server
&lt;/h3&gt;

&lt;p&gt;To simplify the process of setting up a DefectDojo instance, you can try out the demo server at &lt;a href="https://demo.defectdojo.org"&gt;demo.defectdojo.org&lt;/a&gt;. Log in with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Username: admin
Password: 1Defectdojo@demo#appsec
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please note that the demo is publicly accessible and regularly reset. Do not put sensitive data in the demo.&lt;/p&gt;

&lt;p&gt;When managing security vulnerabilities in an organization, it's crucial to have a structured approach to tracking, categorizing, and remediating issues. DefectDojo provides this structure through a set of well-defined concepts and terms that facilitate the organization and management of security-related activities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Terms and Concepts in DefectDojo
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Report:&lt;/strong&gt; After testing, DefectDojo enables you to generate detailed reports on the findings. These reports are essential for communicating security issues to stakeholders.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxhhwxkq2zm91wniyji3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxhhwxkq2zm91wniyji3.png" alt="reference: https://www.defectdojo.org/" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsaxrbzd42zwu44g8wekd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsaxrbzd42zwu44g8wekd.png" alt="reference: https://www.defectdojo.org/" width="800" height="518"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Product:&lt;/strong&gt; This represents the application or system you're securing. Each product can have different versions or components, allowing you to keep track of the security status of multiple versions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Product Type:&lt;/strong&gt; Products are grouped into categories called product types. For example, you might categorize products as web applications, APIs, or microservices. This helps in reporting and filtering across multiple similar products.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kwot41lt829wlkyf5y3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kwot41lt829wlkyf5y3.png" alt="Reference: https://defectdojo.github.io/" width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Engagement:&lt;/strong&gt; DefectDojo organizes security assessments through engagements. Each engagement represents a security testing activity—such as a penetration test or vulnerability scan—on a specific product.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltj7pizdatiocyfzjlvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltj7pizdatiocyfzjlvp.png" alt="Engagement reference: https://www.defectdojo.org/" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test:&lt;/strong&gt; Tests are actions carried out within an engagement, like running a security tool or performing manual code reviews. Each engagement can contain multiple tests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgke1yetxhkf3sd7ns896.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgke1yetxhkf3sd7ns896.png" alt="reference: https://www.defectdojo.org/" width="800" height="348"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Finding:&lt;/strong&gt; Findings are the vulnerabilities or issues discovered during a test. These are tracked, prioritized, and assigned for remediation, and DefectDojo's heuristic algorithms help refine and enrich this data for better decision-making.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytxuigj1uxor6403z7b8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytxuigj1uxor6403z7b8.png" alt="Finding reference: https://www.defectdojo.org/" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Endpoint:&lt;/strong&gt; Products often contain multiple IP addresses, URLs, or domains that need testing. Endpoints allow you to track which specific parts of your product were tested.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Risk Acceptance:&lt;/strong&gt; Not all vulnerabilities can or need to be immediately fixed. In cases where the risk is acknowledged but deemed acceptable, you can mark it as a risk acceptance, ensuring proper documentation.&lt;/p&gt;

&lt;p&gt;By understanding and utilizing these concepts, you can better manage your application security program and streamline your vulnerability management process using DefectDojo.&lt;/p&gt;
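&lt;p&gt;The containment hierarchy described above (product type → product → engagement → test → finding) can be sketched as simple data classes to make the relationships concrete. This is illustrative only; these are not DefectDojo API objects:&lt;/p&gt;

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    # A single vulnerability discovered during a test.
    title: str
    severity: str  # e.g. 'Low', 'Medium', 'High', 'Critical'

@dataclass
class Test:
    # One tool run or manual review inside an engagement.
    tool: str
    findings: list[Finding] = field(default_factory=list)

@dataclass
class Engagement:
    # A security testing activity on a specific product.
    name: str
    tests: list[Test] = field(default_factory=list)

@dataclass
class Product:
    # The application being secured, grouped under a product type.
    name: str
    product_type: str
    engagements: list[Engagement] = field(default_factory=list)
```

&lt;p&gt;A finding is therefore always reachable through its product, engagement, and test, which is exactly how DefectDojo filters and reports on results.&lt;/p&gt;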

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4xu5p4yt05k0speg6uh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4xu5p4yt05k0speg6uh.png" alt="Reference: https://defectdojo.github.io/" width="800" height="656"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The Python Script
&lt;/h2&gt;

&lt;p&gt;First, generate an API key as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4yqy8tdvys7scivs13g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4yqy8tdvys7scivs13g.png" alt="Generate API Key" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This API Key will be used in the Python script to automate scan report upload.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yp6n5vwf9vpv4ma4im9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1yp6n5vwf9vpv4ma4im9.png" alt="API Docs" width="800" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s a breakdown of the &lt;code&gt;upload.py&lt;/code&gt; script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests
import sys

file_name = sys.argv[1]
scan_type = ''

if file_name == 'gitleaks.json':
    scan_type = 'Gitleaks Scan'
elif file_name == 'njsscan.sarif':
    scan_type = 'SARIF'
elif file_name == 'semgrep.json':
    scan_type = 'Semgrep JSON Report'
else:
    sys.exit(f'Unsupported report file: {file_name}')

# In real pipelines, read this token from an environment variable or
# CI/CD secret instead of hardcoding it.
headers = {
    'Authorization': 'Token 548afd6fab3bea9794a41b31da0e9404f733e222'
}

url = 'https://demo.defectdojo.org/api/v2/import-scan/'

data = {
    'active': True,
    'verified': True,
    'scan_type': scan_type,
    'minimum_severity': 'Low',
    'engagement': 19
}

# Use a context manager so the report file is closed after the upload.
with open(file_name, 'rb') as report:
    response = requests.post(url, headers=headers, data=data,
                             files={'file': report})

if response.status_code == 201:
    print('Scan results imported successfully')
else:
    print(f'Failed to import scan results: {response.content}')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Key Components
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;File Handling&lt;/strong&gt;: The script accepts a filename as an argument and determines the scan type from the report's filename.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API Integration&lt;/strong&gt;: It sends a POST request to the DefectDojo API with the necessary headers and data, including the scan type and engagement ID.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response Handling&lt;/strong&gt;: It checks the response status to confirm whether the upload was successful.&lt;/li&gt;
&lt;/ol&gt;
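&lt;p&gt;The filename-to-scan-type branching in the script can also be written as a dictionary lookup, which fails loudly on an unexpected report file instead of silently sending an empty &lt;code&gt;scan_type&lt;/code&gt; to the API. A sketch using the same scan-type strings as the script:&lt;/p&gt;

```python
# Map each report filename to the DefectDojo scan type expected by the API.
SCAN_TYPES = {
    'gitleaks.json': 'Gitleaks Scan',
    'njsscan.sarif': 'SARIF',
    'semgrep.json': 'Semgrep JSON Report',
}

def scan_type_for(file_name: str) -> str:
    # Fail early on an unrecognized report file rather than uploading
    # with an empty scan_type.
    try:
        return SCAN_TYPES[file_name]
    except KeyError:
        raise ValueError(f'Unsupported report file: {file_name}')
```

&lt;p&gt;Adding a new scanner then only requires one new dictionary entry.&lt;/p&gt;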

&lt;h3&gt;
  
  
  Security Considerations
&lt;/h3&gt;

&lt;p&gt;Ensure that the API token used in the script has the necessary permissions and is kept secure. Avoid hardcoding sensitive information; consider using environment variables or secret management tools.&lt;/p&gt;
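&lt;p&gt;For example, the token can be read from an environment variable instead of being hardcoded. A sketch, where &lt;code&gt;DD_API_TOKEN&lt;/code&gt; is an assumed variable name you would define as a masked GitLab CI/CD variable:&lt;/p&gt;

```python
import os

def auth_headers() -> dict:
    # DD_API_TOKEN is an assumed variable name; set it as a masked
    # CI/CD variable rather than committing the token to the repo.
    token = os.environ.get('DD_API_TOKEN')
    if not token:
        raise RuntimeError('DD_API_TOKEN is not set')
    return {'Authorization': f'Token {token}'}
```

&lt;p&gt;The request in the script would then use &lt;code&gt;headers=auth_headers()&lt;/code&gt;.&lt;/p&gt;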

&lt;h2&gt;
  
  
  GitLab CI/CD Integration
&lt;/h2&gt;

&lt;p&gt;To automate the execution of this script, you can create a &lt;a href="https://gitlab.com/Sirlawdin/juice-shop-project/-/blob/main/.gitlab-ci.yml?ref_type=heads" rel="noopener noreferrer"&gt;&lt;code&gt;.gitlab-ci.yml&lt;/code&gt;&lt;/a&gt; file that defines the stages of your CI/CD pipeline. Here’s an example configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variables:
  IMAGE_NAME: sirlawdin/demo-app
  IMAGE_TAG: juice-shop-1.1

stages:
  - cache
  - test
  - build

create_cache:
  image: node:18-bullseye
  stage: cache
  script:
    - yarn install --ignore-engines
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules/
      - yarn.lock
      - .yarn
    policy: pull-push

yarn_test:
  image: node:18-bullseye
  stage: test
  script:
    - yarn install
    - yarn test
  cache:
    key:
      files:
        - yarn.lock
    paths:
      - node_modules/
      - yarn.lock
      - .yarn
    policy: pull-push

gitleaks:
  stage: test
  image:
    name: ghcr.io/gitleaks/gitleaks:latest
    entrypoint: [""]
  script:
    - gitleaks detect --source . --verbose --format json --report-path gitleaks-report.json
  allow_failure: true
  artifacts:
    when: always
    paths:
      - gitleaks-report.json

njsscan:
  stage: test
  image: python:3.11-slim
  before_script:
    - pip install --upgrade njsscan
  script:
    - njsscan --exit-warning . --sarif -o njsscan.sarif
  allow_failure: true
  artifacts:
    when: always
    paths:
      - njsscan.sarif

semgrep:
  stage: test
  image: returntocorp/semgrep:latest
  variables:
    SEMGREP_RULES: p/javascript
  script:
    - semgrep ci --json --output semgrep.json
  allow_failure: true
  artifacts:
    when: always
    paths:
      - semgrep.json

upload_reports:
  stage: test
  image: python
  needs: ["gitleaks", "njsscan", "semgrep"]
  when: always
  before_script:
    - pip3 install requests
  script:
    - python3 upload-reports.py gitleaks.json
    - python3 upload-reports.py njsscan.sarif
    - python3 upload-reports.py semgrep.json


build_image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  before_script:
    - echo $DOCKER_PASS | docker login -u $DOCKER_USER --password-stdin
  script:
    - docker build -t $IMAGE_NAME:$IMAGE_TAG .
    - docker push $IMAGE_NAME:$IMAGE_TAG

include:
  - remote: 'https://gitlab.com/gitlab-org/gitlab/-/raw/master/lib/gitlab/ci/templates/Jobs/Secret-Detection.latest.gitlab-ci.yml'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Explanation of the CI/CD Configuration
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stages&lt;/strong&gt;: The scan jobs (&lt;code&gt;gitleaks&lt;/code&gt;, &lt;code&gt;njsscan&lt;/code&gt;, &lt;code&gt;semgrep&lt;/code&gt;) and the &lt;code&gt;upload_reports&lt;/code&gt; job all run in the &lt;code&gt;test&lt;/code&gt; stage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Script&lt;/strong&gt;: The &lt;code&gt;upload_reports&lt;/code&gt; job calls the upload script once for each scan result file generated earlier in the pipeline.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By automating the upload of security scan results to DefectDojo, you can streamline your vulnerability management workflow. This integration not only saves time but also ensures that security issues are promptly addressed. Implementing this solution within your CI/CD pipeline will enhance your overall security posture. To learn more about DefectDojo, see the &lt;a href="https://documentation.defectdojo.com/integrations/importing/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devsecops</category>
      <category>gitlab</category>
      <category>defectdojo</category>
      <category>python</category>
    </item>
    <item>
      <title>Upcoming AWS Service Deprecations!!!</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Fri, 02 Aug 2024 08:01:50 +0000</pubDate>
      <link>https://forem.com/sirlawdin/upcoming-aws-service-deprecations-5ak1</link>
      <guid>https://forem.com/sirlawdin/upcoming-aws-service-deprecations-5ak1</guid>
      <description>&lt;p&gt;I do not like hearing about AWS deprecations, especially for services that I have used frequently over the years.&lt;/p&gt;

&lt;p&gt;However, as AWS evolves, some services inevitably need to be deprecated. Here’s a brief overview of the notable ones:&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS CodeCommit
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fta7eoq20b6ptzivdmg6t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fta7eoq20b6ptzivdmg6t.png" alt="AWS CodeCommit" width="310" height="163"&gt;&lt;/a&gt;&lt;br&gt;
CodeCommit is AWS's equivalent of GitHub and GitLab, with seamless integration with other AWS services. AWS CodeCommit has stopped onboarding new customers; from now on, a new repository can only be created by customers with an existing repository in AWS CodeCommit. While still functional, AWS may shift focus to other version control solutions, so users should evaluate alternatives that align with their development workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Cloud9
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flme6vft27f5ypqi79zy7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flme6vft27f5ypqi79zy7.png" alt="AWS Cloud9" width="600" height="300"&gt;&lt;/a&gt;&lt;br&gt;
Cloud9 is an integrated development environment (IDE), similar to Replit, Gitpod, CodeSandbox, or a browser-based version of VS Code. As AWS has announced its deprecation, users should consider transitioning to other IDEs or local setups. According to AWS:&lt;br&gt;
&lt;code&gt;“After careful consideration, we have made the decision to close new customer access to AWS Cloud9, effective July 25, 2024. AWS Cloud9 existing customers can continue to use the service as normal. AWS continues to invest in security, availability, and performance improvements for AWS Cloud9, but we do not plan to introduce new features.”&lt;/code&gt; (&lt;a href="https://www.gitpod.io/blog/enterprise-grade-alternatives-to-cloud9" rel="noopener noreferrer"&gt;source&lt;/a&gt;)&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Mobile Hub
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feoe7zuz9difr45cyc7kp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feoe7zuz9difr45cyc7kp.png" alt="MobileHub" width="800" height="685"&gt;&lt;/a&gt;&lt;br&gt;
This service simplifies the process of building, testing, and monitoring mobile applications and can make use of one or more AWS services. With the rise of more advanced mobile development platforms, AWS Mobile Hub is being deprecated. Developers are encouraged to migrate to &lt;a href="https://aws.amazon.com/amplify/?gclid=Cj0KCQjwh7K1BhCZARIsAKOrVqFdxiGS3P9LlROYF5mWYo7A_JqnL79dId9ITaNRBBDLDewIjT0NN5QaAq-7EALw_wcB&amp;amp;trk=e37f908f-322e-4ebc-9def-9eafa78141b8&amp;amp;sc_channel=ps&amp;amp;ef_id=Cj0KCQjwh7K1BhCZARIsAKOrVqFdxiGS3P9LlROYF5mWYo7A_JqnL79dId9ITaNRBBDLDewIjT0NN5QaAq-7EALw_wcB:G:s&amp;amp;s_kwcid=AL!4422!3!647301987538!e!!g!!aws%20amplify!19613610159!148358959849" rel="noopener noreferrer"&gt;AWS Amplify&lt;/a&gt; for enhanced mobile app development capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon CloudSearch
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm7wwsc68zk1dyv9465c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxm7wwsc68zk1dyv9465c.jpg" alt=" " width="800" height="440"&gt;&lt;/a&gt;&lt;br&gt;
Amazon CloudSearch is a fully managed service in the cloud that makes it easy to set up, manage, and scale a search solution for your website or application. As AWS introduces more powerful search solutions, Amazon CloudSearch may be phased out. Users should explore Amazon OpenSearch Service for advanced search functionalities.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS CodeStar
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2tamr5hl13n8xzogpnhq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2tamr5hl13n8xzogpnhq.png" alt="AWS CodeStar" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS CodeStar is a cloud‑based development service that provides the tools you need to quickly develop, build, and deploy applications on AWS. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, allowing you to start releasing code faster. &lt;br&gt;
This service is being deprecated; AWS has announced the discontinuation of AWS CodeStar support:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6ancwft0z8blmad1uin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa6ancwft0z8blmad1uin.png" alt="AWS CodeStar" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Quantum Ledger Database (QLDB)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6juekbud7s8bjo93lvhh.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6juekbud7s8bjo93lvhh.jpeg" alt=" " width="318" height="159"&gt;&lt;/a&gt;&lt;br&gt;
AWS recently announced that new customers can no longer sign up for Amazon Quantum Ledger Database (QLDB), a managed service providing an immutable transaction log maintained by a central trusted authority. All existing databases will be shut down in one year, and current users are encouraged to migrate to Aurora PostgreSQL as AWS focuses on other database solutions.&lt;/p&gt;

&lt;p&gt;To ensure a smooth transition, stay updated on AWS announcements and plan migrations accordingly. Embracing newer services can enhance your development processes and infrastructure.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Secret Scanning in CI pipelines using Gitleaks and Pre-commit Hook.</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Tue, 16 Jul 2024 05:40:34 +0000</pubDate>
      <link>https://forem.com/sirlawdin/secret-scanning-in-ci-pipelines-using-gitleaks-and-pre-commit-hook-1e3f</link>
      <guid>https://forem.com/sirlawdin/secret-scanning-in-ci-pipelines-using-gitleaks-and-pre-commit-hook-1e3f</guid>
      <description>&lt;p&gt;In today's development environment, maintaining the security of your code is as crucial as ensuring its functionality. One of the key aspects of security is managing and safeguarding your secrets, such as API keys, passwords, and tokens. Accidentally committing these secrets to your repository can lead to severe security breaches. Implementing secret scanning in your Continuous Integration (CI) pipelines is essential to mitigate this risk. This blog will guide you through setting up secret scanning in &lt;strong&gt;GitLab&lt;/strong&gt; CI pipelines using &lt;strong&gt;Gitleaks&lt;/strong&gt; and a &lt;strong&gt;pre-commit hook&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Secret Scanning?
&lt;/h2&gt;

&lt;p&gt;Before diving into the technical setup, let's briefly discuss why secret scanning is important:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Prevent Data Leaks:&lt;/strong&gt; Secrets embedded in the code can be easily exposed if not managed properly, leading to unauthorized access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance:&lt;/strong&gt; Many organizations have compliance requirements that mandate the protection of sensitive data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Early Detection:&lt;/strong&gt; Scanning for secrets early in the CI pipeline helps catch issues before they make it to production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Faster Time to Market:&lt;/strong&gt; Automated secret scanning ensures that security checks do not become bottlenecks in the release process, allowing for faster and more frequent releases.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Tools Used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/gitleaks/gitleaks" rel="noopener noreferrer"&gt;Gitleaks&lt;/a&gt;:&lt;/strong&gt; An open-source tool that scans your codebase for secrets, providing a comprehensive way to detect and prevent leaks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Git Hooks:&lt;/strong&gt; A built-in Git feature for running custom scripts when certain actions occur, such as committing or pushing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://gitlab.com/gitlab236/juice-shop" rel="noopener noreferrer"&gt;OWASP Juice Shop&lt;/a&gt;:&lt;/strong&gt; The application code to be used for this demonstration. It is probably the most modern and sophisticated insecure web application. Juice Shop encompasses vulnerabilities from the entire &lt;a href="https://owasp.org/www-project-top-ten" rel="noopener noreferrer"&gt;OWASP Top Ten&lt;/a&gt; and many other security flaws in real-world applications!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gitlab:&lt;/strong&gt; A web-based DevOps lifecycle tool that provides a Git repository manager with features like source code management (SCM), continuous integration (CI), continuous deployment (CD), and more.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;STEP 1:&lt;br&gt;
Fork the &lt;a href="https://gitlab.com/gitlab236/juice-shop" rel="noopener noreferrer"&gt;OWASP Juice Shop Gitlab repository&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82pqdc4k00zbu94hf0w5.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82pqdc4k00zbu94hf0w5.gif" alt="Fork Juice Shop Gitlab Repository" width="1358" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add a &lt;code&gt;.env&lt;/code&gt; file that contains some dummy secrets to the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzzzb2f8ia69d76bhidw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzzzb2f8ia69d76bhidw.png" alt="Added dummy secret to .env file" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;STEP 2: &lt;br&gt;
Create a &lt;code&gt;.gitlab-ci.yml&lt;/code&gt; file in the repository. This file is the configuration file that GitLab CI/CD uses to define the pipeline and its various stages, jobs, and actions.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variables:
    IMAGE_NAME: sirlawdin/juice-shop-app
    IMAGE_TAG: juice-shop-1.1

stages:
    - cache
    - test
    - build 

create_cache:
    image: node:18-bullseye
    stage: cache
    script:
      - yarn install
    cache:
      key:
        files:
          - yarn.lock
      paths:
        - node_modules/
        - yarn.lock
        - .yarn
      policy: pull-push

gitleaks:
  stage: test
  image:
    name: zricethezav/gitleaks
    entrypoint: [""]
  script: 
    - gitleaks detect --source . --verbose --report-path gitleaks-report.json

yarn_test:
    image: node:18-bullseye
    stage: test
    script: 
      - yarn install
      - yarn test
    cache:
      key:
        files:
          - yarn.lock
      paths:
        - node_modules/
        - yarn.lock
        - .yarn
      policy: pull-push

build_image:
    stage: build
    image: docker:24
    services:
      - docker:24-dind
    variables:
        DOCKER_USER: sirlawdin
    before_script:
      - echo $DOCKER_PASS | docker login -u $DOCKER_USER --password-stdin
    script:
      - docker build -t $IMAGE_NAME:$IMAGE_TAG .
      - docker push $IMAGE_NAME:$IMAGE_TAG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's a brief explanation of each section:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IMAGE_NAME&lt;/strong&gt; and &lt;strong&gt;IMAGE_TAG&lt;/strong&gt;: These variables define the name and tag of the Docker image that will be built and pushed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variables:
    IMAGE_NAME: sirlawdin/juice-shop-app
    IMAGE_TAG: juice-shop-1.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;cache&lt;/strong&gt;: Stage for caching dependencies.&lt;br&gt;
&lt;strong&gt;test&lt;/strong&gt;: Stage for running tests and performing secret scanning.&lt;br&gt;
&lt;strong&gt;build&lt;/strong&gt;: Stage for building and pushing the Docker image.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;stages:
    - cache
    - test
    - build 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;create_cache&lt;/strong&gt;: This job uses the node:18-bullseye image to install dependencies with Yarn.&lt;br&gt;
&lt;strong&gt;Caching&lt;/strong&gt;: The node_modules/, yarn.lock, and .yarn directories are cached based on the yarn.lock file to speed up future pipeline runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;create_cache:
    image: node:18-bullseye
    stage: cache
    script:
      - yarn install
    cache:
      key:
        files:
          - yarn.lock
      paths:
        - node_modules/
        - yarn.lock
        - .yarn
      policy: pull-push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;gitleaks&lt;/strong&gt;: This job uses the zricethezav/gitleaks image to scan the source code for secrets.&lt;br&gt;
&lt;strong&gt;Report&lt;/strong&gt;: The results are saved to gitleaks-report.json.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gitleaks:
  stage: test
  image:
    name: zricethezav/gitleaks
    entrypoint: [""]
  script: 
    - gitleaks detect --source . --verbose --report-path gitleaks-report.json

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;yarn_test&lt;/strong&gt;: This job installs dependencies and runs tests using the node:18-bullseye image.&lt;br&gt;
&lt;strong&gt;Caching&lt;/strong&gt;: Similar to the create_cache job, it caches dependencies to speed up future runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn_test:
    image: node:18-bullseye
    stage: test
    script: 
      - yarn install
      - yarn test
    cache:
      key:
        files:
          - yarn.lock
      paths:
        - node_modules/
        - yarn.lock
        - .yarn
      policy: pull-push

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;build_image&lt;/strong&gt;: This job uses the docker:24 image and the Docker-in-Docker (dind) service to build and push the Docker image.&lt;br&gt;
&lt;strong&gt;Docker Login&lt;/strong&gt;: The DOCKER_PASS and DOCKER_USER variables are used to log in to Docker Hub.&lt;br&gt;
&lt;strong&gt;Build and Push&lt;/strong&gt;: The Docker image is built with the specified IMAGE_NAME and IMAGE_TAG, and then pushed to the Docker registry.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build_image:
    stage: build
    image: docker:24
    services:
      - docker:24-dind
    variables:
        DOCKER_USER: sirlawdin
    before_script:
      - echo $DOCKER_PASS | docker login -u $DOCKER_USER --password-stdin
    script:
      - docker build -t $IMAGE_NAME:$IMAGE_TAG .
      - docker push $IMAGE_NAME:$IMAGE_TAG
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running the pipeline, the &lt;code&gt;gitleaks&lt;/code&gt; job failed due to leaks found in the repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdztkm45xhvpejlujfzr2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdztkm45xhvpejlujfzr2.png" alt="Gitlab job" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PS: The dummy secrets previously added to the repository were all flagged.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/Sirlawdin/juice-shop/.git/
Created fresh repository.
Checking out 5c0f71b8 as detached HEAD (ref is main)...
Skipping Git submodules setup
$ git remote set-url origin "${CI_REPOSITORY_URL}"
Executing "step_script" stage of the job script
00:08
Using docker image sha256:7da57b8e4dc2b857722c7fe447673bd938d452e61acbeb652bf90a210385457a for zricethezav/gitleaks with digest zricethezav/gitleaks@sha256:75bdb2b2f4db213cde0b8295f13a88d6b333091bbfbf3012a4e083d00d31caba ...
$ gitleaks detect --verbose --source .
    ○
    │╲
    │ ○
    ○ ░
    ░    gitleaks
Finding:     API_KEY=T8DWKJKAWUQ27MDSL902DBJAL
USERNAME= JaneDoe
Secret:      T8DWKJKAWUQ27MDSL902DBJAL
RuleID:      generic-api-key
Entropy:     3.973661
File:        .env
Line:        1
Commit:      5c0f71b87695579ad0d6999d485cd7b659ae6be6
Author:      Salaudeen O. Abdulrasaq
Email:       salaudeen.abdulrasaq2008@gmail.com
Date:        2024-07-16T04:30:31Z
Fingerprint: 5c0f71b87695579ad0d6999d485cd7b659ae6be6:.env:generic-api-key:1
Finding:     SECRET_KEY=IFW2dSNA02JD7BDJ2BCJLA
Secret:      IFW2dSNA02JD7BDJ2BCJLA
RuleID:      generic-api-key
Entropy:     3.754442
File:        .env
Line:        4
Commit:      5c0f71b87695579ad0d6999d485cd7b659ae6be6
Author:      Salaudeen O. Abdulrasaq
Email:       salaudeen.abdulrasaq2008@gmail.com
Date:        2024-07-16T04:30:31Z
Fingerprint: 5c0f71b87695579ad0d6999d485cd7b659ae6be6:.env:generic-api-key:4
Finding:     ...2dSNA02JD7BDJ2BCJLA
ACCESS_KEY=ROHXMPGWQLAHJI92NJSD
Secret:      ROHXMPGWQLAHJI92NJSD
RuleID:      generic-api-key
Entropy:     4.121928
File:        .env
Line:        5
Commit:      5c0f71b87695579ad0d6999d485cd7b659ae6be6
Author:      Salaudeen O. Abdulrasaq
Email:       salaudeen.abdulrasaq2008@gmail.com
Date:        2024-07-16T04:30:31Z
Fingerprint: 5c0f71b87695579ad0d6999d485cd7b659ae6be6:.env:generic-api-key:5
2:09PM INF 33 commits scanned.
2:09PM INF scan completed in 7.61s
2:09PM WRN leaks found: 80
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;PRE-COMMIT HOOK&lt;/strong&gt;&lt;br&gt;
Implementing a pre-commit hook that scans for potential leaks can stop sensitive credentials from ever being committed to the repository. This proactive measure keeps sensitive information out of the commit history, mitigating the risk of exposure to attackers.&lt;/p&gt;

&lt;p&gt;Git provides various types of hooks, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pre-commit&lt;/li&gt;
&lt;li&gt;Pre-push&lt;/li&gt;
&lt;li&gt;Pre-rebase &lt;/li&gt;
&lt;li&gt;And more...&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The pre-commit hook fires when changes to a repository are about to be committed.&lt;/p&gt;

&lt;p&gt;To create a pre-commit hook:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open your terminal and navigate to the root directory of your Git repository.&lt;br&gt;
&lt;code&gt;cd /path/to/your/repository&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create the pre-commit hook file:&lt;br&gt;
Inside the .git/hooks directory, create a file named pre-commit.&lt;br&gt;
&lt;code&gt;vi .git/hooks/pre-commit&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enter the script below into the pre-commit file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/sh

# Configuration
GITLEAKS_IMAGE="zricethezav/gitleaks"
# Use an absolute path: `docker -v` treats a relative path as a named volume.
HOST_FOLDER_TO_SCAN="$(pwd)"
CONTAINER_FOLDER="/path"
REPORT_FILE="gitleaks-report.json"

# Function to display an error message and exit
exit_with_error() {
    echo "$1"
    exit 1
}

# Run Gitleaks
echo "Running Gitleaks for secret scanning..."
docker pull ${GITLEAKS_IMAGE} || exit_with_error "Failed to pull Gitleaks Docker image."

# Run Gitleaks container; write the report into the mounted folder so it is
# visible on the host after the container exits.
docker run -v "${HOST_FOLDER_TO_SCAN}:${CONTAINER_FOLDER}" ${GITLEAKS_IMAGE} detect --source="${CONTAINER_FOLDER}" --verbose --report-path="${CONTAINER_FOLDER}/${REPORT_FILE}"
GITLEAKS_EXIT_CODE=$?

# Check if Gitleaks detected any secrets
if [ ${GITLEAKS_EXIT_CODE} -ne 0 ]; then
    echo "Gitleaks detected secrets in your code. Please fix the issues before committing."
    cat ${REPORT_FILE}
    exit 1
fi

echo "Gitleaks found no secrets. Proceeding with commit."
exit 0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation&lt;/strong&gt;: &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration:&lt;/strong&gt;&lt;br&gt;
Variables for the image name, host folder, container folder, and report file are defined up front for easy configuration and readability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Error Handling:&lt;/strong&gt;&lt;br&gt;
The function &lt;em&gt;exit_with_error&lt;/em&gt; was added to handle errors gracefully and provide meaningful error messages.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Pull:&lt;/strong&gt;&lt;br&gt;
Added error handling for the docker pull command to ensure the script exits if pulling the Gitleaks image fails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker Run:&lt;/strong&gt;&lt;br&gt;
Runs the Gitleaks container against the mounted repository folder and captures its exit code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check Gitleaks Exit Code:&lt;/strong&gt;&lt;br&gt;
Checked the exit code of the Gitleaks run and displayed the report if secrets were detected, preventing the commit.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make the hook executable by changing its file permissions:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;chmod +x .git/hooks/pre-commit&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zl7q5dodpxa3swuxoln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zl7q5dodpxa3swuxoln.png" alt="create pre-commit hook file" width="800" height="229"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4tq2pra2h8jcq35j0xc.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4tq2pra2h8jcq35j0xc.gif" alt="Run gitleak at commit" width="760" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The pre-commit hook has helped enforce secret protection by preventing commits until any leaked secrets are removed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7fpvc8aw7mbv2ddl3cr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7fpvc8aw7mbv2ddl3cr.png" alt=" " width="800" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, your pre-commit hook will run every time you attempt to commit changes, helping to enforce code quality and security checks before the commit is finalized.&lt;/p&gt;

</description>
      <category>devsecops</category>
      <category>gitlab</category>
      <category>security</category>
      <category>gitleak</category>
    </item>
    <item>
      <title>Introduction to Containerization on AWS ECS (Elastic Container Service) and Fargate</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Sun, 02 Jun 2024 14:24:15 +0000</pubDate>
      <link>https://forem.com/sirlawdin/introduction-to-containerization-on-aws-ecs-elastic-container-service-and-fargate-6oo</link>
      <guid>https://forem.com/sirlawdin/introduction-to-containerization-on-aws-ecs-elastic-container-service-and-fargate-6oo</guid>
      <description>&lt;p&gt;&lt;strong&gt;Pre-requisite:&lt;/strong&gt; To get the best from this series, it is expected that you understand how Docker works. You can use this &lt;a href="https://docs.docker.com/get-started/overview/" rel="noopener noreferrer"&gt;link&lt;/a&gt; to get an overview of how docker works If you don't have an AWS account yet, you can follow the steps in my blog on &lt;a href="https://dev.to/sirlawdin/how-to-create-an-aws-account-39cn"&gt;How to create an AWS account&lt;/a&gt; to create one for the hands-on exercises.&lt;/p&gt;

&lt;p&gt;This series will be released over the next couple of months. You are going to learn about the following concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS (Elastic Container Service)&lt;/li&gt;
&lt;li&gt;Fargate&lt;/li&gt;
&lt;li&gt;Load Balancing&lt;/li&gt;
&lt;li&gt;Auto Scaling&lt;/li&gt;
&lt;li&gt;ECR (Elastic Container Registry)&lt;/li&gt;
&lt;li&gt;CI/CD (Continuous Integration/Continuous Deployment)&lt;/li&gt;
&lt;li&gt;Blue/Green Deployment&lt;/li&gt;
&lt;li&gt;AWS X-Ray&lt;/li&gt;
&lt;li&gt;Service Discovery&lt;/li&gt;
&lt;li&gt;App Mesh&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What is ECS and Fargate?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3xj95pw4r454v56r4d1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3xj95pw4r454v56r4d1.png" alt=" " width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon ECS&lt;/strong&gt; is a fully managed container orchestration service that enables you to run and manage Docker containers at scale. With ECS, you can easily deploy applications in containers without needing to manage the underlying infrastructure. It integrates seamlessly with other AWS services, providing a comprehensive solution for running microservices, batch processes, and long-running applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Fargate&lt;/strong&gt; is a serverless compute engine for containers that works with both Amazon ECS (Elastic Container Service) and Amazon EKS (Elastic Kubernetes Service). Fargate eliminates the need to provision and manage servers, allowing you to specify and pay for resources per application. This makes it easier to build and deploy containerized applications without worrying about the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;ECS takes care of orchestrating your containerized applications, managing their lifecycle, and integrating with other AWS services.&lt;br&gt;
Fargate provides compute power on-demand, automatically adjusting to the needs of your applications without requiring you to manage any servers.&lt;/p&gt;
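&lt;p&gt;To make this concrete, here is a minimal sketch of a Fargate task definition. The family name, container image, and role ARN are placeholders, not values from this series:&lt;/p&gt;

```python
# Minimal sketch of an ECS task definition for Fargate. The family name,
# image, and role ARN are illustrative placeholders; substitute your own.
# With Fargate you only declare CPU/memory -- there are no servers to manage.
task_definition = {
    "family": "demo-web-app",                 # placeholder task family name
    "networkMode": "awsvpc",                  # required for Fargate tasks
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",                             # 0.25 vCPU
    "memory": "512",                          # 512 MiB (valid pairing with 256 CPU)
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

if __name__ == "__main__":
    import boto3  # third-party: pip install boto3
    ecs = boto3.client("ecs")
    # Register the task definition with ECS (requires AWS credentials).
    ecs.register_task_definition(**task_definition)
```

&lt;p&gt;A service can then run this task definition with &lt;code&gt;launch-type FARGATE&lt;/code&gt;, and ECS schedules the containers without any EC2 instances to provision.&lt;/p&gt;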

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85uzhed770tjlkjnarv0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F85uzhed770tjlkjnarv0.png" alt=" " width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The restaurant analogy below illustrates how ECS and Fargate work together to simplify the process of running containerized applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Amazon ECS and AWS Fargate with a Restaurant Analogy
&lt;/h2&gt;

&lt;p&gt;Imagine you want to start a restaurant business. To do this, you need a physical restaurant location, kitchen equipment, chefs and staff to prepare and serve the food, and all the necessary supplies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon ECS&lt;/strong&gt; (Elastic Container Service) is like a professional restaurant management company. They provide you with the restaurant space, organize the kitchen staff, handle the logistics of ordering supplies, and ensure everything runs smoothly. You tell them what kind of cuisine you want to serve, the menu, and any special requirements, and they take care of the rest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ECS as the Restaurant Management Company:&lt;/strong&gt; ECS handles the orchestration of your "menu items" (the containers), ensuring they are prepared, served, and maintained according to your needs. You don't need to worry about the details of managing the restaurant infrastructure.&lt;/p&gt;

&lt;p&gt;Now, let's introduce AWS Fargate into the mix. Fargate is like having a magical, self-adjusting kitchen space. You simply describe your menu and the number of customers you expect, and the kitchen automatically scales to fit your needs, providing all the necessary cooking equipment and staff without any manual intervention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fargate as the Magical, Self-Adjusting Kitchen:&lt;/strong&gt; With Fargate, you don't need to worry about the capacity of your kitchen or the logistics of managing the equipment and staff. You specify the requirements for your "menu items" (the containers), and Fargate automatically provides the necessary cooking resources. It's like having a kitchen that can expand or contract to perfectly accommodate your restaurant's needs, ensuring you always have the right amount of space and resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Putting It All Together&lt;/strong&gt;&lt;br&gt;
When you combine ECS and Fargate, it's like having the best of both worlds. You have a professional restaurant management company (ECS) handling the overall operations, and a magical, flexible kitchen space (Fargate) that adjusts to perfectly meet your restaurant's needs without any manual setup or management.&lt;/p&gt;
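&lt;p&gt;In ECS terms, the "menu item" you hand to the management company is a &lt;em&gt;task definition&lt;/em&gt;. A minimal Fargate-style sketch is shown below; the family name and container image are illustrative placeholders, not part of this series:&lt;/p&gt;

```json
{
  "family": "menu-item-web",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "public.ecr.aws/nginx/nginx:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```

&lt;p&gt;Registering this (for example with &lt;code&gt;aws ecs register-task-definition&lt;/code&gt;) is the analogue of handing your menu to the management company; Fargate then provisions the "kitchen" that runs it.&lt;/p&gt;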

&lt;p&gt;Now that you understand at a high level how ECS and Fargate work, let's move on to the next part of this blog series, where you will learn how to &lt;a href="https://dev.tourl"&gt;Launch your first ECS container&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Up Next: &lt;a href="https://dev.tourl"&gt;Launch Your First ECS Container&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ecs</category>
      <category>fargate</category>
      <category>elb</category>
      <category>autoscaling</category>
    </item>
    <item>
      <title>How to Create a Free Tier AWS account</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Sun, 02 Jun 2024 10:59:34 +0000</pubDate>
      <link>https://forem.com/sirlawdin/how-to-create-an-aws-account-39cn</link>
      <guid>https://forem.com/sirlawdin/how-to-create-an-aws-account-39cn</guid>
      <description>&lt;p&gt;Amazon Web Services (AWS) is one of the most popular cloud service providers in the world, offering a range of services from computing power to storage solutions. Creating an AWS account is the first step to accessing these powerful tools. In this blog, we'll guide you through the process of creating and logging into an AWS account, with step-by-step instructions and helpful GIFs to illustrate each part of the process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5o7hs6ou7bggoemqzbr.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5o7hs6ou7bggoemqzbr.gif" alt=" " width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create Your AWS Account
&lt;/h2&gt;

&lt;p&gt;Creating an AWS account is a straightforward process. Follow these steps:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Visit the AWS Sign-Up Page:&lt;/strong&gt;&lt;br&gt;
Navigate to the &lt;a href="https://portal.aws.amazon.com/billing/signup#/start/email" rel="noopener noreferrer"&gt;AWS Sign-Up page&lt;/a&gt; and click on the "Create an AWS Account" button.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter Your Email and Choose a Password:&lt;/strong&gt;&lt;br&gt;
Enter a valid email address and choose a secure password. This email will be your root user email address, which has full access to all AWS services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choose an AWS Account Name:&lt;/strong&gt;&lt;br&gt;
Enter a unique AWS account name that will be associated with your account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verify Your Email:&lt;/strong&gt;&lt;br&gt;
AWS will send a verification email to the address you provided. Enter the verification code from the email to proceed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter Your Contact Information:&lt;/strong&gt;&lt;br&gt;
Provide your contact details, including your full name, address, and phone number.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Choose Your Account Type:&lt;/strong&gt;&lt;br&gt;
Select either a personal or professional account based on your usage needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter Payment Information:&lt;/strong&gt;&lt;br&gt;
AWS requires a credit card or payment method on file. You won't be charged unless you use services beyond the AWS Free Tier limits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity Verification:&lt;/strong&gt;&lt;br&gt;
AWS will verify your identity by sending a text message or automated phone call with a verification code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Select a Support Plan:&lt;/strong&gt;&lt;br&gt;
Choose from various support plans based on your needs. The Basic plan is free and suitable for most new users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complete the Setup:&lt;/strong&gt;&lt;br&gt;
Review and complete the setup process. Your AWS account is now ready to use!&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Log into Your AWS Account
&lt;/h2&gt;

&lt;p&gt;Once your account is set up, logging in is simple:&lt;/p&gt;

&lt;p&gt;Go to the AWS Management Console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter Your Root User Email:&lt;/strong&gt;&lt;br&gt;
Enter the email address you used to create your AWS account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter Your Password:&lt;/strong&gt;&lt;br&gt;
Provide the password associated with your AWS account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrzwbcanw2g70cka8kgx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrzwbcanw2g70cka8kgx.gif" alt=" " width="720" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using the root account for daily operations is not recommended due to security concerns. Instead, create an IAM user with administrative privileges.&lt;/p&gt;

&lt;p&gt;Visit the &lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS Management Console&lt;/a&gt; and navigate to the &lt;strong&gt;IAM&lt;/strong&gt; (Identity and Access Management) service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a New User:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Click on "&lt;strong&gt;&lt;em&gt;Users&lt;/em&gt;&lt;/strong&gt;" in the left-hand menu.&lt;br&gt;
Click the "&lt;strong&gt;&lt;em&gt;Add user&lt;/em&gt;&lt;/strong&gt;" button.&lt;br&gt;
Set User Details:&lt;/p&gt;

&lt;p&gt;Enter a username for the new user (e.g., admin-user).&lt;br&gt;
Select "&lt;em&gt;AWS Management Console access&lt;/em&gt;".&lt;br&gt;
Set a custom password or choose to generate one automatically. Ensure the user is required to reset their password upon first login.&lt;br&gt;
Assign Permissions:&lt;/p&gt;

&lt;p&gt;Click "&lt;strong&gt;&lt;em&gt;Next: Permissions&lt;/em&gt;&lt;/strong&gt;".&lt;br&gt;
Choose "&lt;strong&gt;&lt;em&gt;Attach existing policies directly&lt;/em&gt;&lt;/strong&gt;".&lt;br&gt;
Select the "&lt;strong&gt;&lt;em&gt;AdministratorAccess&lt;/em&gt;&lt;/strong&gt;" policy.&lt;br&gt;
Review and Create:&lt;/p&gt;

&lt;p&gt;Review the user details and permissions.&lt;br&gt;
Click "&lt;strong&gt;&lt;em&gt;Create user&lt;/em&gt;&lt;/strong&gt;".&lt;br&gt;
Download Credentials:&lt;/p&gt;

&lt;p&gt;Download the .csv file containing the user credentials or copy the access details. This file contains the login URL, username, and password for the new IAM user.&lt;/p&gt;
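&lt;p&gt;If you prefer the command line, the same IAM user can be created with the AWS CLI. The sketch below assumes the CLI is already configured with credentials that can manage IAM; the username and temporary password are illustrative placeholders:&lt;/p&gt;

```shell
# Sketch: the console steps above, expressed as AWS CLI calls.
# Assumes credentials are already configured (aws configure).
aws iam create-user --user-name admin-user

# Attach the managed AdministratorAccess policy.
aws iam attach-user-policy \
  --user-name admin-user \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Enable console access with a temporary password that must be
# reset at first sign-in. The password here is a placeholder.
aws iam create-login-profile \
  --user-name admin-user \
  --password 'ChangeMe-Temp123!' \
  --password-reset-required
```

&lt;p&gt;After this, sign in as the new IAM user for day-to-day work and reserve the root account for tasks that truly require it.&lt;/p&gt;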

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdvdb0ajuvfof8g86uui.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdvdb0ajuvfof8g86uui.gif" alt=" " width="720" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws101</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Network Policy in Kubernetes</title>
      <dc:creator>Salaudeen O. Abdulrasaq</dc:creator>
      <pubDate>Sun, 02 Jun 2024 08:17:13 +0000</pubDate>
      <link>https://forem.com/sirlawdin/network-policy-in-kubernetes-47i9</link>
      <guid>https://forem.com/sirlawdin/network-policy-in-kubernetes-47i9</guid>
      <description>&lt;p&gt;Secure communication between pods is critical in maintaining secure deployments. In this post, I will demonstrate how Kubernetes Network Policy can enforce fine-grained security controls in Kubernetes.&lt;/p&gt;

&lt;p&gt;I will demonstrate how to set up and enforce network policies in a Minikube environment, ensuring that, after the policy is applied, a MySQL pod in one namespace cannot be accessed by a client pod in another namespace.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A working installation of Minikube&lt;/li&gt;
&lt;li&gt;Basic knowledge of Kubernetes concepts and resources&lt;/li&gt;
&lt;li&gt;&lt;code&gt;kubectl&lt;/code&gt; configured to interact with the Minikube cluster&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Start Minikube
&lt;/h3&gt;

&lt;p&gt;Set up the Kubernetes environment with Minikube:&lt;br&gt;
&lt;code&gt;minikube start&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp5gnim09cy6z54s41a7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbp5gnim09cy6z54s41a7.png" alt="Start Minikube" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxw7patwju5z5n5ier3lp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxw7patwju5z5n5ier3lp.png" alt=" " width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Create Namespaces and Deploy Pods
&lt;/h3&gt;

&lt;p&gt;Create two namespaces: &lt;em&gt;database&lt;/em&gt; for the MySQL pod, and &lt;em&gt;client&lt;/em&gt; for the client pod that connects to the MySQL database:&lt;br&gt;
&lt;code&gt;kubectl create namespace database&lt;/code&gt;&lt;br&gt;
&lt;code&gt;kubectl create namespace client&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Deploy a MySQL pod in the &lt;code&gt;database&lt;/code&gt; namespace:
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  namespace: database
  labels:
    app: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: password
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Deploy a Client pod in the &lt;code&gt;client&lt;/code&gt; namespace:
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f - &amp;lt;&amp;lt;EOF
apiVersion: v1
kind: Pod
metadata:
  name: client
  namespace: client
  labels:
    app: client
spec:
  containers:
  - name: client
    image: mysql:5.7
    command: ["sleep", "3600"]
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fac6mgjywvf245ozgx5gr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fac6mgjywvf245ozgx5gr.png" alt=" " width="800" height="143"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Test Connectivity Before Applying the Network Policy
&lt;/h2&gt;

&lt;p&gt;Verify that the client pod can connect to the MySQL pod. First, find the MySQL pod's IP address:&lt;br&gt;
&lt;code&gt;kubectl get pod mysql -n database -o wide&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Open a shell in the client pod:&lt;br&gt;
&lt;code&gt;kubectl exec -it client -n client -- sh&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Connect to MySQL:&lt;br&gt;
&lt;code&gt;mysql -h &amp;lt;pod ip address&amp;gt; -u root -p&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7fn0dkofmda45l79aig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7fn0dkofmda45l79aig.png" alt=" " width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Implementing Kubernetes Network Policy
&lt;/h2&gt;

&lt;p&gt;Now, we can create a Kubernetes Network Policy that denies all ingress and egress traffic to the MySQL pod, blocking access from the &lt;code&gt;client&lt;/code&gt; namespace.&lt;/p&gt;

&lt;p&gt;I prefer using the &lt;a href="https://cilium.io/blog/2021/02/10/network-policy-editor/" rel="noopener noreferrer"&gt;Cilium Kubernetes Network Policy Generator&lt;/a&gt;. This tool provides a user-friendly UI to interpret policies at a glance and create them in a few clicks. It can be used to develop both &lt;a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="noopener noreferrer"&gt;Kubernetes Network Policies&lt;/a&gt; and &lt;a href="https://editor.cilium.io/" rel="noopener noreferrer"&gt;Cilium Network Policies&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Cilium offers a more robust and feature-rich alternative to Kubernetes' built-in network policies, enabling advanced security features like deep packet inspection and layer 7 (Application Layer) policies.&lt;/p&gt;
&lt;h3&gt;
  
  
  Generate a Kubernetes Network Policy with Cilium Policy Generator
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://share.zight.com/yAuzDN9x" rel="noopener noreferrer"&gt;How to use the UI policy Generator&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl --apply -f - &amp;lt;&amp;lt;EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-client-access
  namespace: database
spec:
  podSelector:
    matchLabels:
      app: mysql
  policyTypes:
    - Ingress
    - Egress
  ingress: []
  egress: []
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
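&lt;p&gt;The empty &lt;code&gt;ingress&lt;/code&gt; and &lt;code&gt;egress&lt;/code&gt; arrays lock the MySQL pod down completely. If you later need to re-allow specific traffic rather than deny everything, a selective variant could look like the sketch below; it assumes the default &lt;code&gt;kubernetes.io/metadata.name&lt;/code&gt; namespace label, and the &lt;code&gt;trusted&lt;/code&gt; namespace is a hypothetical example:&lt;/p&gt;

```yaml
# Sketch: allow MySQL ingress (TCP 3306) only from pods in a
# hypothetical "trusted" namespace; everything else stays denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-trusted-clients
  namespace: database
spec:
  podSelector:
    matchLabels:
      app: mysql
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: trusted
      ports:
        - protocol: TCP
          port: 3306
```

&lt;p&gt;Apply it the same way as the deny policy, e.g. &lt;code&gt;kubectl apply -f allow-trusted-clients.yaml&lt;/code&gt;.&lt;/p&gt;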



&lt;h2&gt;
  
  
  Test Connectivity After Applying Network Policy
&lt;/h2&gt;

&lt;p&gt;Verify that the client pod can no longer connect to the MySQL pod:&lt;br&gt;
&lt;code&gt;kubectl exec -it client -n client -- sh&lt;/code&gt;&lt;br&gt;
&lt;code&gt;mysql -h &amp;lt;pod ip address&amp;gt; -u root -p&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut39ja442vqjqmpkkn96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fut39ja442vqjqmpkkn96.png" alt=" " width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By implementing Kubernetes Network Policies, we can effectively control communication between pods across namespaces, enhancing the security of our Kubernetes cluster. For more advanced and robust network policies, technologies like Cilium can be used.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devsecops</category>
      <category>cilium</category>
    </item>
  </channel>
</rss>
