<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hyelngtil Isaac</title>
    <description>The latest articles on Forem by Hyelngtil Isaac (@maven_h).</description>
    <link>https://forem.com/maven_h</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3049759%2Fab7e9cca-d5f4-4692-975c-9a09a05fdf42.png</url>
      <title>Forem: Hyelngtil Isaac</title>
      <link>https://forem.com/maven_h</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/maven_h"/>
    <language>en</language>
    <item>
      <title>Threat Detection with GuardDuty</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Thu, 02 Apr 2026 07:46:22 +0000</pubDate>
      <link>https://forem.com/maven_h/threat-detection-with-guardduty-1odj</link>
      <guid>https://forem.com/maven_h/threat-detection-with-guardduty-1odj</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Today's Project!
&lt;/h2&gt;

&lt;p&gt;I built a hands-on project where I wore two hats, attacker and defender, to demonstrate how SQL injection and command injection can escalate into a full cloud credential breach, and how AWS GuardDuty surfaces those behaviors in near real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e9n8pla2v4ghnt3j9v4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0e9n8pla2v4ghnt3j9v4.png" alt=" " width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tools &amp;amp; Concepts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Services used:&lt;/strong&gt; Amazon GuardDuty, Amazon EC2, Amazon S3, AWS CloudFormation, AWS CloudShell, IAM Roles, Amazon CloudFront, VPC and networking components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key concepts:&lt;/strong&gt; Threat detection with GuardDuty, SQL injection, command injection, Instance Metadata Service (IMDS) and credential exfiltration, simulating attacker behavior with CloudShell, S3 Malware Protection, and incident investigation workflows.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This project took approximately 1 hour. The most challenging part was tuning GuardDuty detections and IAM role permissions. The most rewarding moment was watching real threat findings surface during testing.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Project Setup
&lt;/h2&gt;

&lt;p&gt;I deployed a &lt;strong&gt;CloudFormation template&lt;/strong&gt; that provisions three functional pillars:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web App Infrastructure&lt;/strong&gt; — an Amazon EC2 instance inside a dedicated VPC (not the default), with its own Subnet, Internet Gateway, and Elastic Load Balancer for isolated networking.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;S3 Storage&lt;/strong&gt; — a bucket containing a protected &lt;code&gt;important-information.txt&lt;/code&gt; file that the EC2 instance is authorized to access, simulating sensitive data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GuardDuty Monitoring&lt;/strong&gt; — automatically enabled as a security sentinel to monitor resources and detect threats.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The web app deployed is &lt;strong&gt;OWASP Juice Shop&lt;/strong&gt;, a deliberately vulnerable application. My objective as the simulated attacker: gain access to the EC2 web server and read the sensitive file in S3.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is GuardDuty?
&lt;/h3&gt;

&lt;p&gt;GuardDuty is an &lt;strong&gt;intelligent threat detection service&lt;/strong&gt; that continuously monitors AWS accounts and workloads for malicious activity. In this project, it analyzes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC Flow Logs&lt;/li&gt;
&lt;li&gt;CloudTrail management events&lt;/li&gt;
&lt;li&gt;S3 data events&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It uses machine learning and integrated threat intelligence to detect indicators of compromise — credential exfiltration, communication with known malicious IPs, and more. Findings trigger an automated remediation workflow via &lt;strong&gt;Amazon EventBridge&lt;/strong&gt;.&lt;/p&gt;
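&lt;p&gt;As a sketch, an EventBridge rule that routes GuardDuty findings into a remediation workflow matches on an event pattern like the one below; the target (an SNS topic, Lambda function, and so on) is whatever your workflow uses:&lt;/p&gt;

```json
{
  "source": ["aws.guardduty"],
  "detail-type": ["GuardDuty Finding"]
}
```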

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxy3n721sxf7zbx10v3if.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxy3n721sxf7zbx10v3if.png" alt=" " width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Attack Phase 1: SQL Injection
&lt;/h2&gt;

&lt;p&gt;SQL injection involves injecting malicious SQL code into an input field to manipulate backend database queries. It's dangerous because it allows attackers to bypass authentication and access sensitive data without authorization.&lt;/p&gt;

&lt;p&gt;I entered the following into the email field of the Juice Shop login page:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="s1"&gt;' or 1=1;--
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;1=1&lt;/code&gt; is always true, so the &lt;code&gt;WHERE&lt;/code&gt; clause matches and the database validates the login regardless of the password.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--&lt;/code&gt; comments out the rest of the original query, neutralizing the intended security check.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Result: administrative access to the OWASP Juice Shop portal — no password required.&lt;/p&gt;
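&lt;p&gt;The bypass can be reproduced locally against any SQL database. Here is a minimal sketch using Python's built-in sqlite3; the table name and columns are made up for illustration and differ from Juice Shop's actual schema:&lt;/p&gt;

```python
import sqlite3

# Toy stand-in for the application's user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin@juice-sh.op', 's3cret')")

email = "' or 1=1;--"      # attacker-controlled input
password = "wrong-password"

# Vulnerable: string concatenation lets the payload rewrite the query.
query = f"SELECT * FROM users WHERE email = '{email}' AND password = '{password}'"
bypass_rows = conn.execute(query).fetchall()
print(len(bypass_rows))    # 1: logged in without the password

# Safe: a parameterized query treats the payload as literal data.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE email = ? AND password = ?", (email, password)
).fetchall()
print(len(safe_rows))      # 0: no match
```

&lt;p&gt;The payload turns the condition into &lt;code&gt;email = '' or 1=1&lt;/code&gt; and comments out the password check, which is exactly what happened in the Juice Shop login form.&lt;/p&gt;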

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioby4bkpekfy3kgxn7sc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fioby4bkpekfy3kgxn7sc.png" alt=" " width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Attack Phase 2: Command Injection
&lt;/h2&gt;

&lt;p&gt;Command injection is a vulnerability where an attacker executes arbitrary OS commands via a vulnerable application. Juice Shop is vulnerable because it fails to sanitize user input before passing it to a system shell.&lt;/p&gt;

&lt;p&gt;I exploited the search field by injecting a &lt;strong&gt;Node.js payload&lt;/strong&gt; that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Queried the &lt;strong&gt;Instance Metadata Service (IMDSv2)&lt;/strong&gt; to retrieve a session token.&lt;/li&gt;
&lt;li&gt;Identified the &lt;strong&gt;IAM Role&lt;/strong&gt; attached to the EC2 instance.&lt;/li&gt;
&lt;li&gt;Fetched temporary security credentials — &lt;code&gt;AccessKeyId&lt;/code&gt;, &lt;code&gt;SecretAccessKey&lt;/code&gt;, and &lt;code&gt;SessionToken&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Piped the JSON output to a publicly accessible path:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;/frontend/dist/frontend/assets/public/credentials.json&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A simple application flaw became a significant cloud infrastructure breach.&lt;/p&gt;
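&lt;p&gt;The document the payload retrieves follows the IMDS security-credentials format (served from &lt;code&gt;http://169.254.169.254/latest/meta-data/iam/security-credentials/&lt;/code&gt; once a session token is obtained). A hedged sketch of pulling out the three values, using a sample document with placeholder values in place of a live IMDS response:&lt;/p&gt;

```python
import json

# Sample document in the shape IMDS returns for an instance role;
# the values here are placeholders, not real credentials.
sample = """{
  "Code": "Success",
  "Type": "AWS-HMAC",
  "AccessKeyId": "ASIAEXAMPLE",
  "SecretAccessKey": "EXAMPLEKEY",
  "Token": "EXAMPLETOKEN",
  "Expiration": "2026-04-02T13:00:00Z"
}"""

def extract_credentials(doc):
    """Pull the three values an attacker needs from an IMDS credential document."""
    data = json.loads(doc)
    return {
        "AccessKeyId": data["AccessKeyId"],
        "SecretAccessKey": data["SecretAccessKey"],
        "SessionToken": data["Token"],  # IMDS names this field "Token"
    }

creds = extract_credentials(sample)
print(creds["AccessKeyId"])
```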

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7etli677751zp3kzrhi0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7etli677751zp3kzrhi0.png" alt=" " width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Attack Verification
&lt;/h2&gt;

&lt;p&gt;I navigated to the public URL at &lt;code&gt;/assets/public/credentials.json&lt;/code&gt; and confirmed the exfiltrated credentials — a structured JSON object containing the stolen IAM temporary credentials tied to the EC2 instance's role.&lt;/p&gt;

&lt;p&gt;This proved the attacker now had everything needed to authenticate as a legitimate internal service and begin compromising additional AWS resources, including the S3 bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8m1szajsoaodcmrd204c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8m1szajsoaodcmrd204c.png" alt=" " width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Using CloudShell to Escalate the Attack
&lt;/h2&gt;

&lt;p&gt;CloudShell provided a pre-authenticated environment with the AWS CLI pre-installed — perfect for simulating an attacker operating outside the compromised instance with stolen credentials.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps taken:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Download the exfiltrated credentials file&lt;/span&gt;
wget &amp;lt;public-url&amp;gt;/assets/public/credentials.json

&lt;span class="c"&gt;# Extract the credential values&lt;/span&gt;
&lt;span class="nb"&gt;cat &lt;/span&gt;credentials.json | jq &lt;span class="s1"&gt;'.AccessKeyId, .SecretAccessKey, .SessionToken'&lt;/span&gt;

&lt;span class="c"&gt;# Configure a new AWS CLI profile called "stolen"&lt;/span&gt;
aws configure &lt;span class="nt"&gt;--profile&lt;/span&gt; stolen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the &lt;code&gt;stolen&lt;/code&gt; profile isolated the "hacker" identity from the default CloudShell credentials. I could now simulate unauthorized S3 access — the exact behavior GuardDuty would flag as anomalous.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xudoc1l7ujbu5ane6vn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xudoc1l7ujbu5ane6vn.png" alt=" " width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  GuardDuty's Findings
&lt;/h2&gt;

&lt;p&gt;Within &lt;strong&gt;15 minutes&lt;/strong&gt; of executing the attack, GuardDuty generated a finding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;UnauthorizedAccess:IAMUser/InstanceCredentialExfiltration.InsideAWS
Severity: HIGH
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this means:&lt;/strong&gt; GuardDuty detected that IAM credentials were exfiltrated and then used &lt;em&gt;inside&lt;/em&gt; the AWS environment — indicating a likely credential compromise and unauthorized lateral movement within the account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it detected it:&lt;/strong&gt;&lt;br&gt;
GuardDuty models normal AWS behavior and flags deviations. It correlates telemetry from CloudTrail, VPC Flow Logs, and DNS logs to spot unusual patterns — atypical API calls, credential use from unexpected sources, sudden internal data access, or reconnaissance activity.&lt;/p&gt;

&lt;p&gt;The detailed finding reported that credentials for the EC2 instance role were used from a &lt;strong&gt;remote AWS account&lt;/strong&gt;, confirming the simulated exfiltration scenario.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6j38j8w47c86pl6zq64t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6j38j8w47c86pl6zq64t.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  S3 Malware Protection
&lt;/h2&gt;

&lt;p&gt;To test GuardDuty's Malware Protection for S3, I uploaded the standard &lt;strong&gt;EICAR anti-malware test file&lt;/strong&gt; — a harmless string that antivirus products are configured to recognize as a test signature.&lt;/p&gt;

&lt;p&gt;GuardDuty instantly triggered a security alert, confirming that Malware Protection detected the uploaded object and generated a finding indicating potential malware.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwrx57h76v0iknbx1qut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwrx57h76v0iknbx1qut.png" alt=" " width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A single unsanitized input field can escalate from a login bypass to full cloud credential theft.&lt;/li&gt;
&lt;li&gt;IMDS is a high-value target; restricting it with &lt;strong&gt;IMDSv2&lt;/strong&gt; and tight IAM policies is critical.&lt;/li&gt;
&lt;li&gt;GuardDuty's anomaly detection is effective: it flagged the credential misuse within 15 minutes with no manual configuration beyond enabling the service.&lt;/li&gt;
&lt;li&gt;Simulating attacks in a controlled lab environment is one of the best ways to build intuition for both offensive techniques and defensive tooling.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;🤝 Next in the series builds on this project: "Secure Secrets with Secrets Manager".&lt;/p&gt;

</description>
      <category>devops</category>
      <category>cloudnative</category>
      <category>aws</category>
      <category>security</category>
    </item>
    <item>
      <title>Encrypt Data with AWS KMS</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Sun, 15 Mar 2026 11:17:31 +0000</pubDate>
      <link>https://forem.com/maven_h/encrypt-data-with-aws-kms-4fdb</link>
      <guid>https://forem.com/maven_h/encrypt-data-with-aws-kms-4fdb</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Today's Project!
&lt;/h2&gt;

&lt;p&gt;In this project, I demonstrate how to create AWS KMS encryption keys, use them to encrypt a DynamoDB table, add and retrieve data to verify the encryption, observe how AWS blocks unauthorized access, and grant a user the necessary KMS permissions. The goal is to show end‑to‑end data protection in AWS: provisioning keys, applying encryption to a live database, validating that only authorized principals can read or write the data, and confirming that key policies and &lt;code&gt;IAM controls&lt;/code&gt; effectively prevent unauthorized access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw7k9q31cm1gka4uj5qq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faw7k9q31cm1gka4uj5qq.png" alt=" " width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tools and concepts
&lt;/h2&gt;

&lt;p&gt;I used AWS KMS to create and manage a customer‑managed key, Amazon DynamoDB to store and encrypt table data, and IAM to create a test user and attach scoped policies. I worked in the AWS console to edit key policies, add key users, and verify access from an incognito session.&lt;/p&gt;

&lt;p&gt;I learned about encryption at rest, that KMS key policies are the ultimate authority, and how to enforce least privilege by separating DynamoDB permissions from KMS decrypt permissions. I practiced using grants for temporary access, testing permissions, and documenting changes for auditability and rollback.&lt;/p&gt;




&lt;h2&gt;
  
  
  Project reflection
&lt;/h2&gt;

&lt;p&gt;This project took me less than an hour to complete, including setup, testing, and documentation. It was rewarding to see least‑privilege controls work in practice: the test user initially received access denied errors, then, after a narrowly scoped policy change, could decrypt the DynamoDB items. I captured the steps and evidence for an auditable, repeatable workflow.&lt;/p&gt;

&lt;p&gt;I chose to do this project today because I wanted hands‑on experience with real AWS security controls, creating a customer‑managed KMS key, attaching it to a DynamoDB table, and testing least‑privilege in practice. Working through the console and verifying access as a restricted test user helped me connect theory to operational tasks I’ll face as a junior cloud engineer and gave me confidence in key policy mechanics and auditable workflows.&lt;/p&gt;




&lt;h2&gt;
  
  
  Encryption and KMS
&lt;/h2&gt;

&lt;p&gt;Encryption is the process of converting plaintext into an unreadable format (ciphertext) using mathematical algorithms so that unauthorized parties cannot understand the data. &lt;/p&gt;

&lt;p&gt;Companies and developers do this to protect sensitive data at rest and in transit, prevent data breaches, satisfy legal and regulatory obligations, and preserve customer trust by ensuring confidentiality and integrity. &lt;/p&gt;

&lt;p&gt;Encryption keys are secret values used by encryption algorithms to lock (encrypt) and unlock (decrypt) data; proper key management (storage, rotation, access control, and auditing) is essential because weak key handling defeats encryption. &lt;/p&gt;

&lt;p&gt;AWS KMS is a fully managed AWS service that creates, stores, and controls cryptographic keys, enforces key policies, integrates with other AWS services, and logs key usage for auditing; key management systems are important because they protect the secrets that secure your data, enforce least‑privilege access, provide auditable trails for compliance, simplify key rotation and lifecycle operations, and reduce the operational risk and complexity of managing encryption securely.&lt;/p&gt;

&lt;p&gt;Encryption keys are broadly categorized as &lt;code&gt;symmetric&lt;/code&gt; (one secret used for both encryption and decryption) and &lt;code&gt;asymmetric&lt;/code&gt; (a public/private key pair where the public key encrypts and the private key decrypts). I set up a symmetric key because symmetric keys are the recommended and most efficient choice for encrypting data at rest in AWS services like DynamoDB: they offer better performance for bulk data operations, integrate seamlessly with AWS KMS and service‑side encryption, and simplify access control and key lifecycle management, all while keeping the sensitive key material protected inside KMS and still allowing authorized AWS principals to read and write encrypted table data.&lt;/p&gt;
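&lt;p&gt;The symmetric pattern KMS applies to bulk data is envelope encryption: a data key encrypts the data, and the KMS key encrypts (wraps) the data key. A deliberately simplified sketch of that flow using only the standard library; the XOR "cipher" here is a toy stand-in for AES and must never be used on real data:&lt;/p&gt;

```python
import os

def xor_stream(key, data):
    """Toy stand-in for a real symmetric cipher (AES-GCM in practice)."""
    stream = (key * (len(data) // len(key) + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

# 1. KMS GenerateDataKey: returns a plaintext data key plus the same key
#    wrapped under the customer-managed key, which never leaves KMS.
master_key = os.urandom(32)                      # lives inside KMS
data_key = os.urandom(32)                        # plaintext data key
wrapped_key = xor_stream(master_key, data_key)   # stored beside the data

# 2. Encrypt the item with the data key, then discard the plaintext key.
item = b"sensitive DynamoDB attribute"
ciphertext = xor_stream(data_key, item)

# 3. Decrypt later: ask KMS to unwrap the data key, then decrypt locally.
unwrapped = xor_stream(master_key, wrapped_key)
plaintext = xor_stream(unwrapped, ciphertext)
print(plaintext == item)
```

&lt;p&gt;Every unwrap request goes through KMS, which is why key policies and CloudTrail logging control and audit all access to the encrypted data.&lt;/p&gt;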

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl78i9303m5fz0bjk2fti.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl78i9303m5fz0bjk2fti.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Encrypting Data
&lt;/h2&gt;

&lt;p&gt;My encryption key will safeguard data in DynamoDB, which is a fully managed, serverless NoSQL database that stores items as key‑value or document records, delivers single‑digit millisecond performance at scale, supports transactions and global replication, and integrates with AWS KMS for server‑side encryption; by attaching a customer‑managed symmetric KMS key to the table, storage and backups are encrypted at rest, access is controlled through key policies and IAM permissions, and every use of the key is logged for auditability so only authorized principals can decrypt and read the plaintext.&lt;/p&gt;

&lt;p&gt;The different encryption options in DynamoDB include &lt;code&gt;AWS owned keys&lt;/code&gt; (fully managed by AWS with no customer control), &lt;code&gt;AWS managed KMS keys&lt;/code&gt; (service‑managed CMKs that AWS creates and rotates but still surface usage in CloudTrail), and &lt;code&gt;customer‑managed KMS keys&lt;/code&gt; (CMKs you create and control in AWS KMS).&lt;br&gt;
Their differences are based on who controls the key material and lifecycle, how much policy and access control you can enforce, the granularity of audit and rotation capabilities, and how quickly you can revoke or disable access.&lt;br&gt;
I selected a &lt;code&gt;customer‑managed symmetric KMS key&lt;/code&gt; because it gives me full policy control, immediate revocation and rotation options, detailed auditability, and the performance and seamless integration needed for encrypting DynamoDB table data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq9k7gj9xn8zh7k0pppn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq9k7gj9xn8zh7k0pppn.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Data Visibility
&lt;/h2&gt;

&lt;p&gt;KMS manages user permissions by requiring explicit, key‑level authorization through a key policy (the primary control), and by evaluating IAM policies, grants, and any explicit denies before allowing cryptographic operations such as Encrypt, Decrypt, ReEncrypt, GenerateDataKey, or DescribeKey. This means no principal has any permissions on a KMS key unless the key policy, or an IAM policy or grant the key policy allows, gives them those permissions.&lt;/p&gt;

&lt;p&gt;Despite encrypting my DynamoDB table, I could still see the table’s items because DynamoDB’s server‑side encryption with KMS is transparent to authorized clients. AWS encrypts data at rest and stores ciphertext, but when an IAM principal or service with the necessary permissions reads an item, DynamoDB asks KMS to decrypt the data and returns plaintext to the caller, so applications and users who hold the right IAM/KMS permissions see normal, readable items. This protects storage, snapshots, and backups from anyone who cannot obtain KMS decrypt rights; if data must stay opaque even to clients, you need client‑side encryption (where the application holds the keys) rather than KMS server‑side encryption.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03og7f7tf2ww7hqub7wl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03og7f7tf2ww7hqub7wl.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Denying Access
&lt;/h2&gt;

&lt;p&gt;I configured a new IAM user named &lt;code&gt;nextwork-kms-user&lt;/code&gt; to act as a test account for DynamoDB work; I attached the &lt;code&gt;AmazonDynamoDBFullAccess&lt;/code&gt; managed policy so the user can fully interact with DynamoDB and saved the login credentials from the Retrieve password page, but I did not grant any permissions to my KMS key (no KMS actions or key policy access), ensuring the user cannot manage or decrypt encrypted data.&lt;/p&gt;

&lt;p&gt;After accessing the DynamoDB table as the test user, I encountered an error when attempting to view the encrypted item attributes because the test user lacked permissions to use the KMS key (the console returned an access denied / missing &lt;code&gt;kms:Decrypt&lt;/code&gt; message). This confirmed that attaching AmazonDynamoDBFullAccess alone does not permit reading encrypted data and that explicit KMS key permissions are required to decrypt and view those items, validating the principle of least privilege.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmm81yn0bonmg43ta8za.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmm81yn0bonmg43ta8za.png" alt=" " width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Granting Access
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5rujd1nyodxyvvbx78v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5rujd1nyodxyvvbx78v.png" alt=" " width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To let my test user use the encryption key, I added &lt;code&gt;nextwork-kms-user&lt;/code&gt; as a key user in the KMS console so the principal can perform cryptographic operations. The key's policy was updated to include a narrowly scoped statement granting that user &lt;code&gt;Encrypt&lt;/code&gt;, &lt;code&gt;Decrypt&lt;/code&gt;, &lt;code&gt;ReEncrypt&lt;/code&gt;, &lt;code&gt;GenerateDataKey&lt;/code&gt;, and &lt;code&gt;DescribeKey&lt;/code&gt; on the key (targeted to the user’s ARN) while explicitly omitting any key‑management actions, so the user can encrypt and decrypt data but cannot manage or change the key.&lt;br&gt;
Using the test user, I retried accessing the DynamoDB table and refreshed the Items view. The previously encrypted attributes now displayed in plaintext, and &lt;code&gt;GetItem&lt;/code&gt;/&lt;code&gt;Scan&lt;/code&gt; operations succeeded without any KMS access‑denied errors, which confirmed that adding the user to the key policy (granting &lt;code&gt;kms:Decrypt&lt;/code&gt; and related use actions) allowed the test user to decrypt and view the encrypted data while still withholding key management permissions.&lt;/p&gt;
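&lt;p&gt;The "key users" statement the console adds follows roughly this shape; the account ID and user name below are placeholders, and &lt;code&gt;kms:ReEncrypt*&lt;/code&gt; / &lt;code&gt;kms:GenerateDataKey*&lt;/code&gt; are the wildcard forms the console inserts:&lt;/p&gt;

```json
{
  "Sid": "Allow use of the key",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:user/nextwork-kms-user"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}
```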

&lt;p&gt;Encryption protects data by making it unreadable without keys, while access control restricts who can request or retrieve that data; use encryption when you need protection against compromised storage or cross‑boundary exposure, and combine it with access controls, IAM, network controls, and logging to enforce defense in depth.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flviywarg0g6zm6vugjnw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flviywarg0g6zm6vugjnw.png" alt=" " width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🤝 Next in the series builds on this project: "Threat Detection with Amazon GuardDuty".&lt;/p&gt;




</description>
      <category>security</category>
      <category>aws</category>
      <category>database</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Query Data with DynamoDB</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Thu, 05 Mar 2026 17:02:13 +0000</pubDate>
      <link>https://forem.com/maven_h/query-data-with-dynamodb-33ci</link>
      <guid>https://forem.com/maven_h/query-data-with-dynamodb-33ci</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Today's Project!
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Amazon DynamoDB?
&lt;/h3&gt;

&lt;p&gt;Amazon DynamoDB is a fully managed, serverless NoSQL database service from AWS that provides fast, predictable performance and scales automatically. It is useful because it eliminates the need to manage servers, supports both key‑value and document data models, and ensures single‑digit millisecond response times even at massive scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I used Amazon DynamoDB in this project
&lt;/h3&gt;

&lt;p&gt;In today’s project, I used Amazon DynamoDB to practice querying and updating data across tables in a way that keeps everything consistent. I started by running &lt;code&gt;get-item&lt;/code&gt; commands to retrieve specific records using partition keys and projection expressions, which allowed me to pull back only the attributes I needed. Then I explored how related tables can be updated together by running a transaction with &lt;code&gt;transact-write-items&lt;/code&gt;, which let me insert a new comment into one table while simultaneously updating a counter in another. This showed me how DynamoDB ensures atomicity: both operations succeed or fail together, making it really useful for handling connected data across multiple tables without risking mismatched updates.&lt;/p&gt;
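&lt;p&gt;The all-or-nothing behaviour &lt;code&gt;transact-write-items&lt;/code&gt; provides can be sketched in a few lines; the in-memory "tables" and item names below are toys standing in for the real Comment and Post tables:&lt;/p&gt;

```python
import copy

def transact_write(operations):
    """Apply every operation or none: snapshot first, roll back on any failure."""
    snapshots = [(table, copy.deepcopy(table)) for table, _ in operations]
    try:
        for table, op in operations:
            op(table)
    except Exception:
        # Any failure restores every table to its pre-transaction state.
        for table, saved in snapshots:
            table.clear()
            table.update(saved)
        raise

comments = {}                              # toy "Comment" table
posts = {"post-1": {"comment_count": 0}}   # toy "Post" table

def put_comment(table):
    table["comment-9"] = {"PostId": "post-1", "Content": "Nice write-up!"}

def bump_counter(table):
    table["post-1"]["comment_count"] += 1

# Both writes land together, mirroring transact-write-items semantics.
transact_write([(comments, put_comment), (posts, bump_counter)])
print(posts["post-1"]["comment_count"])
```

&lt;p&gt;If either operation raised (say, a failed condition check), neither table would change, which is the guarantee that keeps the comment and its counter in sync.&lt;/p&gt;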

&lt;p&gt;One thing I didn’t expect in this project is how seamlessly DynamoDB handled transactions across multiple tables. I thought working with related data in separate tables would require a lot of manual coordination, but using &lt;code&gt;transact-write-items&lt;/code&gt; made it surprisingly straightforward to insert a new record in one table while simultaneously updating another. It was eye‑opening to see how DynamoDB guarantees atomicity: either both operations succeed or neither does, which really simplifies keeping related data consistent. This step showed me that DynamoDB isn’t just about speed and scalability, but also about reliability when managing complex relationships.&lt;/p&gt;

&lt;p&gt;This project took me through the full cycle of working with DynamoDB: from retrieving specific items with &lt;code&gt;get-item&lt;/code&gt; and projection expressions, to exploring how tables can be related, and finally running a transaction that updated two tables at once. It wasn’t just about learning commands; it was about seeing how DynamoDB ensures consistency and reliability when handling connected data.&lt;/p&gt;




&lt;h3&gt;
  
  
  Querying DynamoDB Tables
&lt;/h3&gt;

&lt;p&gt;A partition key is the primary attribute DynamoDB uses to distribute and retrieve data across its storage partitions. Every item in a DynamoDB table must include a partition key, and items with the same partition key value are grouped together. This key determines where the data is stored internally and is essential for efficient queries, since DynamoDB can quickly locate items based on that key rather than scanning the entire table.&lt;br&gt;
A sort key is the secondary attribute in a DynamoDB table’s primary key schema that works alongside the partition key to uniquely identify items. While the partition key determines which partition the data belongs to, the sort key organizes items within that partition. This means multiple items can share the same partition key but be distinguished by different sort key values. Sort keys also enable powerful query patterns, such as retrieving items in a range (e.g., all comments after a certain date) or ordering results by the sort key.&lt;/p&gt;
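&lt;p&gt;A tiny in-memory sketch makes the division of labor concrete. The attribute names here (&lt;code&gt;PostId&lt;/code&gt; as partition key, &lt;code&gt;CommentDateTime&lt;/code&gt; as sort key) are illustrative, not the project's actual schema:&lt;/p&gt;

```python
# Toy model of a composite primary key: the partition key groups items,
# the sort key orders and narrows them within that group.
comments = [
    {"PostId": "post-1", "CommentDateTime": "2026-01-01T10:00", "Text": "Nice!"},
    {"PostId": "post-1", "CommentDateTime": "2026-02-01T09:00", "Text": "+1"},
    {"PostId": "post-2", "CommentDateTime": "2026-01-15T12:00", "Text": "Thanks"},
]

def query(partition_value, sort_after=None):
    """Mimic a DynamoDB Query: the partition key is mandatory,
    the sort key only narrows results inside that partition."""
    items = [c for c in comments if c["PostId"] == partition_value]
    if sort_after is not None:
        items = [c for c in items if c["CommentDateTime"] > sort_after]
    return sorted(items, key=lambda c: c["CommentDateTime"])

# e.g. all comments on post-1 made after Jan 15
recent = query("post-1", sort_after="2026-01-15T00:00")
```

This mirrors the range-query pattern described above: same partition key, filtered and ordered by the sort key.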

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhje6zf9yzbvbunfevly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmhje6zf9yzbvbunfevly.png" alt=" " width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Limits of Using DynamoDB
&lt;/h3&gt;

&lt;p&gt;I ran into an error when I queried for items in the Comment table without providing a value for the partition key Id. This was because DynamoDB requires a partition key condition in every query; without it, the system doesn’t know which partition to look in, so the console flagged the input as invalid. In other words, the query failed because the partition key field was left empty, and DynamoDB cannot execute a query unless the full key schema is respected.&lt;br&gt;
Insights we could extract from our Comment table include the ability to see which posts attract the most engagement, track how often specific users contribute comments, and identify time-based activity patterns such as peak commenting hours or days. We can also observe relationships between posts and their associated comments, giving us a clear picture of community interaction at a structural level.&lt;/p&gt;
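&lt;p&gt;The validation that produced that console error can be sketched in a few lines. The key names here are illustrative stand-ins, not the exact schema from the project:&lt;/p&gt;

```python
# Sketch of the check DynamoDB effectively performs before running a Query:
# the key condition must pin down the partition key; a sort key alone won't do.
KEY_SCHEMA = {"partition": "Id", "sort": "PostedDateTime"}

def validate_key_condition(condition_keys):
    """Reject a query whose key condition omits the partition key,
    mirroring the 'invalid input' error from this step."""
    if KEY_SCHEMA["partition"] not in condition_keys:
        raise ValueError("Query must include a condition on the partition key")
    return True

assert validate_key_condition({"Id"})          # partition key present: OK
try:
    validate_key_condition({"PostedDateTime"}) # sort key alone: rejected
    ok = False
except ValueError:
    ok = True
```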

&lt;p&gt;Insights we can’t easily extract from the Comment table include deeper qualitative analysis, such as the sentiment or tone of comments, trending topics across multiple posts, or demographic-based engagement patterns. DynamoDB stores structured attributes but doesn’t analyze meaning or allow complex joins across tables, so extracting these kinds of insights would require additional tools or data sources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww7djla08q9jj8y31nbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fww7djla08q9jj8y31nbr.png" alt=" " width="800" height="409"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Running Queries with CLI
&lt;/h3&gt;

&lt;p&gt;A query I ran in CloudShell was:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;aws dynamodb get-item \&lt;br&gt;
    --table-name ContentCatalog \&lt;br&gt;
    --key '{"Id":{"N":"202"}}' \&lt;br&gt;
    --projection-expression "Title, ContentType, Services" \&lt;br&gt;
    --return-consumed-capacity TOTAL&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This query will fetch the item in the ContentCatalog table with the partition key Id equal to 202, but instead of returning the entire record, DynamoDB will only return the attributes I specified in the projection expression: Title, ContentType, and Services. Alongside those values, the response will also include a &lt;code&gt;ConsumedCapacity&lt;/code&gt; block that shows how many read capacity units (RCUs) were used, giving me both the filtered item data and a usage report in one result.&lt;/p&gt;
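&lt;p&gt;The response comes back in DynamoDB's typed JSON, where every attribute is wrapped in a type descriptor (&lt;code&gt;S&lt;/code&gt; for string, &lt;code&gt;N&lt;/code&gt; for number, &lt;code&gt;L&lt;/code&gt; for list). A small sketch of unwrapping it, with illustrative values rather than my actual output:&lt;/p&gt;

```python
import json

# A response in the shape `get-item` returns; the item values below are
# made-up stand-ins, not the real ContentCatalog record.
response = json.loads("""
{
  "Item": {
    "Title": {"S": "Intro to DynamoDB"},
    "ContentType": {"S": "Video"},
    "Services": {"L": [{"S": "DynamoDB"}, {"S": "CloudShell"}]}
  },
  "ConsumedCapacity": {"TableName": "ContentCatalog", "CapacityUnits": 0.5}
}
""")

def unwrap(value):
    """Convert DynamoDB's typed JSON into plain Python values."""
    ((dtype, raw),) = value.items()
    if dtype == "S":
        return raw
    if dtype == "N":
        return float(raw)
    if dtype == "L":
        return [unwrap(v) for v in raw]
    raise NotImplementedError(dtype)

item = {k: unwrap(v) for k, v in response["Item"].items()}
rcus = response["ConsumedCapacity"]["CapacityUnits"]
```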

&lt;p&gt;Query options I could add to my query affect how DynamoDB returns the data and what additional information I get back. Specifically:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;--consistent-read&lt;br&gt;
Ensures I always get the most up-to-date version of the item, rather than a possibly stale copy from a replicated node.&lt;/p&gt;

&lt;p&gt;--projection-expression&lt;br&gt;
Lets me specify which attributes to return, so instead of the full record I only get the fields I care about (Title, ContentType, and Services).&lt;/p&gt;

&lt;p&gt;--return-consumed-capacity TOTAL&lt;br&gt;
Adds a usage report to the response, showing how many read capacity units were consumed by the query.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8hgnldisxm4mem3v9i2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8hgnldisxm4mem3v9i2.png" alt=" " width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Transactions
&lt;/h3&gt;

&lt;p&gt;A transaction is a coordinated set of operations in DynamoDB that are executed together so they either all succeed or all fail. When you need to update related data across multiple tables or items, you can group those changes into a single transaction. DynamoDB then guarantees atomicity: if one part of the transaction cannot be completed, none of the changes are applied. This ensures consistency and prevents situations where one table is updated but another is left behind, keeping your data reliable and synchronized across different parts of your application.&lt;/p&gt;

&lt;p&gt;I ran a transaction using the &lt;code&gt;aws dynamodb transact-write-items&lt;/code&gt; command with a client request token called &lt;code&gt;TRANSACTION1&lt;/code&gt;. This transaction did two things: first, it added a new item into the "Comment" table with details such as the event name, the date and time of the comment, the comment text, and the user who posted it. Second, it updated the "Forum" table by incrementing the &lt;code&gt;Comments&lt;/code&gt; attribute for the &lt;code&gt;Events&lt;/code&gt; item. By grouping these two operations together in a single transaction, DynamoDB ensured that both changes either succeeded or failed as one unit, keeping the data consistent across the related tables.&lt;/p&gt;
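&lt;p&gt;The JSON payload behind that &lt;code&gt;transact-write-items&lt;/code&gt; call can be sketched as a plain data structure. The attribute names and values here are illustrative stand-ins for my actual comment and forum items:&lt;/p&gt;

```python
# One atomic unit: a Put into the Comment table plus an Update on the
# Forum counter. DynamoDB applies both or neither.
transaction = {
    "ClientRequestToken": "TRANSACTION1",
    "TransactItems": [
        {
            "Put": {  # insert the new comment...
                "TableName": "Comment",
                "Item": {
                    "Id": {"S": "Events"},
                    "PostedDateTime": {"S": "2026-04-02T07:46:00"},
                    "Message": {"S": "Great event!"},
                    "PostedBy": {"S": "maven_h"},
                },
            }
        },
        {
            "Update": {  # ...and bump the counter in the same atomic unit
                "TableName": "Forum",
                "Key": {"Name": {"S": "Events"}},
                "UpdateExpression": "ADD Comments :inc",
                "ExpressionAttributeValues": {":inc": {"N": "1"}},
            }
        },
    ],
}

# Both related tables are touched in one request
tables_touched = [next(iter(op.values()))["TableName"]
                  for op in transaction["TransactItems"]]
```

Passing this as the request body is what guarantees the comment insert and the counter increment succeed or fail as one unit.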

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0n8nnbzed4i9xpcbifj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0n8nnbzed4i9xpcbifj.png" alt=" " width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🤝This is the end of this Series.&lt;br&gt;
Next Series will be &lt;strong&gt;Security&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>aws</category>
      <category>nosql</category>
      <category>cloudnative</category>
    </item>
    <item>
      <title>Load Data into a DynamoDB Table</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Mon, 02 Feb 2026 22:20:35 +0000</pubDate>
      <link>https://forem.com/maven_h/load-data-into-adynamodb-table-ko0</link>
      <guid>https://forem.com/maven_h/load-data-into-adynamodb-table-ko0</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Today's Project!
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Amazon DynamoDB?
&lt;/h3&gt;

&lt;p&gt;DynamoDB is useful because it combines fast performance, flexible data modeling, and effortless scaling, making it a strong choice for modern applications that need to handle large amounts of varied data reliably.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I used Amazon DynamoDB in this project
&lt;/h3&gt;

&lt;p&gt;In today's project, I used Amazon DynamoDB to create tables, load diverse data like projects and videos into the ContentCatalog, and then view and update those items, because DynamoDB’s flexible schema allowed me to store different types of content side by side while still retrieving them quickly with partition keys.&lt;/p&gt;

&lt;h3&gt;
  
  
  One thing I didn't expect in this project was...
&lt;/h3&gt;

&lt;p&gt;One thing I didn't expect in this project is how straightforward it was to set up and run DynamoDB compared to relational databases like RDS or Aurora. DynamoDB doesn’t require configuring servers, managing connections, or defining rigid schemas; it just lets you create a table and start loading items right away.&lt;/p&gt;

&lt;p&gt;This project took me about an hour, taking me through the full cycle of working with Amazon DynamoDB: creating tables, loading diverse data like projects and videos, and then viewing and updating items. It was designed to show how DynamoDB’s flexibility and speed make it easier to manage different types of content compared to traditional relational databases.&lt;/p&gt;




&lt;h2&gt;
  
  
  Create a DynamoDB table
&lt;/h2&gt;

&lt;p&gt;DynamoDB tables organize data using items, which are records made up of &lt;strong&gt;attributes&lt;/strong&gt; that describe details about each item; unlike relational databases, items don’t need to share the same attributes, giving DynamoDB a flexible way to store varied information in one table.&lt;/p&gt;

&lt;p&gt;An attribute is a single piece of data that describes an item in DynamoDB; for example, if the item is a student record, attributes could include the student’s name, age, or number of projects completed. Unlike traditional relational databases where every row must share the same set of columns, DynamoDB items can each have different attributes, giving you flexibility to store varied information within the same table.&lt;/p&gt;
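&lt;p&gt;Two items that could sit side by side in one table make the point. The attribute names and values here are made up for illustration:&lt;/p&gt;

```python
# Two items in the same hypothetical table: only the partition key (Id)
# and a Title are shared; every other attribute differs per item.
project_item = {"Id": 101, "Title": "Build a VPC",
                "Difficulty": "Medium", "Published": True}
video_item = {"Id": 202, "Title": "Intro to DynamoDB",
              "VideoType": "Tutorial"}

shared = set(project_item) & set(video_item)           # attributes both have
unique_to_video = set(video_item) - set(project_item)  # video-only attributes
```

A relational table would force both records into one fixed column set; DynamoDB stores them as-is.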

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyz6r441grfx73qitay4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyz6r441grfx73qitay4.png" alt=" " width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Read and Write Capacity
&lt;/h3&gt;

&lt;p&gt;Read Capacity Units (RCUs) and Write Capacity Units (WCUs) are DynamoDB’s measures of throughput, where RCUs define how many reads per second a table can handle and WCUs define how many writes per second it can handle.&lt;/p&gt;

&lt;p&gt;Amazon DynamoDB’s Free Tier provides 25 GB of storage, along with 25 Read Capacity Units (RCUs) and 25 Write Capacity Units (WCUs), which together can handle up to 200 million requests per month at no cost. I turned off auto scaling because, while it can automatically increase capacity in production to handle spikes in demand, it could push usage beyond the Free Tier limits and lead to unexpected charges; disabling it ensures my table stays within the free allowance while I safely experiment.&lt;/p&gt;
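&lt;p&gt;A quick back-of-the-envelope check of that "200 million requests per month" figure, assuming eventually consistent reads (1 RCU covers 2 reads per second) and standard writes (1 WCU covers 1 write per second):&lt;/p&gt;

```python
# Rough monthly throughput from the Free Tier's provisioned capacity.
RCUS, WCUS = 25, 25
SECONDS_PER_MONTH = 30 * 24 * 3600       # 2,592,000

reads_per_second = RCUS * 2              # 50 eventually consistent reads/s
writes_per_second = WCUS * 1             # 25 writes/s

monthly_requests = (reads_per_second + writes_per_second) * SECONDS_PER_MONTH
# ≈ 194 million, which rounds to the "up to 200 million" in AWS's copy
```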

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7y24tp9kse86ppc6fve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7y24tp9kse86ppc6fve.png" alt=" " width="800" height="414"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Using CLI and CloudShell
&lt;/h3&gt;

&lt;p&gt;AWS CloudShell is a browser‑based command line environment provided by Amazon Web Services that lets you securely manage, explore, and interact with your AWS resources without needing to install or configure tools locally. It comes pre‑authenticated with your AWS account and includes popular developer tools, making it easy to run commands and scripts and manage services like DynamoDB directly from your web browser.&lt;/p&gt;

&lt;p&gt;AWS CLI is a command-line interface tool that lets you manage and interact with AWS services by typing commands instead of using the web console. It provides a unified way to automate tasks, run scripts, and control resources like DynamoDB, S3, or EC2 directly from your terminal, making it especially useful for developers and administrators who want efficiency and repeatability in managing their cloud infrastructure.&lt;/p&gt;

&lt;p&gt;I ran a CLI command in AWS CloudShell that created a new DynamoDB table, because CloudShell provides a ready‑to‑use, browser‑based terminal that’s already authenticated with my AWS account, making it simple to execute AWS CLI commands without installing or configuring anything locally. This step is part of learning how to provision DynamoDB resources directly from the command line, reinforcing the idea that you can manage AWS services not only through the console but also programmatically.&lt;/p&gt;
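&lt;p&gt;The arguments behind a &lt;code&gt;create-table&lt;/code&gt; call can be sketched as the JSON the CLI sends. The table name matches the project, but the exact key schema here is an illustrative assumption:&lt;/p&gt;

```python
# Request shape for `aws dynamodb create-table`: the table name, its key
# schema, and the provisioned throughput all travel in one payload.
create_table_args = {
    "TableName": "ContentCatalog",
    "AttributeDefinitions": [{"AttributeName": "Id", "AttributeType": "N"}],
    "KeySchema": [{"AttributeName": "Id", "KeyType": "HASH"}],  # partition key
    "ProvisionedThroughput": {"ReadCapacityUnits": 25, "WriteCapacityUnits": 25},
}

# Every attribute referenced in KeySchema must appear in AttributeDefinitions,
# or DynamoDB rejects the request.
defined = {a["AttributeName"] for a in create_table_args["AttributeDefinitions"]}
key_attrs = {k["AttributeName"] for k in create_table_args["KeySchema"]}
```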

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fra7d64vvktcnzk6zwrro.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fra7d64vvktcnzk6zwrro.png" alt=" " width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Loading Data with CLI
&lt;/h3&gt;

&lt;p&gt;I ran a CLI command in AWS CloudShell that created new DynamoDB tables, because CloudShell comes pre‑installed with the AWS CLI and is already authenticated with my AWS account, making it easy to provision resources directly from the browser without needing any local setup. This step shows how DynamoDB tables can be defined programmatically, reinforcing the flexibility of managing cloud databases through commands instead of the console.&lt;/p&gt;
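&lt;p&gt;Bulk-loading items is typically done with &lt;code&gt;batch-write-item&lt;/code&gt;, whose request groups put requests per table. The items below are illustrative stand-ins, not the project's actual data file:&lt;/p&gt;

```python
# Request shape for `aws dynamodb batch-write-item`: a map of table name
# to a list of PutRequest entries, each carrying one typed item.
batch_request = {
    "ContentCatalog": [
        {"PutRequest": {"Item": {"Id": {"N": "101"},
                                 "Title": {"S": "Build a VPC"}}}},
        {"PutRequest": {"Item": {"Id": {"N": "202"},
                                 "Title": {"S": "Intro to DynamoDB"}}}},
    ]
}

# BatchWriteItem accepts at most 25 put/delete requests per call
assert all(len(reqs) <= 25 for reqs in batch_request.values())
item_count = sum(len(reqs) for reqs in batch_request.values())
```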

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rtyr03w7z0uufkus52v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rtyr03w7z0uufkus52v.png" alt=" " width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Observing Item Attributes
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffotngoycls82q8p15egd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffotngoycls82q8p15egd.png" alt=" " width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I checked a ContentCatalog item, which had the following attributes: &lt;br&gt;
Id (partition key, number), Title (string), URL (string), Authors (list), Price (number), Difficulty (string), Published (boolean), ProjectCategory (string), and ContentType (string).&lt;br&gt;
I checked another ContentCatalog item, which had a different set of attributes:&lt;br&gt;
Id (partition key, number), Title (string), URL (string), VideoType (string), Price (number), Services (list, sometimes included), and ContentType (string).&lt;/p&gt;




&lt;h3&gt;
  
  
  Benefits of DynamoDB
&lt;/h3&gt;

&lt;p&gt;A benefit of DynamoDB over relational databases is flexibility: because it doesn’t require a fixed schema, items in the same table can have different sets of attributes and data types. This means you can store diverse records (like projects and videos in your ContentCatalog) side by side without redesigning the table, whereas relational databases enforce rigid column structures that must be consistent across all rows. This flexibility makes DynamoDB especially useful for applications where data models evolve quickly or vary widely.&lt;/p&gt;

&lt;p&gt;Another benefit over relational databases is speed, because DynamoDB is designed for high performance at scale, using SSD storage and a distributed architecture that allows single‑digit millisecond response times. Unlike relational databases, which often need complex joins and indexing across rigid schemas, DynamoDB retrieves items directly by their keys, making lookups and writes much faster. This speed is especially valuable for applications like gaming, e‑commerce, or real‑time analytics, where quick responses are critical.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8iagge0qx9cum0azoxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8iagge0qx9cum0azoxf.png" alt=" " width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🤝Next in the series builds on this, which is "Query Data with DynamoDB"&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>aws</category>
      <category>nosql</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Connect a Web App to Amazon Aurora</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Tue, 13 Jan 2026 11:51:33 +0000</pubDate>
      <link>https://forem.com/maven_h/connect-a-web-app-toamazon-aurora-367h</link>
      <guid>https://forem.com/maven_h/connect-a-web-app-toamazon-aurora-367h</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Today's Project!
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Amazon Aurora?
&lt;/h3&gt;

&lt;p&gt;Amazon Aurora is a fully managed relational database service from AWS that is compatible with MySQL. It is useful because it combines the familiarity of MySQL with the scalability, speed, and reliability of a cloud‑native service. Aurora automatically handles tasks like backups, replication, and failover, which makes it easier to build web apps that need secure, high‑performance data storage without managing complex infrastructure yourself.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I used Amazon Aurora in this project
&lt;/h3&gt;

&lt;p&gt;In today’s project, I used Amazon Aurora to store and manage the data from my web app. By connecting my EC2‑hosted application to Aurora, I was able to capture user input through the web interface and save it securely in a relational database. Aurora’s compatibility with MySQL made it easy to query and verify the data using the MySQL CLI, while its scalability and reliability ensured that the app could handle future growth without me having to manage complex infrastructure.&lt;/p&gt;

&lt;p&gt;One thing I didn’t expect in this project was how quickly Amazon Aurora connected with my EC2 instance once the configuration details were set. I thought it might take longer or require a more complex setup, but the compatibility with MySQL and the php‑mysqli extension made the process smoother than I anticipated. This showed me that cloud services can simplify tasks that would normally be more complicated to manage on my own.&lt;/p&gt;




&lt;h3&gt;
  
  
  Creating a Web App
&lt;/h3&gt;

&lt;p&gt;To connect to my EC2 instance, I used SSH with my .pem key file because this provides secure, authenticated access to the server. By running the &lt;code&gt;ssh -i MavenAuroraApp.pem ec2-user@&lt;/code&gt; command, I was able to log in remotely, ready to begin installing and configuring my web application.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04yzyy29byc7i5krlela.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04yzyy29byc7i5krlela.png" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To help me create my web app, I first connected to my EC2 instance through SSH and installed the necessary software, Apache, PHP, and the php‑mysqli extension, because these tools turn the EC2 instance into a functioning web server capable of running a dynamic application and communicating with my Aurora database.&lt;/p&gt;




&lt;h3&gt;
  
  
  Connecting my Web App to Aurora
&lt;/h3&gt;

&lt;p&gt;I set up my EC2 instance's connection details to my database.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm39lecopehr82t6b2dz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpm39lecopehr82t6b2dz.png" alt=" " width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  My Web App Upgrade
&lt;/h3&gt;

&lt;p&gt;Next, I upgraded my web app by adding a new PHP script that connects to my Aurora database and displays a more user‑friendly web page. This upgrade allowed the app to move beyond a simple static page and start handling dynamic data, capturing user input, sending queries to Aurora, and showing results directly in the browser.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ckvges8cmx1j1rb3c12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ckvges8cmx1j1rb3c12.png" alt=" " width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  Testing my Web App
&lt;/h3&gt;

&lt;p&gt;To make sure my web app was working correctly, I tested it in the browser by submitting data through the web page and then used the MySQL CLI on my EC2 instance to query the Aurora database. By checking that the new entries appeared in the database, I confirmed that the app was successfully sending and storing data in Aurora.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxilh0zakgxz5bgwnevw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxilh0zakgxz5bgwnevw.png" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fim3325phbq9b6dezyv27.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fim3325phbq9b6dezyv27.png" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🤝&lt;strong&gt;&lt;em&gt;Next in the series builds on this, which is "Load Data into DynamoDB"&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>awschallenge</category>
      <category>database</category>
      <category>cloudnative</category>
      <category>aws</category>
    </item>
    <item>
      <title>Aurora Database with EC2</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Mon, 05 Jan 2026 10:18:09 +0000</pubDate>
      <link>https://forem.com/maven_h/aurora-database-with-ec2-380d</link>
      <guid>https://forem.com/maven_h/aurora-database-with-ec2-380d</guid>
      <description>&lt;h3&gt;
  
  
  Connect a Web App to Amazon Aurora
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5aaj7w11gkdz9gqi3d8a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5aaj7w11gkdz9gqi3d8a.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing Today's Project!
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What is Amazon Aurora?
&lt;/h3&gt;

&lt;p&gt;Amazon Aurora is a high‑performance, fully managed database engine that combines the speed and reliability of commercial databases with the simplicity and cost‑effectiveness of open‑source ones. It’s useful because it scales automatically, stays highly available, and integrates smoothly with your AWS environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I used Amazon Aurora in this project
&lt;/h3&gt;

&lt;p&gt;In today’s project, I used Amazon Aurora to set up a highly available relational database that integrates seamlessly with my EC2 instance. Aurora provided the database endpoint I needed to connect my application, while automatically handling scalability, replication, and fault tolerance. This allowed me to focus on building and testing my app without worrying about manual database management.&lt;/p&gt;

&lt;h3&gt;
  
  
  One thing I didn't expect in this project
&lt;/h3&gt;

&lt;p&gt;One thing I didn’t expect in this project was how straightforward the setup turned out to be. I thought it would be much of a hassle to configure the database and connect it to my EC2 instance, but the AWS steps made the process surprisingly simple.&lt;/p&gt;

&lt;p&gt;I completed the Aurora Database with EC2 project in about an hour, which was faster than I expected. The guided AWS setup made connecting my EC2 instance to Aurora straightforward and efficient.&lt;/p&gt;

&lt;h3&gt;
  
  
  In the first part of my project
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Creating an Aurora Cluster&lt;/strong&gt;&lt;br&gt;
A relational database is one in which data is organized into tables, which are collections of rows and columns. It's called "relational" because each table is a relation, a structured set of rows sharing the same columns, and tables can be linked to one another through shared keys.&lt;br&gt;
Aurora is a good choice when we need something large-scale, with peak performance and uptime. This is because Aurora databases use clusters. Ordinary relational databases, like MySQL and Oracle, are more generic and cost-effective. They suit smaller databases and less demanding workloads.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qphgm22rfyymk0u1d2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qphgm22rfyymk0u1d2v.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Halfway through I stopped!
&lt;/h3&gt;

&lt;p&gt;I stopped creating my Aurora database because I am trying to connect a web app server to my Aurora database. That is why I needed to set up an EC2 instance to serve as the web app server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Features of my EC2 instance
&lt;/h3&gt;

&lt;p&gt;I created a new key pair for my EC2 instance because the key pair is what lets me securely access the instance whenever I want to add, change, or update how it is running.&lt;br&gt;
When I created my EC2 instance, I took particular note of the "Public IPv4 DNS" and the "Key pair name." The Public IPv4 DNS is essentially the address of my EC2 instance on the internet, while the Key pair is a set of cryptographic keys (public and private) used to securely access the instance. The Key pair name identifies which keys are associated with it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8f4naazng9nw7cnaekf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft8f4naazng9nw7cnaekf.png" alt=" " width="598" height="779"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Then I could finish setting up my database
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqafrzr4oupbfi4xndko2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqafrzr4oupbfi4xndko2.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aurora databases use clusters because the instances in a cluster work together to keep your data available, which is what makes Aurora well suited to large, demanding workloads.&lt;br&gt;
Each cluster consists of a primary instance (where all write operations occur) and multiple read replicas as backups. If your database's primary instance fails, one of the replicas can be promoted to primary automatically.&lt;/p&gt;

&lt;p&gt;🤝&lt;strong&gt;&lt;em&gt;Next in the series builds on this, which is "Connect a Web App with Aurora"&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>cloudnative</category>
      <category>cloudskills</category>
    </item>
    <item>
      <title>AWS Databases!</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Sat, 03 Jan 2026 09:54:53 +0000</pubDate>
      <link>https://forem.com/maven_h/aws-databases-1nlh</link>
      <guid>https://forem.com/maven_h/aws-databases-1nlh</guid>
      <description>&lt;h2&gt;
  
  
  I'm exploring AWS Databases!
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;I'm building database solutions on AWS&lt;/strong&gt;&lt;br&gt;
In this AWS Databases series, I'm learning about AWS databases (relational and NoSQL). By the end of these projects, I will know how to Connect an Aurora Database to EC2, Connect a Web App to Amazon Aurora, Load Data into a DynamoDB Table, Visualize a Relational Database, and Query Data with DynamoDB. I'm learning about cloud databases because I want to know how they work and then use them to create impactful solutions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1706djr0hnxc5tc279d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1706djr0hnxc5tc279d.png" alt=" " width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I am excited to share my progress - explore AWS databases with me!&lt;/strong&gt;&lt;br&gt;
I will set aside a few hours daily to work on these database projects. I will keep myself accountable by tracking my daily progress and sharing updates to stay consistent. My reward for completing this AWS Databases series will be the ability to manage cloud databases effectively and confidently in real projects, and the satisfaction of mastering practical cloud database skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are databases?&lt;/strong&gt;&lt;br&gt;
Databases are organized systems for storing and managing information digitally. They allow data such as customer records, product details, and transactions to be kept in one central place, making it easy to access, update, and share securely across teams. Cloud engineers use databases to store, organize, and manage application data securely in the cloud. They rely on databases to connect applications to data, run queries for insights, ensure scalability, and maintain high availability across different services.&lt;/p&gt;
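&lt;p&gt;A minimal sketch of that idea, using Python's built-in SQLite driver (the table, columns, and rows are invented for this example): records go into one central store, then anyone can query them.&lt;/p&gt;

```python
import sqlite3

# Store records in one central place, then query them back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.executemany(
    "INSERT INTO customers (name, city) VALUES (?, ?)",
    [("Ada", "Lagos"), ("Femi", "Abuja")],
)
rows = conn.execute("SELECT name FROM customers WHERE city = 'Lagos'").fetchall()
print(rows)  # [('Ada',)]
```
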

&lt;p&gt;&lt;strong&gt;What do database professionals do?&lt;/strong&gt;&lt;br&gt;
Database professionals are responsible for setting up databases, modelling data, writing queries, and ensuring database security. They also handle performance tuning, backups, and connecting databases to applications. The most interesting part of their job is using queries to turn raw data into meaningful insights that support decision‑making.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Automated Deployment Bash Script: Deploying a Flask App to AWS</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Sat, 25 Oct 2025 06:50:06 +0000</pubDate>
      <link>https://forem.com/maven_h/automated-deployment-bash-script-deploying-a-flask-app-to-aws-4lgh</link>
      <guid>https://forem.com/maven_h/automated-deployment-bash-script-deploying-a-flask-app-to-aws-4lgh</guid>
      <description>&lt;p&gt;Hey DevOps folks! 👋 I've just wrapped up the DevOps Intern Stage 1 Task from HNG13, inspired by that dev.to challenge post. The mission? Build a single, robust Bash script to automate deploying a Dockerized app to a remote Linux server. I nailed it by deploying to an AWS EC2 instance with a simple Flask app that displays a success message and server time. This setup showcases real-world automation, idempotency, and reliability in DevOps workflows.&lt;/p&gt;

&lt;p&gt;In this article, I'll share my &lt;code&gt;deploy.sh&lt;/code&gt; script, explain the process, and show how it all came together. Everything's based on my actual project files; feel free to check them out and adapt!&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Overview
&lt;/h2&gt;

&lt;p&gt;The script (&lt;code&gt;deploy.sh&lt;/code&gt;) handles everything in one executable file:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Collect and validate user inputs (Git repo, PAT, branch, SSH details, app port).&lt;/li&gt;
&lt;li&gt;Clone or update the repo.&lt;/li&gt;
&lt;li&gt;Verify Docker files.&lt;/li&gt;
&lt;li&gt;Test SSH and prepare the remote env (install Docker, Compose, Nginx).&lt;/li&gt;
&lt;li&gt;Transfer files via rsync.&lt;/li&gt;
&lt;li&gt;Deploy the app (build/run containers idempotently).&lt;/li&gt;
&lt;li&gt;Set up Nginx reverse proxy.&lt;/li&gt;
&lt;li&gt;Validate with health checks and curls.&lt;/li&gt;
&lt;li&gt;Log everything, handle errors, and support cleanup with &lt;code&gt;--cleanup&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I used AWS EC2 (Ubuntu 22.04) as the remote server. My app is a basic Flask site in &lt;code&gt;Fapp.py&lt;/code&gt;, Dockerized via a &lt;code&gt;Dockerfile&lt;/code&gt;. The repo also includes &lt;code&gt;requirements.txt&lt;/code&gt; and a detailed &lt;code&gt;README.md&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS EC2 instance (e.g., t2.micro Ubuntu) with SSH key access. Open security group ports: 22 (SSH), 80 (HTTP), 8080 (app, direct testing).&lt;/li&gt;
&lt;li&gt;Git repo with the app files: &lt;code&gt;Fapp.py&lt;/code&gt;, &lt;code&gt;Dockerfile&lt;/code&gt;, &lt;code&gt;requirements.txt&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Local machine with Git, SSH, rsync.&lt;/li&gt;
&lt;li&gt;PAT for GitHub repo access.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's a peek at the app files for context:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fapp.py&lt;/strong&gt; (Flask app serving HTML):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;flask&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Flask&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Flask&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;__name__&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;/&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;home&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;'''&lt;/span&gt;&lt;span class="s"&gt;
    &amp;lt;!DOCTYPE html&amp;gt;
    &amp;lt;html lang=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;en&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;
    &amp;lt;head&amp;gt;
      &amp;lt;meta charset=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;UTF-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;
      &amp;lt;meta name=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;viewport&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; content=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;width=device-width, initial-scale=1.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;
      &amp;lt;title&amp;gt;Stage1 HNG13 Deployment Successful!&amp;lt;/title&amp;gt;
      &amp;lt;style&amp;gt;
        body {{
          font-family: Arial, sans-serif;
          display: flex;
          justify-content: center;
          align-items: center;
          height: 100vh;
          margin: 0;
          background: linear-gradient(135deg, #3f8bcd 0%, #2a629a 100%);
          color: white;
        }}
        .container {{
          text-align: center;
          padding: 2rem;
          background: rgba(255, 255, 255, 0.1);
          border-radius: 10px;
          backdrop-filter: blur(10px);
        }}
        h1 {{ margin-bottom: 1rem; }}
        .timestamp {{ font-size: 0.9em; opacity: 0.8; }}
      &amp;lt;/style&amp;gt;
    &amp;lt;/head&amp;gt;
    &amp;lt;body&amp;gt;
      &amp;lt;div class=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;container&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;
        &amp;lt;h1&amp;gt;🚀Stage1 HNG13 Deployment Successful!&amp;lt;/h1&amp;gt;
        &amp;lt;p&amp;gt;Your automated deployment script is working!&amp;lt;/p&amp;gt;
        &amp;lt;p class=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;timestamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;Server Time: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%d %H&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="n"&gt;M&lt;/span&gt;&lt;span class="si"&gt;:&lt;/span&gt;&lt;span class="o"&gt;%&lt;/span&gt;&lt;span class="n"&gt;S&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;/p&amp;gt;
      &amp;lt;/div&amp;gt;
    &amp;lt;/body&amp;gt;
    &amp;lt;/html&amp;gt;
    &lt;/span&gt;&lt;span class="sh"&gt;'''&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Dockerfile&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.11-slim&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt .&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["python", "Fapp.py"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;requirements.txt&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Flask==3.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Bash Script: &lt;code&gt;deploy.sh&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;This is the heart of it: a single executable Bash script, fully featured. Run &lt;code&gt;chmod +x deploy.sh&lt;/code&gt; first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-euo&lt;/span&gt; pipefail

&lt;span class="c"&gt;# Create timestamped log file&lt;/span&gt;
&lt;span class="nv"&gt;LOG_FILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"deploy_&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%Y%m%d_%H%M%S&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;.log"&lt;/span&gt;

&lt;span class="c"&gt;# Log messages&lt;/span&gt;
log&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; &lt;span class="s1"&gt;'+%Y-%m-%d %H:%M:%S'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;] &lt;/span&gt;&lt;span class="nv"&gt;$*&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;tee&lt;/span&gt; &lt;span class="nt"&gt;-a&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$LOG_FILE&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Trap errors&lt;/span&gt;
&lt;span class="nb"&gt;trap&lt;/span&gt; &lt;span class="s1"&gt;'log "ERROR: Script failed at line $LINENO"'&lt;/span&gt; ERR

&lt;span class="c"&gt;# Read input with validation&lt;/span&gt;
read_input&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;var_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;default&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;3&lt;/span&gt;&lt;span class="k"&gt;:-}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$prompt&lt;/span&gt;&lt;span class="s2"&gt;: "&lt;/span&gt; value
    &lt;span class="nv"&gt;value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;value&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;$default&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$value&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$default&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;log &lt;span class="s2"&gt;"ERROR: &lt;/span&gt;&lt;span class="nv"&gt;$var_name&lt;/span&gt;&lt;span class="s2"&gt; cannot be empty"&lt;/span&gt;
        &lt;span class="nb"&gt;exit &lt;/span&gt;1
    &lt;span class="k"&gt;fi

    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$value&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Gather inputs&lt;/span&gt;

&lt;span class="nv"&gt;GIT_REPO&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;read_input &lt;span class="s2"&gt;"Enter Git Repository URL"&lt;/span&gt; &lt;span class="s2"&gt;"GIT_REPO"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;BRANCH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;read_input &lt;span class="s2"&gt;"Enter branch name [main]"&lt;/span&gt; &lt;span class="s2"&gt;"BRANCH"&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# PAT: Silent read, validate non-empty&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"PAT: "&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;stty&lt;/span&gt; &lt;span class="nt"&gt;-echo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;read&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; PAT&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;stty echo&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt;
&lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PAT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Error: PAT required"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nv"&gt;SSH_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;read_input &lt;span class="s2"&gt;"Enter SSH username"&lt;/span&gt; &lt;span class="s2"&gt;"SSH_USER"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;SERVER_IP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;read_input &lt;span class="s2"&gt;"Enter server IP address"&lt;/span&gt; &lt;span class="s2"&gt;"SERVER_IP"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;APP_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;read_input &lt;span class="s2"&gt;"Enter application port"&lt;/span&gt; &lt;span class="s2"&gt;"APP_PORT"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nv"&gt;SSH_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;read_input &lt;span class="s2"&gt;"Enter SSH key path"&lt;/span&gt; &lt;span class="s2"&gt;"SSH_KEY"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;#: Silent, validate file/permissions&lt;/span&gt;
&lt;span class="c"&gt;#echo -n "SSH Key Path: "; stty -echo; read -r SSH_KEY; stty echo; echo&lt;/span&gt;
&lt;span class="c"&gt;#[[ -f "$SSH_KEY" ]] || { echo "Error: Key invalid" &amp;gt;&amp;amp;2; exit 1; }&lt;/span&gt;
&lt;span class="c"&gt;#chmod 400 "$SSH_KEY" || log "WARN: chmod 400 failed for $SSH_KEY (continuing)"&lt;/span&gt;

&lt;span class="c"&gt;# Clone Git repository with authentication (not exposing PAT)&lt;/span&gt;
clone_repo&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;repo_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;token&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;branch&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nv"&gt;REPO_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;basename&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$repo_url&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; .git&lt;span class="si"&gt;)&lt;/span&gt;
    &lt;span class="nb"&gt;export &lt;/span&gt;REPO_NAME

    &lt;span class="c"&gt;# create a temporary GIT_ASKPASS helper that prints the PAT&lt;/span&gt;
    &lt;span class="nv"&gt;TMP_ASKPASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;mktemp&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
#!/bin/sh
# Git calls this script to obtain a password. It expects the password on stdout.
echo "&lt;/span&gt;&lt;span class="nv"&gt;$GIT_PASSWORD&lt;/span&gt;&lt;span class="sh"&gt;"
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;    &lt;span class="nb"&gt;chmod&lt;/span&gt; +x &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    &lt;span class="c"&gt;# Use GIT_ASKPASS to provide the token securely to git&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;log &lt;span class="s2"&gt;"Repository exists, updating to latest changes on branch '&lt;/span&gt;&lt;span class="nv"&gt;$branch&lt;/span&gt;&lt;span class="s2"&gt;'..."&lt;/span&gt;
        &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="nv"&gt;GIT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$token&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;GIT_ASKPASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; git fetch origin &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="nv"&gt;GIT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$token&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;GIT_ASKPASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; git checkout &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$branch&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="nv"&gt;GIT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$token&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;GIT_ASKPASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; git pull origin &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$branch&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;else
        &lt;/span&gt;log &lt;span class="s2"&gt;"Cloning repository on branch '&lt;/span&gt;&lt;span class="nv"&gt;$branch&lt;/span&gt;&lt;span class="s2"&gt;'..."&lt;/span&gt;
        &lt;span class="nv"&gt;GIT_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$token&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;GIT_ASKPASS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; git clone &lt;span class="nt"&gt;-b&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$branch&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$repo_url&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;2&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;fi

    &lt;/span&gt;&lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TMP_ASKPASS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    log &lt;span class="s2"&gt;"Successfully cloned/updated repository"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Function to verify Docker configuration&lt;/span&gt;
verify_docker_config&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"Dockerfile"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"docker-compose.yml"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;log &lt;span class="s2"&gt;"✓ Docker configuration found"&lt;/span&gt;
        &lt;span class="k"&gt;return &lt;/span&gt;0
    &lt;span class="k"&gt;else
        &lt;/span&gt;log &lt;span class="s2"&gt;"✗ No Dockerfile or docker-compose.yml found"&lt;/span&gt;
        &lt;span class="nb"&gt;exit &lt;/span&gt;3
    &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Function to test SSH connection&lt;/span&gt;
test_ssh&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    log &lt;span class="s2"&gt;"Testing SSH connection to &lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;..."&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$key&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;ConnectTimeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10 &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="nv"&gt;BatchMode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;yes&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
           &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"echo 'SSH connection successful'"&lt;/span&gt; &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;log &lt;span class="s2"&gt;"✓ SSH connection successful"&lt;/span&gt;
        &lt;span class="k"&gt;return &lt;/span&gt;0
    &lt;span class="k"&gt;else
        &lt;/span&gt;log &lt;span class="s2"&gt;"✗ SSH connection failed"&lt;/span&gt;
        &lt;span class="nb"&gt;exit &lt;/span&gt;4
    &lt;span class="k"&gt;fi&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Function to setup remote environment&lt;/span&gt;
setup_remote_environment&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    log &lt;span class="s2"&gt;"Setting up remote environment..."&lt;/span&gt;

    &lt;span class="c"&gt;# Execute commands on remote server&lt;/span&gt;
    ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$key&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s1"&gt;'bash -s'&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;ENDSSH&lt;/span&gt;&lt;span class="sh"&gt;'
        set -e

        # Update packages
        sudo apt-get update -y

        # Install Docker
        if ! command -v docker &amp;amp;&amp;gt; /dev/null; then
            curl -fsSL https://get.docker.com -o get-docker.sh
            sudo sh get-docker.sh
            sudo usermod -aG docker &lt;/span&gt;&lt;span class="nv"&gt;$USER&lt;/span&gt;&lt;span class="sh"&gt;
        fi

        # Install Docker Compose
        if ! command -v docker-compose &amp;amp;&amp;gt; /dev/null; then
            sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;-&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;uname&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt;" &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="sh"&gt;
                -o /usr/local/bin/docker-compose
            sudo chmod +x /usr/local/bin/docker-compose
        fi

        # Install Nginx
        if ! command -v nginx &amp;amp;&amp;gt; /dev/null; then
            sudo apt-get install -y nginx
        fi

        # Start services
        sudo systemctl enable docker nginx
        sudo systemctl start docker nginx

        # Verify installations
        docker --version
        docker-compose --version
        nginx -v
&lt;/span&gt;&lt;span class="no"&gt;ENDSSH

&lt;/span&gt;    log &lt;span class="s2"&gt;"✓ Remote environment ready"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Deploy Docker application&lt;/span&gt;
deploy_application&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;app_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$4&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    log &lt;span class="s2"&gt;"Deploying application..."&lt;/span&gt;

    ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$key&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; bash &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ENDSSH&lt;/span&gt;&lt;span class="sh"&gt; || { log "✗ Deploy failed (check connection/logs)"; exit 1; }
        set -e
        mkdir -p ~/deployment
        cd ~/deployment/&lt;/span&gt;&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;&lt;span class="sh"&gt; || { echo "Error: Repo dir not found" &amp;gt;&amp;amp;2; exit 1; }

        # Stop old containers
        docker-compose down 2&amp;gt;/dev/null || docker stop &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="sh"&gt;(docker ps -q) 2&amp;gt;/dev/null || true

        # Remove stopped containers to free names
        docker rm &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="sh"&gt;(docker ps -aq --filter "name=my-app") 2&amp;gt;/dev/null || true

        # Build and start
        if [[ -f "docker-compose.yml" ]]; then
            docker-compose up -d --build --force-recreate
        else
            docker build -t my-app .
            docker run -d -p &lt;/span&gt;&lt;span class="nv"&gt;$app_port&lt;/span&gt;&lt;span class="sh"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$app_port&lt;/span&gt;&lt;span class="sh"&gt; --name my-app my-app
        fi

        # Wait for container to be healthy
        sleep 5

        # Verify container is running
        if docker ps | grep -qE "my-app|&lt;/span&gt;&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;&lt;span class="sh"&gt;"; then
            echo "✓ Containers running"
        else
            echo "✗ No running containers found" &amp;gt;&amp;amp;2
            exit 1
        fi
&lt;/span&gt;&lt;span class="no"&gt;ENDSSH
&lt;/span&gt;    log &lt;span class="s2"&gt;"✓ Application deployed successfully"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Function to transfer application files to the remote server&lt;/span&gt;
transfer_files&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;local_dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$4&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    log &lt;span class="s2"&gt;"Transferring application files..."&lt;/span&gt;

    &lt;span class="c"&gt;# Ensure the remote deployment directory exists&lt;/span&gt;
    ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$key&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"mkdir -p ~/deployment"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;9

    &lt;span class="c"&gt;# The REPO_NAME is globally available from the clone_repo call&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;REPO_TO_COPY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$local_dir&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_TO_COPY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        &lt;/span&gt;log &lt;span class="s2"&gt;"ERROR: Local repository directory '&lt;/span&gt;&lt;span class="nv"&gt;$REPO_TO_COPY&lt;/span&gt;&lt;span class="s2"&gt;' not found."&lt;/span&gt;
        &lt;span class="nb"&gt;exit &lt;/span&gt;10
    &lt;span class="k"&gt;fi&lt;/span&gt;

    &lt;span class="c"&gt;# Optional: Clean up existing remote repo dir to avoid conflicts/permissions issues&lt;/span&gt;
    &lt;span class="c"&gt;#ssh -i "$key" "$user@$ip" "rm -rf ~/deployment/$REPO_NAME" || true&lt;/span&gt;

    &lt;span class="c"&gt;# Transfer with rsync, excluding .git and logs&lt;/span&gt;
    rsync &lt;span class="nt"&gt;-avz&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"ssh -i '&lt;/span&gt;&lt;span class="nv"&gt;$key&lt;/span&gt;&lt;span class="s2"&gt;'"&lt;/span&gt; &lt;span class="nt"&gt;--exclude&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'.git/'&lt;/span&gt; &lt;span class="nt"&gt;--exclude&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'deploy_*.log'&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$REPO_TO_COPY&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;:~/deployment/&lt;/span&gt;&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;11


    log &lt;span class="s2"&gt;"✓ Files transferred successfully to ~/deployment/&lt;/span&gt;&lt;span class="nv"&gt;$REPO_NAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Configure Nginx as reverse proxy&lt;/span&gt;
configure_nginx&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;app_port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$4&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    log &lt;span class="s2"&gt;"Configuring Nginx..."&lt;/span&gt;

    &lt;span class="c"&gt;# Create Nginx config&lt;/span&gt;
    &lt;span class="nv"&gt;NGINX_CONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"
server {
    listen 80;
    server_name &lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;;

    location / {
        proxy_pass http://localhost:&lt;/span&gt;&lt;span class="nv"&gt;$app_port&lt;/span&gt;&lt;span class="s2"&gt;;
        proxy_set_header Host &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;host;
        proxy_set_header X-Real-IP &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;remote_addr;
        proxy_set_header X-Forwarded-For &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto &lt;/span&gt;&lt;span class="se"&gt;\$&lt;/span&gt;&lt;span class="s2"&gt;scheme;
    }
}
"&lt;/span&gt;

    &lt;span class="c"&gt;# Deploy config&lt;/span&gt;
    ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$key&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; bash &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ENDSSH&lt;/span&gt;&lt;span class="sh"&gt;
        echo '&lt;/span&gt;&lt;span class="nv"&gt;$NGINX_CONFIG&lt;/span&gt;&lt;span class="sh"&gt;' | sudo tee /etc/nginx/sites-available/app.conf
        sudo ln -sf /etc/nginx/sites-available/app.conf /etc/nginx/sites-enabled/
        sudo nginx -t
        sudo systemctl reload nginx
&lt;/span&gt;&lt;span class="no"&gt;ENDSSH

&lt;/span&gt;    log &lt;span class="s2"&gt;"✓ Nginx configured successfully"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

validate_deployment&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    log &lt;span class="s2"&gt;"Validating deployment..."&lt;/span&gt;

    &lt;span class="c"&gt;# Check container health with fallback if no HEALTHCHECK is defined&lt;/span&gt;
    &lt;span class="nv"&gt;CONTAINER_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"my-app"&lt;/span&gt;
    &lt;span class="nv"&gt;HEALTH_STATUS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$key&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"docker inspect --format '{{.State.Health.Status}}' &lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_NAME&lt;/span&gt;&lt;span class="s2"&gt; 2&amp;gt;/dev/null || true"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;


    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HEALTH_STATUS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
        if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$HEALTH_STATUS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="s2"&gt;"healthy"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;log &lt;span class="s2"&gt;"✗ Container &lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_NAME&lt;/span&gt;&lt;span class="s2"&gt; not healthy (status: &lt;/span&gt;&lt;span class="nv"&gt;$HEALTH_STATUS&lt;/span&gt;&lt;span class="s2"&gt;)"&lt;/span&gt;
            &lt;span class="nb"&gt;exit &lt;/span&gt;6
        &lt;span class="k"&gt;fi
    else&lt;/span&gt;
        &lt;span class="c"&gt;# Fallback: ensure container exists and is running&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$key&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"docker ps --filter name=&lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_NAME&lt;/span&gt;&lt;span class="s2"&gt; --filter status=running --format '{{.Names}}' | grep -q ."&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;log &lt;span class="s2"&gt;"✗ Container &lt;/span&gt;&lt;span class="nv"&gt;$CONTAINER_NAME&lt;/span&gt;&lt;span class="s2"&gt; not running"&lt;/span&gt;
            &lt;span class="nb"&gt;exit &lt;/span&gt;6
        &lt;span class="k"&gt;fi
    fi&lt;/span&gt;

    &lt;span class="c"&gt;# App endpoint with retries&lt;/span&gt;
    &lt;span class="nv"&gt;MAX_RETRIES&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
    &lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;seq &lt;/span&gt;1 &lt;span class="nv"&gt;$MAX_RETRIES&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
        if &lt;/span&gt;curl &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="s2"&gt;"http://&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;/"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
            &lt;/span&gt;log &lt;span class="s2"&gt;"✓ Application endpoint accessible"&lt;/span&gt;
            &lt;span class="nb"&gt;break
        &lt;/span&gt;&lt;span class="k"&gt;fi&lt;/span&gt;
        &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="nv"&gt;$i&lt;/span&gt; &lt;span class="nt"&gt;-eq&lt;/span&gt; &lt;span class="nv"&gt;$MAX_RETRIES&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt; log &lt;span class="s2"&gt;"✗ /health failed after &lt;/span&gt;&lt;span class="nv"&gt;$MAX_RETRIES&lt;/span&gt;&lt;span class="s2"&gt; tries"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;8&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="k"&gt;$((&lt;/span&gt;i &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="k"&gt;))&lt;/span&gt;  &lt;span class="c"&gt;# Backoff: 2s, 4s, 6s&lt;/span&gt;
    &lt;span class="k"&gt;done
    &lt;/span&gt;log &lt;span class="s2"&gt;"✓ All validation checks passed"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

cleanup&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$1&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$2&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nv"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$3&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    log &lt;span class="s2"&gt;"Cleaning up deployment..."&lt;/span&gt;

    ssh &lt;span class="nt"&gt;-i&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$key&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$user&lt;/span&gt;&lt;span class="s2"&gt;@&lt;/span&gt;&lt;span class="nv"&gt;$ip&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; bash &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;ENDSSH&lt;/span&gt;&lt;span class="sh"&gt;' || exit 15
        # Stop and remove containers
        docker-compose down -v 2&amp;gt;/dev/null || true
        docker stop &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;docker ps &lt;span class="nt"&gt;-aq&lt;/span&gt; &lt;span class="nt"&gt;--filter&lt;/span&gt; &lt;span class="s2"&gt;"name=^my-app"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt; 2&amp;gt;/dev/null || true
        docker rm &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;docker ps &lt;span class="nt"&gt;-aq&lt;/span&gt; &lt;span class="nt"&gt;--filter&lt;/span&gt; &lt;span class="s2"&gt;"name=^my-app"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="sh"&gt; 2&amp;gt;/dev/null || true

        # Remove Nginx config
        sudo rm -f /etc/nginx/sites-enabled/app.conf
        sudo rm -f /etc/nginx/sites-available/app.conf
        sudo systemctl reload nginx

        # Remove deployment files
        rm -rf ~/deployment
&lt;/span&gt;&lt;span class="no"&gt;ENDSSH

&lt;/span&gt;    log &lt;span class="s2"&gt;"✓ Cleanup completed"&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;# Check for cleanup flag&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[[&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;1&lt;/span&gt;&lt;span class="k"&gt;:-}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"--cleanup"&lt;/span&gt; &lt;span class="o"&gt;]]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
    &lt;/span&gt;cleanup &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_USER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;exit &lt;/span&gt;0
&lt;span class="k"&gt;fi

&lt;/span&gt;main&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;local &lt;/span&gt;&lt;span class="nv"&gt;original_dir&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;  &lt;span class="c"&gt;# Capture parent dir before any cd&lt;/span&gt;
    log &lt;span class="s2"&gt;"===== Starting Deployment ====="&lt;/span&gt;
    log &lt;span class="s2"&gt;"Repository: &lt;/span&gt;&lt;span class="nv"&gt;$GIT_REPO&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    log &lt;span class="s2"&gt;"Branch: &lt;/span&gt;&lt;span class="nv"&gt;$BRANCH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    log &lt;span class="s2"&gt;"Target Server: &lt;/span&gt;&lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    clone_repo &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$GIT_REPO&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PAT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BRANCH&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    verify_docker_config
    &lt;span class="nb"&gt;cd&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$original_dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;12  &lt;span class="c"&gt;# Reset to parent dir for correct transfer path&lt;/span&gt;
    test_ssh &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_USER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    setup_remote_environment &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_USER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    transfer_files &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_USER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;pwd&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    deploy_application &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_USER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$APP_PORT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    configure_nginx &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_USER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$APP_PORT&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    validate_deployment &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_USER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SERVER_IP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$SSH_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

    log &lt;span class="s2"&gt;"===== Deployment Completed Successfully ====="&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How It Works: Step-by-Step
&lt;/h2&gt;

&lt;p&gt;From the &lt;code&gt;README.md&lt;/code&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Inputs&lt;/strong&gt;: Secure prompts (PAT hidden), with defaults and validation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clone&lt;/strong&gt;: Uses GIT_ASKPASS to handle PAT without exposure; pulls if exists.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify&lt;/strong&gt;: Ensures &lt;code&gt;Dockerfile&lt;/code&gt; is present.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSH Test&lt;/strong&gt;: Quick connectivity check with timeout.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remote Setup&lt;/strong&gt;: Installs Docker/Compose/Nginx if missing, starts services.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transfer&lt;/strong&gt;: Rsync for an efficient copy, excluding unnecessary files (&lt;code&gt;.git&lt;/code&gt;, logs).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy&lt;/strong&gt;: Idempotently stops/removes old containers, then builds/runs new ones (falls back to &lt;code&gt;docker build/run&lt;/code&gt; here, since the repo has no compose file).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Nginx&lt;/strong&gt;: Dynamic config proxies 80 to app port (8080 here).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validate&lt;/strong&gt;: Checks container status, curls the endpoint with retries.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Logging/Cleanup&lt;/strong&gt;: Timestamped logs; &lt;code&gt;--cleanup&lt;/code&gt; tears down everything.&lt;/li&gt;
&lt;/ol&gt;
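
Step 2's GIT_ASKPASS trick can be sketched roughly like this; the token value and helper path are illustrative, not the script's exact code:

```shell
# Minimal sketch of the GIT_ASKPASS pattern (illustrative names/values).
PAT="example-token"    # in the real script this comes from a hidden prompt (read -s)

# Write a tiny helper that prints the token whenever git asks for a password.
# The unquoted EOF expands $PAT at write time.
cat > ./askpass.sh <<EOF
#!/usr/bin/env bash
echo "$PAT"
EOF
chmod 700 ./askpass.sh

# Point git at the helper; the token never appears in the clone URL,
# the process list, or shell history.
export GIT_ASKPASS="$PWD/askpass.sh"
# git clone "https://github.com/user/repo.git" would now authenticate
# through the helper. Running it directly just prints the token:
"$GIT_ASKPASS"
```

A real script should also remove the helper and unset `GIT_ASKPASS` once the clone finishes, so the token doesn't linger on disk.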

&lt;h2&gt;
  
  
  AWS-Specific Adaptations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EC2 Launch&lt;/strong&gt;: Used Ubuntu 22.04 AMI, attached SSH key, configured security groups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Permissions&lt;/strong&gt;: SSH user (ubuntu) needs sudo; script handles Docker group add.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing&lt;/strong&gt;: Ran the script locally and deployed to EC2; hitting the IP in a browser showed the success page with a timestamp. Re-runs worked without issues thanks to idempotency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security&lt;/strong&gt;: No SSL yet (add Certbot for prod); ensure key is 400 perms.&lt;/li&gt;
&lt;/ul&gt;
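
The key-permissions point above can be sketched in one step (the key file name here is illustrative):

```shell
# Sketch: lock the SSH key down to owner read-only before the script uses it.
KEY="./deploy-key.pem"
touch "$KEY"           # stand-in for the key pair file downloaded from AWS
chmod 400 "$KEY"       # owner read-only; ssh refuses keys with looser permissions
stat -c '%a' "$KEY"    # show the octal mode to confirm
```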

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Secure handling of secrets (like the PAT) is crucial; GIT_ASKPASS was a game-changer.&lt;/li&gt;
&lt;li&gt;Idempotency via stop/rm commands prevents redeploy failures.&lt;/li&gt;
&lt;li&gt;Remote exec with heredocs keeps things clean but watch for quoting.&lt;/li&gt;
&lt;li&gt;Logging + traps make debugging remote issues easier.&lt;/li&gt;
&lt;/ul&gt;
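
The logging-plus-traps lesson can be sketched like this; the log path and messages are illustrative, not the script's exact ones:

```shell
# Sketch of the timestamped-log + error-trap pattern (illustrative names).
LOG_FILE="./deploy_demo.log"
: > "$LOG_FILE"    # start a fresh log

# Every message gets a timestamp and goes to both the terminal and the log.
log() { printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$*" | tee -a "$LOG_FILE"; }

# Report the failing line number if any unguarded command errors out.
on_error() { log "✗ Failed at line $1"; }
trap 'on_error $LINENO' ERR

log "Starting step..."
false || log "recovered from a failing command"   # guarded, so the ERR trap does not fire
```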

&lt;p&gt;Grab the full repo (including README) here: &lt;a href="https://github.com/hyelngtil/hng13-stage1-devops" rel="noopener noreferrer"&gt;github.com/hyelngtil/hng13-stage1-devops&lt;/a&gt;. It's ready to fork and test!&lt;/p&gt;

&lt;p&gt;What's your go-to automation trick in Bash? Drop it in the comments! 🚀&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>bash</category>
      <category>python</category>
    </item>
    <item>
      <title>DevOps Challenge 1: Set Up a Web App Using AWS and VS Code</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Mon, 06 Oct 2025 00:37:51 +0000</pubDate>
      <link>https://forem.com/maven_h/devops-challenge-1-set-up-a-web-app-using-aws-and-vs-code-4e65</link>
      <guid>https://forem.com/maven_h/devops-challenge-1-set-up-a-web-app-using-aws-and-vs-code-4e65</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Today's Project!
&lt;/h2&gt;

&lt;p&gt;In this project, I will demonstrate how to set up a remote SSH connection to an EC2 instance using VS Code. I'll install Maven and Java, generate a basic web app, and edit code within VS Code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key tools and concepts&lt;/strong&gt;&lt;br&gt;
Services I used were IAM, EC2, Security Groups, Key Pairs, VS Code, Maven, and Java. Key concepts I learnt include SSH for secure access, Maven project structure, dynamic vs static pages with JSP, and why IDEs like VS Code simplify cloud development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project reflection&lt;/strong&gt;&lt;br&gt;
One thing I didn't expect in this project was discovering that index.jsp isn’t just like HTML but can actually run Java code to make the page dynamic.&lt;/p&gt;

&lt;p&gt;🔥This project took me approximately 2 hours. The most challenging part was setting permissions with 'icacls' and sustaining the SSH connection for an extended period; I stayed logged in for most of the project but had to re-establish the connection intermittently. It was most rewarding to successfully set the key permissions, especially on Windows.&lt;/p&gt;

&lt;p&gt;This project is part one of a series of DevOps projects where I'm building a CI/CD pipeline! I'll be working on the next project in the next few days.&lt;/p&gt;




&lt;h2&gt;
  
  
  Launching an EC2 instance
&lt;/h2&gt;

&lt;p&gt;I started this project by launching an EC2 instance because I needed a secure, cloud-based server to host my web app and practice DevOps workflows like remote development, CI/CD setup, and infrastructure management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I also enabled SSH&lt;/strong&gt;&lt;br&gt;
SSH is a secure protocol that verifies your identity using a private key and encrypts data between your computer and a remote server. I enabled SSH so I could safely connect to my EC2 instance and use it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key pairs&lt;/strong&gt;&lt;br&gt;
A key pair is what lets you securely access your EC2 instance. It’s made of two halves: a public key that AWS keeps, and a private key that you download. &lt;br&gt;
Once I set up my key pair, AWS automatically downloaded a '.pem' file to my DevOps folder: my private key for secure EC2 access.&lt;/p&gt;




&lt;h2&gt;
  
  
  Set up VS Code
&lt;/h2&gt;

&lt;p&gt;VS Code, short for Visual Studio Code, is a powerful and widely used code editor that, with extensions, behaves much like a full Integrated Development Environment (IDE), helping developers write, edit, and manage code efficiently.&lt;br&gt;
I installed it so I could securely connect to my EC2 instance and build my web app with a smooth coding experience.&lt;/p&gt;




&lt;h2&gt;
  
  
  My first terminal commands
&lt;/h2&gt;

&lt;p&gt;The first command I ran for this project was 'cd Desktop\DevOps' to navigate to the folder where my key file is located.&lt;/p&gt;

&lt;p&gt;I also updated my private key's permissions by running: 'icacls' commands in the terminal as shown in the screenshot. This made the file readable only by me, securing it for SSH access to my EC2 instance.&lt;/p&gt;
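&lt;p&gt;As a rough sketch (the key file name below is a placeholder for whatever your '.pem' file is called, and 'your-username' stands in for your Windows account name), the permission commands look like this:&lt;/p&gt;

```shell
# Windows PowerShell sketch; "nextwork-keypair.pem" is a placeholder file name.
# Remove inherited permissions so only explicit grants apply.
icacls "nextwork-keypair.pem" /inheritance:r
# Grant read-only access to your account alone ("your-username" is a placeholder).
icacls "nextwork-keypair.pem" /grant:r "your-username:R"
```

This leaves the file readable only by you, which is what SSH expects of a private key.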

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsa1v0o2as8jxmqb5i8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbsa1v0o2as8jxmqb5i8z.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  SSH connection to EC2 instance
&lt;/h2&gt;

&lt;p&gt;To connect to my EC2 instance, I ran the command &lt;code&gt;ssh -i "Maven-DevOps Keypair.pem" ec2-user@ec2-3-80-80-185.compute-1.amazonaws.com&lt;/code&gt; (the quotes are needed because the key file name contains a space).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This command required an IPv4 DNS address&lt;/strong&gt;&lt;br&gt;
A server's public IPv4 DNS is the address the internet uses to find and connect to your EC2 server. The local computer you're using for this project finds and connects to your EC2 instance through this IPv4 DNS name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6amj5cac2p9csr6gjd30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6amj5cac2p9csr6gjd30.png" alt=" " width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Maven &amp;amp; Java
&lt;/h2&gt;

&lt;p&gt;Apache Maven is a project builder and dependency manager. It gives a ready‑made structure for Java projects (folders, config files, etc.) and downloads the extra code libraries apps need (e.g., logging tools, database connectors).&lt;/p&gt;

&lt;p&gt;Maven is required in this project because I will use Maven to generate a web app project from a template (called an archetype).&lt;/p&gt;

&lt;p&gt;Java is a popular programming language used to build different types of applications, from mobile apps to large enterprise systems.&lt;br&gt;
Java is required in this project because Maven runs on Java; it will be used to generate and build the intended web app.&lt;/p&gt;




&lt;h2&gt;
  
  
  Create the Application
&lt;/h2&gt;

&lt;p&gt;I generated a Java web app using the command:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;mvn archetype:generate \ &lt;br&gt;
  -DgroupId=com.nextwork.app \ &lt;br&gt;
  -DartifactId=nextwork-web-project \ &lt;br&gt;
  -DarchetypeArtifactId=maven-archetype-webapp \ &lt;br&gt;
  -DinteractiveMode=false&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I installed Remote - SSH, which is an extension in VS Code that lets me connect directly via SSH to my EC2 instance securely over the internet. I installed it to use VS Code to work on files or run programs on my instance easily.&lt;/p&gt;

&lt;p&gt;Configuration details required to set up a remote connection include:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Host ec2-54-196-112-71.compute-1.amazonaws.com&lt;br&gt;
   HostName ec2-54-196-112-71.compute-1.amazonaws.com&lt;br&gt;
   IdentityFile C:\Users\hyeln\Desktop\DevOps\nextwork-keypair.pem&lt;br&gt;
   User ec2-user&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwcbb7pu47lmhgyx7kwy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwcbb7pu47lmhgyx7kwy.png" alt=" " width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using VS Code's file explorer, I could see all the webapp project files, folders, and subfolders in their hierarchy!&lt;/p&gt;

&lt;p&gt;Two of the project folders created by Maven are the src (source) folder, which holds all the source code files that define how your web app looks and works, and its webapp subfolder, which is dedicated to the web-facing part of the app (HTML, CSS, JS).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuegaruy03mnoj3zqpqr7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuegaruy03mnoj3zqpqr7.png" alt=" " width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Using Remote - SSH
&lt;/h2&gt;

&lt;p&gt;The index.jsp is the starting page of your Java web app — like index.html in a static site, but with the added power of Java code to generate dynamic, changing content.&lt;/p&gt;

&lt;p&gt;I edited index.jsp by navigating to the file via Remote - SSH, then editing and saving it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gk1bm0kfsvef4mqa96a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gk1bm0kfsvef4mqa96a.png" alt=" " width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;







</description>
      <category>vscode</category>
      <category>ssh</category>
      <category>java</category>
      <category>devops</category>
    </item>
    <item>
      <title>Deploy an App Across Accounts</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Thu, 04 Sep 2025 19:41:38 +0000</pubDate>
      <link>https://forem.com/maven_h/deploy-an-app-across-accounts-nb4</link>
      <guid>https://forem.com/maven_h/deploy-an-app-across-accounts-nb4</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Today's Project!
&lt;/h2&gt;

&lt;p&gt;In this project, I build a Docker container image and create an Amazon ECR (Elastic Container Registry) repository to store the image securely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Amazon ECR?&lt;/strong&gt;&lt;br&gt;
Amazon ECR is AWSʼs managed container registry for storing and sharing Docker images. In todayʼs project, we used it to push our app image and let our buddy pull and run it from their account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One thing I didn't expect&lt;/strong&gt;&lt;br&gt;
My buddy was in 'us-east-1' and I was in 'af-south-1', so he couldn't authenticate to my ECR because ECR authentication is region specific. I resolved it by creating a matching repository in 'us-east-1'.&lt;br&gt;
I later discovered that we needed to explicitly tell the AWS CLI to connect to the ECR service endpoints in the regions (af-south-1 &amp;amp; us-east-1) to get the correct token.&lt;/p&gt;

&lt;p&gt;This project took us about an hour and a half.&lt;/p&gt;




&lt;h2&gt;
  
  
  Creating a Docker Image
&lt;/h2&gt;

&lt;p&gt;I set up a Dockerfile and an index.html in my local environment. Both files are needed because the Dockerfile defines how to build my custom container, and index.html provides the web content it serves.&lt;/p&gt;

&lt;p&gt;My Dockerfile tells Docker how to build my image and to serve the index.html file I created as the web page.&lt;/p&gt;
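&lt;p&gt;For reference, a minimal Dockerfile for serving a single static page looks something like this (a sketch assuming an Nginx base image, with index.html sitting next to the Dockerfile):&lt;/p&gt;

```dockerfile
# Start from the official Nginx image.
FROM nginx:latest

# Replace the default welcome page with my own index.html.
COPY index.html /usr/share/nginx/html/index.html

# Nginx serves HTTP on port 80.
EXPOSE 80
```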

&lt;p&gt;&lt;strong&gt;I also set up an ECR repository&lt;/strong&gt; &lt;br&gt;
ECR stands for Elastic Container Registry. It is important because it makes it easy for one to store, manage, and deploy their container images.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkaa4zu9cdwln1kjvbhfz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkaa4zu9cdwln1kjvbhfz.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Set Up AWS CLI Access
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AWS CLI can let me run ECR commands&lt;/strong&gt;&lt;br&gt;
AWS CLI is a terminal tool to manage AWS services. The CLI asked for my credentials because browser logins arenʼt shared, so it needs its own access keys to authenticate.&lt;/p&gt;

&lt;p&gt;To enable CLI access, I set up a new IAM user with AmazonEC2ContainerRegistryFullAccess permission. I also set up an access key for this user, which means the CLI can authenticate to AWS.&lt;/p&gt;

&lt;p&gt;To pass my credentials to the AWS CLI, I ran the command &lt;code&gt;aws configure&lt;/code&gt;. I had to provide my Access Key ID, Secret Access Key, the AWS region code for my repository, and optionally an output format.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fcsrpoq5duce5hw1ble.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fcsrpoq5duce5hw1ble.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Pushing My Image to ECR
&lt;/h2&gt;

&lt;p&gt;Push commands in Amazon ECR (Elastic Container Registry) are the specific Docker commands you run to upload — or “push” — your container image from your local machine into an ECR repository in AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There are three main push commands&lt;/strong&gt;&lt;br&gt;
To authenticate Docker with my ECR repo, I used the command 'aws ecr get-login-password --region &amp;lt;region&amp;gt; | docker login --username AWS --password-stdin &amp;lt;account-id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com'.&lt;/p&gt;

&lt;p&gt;To push my container image, I ran the command 'docker push &amp;lt;account-id&amp;gt;.dkr.ecr.&amp;lt;region&amp;gt;.amazonaws.com/maven-cross-account-docker-app:latest'. Pushing means uploading my local image to Amazon ECR for others to pull.&lt;/p&gt;

&lt;p&gt;When I built my image, I tagged it with the label &lt;code&gt;latest&lt;/code&gt;. This means itʼs marked as the most current version, so anyone pulling &lt;code&gt;latest&lt;/code&gt; will always get my newest build.&lt;/p&gt;
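&lt;p&gt;Put together, the push sequence looks roughly like this (a sketch; the account ID and region below are placeholder shell variables, not my real values):&lt;/p&gt;

```shell
# Placeholders - substitute your own account ID and region.
AWS_ACCOUNT_ID=123456789012
AWS_REGION=af-south-1
REGISTRY="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com"

# 1. Authenticate Docker against the ECR registry in that region.
aws ecr get-login-password --region "$AWS_REGION" | docker login --username AWS --password-stdin "$REGISTRY"

# 2. Tag the local image with the full ECR repository URI.
docker tag maven-cross-account-docker-app:latest "$REGISTRY/maven-cross-account-docker-app:latest"

# 3. Push the tagged image to ECR.
docker push "$REGISTRY/maven-cross-account-docker-app:latest"
```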




&lt;p&gt;&lt;strong&gt;Resolving Permission Issues&lt;/strong&gt;&lt;br&gt;
When I first pulled my buddyʼs image, I got a 403 Forbidden error because their ECR repo is private and my AWS account didnʼt yet have permission. They had to update the repo policy to allow access.&lt;/p&gt;

&lt;p&gt;To resolve each otherʼs pull errors, we updated our ECR repo policies to add each otherʼs IAM ARNs with permission to pull images, enabling cross-account access to our private repositories.&lt;/p&gt;
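&lt;p&gt;The repository policy we each applied looked roughly like this (a sketch; the account ID and user name in the ARN are placeholders for the other person's real IAM principal):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPull",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/buddy-user"
      },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```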

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ayukft700wboalc12yq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ayukft700wboalc12yq.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;🔥 The most challenging part of the project was setting permissions to pull an image directly from a private ECR repository in another account and another region. &lt;/p&gt;

&lt;p&gt;🔥 While it was possible for me to do that, the best practice for deploying applications that span accounts and regions is to use Amazon ECR Private Image Replication.&lt;/p&gt;

&lt;p&gt;You can configure a replication rule in the source account/region to automatically copy images to a destination ECR repository in the pulling account/region.&lt;/p&gt;

&lt;p&gt;Benefits of Replication are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Lower Latency&lt;/li&gt;
&lt;li&gt;Cost Optimization&lt;/li&gt;
&lt;li&gt;Resiliency&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;🌟 Something that stood out for me was that commands that look Linux (bash) based succeeded on Windows PowerShell. This works because the AWS CLI and Docker CLI are cross-platform tools, and PowerShell supports the same piping mechanism as Linux shells. As long as the required tools are installed and configured, these commands are platform-agnostic and will work on both Windows and Linux.&lt;/p&gt;




&lt;p&gt;🤝In the next project, I'm going to start a 7-day DevOps series.&lt;/p&gt;




</description>
      <category>docker</category>
      <category>aws</category>
      <category>containerapps</category>
      <category>containers</category>
    </item>
    <item>
      <title>Deploy an App with Docker</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Tue, 12 Aug 2025 14:19:40 +0000</pubDate>
      <link>https://forem.com/maven_h/deploy-an-app-with-docker-4hi4</link>
      <guid>https://forem.com/maven_h/deploy-an-app-with-docker-4hi4</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Today's Project!
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is Docker?&lt;/strong&gt;&lt;br&gt;
Docker is a tool for building and running self-contained environments that package your app and its dependencies. I used Docker to create a custom container image that served a webpage, tested locally, and deployed it to AWS Elastic Beanstalk live.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One thing I didn't expect.&lt;/strong&gt;&lt;br&gt;
One thing I didn't expect in this project was that AWS Elastic Beanstalk would automatically create an S3 bucket and upload my zipped Docker project file to it. That behind-the-scenes setup made deployment smoother than I anticipated.&lt;/p&gt;

&lt;p&gt;It took me about 90 minutes to complete the project, from installing Docker to deploying the app with Elastic Beanstalk.&lt;/p&gt;




&lt;h2&gt;
  
  
  Understanding Containers and Docker
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Containers&lt;/strong&gt;&lt;br&gt;
Containers are packages that include an app and all its dependencies. They are useful because they ensure the app runs consistently anywhere.&lt;/p&gt;

&lt;p&gt;A container image is a file that bundles an app with everything it needs to run like code, libraries, and settings so it behaves the same on any system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt;&lt;br&gt;
Docker is a tool for creating and managing containers: packages that hold everything an app needs to run. Docker Desktop is its user-friendly interface for local use.&lt;/p&gt;

&lt;p&gt;The Docker daemon is the background service that builds, runs, and manages&lt;br&gt;
containers, powering Docker commands and enabling app deployment&lt;/p&gt;




&lt;h2&gt;
  
  
  Running an Nginx Image
&lt;/h2&gt;

&lt;p&gt;Nginx is a fast, lightweight web server used to serve web pages. In this project, it's the container image that helps you test Docker by instantly loading a webpage.&lt;/p&gt;

&lt;p&gt;The command I ran to start a new container was &lt;strong&gt;'docker run -d -p 80:80 nginx'&lt;/strong&gt;. It launched an Nginx container in the background and mapped port &lt;strong&gt;80&lt;/strong&gt; so I could view the webpage locally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk27jrg3tfhmbro3swe2p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk27jrg3tfhmbro3swe2p.png" alt=" " width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Creating a Custom Image
&lt;/h2&gt;

&lt;p&gt;The Dockerfile is a text file with instructions to build a container image, like replacing Nginxʼs default page with your own and exposing port &lt;strong&gt;80&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;My Dockerfile tells Docker three things: use the official Nginx image, copy in my custom webpage to replace the default, and expose port &lt;strong&gt;80&lt;/strong&gt; to serve it online.&lt;/p&gt;
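&lt;p&gt;Those three instructions translate into a Dockerfile along these lines (a sketch; the exact base tag and file path may differ from my actual file):&lt;/p&gt;

```dockerfile
# Use the official Nginx image as the base.
FROM nginx:latest

# Copy in my custom webpage to replace the default page.
COPY index.html /usr/share/nginx/html/index.html

# Expose port 80 to serve the page.
EXPOSE 80
```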

&lt;p&gt;The command I used to build a custom image with my Dockerfile was &lt;strong&gt;'docker build -t my-web-app .'&lt;/strong&gt; The &lt;strong&gt;'.'&lt;/strong&gt; at the end of the command means Docker uses the current folder where the &lt;strong&gt;Dockerfile&lt;/strong&gt; and &lt;strong&gt;'index.html'&lt;/strong&gt; live to build the image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxqqzdkhlools6la13xu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxqqzdkhlools6la13xu.png" alt=" " width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Running My Custom Image
&lt;/h2&gt;

&lt;p&gt;There was an error when I ran my custom image because another container was already using port &lt;strong&gt;80&lt;/strong&gt;. I resolved this by stopping the active container in Docker Desktop and restarting mine.&lt;/p&gt;

&lt;p&gt;In this example, the container image is the blueprint that defines what goes into the container, like the Nginx base and my custom webpage. The container is the running instance created from that image, serving my site locally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff10ywzictj6swabke0r6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff10ywzictj6swabke0r6.png" alt=" " width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Elastic Beanstalk
&lt;/h2&gt;

&lt;p&gt;Elastic Beanstalk is an AWS service that helps you deploy containerized apps to the cloud easily, handling servers, scaling, and app health for you.&lt;/p&gt;

&lt;p&gt;Deploying my custom image with Elastic Beanstalk took me just 10-15 minutes; AWS handled the setup, so my containerized app went live fast.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdtzparuehc9f0xmom4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffdtzparuehc9f0xmom4m.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v7t6h32jcwstmuuhqkr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v7t6h32jcwstmuuhqkr.png" alt=" " width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🤝In the next project, I'm going to demonstrate how to &lt;strong&gt;'Deploy an App Across Accounts.'&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>container</category>
      <category>containers</category>
    </item>
    <item>
      <title>Networking Series 9: VPC Endpoints</title>
      <dc:creator>Hyelngtil Isaac</dc:creator>
      <pubDate>Thu, 07 Aug 2025 12:02:47 +0000</pubDate>
      <link>https://forem.com/maven_h/networking-series-9-vpc-endpoints-4bhm</link>
      <guid>https://forem.com/maven_h/networking-series-9-vpc-endpoints-4bhm</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Today's Project!
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What is Amazon VPC?&lt;/strong&gt;&lt;br&gt;
Amazon VPC is a secure, customizable network within AWS that lets you control how resources like EC2 and S3 communicate. Itʼs useful because it enables private, direct connections like using VPC endpoints to avoid the public internet and reduce cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I used Amazon VPC in this project&lt;/strong&gt;&lt;br&gt;
I used Amazon VPC in today's project to create a secure network that connects my EC2 instance directly to S3 using a VPC endpoint, avoiding the public internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One thing I didn't expect in this project&lt;/strong&gt;&lt;br&gt;
One thing I didnʼt expect in this project was that even with the VPC endpoint set up, my EC2 instance couldnʼt access S3 until I updated the route table correctly.&lt;/p&gt;

&lt;p&gt;This project took me about an hour to complete. I spent that time setting up a custom VPC, launching an EC2 instance, creating a VPC endpoint, and securing access to S3 using policies.&lt;/p&gt;




&lt;h2&gt;
  
  
  In the first part of my project
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1 - Architecture set up&lt;/strong&gt;&lt;br&gt;
I'm creating a VPC, launching an EC2 instance, and setting up an S3 bucket to build a secure network that avoids public internet exposure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2 - Connect to EC2 instance&lt;/strong&gt;&lt;br&gt;
I'm connecting directly to my EC2 instance to enable terminal access for AWS CLI testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3 - Set up access keys&lt;/strong&gt;&lt;br&gt;
I'm giving my EC2 instance access to AWS by creating access keys, so it can securely run CLI commands and use S3 services.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4 - Interact with S3 bucket&lt;/strong&gt;&lt;br&gt;
Iʼm heading back to my EC2 instance to test if it can access my S3 bucket using AWS CLI. This confirms that the credentials are working correctly.&lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture set up
&lt;/h2&gt;

&lt;p&gt;I started my project by launching a custom VPC and setting up an EC2 instance inside it, creating a secure, private AWS network environment.&lt;/p&gt;

&lt;p&gt;I also set up an S3 bucket to enable secure object storage and prepare for testing connectivity within my AWS network architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8844fskp8ak1nlmug7iw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8844fskp8ak1nlmug7iw.png" alt=" " width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Access keys
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Credentials&lt;/strong&gt;&lt;br&gt;
To set up my EC2 instance to interact with my AWS environment, I configured a custom VPC, launched an EC2 instance, created an S3 bucket for storage, and applied access keys to enable CLI-based interaction between the instance and S3.&lt;/p&gt;

&lt;p&gt;Access keys are credentials (Access Key ID + Secret Key) that let EC2 or apps securely access AWS services like S3 without user login.&lt;/p&gt;

&lt;p&gt;Secret access keys are like passwords used with access key IDs to securely connect apps like EC2 to AWS services such as S3 via CLI or SDK.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best practice&lt;/strong&gt;&lt;br&gt;
Although I'm using access keys in this project, a best practice alternative is to use IAM roles for secure, automated access without stored credentials.&lt;/p&gt;




&lt;h2&gt;
  
  
  Connecting to my S3 bucket
&lt;/h2&gt;

&lt;p&gt;The command I ran was 'aws s3 ls'. This command lists the S3 buckets accessible from the EC2 instance.&lt;/p&gt;

&lt;p&gt;The terminal responded with a list of my S3 buckets, showing that the access keys were successfully configured and my EC2 instance could securely connect using AWS CLI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2zs9lp7gk5pvnze21bi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2zs9lp7gk5pvnze21bi.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also tested the command 'aws s3 ls s3://maven-vpc-endpoints-s3', which returned the files in my bucket. This confirmed my EC2 instance could access S3 before private networking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdo0st2xmo2wh9ljpvv3g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdo0st2xmo2wh9ljpvv3g.png" alt=" " width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Uploading objects to S3
&lt;/h2&gt;

&lt;p&gt;To upload a new file to my bucket, I first ran the command 'sudo touch /tmp/newdoc.txt'. This command creates an empty text file for upload.&lt;/p&gt;

&lt;p&gt;The second command I ran was 'aws s3 cp /tmp/newdoc.txt s3://maven-vpc-endpoints-s3'. This command uploads the file to my S3 bucket.&lt;/p&gt;

&lt;p&gt;The third command I ran was 'aws s3 ls s3://maven-vpc-endpoints-s3', which validated that the EC2 instance could list files in the S3 bucket.&lt;/p&gt;
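&lt;p&gt;The three commands together, as run on the instance:&lt;/p&gt;

```shell
# Create an empty test file.
sudo touch /tmp/newdoc.txt

# Upload the file to the S3 bucket.
aws s3 cp /tmp/newdoc.txt s3://maven-vpc-endpoints-s3

# List the bucket contents to confirm the upload.
aws s3 ls s3://maven-vpc-endpoints-s3
```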

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqhsmcr0qc9etl3vqfzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqhsmcr0qc9etl3vqfzw.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  In the second part of my project
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 5 - Set up a Gateway&lt;/strong&gt;&lt;br&gt;
I'm setting up a VPC endpoint so my VPC can talk to S3 directly. This boosts security by avoiding the public internet and makes my network faster and cheaper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6 - Bucket policies&lt;/strong&gt;&lt;br&gt;
I'm about to restrict my S3 bucket so only traffic from my VPC endpoint can access it, blocking all public access and securing my data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 7 - Update route tables&lt;/strong&gt;&lt;br&gt;
In this step, I'm testing my VPC endpoint setup by accessing my S3 bucket from my EC2 instance. If access is denied, Iʼll troubleshoot the route table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8 - Validate endpoint connection&lt;/strong&gt;&lt;br&gt;
In this step, Iʼm testing my VPC endpoint setup to confirm private S3 access, then applying a bucket policy to restrict access so only my VPC can reach the bucket securely.&lt;/p&gt;




&lt;h2&gt;
  
  
  Setting up a Gateway
&lt;/h2&gt;

&lt;p&gt;I set up an S3 Gateway, which is a VPC endpoint that lets my VPC access S3 privately, boosting security by avoiding the public internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are endpoints?&lt;/strong&gt;&lt;br&gt;
An endpoint is a private gateway that lets your VPC connect securely to AWS services like S3, no public internet needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F795ueg7bq0oq0tba5er9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F795ueg7bq0oq0tba5er9.png" alt=" " width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Bucket policies
&lt;/h2&gt;

&lt;p&gt;A bucket policy is an S3 security rule that restricts access to your bucket, permitting only the traffic you specify for safer, private connections.&lt;/p&gt;

&lt;p&gt;My bucket policy will block all access to my S3 bucket except traffic coming through my VPC endpoint, ensuring ultra-secure, private communication.&lt;/p&gt;
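&lt;p&gt;A bucket policy of that shape looks roughly like this (a sketch; the bucket name matches mine, but 'vpce-1a2b3c4d' is a placeholder for your endpoint's real ID):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptVPCEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::maven-vpc-endpoints-s3",
        "arn:aws:s3:::maven-vpc-endpoints-s3/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-1a2b3c4d"
        }
      }
    }
  ]
}
```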

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqyhmm2ns9g047qftjsk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqyhmm2ns9g047qftjsk.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After saving my bucket policy, my S3 bucket page showed 'denied access' warnings.&lt;br&gt;
This was because the policy blocks all public access unless traffic comes through my VPC endpoint. The AWS Console uses the internet, so it gets denied.&lt;/p&gt;

&lt;p&gt;I also had to update my route table because my EC2 instance was still routing S3 traffic through the public internet instead of the VPC endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wh3zqic60eqfv0ge5kw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wh3zqic60eqfv0ge5kw.png" alt=" " width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Route table updates
&lt;/h2&gt;

&lt;p&gt;To update my route table, I added a route that directs S3 traffic from my subnet to the VPC endpoint, ensuring private access and avoiding the public internet.&lt;/p&gt;

&lt;p&gt;After updating my public subnet's route table, my terminal could list the bucket's objects with &lt;code&gt;aws s3 ls s3://maven-vpc-endpoints-s3&lt;/code&gt;, confirming private access through the VPC endpoint.&lt;/p&gt;
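&lt;p&gt;For a gateway endpoint, the added route targets the AWS-managed prefix list for S3 rather than a CIDR block. Roughly, the route table ends up looking like this (all IDs below are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Destination                   Target
10.0.0.0/16                   local
0.0.0.0/0                     igw-EXAMPLE
pl-EXAMPLE (S3 prefix list)   vpce-EXAMPLE11111
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The prefix list resolves to the public IP ranges of S3 in the region, so S3-bound traffic matches that route and stays on the AWS network.&lt;/p&gt;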

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyg6c278ask4arvyfnfl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsyg6c278ask4arvyfnfl.png" alt=" " width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Endpoint policies
&lt;/h2&gt;

&lt;p&gt;An endpoint policy is a set of rules that controls which AWS services and resources your VPC can access through a VPC endpoint.&lt;/p&gt;

&lt;p&gt;I updated my endpoint's policy by changing the "Effect" from "Allow" to "Deny" in the JSON. The effect was immediate: my EC2 instance could no longer reach the S3 bucket, even through the endpoint.&lt;/p&gt;
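&lt;p&gt;For reference, this is essentially the default full-access endpoint policy with its "Effect" flipped to "Deny" (a sketch, not my exact JSON):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because the endpoint policy is evaluated alongside IAM and bucket policies, a Deny here blocks access even when those other policies allow it.&lt;/p&gt;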

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7uaef6cih6mb7ol79hv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7uaef6cih6mb7ol79hv.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🤝 This is the final project in this networking series. Watch out for the &lt;strong&gt;7 Day DevOps Challenge!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>networking</category>
      <category>awschallenge</category>
    </item>
  </channel>
</rss>
