<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ismail G.</title>
    <description>The latest articles on Forem by Ismail G. (@ismailg).</description>
    <link>https://forem.com/ismailg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1131014%2F3961d002-9e1f-450e-9988-f387fc6b8a78.png</url>
      <title>Forem: Ismail G.</title>
      <link>https://forem.com/ismailg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ismailg"/>
    <language>en</language>
    <item>
      <title>Day 1 — Root Security &amp; IAM Identity Center</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sun, 19 Apr 2026 12:23:59 +0000</pubDate>
      <link>https://forem.com/ismailg/day-1-root-security-iam-identity-center-4ndd</link>
      <guid>https://forem.com/ismailg/day-1-root-security-iam-identity-center-4ndd</guid>
      <description>&lt;p&gt;In my ongoing startup infrastructure series, I began by securing the most critical part of any AWS account: the root user and access management layer.&lt;/p&gt;

&lt;p&gt;This first step is simple, but extremely important: lock down root access and establish a proper identity system.&lt;/p&gt;

&lt;p&gt;A complete hands-on version of this setup is also available in my GitHub repo:&lt;br&gt;
&lt;a href="https://github.com/skysea-devops/startup-aws-infra-setup-guide/tree/main" rel="noopener noreferrer"&gt;https://github.com/skysea-devops/startup-aws-infra-setup-guide/tree/main&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Why This Setup Is Important
&lt;/h2&gt;

&lt;p&gt;This setup is aligned with AWS best practices, particularly the AWS Well-Architected Framework principles:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The root account in AWS has unlimited permissions. If compromised, everything is exposed: infrastructure, data, billing, and even account ownership.&lt;/p&gt;

&lt;p&gt;At the same time, poorly managed access (shared credentials, no MFA, direct IAM users) becomes chaos very quickly in a growing startup.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So the goal of Day 1 is clear:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secure root access&lt;/li&gt;
&lt;li&gt;Eliminate risky login patterns&lt;/li&gt;
&lt;li&gt;Establish scalable access control via IAM Identity Center (SSO)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 1 — Enable MFA on Root Account
&lt;/h2&gt;

&lt;p&gt;The very first thing you should do after creating your AWS account is to enable Multi-Factor Authentication (MFA).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5xwxufbt6s1y3upcpiw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5xwxufbt6s1y3upcpiw.jpeg" alt=" " width="722" height="738"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Avoid relying only on passwords. MFA ensures that even if your credentials leak, your account remains protected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setup Flow&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to: Account → Security Credentials → MFA → Assign MFA&lt;/li&gt;
&lt;li&gt;Choose Authenticator app&lt;/li&gt;
&lt;li&gt;Scan QR code with your phone&lt;/li&gt;
&lt;li&gt;Enter two consecutive codes&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once completed, AWS will confirm:&lt;/p&gt;

&lt;p&gt;“You have successfully assigned this virtual MFA device.”&lt;/p&gt;

&lt;p&gt;At this point, your root account is significantly more secure.&lt;/p&gt;
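
&lt;p&gt;If you already have the AWS CLI configured with an IAM identity, you can verify the result from the terminal (this check assumes you have the &lt;code&gt;iam:GetAccountSummary&lt;/code&gt; permission):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Returns 1 when MFA is enabled on the root account
aws iam get-account-summary --query 'SummaryMap.AccountMFAEnabled'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;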

&lt;p&gt;&lt;strong&gt;Important Rule&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Never use the root account for daily operations.&lt;/p&gt;

&lt;p&gt;Use it only for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Billing&lt;/li&gt;
&lt;li&gt;Account-level changes&lt;/li&gt;
&lt;li&gt;Emergency recovery&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 2 — Enable IAM Identity Center (SSO)
&lt;/h2&gt;

&lt;p&gt;After securing root, the next step is eliminating direct IAM user access and moving to a centralized identity system.&lt;/p&gt;

&lt;p&gt;AWS provides this via: &lt;a href="https://docs.aws.amazon.com/singlesignon/latest/userguide/enable-identity-center.html" rel="noopener noreferrer"&gt;IAM Identity Center&lt;/a&gt; (formerly AWS SSO)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa00dfrvnj2yxzktsh833.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa00dfrvnj2yxzktsh833.jpeg" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why IAM Identity Center?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of creating IAM users, sharing credentials across team members, and manually managing passwords, you can adopt a much more structured and secure approach. &lt;/p&gt;

&lt;p&gt;By using IAM Identity Center, you gain centralized access control, a seamless SSO login experience, built-in MFA enforcement, and a cleaner, more scalable DevOps workflow overall.&lt;/p&gt;

&lt;p&gt;Search in AWS Console:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;IAM Identity Center → Enable&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwbdds46t19u0f3p3uva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdwbdds46t19u0f3p3uva.png" alt=" " width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this stage, AWS gives you two options for enabling IAM Identity Center:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Account instance (single account)&lt;br&gt;
Organization instance (recommended)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Since I’m building this setup for a startup with scalability in mind, I chose:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Enable with AWS Organizations&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You must choose a region.&lt;/p&gt;

&lt;p&gt;Recommended:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;eu-west-1&lt;br&gt;
us-east-1&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This choice is not easily changeable later, so pick carefully.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Encryption (KMS)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Under Advanced Configuration, AWS asks:&lt;/p&gt;

&lt;p&gt;Key for encrypting IAM Identity Center data at rest&lt;/p&gt;

&lt;p&gt;You have two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use AWS owned key (recommended for most startups)&lt;/li&gt;
&lt;li&gt;Use a custom KMS key (advanced)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For now, I selected:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Use AWS owned key&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Reason:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simpler setup&lt;/li&gt;
&lt;li&gt;No key management overhead&lt;/li&gt;
&lt;li&gt;Fully secure for standard use cases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can always migrate to a custom KMS key later if needed.&lt;/p&gt;
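
&lt;p&gt;Once enabled, you can confirm the instance from the CLI and note the two identifiers you will need in the following steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Lists your Identity Center instance with its InstanceArn and IdentityStoreId
aws sso-admin list-instances
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;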
&lt;h3&gt;
  
  
  Create Permission Sets
&lt;/h3&gt;

&lt;p&gt;At this stage, IAM Identity Center is enabled, but no one has access yet. Before creating users, you need to define what kind of access they will have.&lt;/p&gt;

&lt;p&gt;This is done through Permission Sets. Think of these as role templates.&lt;br&gt;
Navigate to:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;IAM Identity Center → Permission sets → Create permission set&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csvs"&gt;&lt;code&gt;&lt;span class="k"&gt;Name&lt;/span&gt;      &lt;span class="err"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Policy&lt;/span&gt;               &lt;span class="err"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Use&lt;/span&gt; &lt;span class="k"&gt;Case&lt;/span&gt;
&lt;span class="k"&gt;Admin&lt;/span&gt;     &lt;span class="err"&gt;|&lt;/span&gt; &lt;span class="k"&gt;AdministratorAccess&lt;/span&gt;  &lt;span class="err"&gt;|&lt;/span&gt; &lt;span class="k"&gt;Founders&lt;/span&gt; &lt;span class="err"&gt;/&lt;/span&gt; &lt;span class="k"&gt;Infra&lt;/span&gt;
&lt;span class="k"&gt;PowerUser&lt;/span&gt; &lt;span class="err"&gt;|&lt;/span&gt; &lt;span class="k"&gt;PowerUserAccess&lt;/span&gt;      &lt;span class="err"&gt;|&lt;/span&gt; &lt;span class="k"&gt;DevOps&lt;/span&gt; &lt;span class="err"&gt;/&lt;/span&gt; &lt;span class="k"&gt;Engineers&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Admin Permission Set&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full access to all AWS services&lt;/li&gt;
&lt;li&gt;Should be limited to very few users&lt;/li&gt;
&lt;li&gt;Used for infrastructure ownership&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzelkutby96ukfdup0h4r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzelkutby96ukfdup0h4r.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PowerUser Permission Set&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Broad access to services&lt;/li&gt;
&lt;li&gt;Cannot manage IAM&lt;/li&gt;
&lt;li&gt;Ideal for developers and DevOps engineers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxf7imk0twpzg5bpxfc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwxf7imk0twpzg5bpxfc7.png" alt=" " width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;
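
&lt;p&gt;If you prefer scripting this step, the same two permission sets can be sketched with the CLI. The ARNs below are placeholders: take the instance ARN from &lt;code&gt;aws sso-admin list-instances&lt;/code&gt; and the permission set ARN from the output of the create call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the Admin permission set (instance ARN is a placeholder)
aws sso-admin create-permission-set \
  --instance-arn &amp;lt;instance-arn&amp;gt; \
  --name Admin

# Attach the AWS managed AdministratorAccess policy to it
aws sso-admin attach-managed-policy-to-permission-set \
  --instance-arn &amp;lt;instance-arn&amp;gt; \
  --permission-set-arn &amp;lt;permission-set-arn&amp;gt; \
  --managed-policy-arn arn:aws:iam::aws:policy/AdministratorAccess
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;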

&lt;h2&gt;
  
  
  Step 3 — Create Your First User
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;IAM Identity Center → Users → Add user&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Define:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Username&lt;/li&gt;
&lt;li&gt;Email address&lt;/li&gt;
&lt;li&gt;First &amp;amp; last name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS will send an invitation email for first-time login.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5atzjg0me53jazpl4a3z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5atzjg0me53jazpl4a3z.png" alt=" " width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;
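
&lt;p&gt;The same user can also be created from the CLI; the identity store ID and all user details below are placeholders for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a user in the Identity Center directory (all values are placeholders)
aws identitystore create-user \
  --identity-store-id &amp;lt;identity-store-id&amp;gt; \
  --user-name jane.doe \
  --display-name "Jane Doe" \
  --name GivenName=Jane,FamilyName=Doe \
  --emails Value=jane@example.com,Type=work,Primary=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;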

&lt;h3&gt;
  
  
  Step 4 — Assign Access
&lt;/h3&gt;

&lt;p&gt;This is where everything connects. You now link:&lt;br&gt;
User → AWS Account → Permission Set&lt;/p&gt;

&lt;p&gt;Navigate:&lt;br&gt;
&lt;code&gt;IAM Identity Center → AWS Accounts → Assign users or groups&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Assignment Flow&lt;/strong&gt;&lt;br&gt;
Select AWS Account&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptzd1x9xdsq2wnukd8nq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptzd1x9xdsq2wnukd8nq.png" alt=" " width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select User (or Group)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytwu9mgr9dwnpouy95lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytwu9mgr9dwnpouy95lw.png" alt=" " width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose Permission Set&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujs82za6z6nq6rk2hspu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujs82za6z6nq6rk2hspu.png" alt=" " width="800" height="329"&gt;&lt;/a&gt;&lt;br&gt;
Confirm&lt;/p&gt;
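
&lt;p&gt;The same User → AWS Account → Permission Set link can be expressed as a single CLI call; every identifier below is a placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Link a user to an AWS account through a permission set (placeholder values)
aws sso-admin create-account-assignment \
  --instance-arn &amp;lt;instance-arn&amp;gt; \
  --target-id &amp;lt;aws-account-id&amp;gt; \
  --target-type AWS_ACCOUNT \
  --permission-set-arn &amp;lt;permission-set-arn&amp;gt; \
  --principal-type USER \
  --principal-id &amp;lt;user-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;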

&lt;h3&gt;
  
  
  Step 5 — Enforce MFA for All Users
&lt;/h3&gt;

&lt;p&gt;Even though root MFA is enabled, you should also enforce MFA for all SSO users, because MFA:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Protects against credential leaks&lt;/li&gt;
&lt;li&gt;Enforces security across the team&lt;/li&gt;
&lt;li&gt;Is a standard best practice for any production setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Navigate to:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;IAM Identity Center → Settings → Authentication → Require MFA&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fripiuhvduww2c53vtkmg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fripiuhvduww2c53vtkmg.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Access Portal Login:
&lt;/h2&gt;

&lt;p&gt;Users will log in through a centralized access portal provided by AWS, using a unique URL such as &lt;a href="https://xxxx.awsapps.com/start" rel="noopener noreferrer"&gt;https://xxxx.awsapps.com/start&lt;/a&gt;. This portal acts as the single entry point for all users, where they authenticate with their credentials and complete MFA verification before accessing their assigned AWS accounts and roles.&lt;/p&gt;

&lt;p&gt;After logging in through the Access Portal URL, users are presented with a centralized dashboard where they can see all AWS accounts and roles assigned to them.&lt;/p&gt;

&lt;p&gt;At this point, the AWS account is no longer a single-user environment but a structured, secure, and scalable access system. This foundation is critical before provisioning any infrastructure.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>infrastructure</category>
      <category>devops</category>
    </item>
    <item>
      <title>Building Startup Infrastructure the Right Way</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Tue, 14 Apr 2026 17:16:02 +0000</pubDate>
      <link>https://forem.com/ismailg/building-startup-infrastructure-the-right-way-49mb</link>
      <guid>https://forem.com/ismailg/building-startup-infrastructure-the-right-way-49mb</guid>
      <description>&lt;p&gt;I recently started working on a systematic "30-day startup infrastructure plan" and have been working on my GitHub repo step by step. The goal is simple: I want to construct a clean, production-ready infrastructure from the start. I also want the whole process to be open and easy to follow.&lt;/p&gt;

&lt;p&gt;Most teams in the early stages don't really design their infrastructure. They put it together. You set up a server, add a database, and configure a deployment. Everything works well enough to carry on. This method makes sense, especially when time is of the essence. But it also hides risks that only show up when the system starts to grow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnderjxnepqpudbytt3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnderjxnepqpudbytt3y.png" alt=" " width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Infrastructure Becomes a Problem Later
&lt;/h2&gt;

&lt;p&gt;Startups generally don't think about infrastructure as a top priority. The main things to work on are product development, getting new customers, and how quickly you can make changes. But infrastructure decisions quietly shape how a product behaves under stress. They decide how easily a team can make changes, how safely data is managed, and how reliably the system responds to more demand.&lt;/p&gt;

&lt;p&gt;When the first setup is done without a plan, the difficulties usually show up later. It gets tougher to maintain systems, deployments become less stable, and debugging takes longer. Instead of giving you confidence, scaling makes things less certain. At that point, teams have to spend time correcting problems that shouldn't have happened instead of adding new features.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Means to Get It Right Early
&lt;/h2&gt;

&lt;p&gt;This is why I think we should plan for infrastructure from the start. This doesn't mean over-engineering or building overly complex systems too soon. It means setting a clear baseline for things like safe access control, well-segmented environments, automated deployments, and visibility into how the system behaves. These things are not extras; they are the bare minimum for long-term progress.&lt;/p&gt;

&lt;p&gt;In this case, cloud platforms like AWS provide a great base for new businesses. They let you start small and grow over time, and they still let you use best practices like managed services, identity management, and network isolation. &lt;/p&gt;

&lt;p&gt;AWS provides scalable, secure, and cost-effective infrastructure tailored for startups, featuring over 200 fully featured services. Key offerings include compute (EC2), storage (S3), databases (RDS), and networking (VPC). &lt;/p&gt;

&lt;p&gt;AWS lets teams construct systems that are ready for production and don't cost too much. More crucially, it allows an infrastructure-as-code approach, which makes it easier to keep environments the same and manage them over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making This Work in Real Life
&lt;/h2&gt;

&lt;p&gt;My current goal is to bring all these pieces together in a meaningful and easily understandable way. To build a real infrastructure step-by-step and document it. This covers everything from security setup for individual accounts to networking, computing, database architecture, CI/CD pipelines, and monitoring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm03sinoxoxzdnj16o61e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm03sinoxoxzdnj16o61e.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;This will be a series that shows how the setup really goes. The structure I'm constructing in my &lt;a href="https://github.com/skysea-devops/startup-aws-infra-setup-guide" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt; will match each stage, and each post will focus on a different layer of the infrastructure. The documentation will change as the system does.&lt;/p&gt;

&lt;p&gt;You can follow both my GitHub repository and the posts coming out here if you want to learn how to build startup infrastructure the proper way. As I go along, I'll share each step and explain not just what I'm doing but also why I'm doing it that way.&lt;/p&gt;

&lt;p&gt;The next post will cover the first step in the setup: building a secure and controlled base.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>devops</category>
      <category>startup</category>
    </item>
    <item>
      <title>How I Became an AWS Community Builder (Data Track)</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sat, 07 Mar 2026 07:17:26 +0000</pubDate>
      <link>https://forem.com/ismailg/how-i-became-an-aws-community-builder-data-track-4mmp</link>
      <guid>https://forem.com/ismailg/how-i-became-an-aws-community-builder-data-track-4mmp</guid>
      <description>&lt;p&gt;I got an email a few days ago that made my day. I had been accepted into the Data track of the AWS Community Builders.&lt;/p&gt;

&lt;p&gt;This initiative may just look like another badge for a lot of folks. But for me, it means a lot more: months of studying, trying new things, and sharing what I learn with other people.&lt;/p&gt;

&lt;p&gt;After I told folks the news, a lot of them asked me the same thing: "What helped you get in?"&lt;/p&gt;

&lt;p&gt;There is no one secret, to tell the truth. But one thing was really important: consistently sharing technical knowledge. One of the best things I did along the way was to write about my experiences with databases, cloud architectures, and AWS services on Dev.to.&lt;/p&gt;

&lt;p&gt;In this article, I want to talk about what I accomplished, what made my application stand out, and what I would tell anyone who wishes to apply to the program in the future.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the AWS Community Builders Program?
&lt;/h2&gt;

&lt;p&gt;Before diving into my journey, let's clarify what this program is. The AWS Community Builders program is designed to recognize and support technical community leaders who are passionate about sharing knowledge and connecting with others about AWS technologies.&lt;/p&gt;

&lt;p&gt;It provides builders with technical resources, mentorship, $500 in AWS credits, exam vouchers, and a direct line to AWS product teams. It’s not just about what you know; it’s about how you help others learn.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where I Started: My Background
&lt;/h2&gt;

&lt;p&gt;At first, I focused mainly on getting AWS certifications. But after a while, I realized that passing an exam is really just the starting point. To truly understand the cloud, I needed hands-on experience—working through real database problems, experimenting, sometimes breaking things, and figuring out how they work.&lt;/p&gt;

&lt;p&gt;Along the way, I also started documenting what I learned so others could benefit from the same experiences.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Turning Point: Writing on Dev.to
&lt;/h2&gt;

&lt;p&gt;At some point, I realized that simply learning new things wasn’t enough. I was spending hours troubleshooting systems, experimenting with databases, and figuring out how different AWS services worked together—but most of those lessons stayed in my own notes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;That’s when I started sharing what I learned on Dev.to.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead of writing simple notes for myself, I began turning real troubleshooting sessions into structured tutorials. Whenever I solved a problem, I tried to explain the process step by step—what went wrong, what I tried, and what finally worked.&lt;/p&gt;

&lt;p&gt;Another place where I learned a lot was AWS re:Post. I started helping people who were facing real problems with AWS services. Sometimes the questions were about databases, sometimes about architecture or infrastructure.&lt;/p&gt;

&lt;p&gt;When I encountered an interesting problem there, I didn’t just answer it and move on. I often recreated the scenario in AWS, tested different solutions, and then wrote a detailed article on Dev.to so that the solution could help more people facing the same issue.&lt;/p&gt;

&lt;p&gt;Because I applied to the Data track of the AWS Community Builders, many of my articles naturally focused on data-related topics, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS DocumentDB — exploring how managed NoSQL databases work in AWS&lt;/li&gt;
&lt;li&gt;MongoDB migrations — the challenges of moving on-premise data to the cloud&lt;/li&gt;
&lt;li&gt;Database architecture — designing systems for high availability and scalability&lt;/li&gt;
&lt;li&gt;Cloud infrastructure — automating data workloads and deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Over time, something interesting happened. Writing about these topics didn’t just help others—it also helped me understand them much more deeply. Explaining a solution forces you to truly understand it.&lt;/p&gt;

&lt;p&gt;One thing I learned along the way is this:&lt;/p&gt;

&lt;p&gt;Don’t just write about what a service is. Write about the problem you solved with it.&lt;/p&gt;

&lt;p&gt;That’s the kind of knowledge the cloud community values most.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Community Contribution &amp;amp; Engagement&lt;br&gt;
A significant part of my journey consisted of writing articles; however, I quickly realized that being a "builder" wasn't just about producing content.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Being an active member of the community is just as important. Answering questions in online forums, joining discussions, and helping people through challenging database configurations gave me valuable new experience as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Matters When Applying?
&lt;/h2&gt;

&lt;p&gt;If you are planning to apply for the next cohort, I can't speak for the reviewers, but in my experience these four pillars made my application stand out:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Content:&lt;/strong&gt;&lt;br&gt;
High-quality blog posts, GitHub repositories, or videos that demonstrate the depth of your technical knowledge and practical experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistency:&lt;/strong&gt;&lt;br&gt;
One article published just before the deadline won't be enough. Posting content regularly over several months shows that you're actively participating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-World Experience:&lt;/strong&gt;&lt;br&gt;
Your content is far more valuable when you explain how you used AWS services to solve real problems, especially infrastructure or data challenges.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Community Impact:&lt;/strong&gt;&lt;br&gt;
Last but not least, your content should be useful to people. Comments, discussions, and others putting your ideas into practice are proof that your work helps the community.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Advice for Aspiring Builders
&lt;/h2&gt;

&lt;p&gt;If you’re thinking about applying to the AWS Community Builders, my biggest advice is simple: start sharing what you learn.&lt;/p&gt;

&lt;p&gt;You don’t need to be an expert in everything. In fact, many of the articles I wrote started with something I had just learned while working on a real problem. Instead of keeping that knowledge to myself, I turned those experiences into tutorials and shared them with the community.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;One thing that helped me a lot was writing about real challenges. Explaining how you solved a problem—whether it’s a database migration, an architecture decision, or a troubleshooting process—creates content that is actually useful for others.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Another important thing is consistency. You don’t need to publish something every week, but sharing your learning journey over time shows that you’re actively contributing to the ecosystem.&lt;/p&gt;

&lt;p&gt;Finally, try to engage with the community whenever you can. Platforms like Dev.to or AWS re:Post are great places to both learn from others and help people solve real problems.&lt;/p&gt;

&lt;p&gt;At the end of the day, the goal isn’t just to get accepted into the program. The real value comes from learning in public and helping others along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  A New Beginning
&lt;/h2&gt;

&lt;p&gt;Becoming an AWS Community Builder is a milestone, but more importantly, it’s a beginning. It’s an invitation to learn more, share more, and connect with some of the brightest minds in the industry.&lt;/p&gt;

&lt;p&gt;Are you planning to apply for the next round? Let me know in the comments if you have any questions about the process!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>community</category>
      <category>database</category>
      <category>devto</category>
    </item>
    <item>
      <title>Secure Terraform CI/CD on AWS with GitHub Actions (OIDC + Remote State)</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sun, 08 Feb 2026 21:40:52 +0000</pubDate>
      <link>https://forem.com/ismailg/secure-terraform-cicd-on-aws-with-github-actions-oidc-remote-state-2eg6</link>
      <guid>https://forem.com/ismailg/secure-terraform-cicd-on-aws-with-github-actions-oidc-remote-state-2eg6</guid>
      <description>&lt;p&gt;For CI/CD processes to work well, they need to be secure and repeatable. Without a strong authentication system and a consistent state management strategy, infrastructure automation quickly becomes vulnerable to security threats.&lt;br&gt;
This blog post explains how to set up a remote Terraform backend with state locking using Amazon S3 and DynamoDB. We will also use OIDC to set up keyless authentication from GitHub Actions to Amazon Web Services (AWS).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon S3 – remote state storage (versioned &amp;amp; encrypted)&lt;/li&gt;
&lt;li&gt;DynamoDB – state locking&lt;/li&gt;
&lt;li&gt;AWS KMS – encryption&lt;/li&gt;
&lt;li&gt;GitHub Actions – CI/CD automation&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Why This Setup Matters
&lt;/h2&gt;

&lt;p&gt;In traditional CI pipelines, AWS access keys are stored as long-lived secrets. This approach is risky: if those credentials leak, an attacker gains persistent access to your account.&lt;/p&gt;

&lt;p&gt;Key rotation is painful but necessary, and when CI is compromised, AWS is compromised with it.&lt;/p&gt;

&lt;p&gt;OpenID Connect (OIDC) solves this problem by letting GitHub Actions get an IAM role dynamically using short-lived credentials from AWS STS.&lt;/p&gt;

&lt;p&gt;Terraform also needs to use a remote backend to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prevent state corruption from concurrent runs&lt;/li&gt;
&lt;li&gt;Protect sensitive values stored in the state file&lt;/li&gt;
&lt;li&gt;Enable safe team collaboration&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture addresses both problems, authentication and state management, in a simple way that can grow with your needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-Level Architecture:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions requests an OIDC identity token&lt;/li&gt;
&lt;li&gt;AWS validates the token using IAM OIDC Provider&lt;/li&gt;
&lt;li&gt;An IAM Role is assumed via sts:AssumeRoleWithWebIdentity&lt;/li&gt;
&lt;li&gt;Terraform runs with temporary credentials&lt;/li&gt;
&lt;li&gt;State is stored in encrypted S3, locked via DynamoDB&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Step 1️: Create AWS OIDC Provider
&lt;/h2&gt;

&lt;p&gt;To allow GitHub Actions to authenticate with AWS, an OIDC provider must be configured in AWS IAM. Before this, if you do not have the AWS CLI configured on your local machine, set it up first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS CLI Setup (macOS)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Homebrew
brew install awscli

aws --version

aws configure

# Enter the following when prompted:
AWS Access Key ID [None]: &amp;lt;your-accesskey&amp;gt;
AWS Secret Access Key [None]: &amp;lt;your-secret-accesskey&amp;gt;
Default region name [None]: &amp;lt;region-name&amp;gt;
Default output format [None]: json

# Account test
aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Create the OIDC Provider&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run the following command using AWS CLI or create the provider via the AWS Console.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws iam create-open-id-connect-provider \
  --url https://token.actions.githubusercontent.com \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list 6938fd4d98bab03faadb97b34396831e3780aea1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This enables AWS to validate GitHub-issued identity tokens.&lt;/p&gt;
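
&lt;p&gt;You can quickly confirm the provider was registered (a standard AWS CLI check):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# List registered OIDC providers; the GitHub provider should appear in the output
aws iam list-open-id-connect-providers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;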

&lt;h2&gt;
  
  
  Step 2️: Create IAM Role for GitHub Actions
&lt;/h2&gt;

&lt;p&gt;Next, an IAM role must be created so that GitHub Actions workflows can assume it using sts:AssumeRoleWithWebIdentity.&lt;/p&gt;

&lt;p&gt;This role explicitly defines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Who can assume it (GitHub Actions)&lt;/li&gt;
&lt;li&gt;From which repository it can be assumed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Create GitHubActionsRole with trust policy sts:AssumeRoleWithWebIdentity&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::YOUR_ACCOUNT_ID:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:YOUR_GITHUB_USERNAME/YOUR_REPO_NAME:*"
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 3️: Attach IAM Policy
&lt;/h2&gt;

&lt;p&gt;For bootstrap simplicity, we attach AdministratorAccess.&lt;br&gt;
 Important:&lt;br&gt;
 In real production environments, replace this with least-privilege policies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sj5ipis8ny7kzkyw0mh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9sj5ipis8ny7kzkyw0mh.png" alt=" " width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;
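
&lt;p&gt;If you prefer the CLI over the console, the role can be created and the policy attached roughly like this (assuming the trust policy above is saved as trust-policy.json):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the role with the OIDC trust policy
aws iam create-role \
  --role-name GitHubActionsRole \
  --assume-role-policy-document file://trust-policy.json

# Attach AdministratorAccess (bootstrap only; use least privilege in production)
aws iam attach-role-policy \
  --role-name GitHubActionsRole \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;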
&lt;h2&gt;
  
  
  Step 4️: Configure GitHub Repository Secret
&lt;/h2&gt;

&lt;p&gt;GitHub Actions must now be informed which IAM role to assume.&lt;/p&gt;

&lt;p&gt;In the GitHub repository: Settings → Secrets and variables → Actions → New repository secret&lt;/p&gt;

&lt;p&gt;Create the following secret:&lt;/p&gt;

&lt;p&gt;AWS_ROLE_ARN = arn:aws:iam::YOUR_ACCOUNT_ID:role/GitHubActionsRole&lt;/p&gt;
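
&lt;p&gt;If you use the GitHub CLI, the same secret can be created from the terminal (the role ARN is a placeholder for your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the repository secret without opening the GitHub UI
gh secret set AWS_ROLE_ARN \
  --body "arn:aws:iam::YOUR_ACCOUNT_ID:role/GitHubActionsRole"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;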
&lt;h2&gt;
  
  
  Step 5️: Terraform Remote State
&lt;/h2&gt;

&lt;p&gt;We use a one-time bootstrap workflow to provision:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt; S3 bucket (versioning + encryption)&lt;/li&gt;
&lt;li&gt; DynamoDB table (state locking)&lt;/li&gt;
&lt;li&gt; KMS key (state encryption)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repository Structure&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform-remote-state/
├── main.tf
├── providers.tf
├── variables.tf
├── terraform.tfvars
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can review the Terraform files in my GitHub repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/skysea-devops/aws-private-infrastructure-terraform-githubactions" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6️: Bootstrap GitHub Actions Workflow
&lt;/h2&gt;

&lt;p&gt;Below is the final bootstrap workflow. It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Uses OIDC for AWS auth&lt;/li&gt;
&lt;li&gt;Accepts the S3 bucket name as an input&lt;/li&gt;
&lt;li&gt;Pins the Terraform version&lt;/li&gt;
&lt;li&gt;Verifies the AWS identity before provisioning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;bootstrap.yml&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# This workflow creates the foundational infrastructure for Terraform:
# - S3 bucket for state storage with encryption and versioning
# - DynamoDB table for state locking (prevents concurrent modifications)
# - KMS key for encrypting state files and secrets
#
# Run this ONCE before deploying main infrastructure

name: Bootstrap 

on:  
  workflow_dispatch:

permissions:
  contents: read
  id-token: write

env:
  AWS_REGION: us-east-1
  TF_VERSION: 1.5.0  

jobs: 
  bootstrap:  
    runs-on: ubuntu-latest 

    defaults:
      run:
        working-directory: terraform-remote-state

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Configure AWS credentials via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
          aws-region: ${{ env.AWS_REGION }}
          role-session-name: GitHubActions-Bootstrap

      - name: Verify AWS identity
        run: |
          echo "Authenticated as:"
          aws sts get-caller-identity

          ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
          echo "AWS Account ID: $ACCOUNT_ID"
          echo "AWS Region: ${{ env.AWS_REGION }}"

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        run: |
          terraform plan -out=plan.tfplan

      - name: Terraform Apply
        run: terraform apply -auto-approve plan.tfplan

      - name: Terraform Output
        run: terraform output


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7️: Run the Bootstrap Workflow
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Go to GitHub Actions&lt;/li&gt;
&lt;li&gt;Select Bootstrap&lt;/li&gt;
&lt;li&gt;Click Run workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwmyvxbjuqodz1mj5ekb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwmyvxbjuqodz1mj5ekb.png" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 8️: Store Terraform Outputs as GitHub Secrets
&lt;/h2&gt;

&lt;p&gt;After completion, Terraform outputs the values required by all future environments. Store these as GitHub Secrets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;TF_STATE_BUCKET – S3 bucket name&lt;/li&gt;
&lt;li&gt;TF_LOCK_TABLE – DynamoDB table name&lt;/li&gt;
&lt;li&gt;KMS_KEY_ARN – KMS key ARN&lt;/li&gt;
&lt;/ul&gt;
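
&lt;p&gt;Later workflows can then initialize Terraform against this backend. A minimal sketch, assuming the secret names above and a state key path of your choosing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Inject the remote backend settings at init time
terraform init \
  -backend-config="bucket=${TF_STATE_BUCKET}" \
  -backend-config="key=envs/dev/terraform.tfstate" \
  -backend-config="region=us-east-1" \
  -backend-config="dynamodb_table=${TF_LOCK_TABLE}" \
  -backend-config="encrypt=true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This assumes the Terraform configuration declares an empty backend "s3" {} block so the values can be supplied at init time.&lt;/p&gt;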

</description>
      <category>aws</category>
      <category>github</category>
      <category>cicd</category>
      <category>devops</category>
    </item>
    <item>
      <title>Solving Frontend-Lambda Timeout Issues with AppSync Asynchronous Execution</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sat, 29 Nov 2025 16:33:36 +0000</pubDate>
      <link>https://forem.com/ismailg/solving-frontend-lambda-timeout-issues-with-appsync-asynchronous-execution-2p93</link>
      <guid>https://forem.com/ismailg/solving-frontend-lambda-timeout-issues-with-appsync-asynchronous-execution-2p93</guid>
      <description>&lt;p&gt;A common issue in serverless applications: the frontend receives a timeout error while CloudWatch logs show the Lambda function completed successfully. Users see failed requests, but backend operations succeed.&lt;/p&gt;

&lt;p&gt;When a Lambda function is called synchronously, the API waits for it to complete and return a response.  For long-running tasks, this might cause considerable delays.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical timeout constraints:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Maximum Timeout&lt;/th&gt;
&lt;th&gt;Configurable&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Lambda Function&lt;/td&gt;
&lt;td&gt;15 minutes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API Gateway (REST)&lt;/td&gt;
&lt;td&gt;29 seconds&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AppSync (GraphQL)&lt;/td&gt;
&lt;td&gt;30 seconds&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  The Solution: AppSync Asynchronous Lambda Execution
&lt;/h2&gt;

&lt;p&gt;AWS AppSync provides asynchronous Lambda resolver support. Asynchronous execution lets a GraphQL mutation trigger a Lambda function without waiting for it to finish. The resolver returns immediately, bypassing the 30-second timeout limit.&lt;/p&gt;

&lt;p&gt;With this pattern, the frontend is no longer tied to the duration of the Lambda execution. This enables long-running workflows to complete in the background.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Before (Synchronous):
Frontend → "Start job" → Wait 30s → Timeout ❌
                           Lambda still running...

After (Asynchronous):
Frontend → "Start job" → Get job ID immediately ✅
Lambda runs independently → Updates result → Frontend gets notified ✅
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;p&gt;When a GraphQL mutation is invoked with an async handler, AppSync invokes the Lambda function using the Event invocation type (asynchronous mode). It returns a response, typically containing a job identifier, without waiting for the Lambda to complete.&lt;/p&gt;

&lt;p&gt;The Lambda function then executes independently in the background. The frontend retrieves results through two methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time updates: GraphQL subscriptions notify the client when data changes&lt;/li&gt;
&lt;li&gt;Polling: Periodic GraphQL queries check job status at defined intervals&lt;/li&gt;
&lt;/ul&gt;
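
&lt;p&gt;As a rough illustration of the polling approach, a job-status query can be issued against the AppSync GraphQL endpoint with a plain HTTP POST. The endpoint, API key, and GetJob query below are placeholders for your own schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Poll the job status via a standard GraphQL HTTP request
curl -s "$APPSYNC_ENDPOINT" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $APPSYNC_API_KEY" \
  -d '{"query":"query GetJob($id: ID!) { getJob(id: $id) { status } }","variables":{"id":"JOB_ID"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;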

&lt;p&gt;This architecture eliminates the 30-second AppSync resolver timeout limitation while maintaining a responsive user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation with AWS Amplify Gen 2
&lt;/h2&gt;

&lt;p&gt;For Amplify applications using AppSync, AWS provides native support for asynchronous Lambda resolvers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The frontend triggers a GraphQL mutation.&lt;/li&gt;
&lt;li&gt;AppSync invokes the Lambda function in asynchronous mode and immediately returns a task reference.&lt;/li&gt;
&lt;li&gt;The Lambda executes independently.&lt;/li&gt;
&lt;li&gt;Results are written to a datastore.&lt;/li&gt;
&lt;li&gt;The frontend retrieves results via follow-up GraphQL queries, or AppSync subscriptions (real-time updates).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS Documentation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.amplify.aws/react/build-a-backend/data/custom-business-logic/#async-function-handlers" rel="noopener noreferrer"&gt;https://docs.amplify.aws/react/build-a-backend/data/custom-business-logic/#async-function-handlers&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Get Hands-On with Amazon RDS Using AWS’s Getting Started Resource Center</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sat, 02 Aug 2025 10:15:35 +0000</pubDate>
      <link>https://forem.com/ismailg/get-hands-on-with-amazon-rds-using-awss-getting-started-resource-center-4gpa</link>
      <guid>https://forem.com/ismailg/get-hands-on-with-amazon-rds-using-awss-getting-started-resource-center-4gpa</guid>
      <description>&lt;p&gt;Understanding Amazon RDS (Relational Database Service) is essential for anyone seeking to gain expertise in cloud technology.  You can't beat getting your hands on some real-world experience with managed databases, cloud-native application deployment, or even just learning the ropes of Amazon Web Services (AWS) certification.&lt;/p&gt;

&lt;p&gt;Fortunately, the 'Getting Started Resource Center' on AWS provides a curated set of practical lessons tailored to RDS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1z1eh0h6n9lec7vizpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1z1eh0h6n9lec7vizpq.png" alt=" " width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS RDS (Relational Database Service)
&lt;/h2&gt;

&lt;p&gt;Amazon RDS is a managed relational database service provided by AWS (Amazon Web Services). It lets users quickly set up, operate, and scale databases in the cloud without worrying about the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;Launching and managing cloud-based relational databases is easy with Amazon RDS. It supports multiple engines, including MariaDB, SQL Server, PostgreSQL, and MySQL, and it automates tedious tasks such as backups, scaling, replication, and patching.&lt;/p&gt;
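
&lt;p&gt;As a small illustration, provisioning a Free Tier–sized MySQL instance takes a single CLI call. The identifier and credentials here are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a minimal MySQL instance (Free Tier-eligible class)
aws rds create-db-instance \
  --db-instance-identifier my-demo-db \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --master-username admin \
  --master-user-password 'CHANGE_ME' \
  --allocated-storage 20
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;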

&lt;p&gt;Working knowledge of RDS enables you to handle:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Secure, scalable database provisioning&lt;/li&gt;
&lt;li&gt;Redundancy and high availability&lt;/li&gt;
&lt;li&gt;Performance monitoring and automation&lt;/li&gt;
&lt;li&gt;Managing a database instance and integrating third-party applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unless you have hands-on experience launching, connecting to, and managing a database, these concepts can remain abstract. This is where AWS's practical guides prove their value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hands-On RDS Tutorials Currently Available
&lt;/h2&gt;

&lt;p&gt;As of now, AWS offers three dedicated hands-on labs for Amazon RDS, each addressing a key learning scenario:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/getting-started/hands-on/create-mysql-db/?ref=gsrchandson" rel="noopener noreferrer"&gt;Create and Connect to a MySQL Database&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Ideal for beginners&lt;/li&gt;
&lt;li&gt;Learn how to launch a MySQL RDS instance, configure access, and connect with a client&lt;/li&gt;
&lt;li&gt;Free Tier–eligible&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/getting-started/hands-on/create-microsoft-sql-db/?ref=gsrchandson&amp;amp;id=updated" rel="noopener noreferrer"&gt;Create and Connect to a Microsoft SQL Server Database&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Similar structure, but uses SQL Server as the database engine&lt;/li&gt;
&lt;li&gt;Great for Windows-centric or enterprise developers&lt;/li&gt;
&lt;li&gt;Learn connectivity, security, and basic DB management&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/getting-started/hands-on/amazon-rds-backup-restore-using-aws-backup/?ref=gsrchandson&amp;amp;id=itprohandson" rel="noopener noreferrer"&gt;Amazon RDS Backup &amp;amp; Restore Using AWS Backup&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Learn how to create an on-demand backup job for an Amazon RDS database&lt;/li&gt;
&lt;li&gt;Practice backup planning and restore workflows&lt;/li&gt;
&lt;li&gt;Valuable for DevOps and system reliability engineers&lt;/li&gt;
&lt;/ul&gt;
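
&lt;p&gt;The on-demand backup from the third tutorial can also be triggered from the CLI. A sketch, with the resource and role ARNs as placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start an on-demand backup of an RDS instance into the Default vault
aws backup start-backup-job \
  --backup-vault-name Default \
  --resource-arn arn:aws:rds:us-east-1:ACCOUNT_ID:db:my-demo-db \
  --iam-role-arn arn:aws:iam::ACCOUNT_ID:role/service-role/AWSBackupDefaultServiceRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;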

&lt;h2&gt;
  
  
  Why AWS Hands-on Tutorials Are Valuable
&lt;/h2&gt;

&lt;p&gt;While the number of RDS-related hands-on tutorials is currently limited, they cover core operational skills that are widely applicable:&lt;/p&gt;

&lt;p&gt;Almost every cloud project requires creating and connecting to a database, and in production settings, data availability depends on reliable backup and restore processes.&lt;/p&gt;

&lt;p&gt;Working through the Microsoft SQL Server configuration also builds foundational skills that transfer to managing other database engines.&lt;/p&gt;

&lt;p&gt;Don’t just read about RDS: build with it. Start here:&lt;br&gt;
&lt;a href="https://aws.amazon.com/getting-started/hands-on/amazon-rds-backup-restore-using-aws-backup/?ref=gsrchandson&amp;amp;id=itprohandson" rel="noopener noreferrer"&gt;https://aws.amazon.com/getting-started/hands-on/amazon-rds-backup-restore-using-aws-backup/?ref=gsrchandson&amp;amp;id=itprohandson&lt;/a&gt;&lt;/p&gt;

</description>
      <category>cloudcomputing</category>
      <category>aws</category>
      <category>database</category>
    </item>
    <item>
      <title>Using SSL with a PostgreSQL DB Instance</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sun, 15 Jun 2025 13:22:17 +0000</pubDate>
      <link>https://forem.com/ismailg/using-ssl-with-a-postgresql-db-instance-10e9</link>
      <guid>https://forem.com/ismailg/using-ssl-with-a-postgresql-db-instance-10e9</guid>
      <description>&lt;p&gt;Protecting any app that deals with sensitive information means making sure that it is safe while it is being sent. When you host PostgreSQL on Amazon RDS, it enables Secure Sockets Layer (SSL) connections. &lt;/p&gt;

&lt;p&gt;This means that data transfers between your app and the database can be protected. This makes sure that private information is safe from being intercepted or changed while it is being sent.&lt;/p&gt;

&lt;p&gt;This post will show you how to use SSL with a PostgreSQL DB instance on Amazon RDS, including what you need to do first, how to set it up, and the best ways to do so.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling SSL on Your RDS PostgreSQL Instance
&lt;/h2&gt;

&lt;p&gt;Amazon RDS for PostgreSQL supports SSL by default. However, enforcing SSL and configuring your client correctly require a few additional steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Check the SSL Configuration
&lt;/h3&gt;

&lt;p&gt;Go to your RDS instance in the AWS Console and review the associated parameter group. If you are using PostgreSQL version 15 or newer, rds.force_ssl may be enforced by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx63knzge7ya16t5l8zyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx63knzge7ya16t5l8zyd.png" alt=" " width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to RDS &amp;gt; Databases &amp;gt; [your database] &amp;gt; Configuration&lt;/p&gt;

&lt;p&gt;Open the linked Parameter group&lt;/p&gt;

&lt;p&gt;Find the rds.force_ssl parameter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If set to 1, SSL is required.&lt;/li&gt;
&lt;li&gt;If set to 0, SSL is optional.&lt;/li&gt;
&lt;/ul&gt;
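
&lt;p&gt;The same check can be done from the CLI; the parameter group name below is a placeholder for the one attached to your instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Inspect the current value of rds.force_ssl in the parameter group
aws rds describe-db-parameters \
  --db-parameter-group-name default.postgres14 \
  --query "Parameters[?ParameterName=='rds.force_ssl'].[ParameterName,ParameterValue]" \
  --output table
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;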

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqufyqgtspwk5jyzhkeir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqufyqgtspwk5jyzhkeir.png" alt=" " width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. How to Enforce SSL in RDS (If SSL is not Enforced in RDS)
&lt;/h3&gt;

&lt;p&gt;If the rds.force_ssl parameter is 0, you must set it to 1. Default parameter groups in AWS RDS are read-only and cannot be modified, so to set rds.force_ssl = 1 you must create a custom parameter group.&lt;/p&gt;

&lt;h4&gt;
  
  
  Create a Custom Parameter Group:
&lt;/h4&gt;


&lt;p&gt;Go to RDS → Parameter groups → Click “Create parameter group”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fto77az61mhavn87ao13n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fto77az61mhavn87ao13n.png" alt=" " width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fill in the fields as follows:

&lt;ul&gt;
&lt;li&gt;Parameter group family: postgres14&lt;/li&gt;
&lt;li&gt;Group name: custom-postgres14-ssl&lt;/li&gt;
&lt;li&gt;Description: Enable SSL for PostgreSQL 14&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Click Create&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  Set rds.force_ssl = 1 in your new parameter group:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Select your newly created parameter group&lt;/li&gt;
&lt;li&gt;Click “Edit parameters”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xlbyob8rgouf82awxl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xlbyob8rgouf82awxl3.png" alt=" " width="800" height="313"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search for rds.force_ssl and change its value from 0 ➝ 1&lt;/li&gt;
&lt;li&gt;Click Save
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu66g9g03nn2mcksbqheo.png" alt=" " width="800" height="396"&gt;
&lt;/li&gt;
&lt;/ul&gt;
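
&lt;p&gt;If you prefer the CLI, the same two steps (create the group, then set the parameter) look roughly like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create the custom parameter group
aws rds create-db-parameter-group \
  --db-parameter-group-name custom-postgres14-ssl \
  --db-parameter-group-family postgres14 \
  --description "Enable SSL for PostgreSQL 14"

# Set rds.force_ssl = 1 in the new group
aws rds modify-db-parameter-group \
  --db-parameter-group-name custom-postgres14-ssl \
  --parameters "ParameterName=rds.force_ssl,ParameterValue=1,ApplyMethod=immediate"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;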

&lt;h4&gt;
  
  
  Attach the custom parameter group to your RDS instance:
&lt;/h4&gt;

&lt;p&gt;Go to RDS → Databases → Click your instance (database-1)&lt;br&gt;
Click “Modify”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb3crpljtv42epdesip9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpb3crpljtv42epdesip9.png" alt=" " width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the DB parameter group dropdown, select the custom group: &lt;br&gt;
custom-postgres14-ssl&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ycr2hkguku3u0ciz9h7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ycr2hkguku3u0ciz9h7.png" alt=" " width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll to the bottom and choose 'Apply immediately'.&lt;/p&gt;

&lt;p&gt;Click “Continue” and then “Apply changes”&lt;/p&gt;
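
&lt;p&gt;The equivalent CLI command to attach the group (using the instance name from this example) is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Attach the custom parameter group and apply the change immediately
aws rds modify-db-instance \
  --db-instance-identifier database-1 \
  --db-parameter-group-name custom-postgres14-ssl \
  --apply-immediately
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;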

&lt;h3&gt;
  
  
  3. Download the AWS RDS Root Certificate
&lt;/h3&gt;

&lt;p&gt;To establish a secure SSL connection, you must download the root certificate authority (CA) file from AWS.&lt;/p&gt;

&lt;p&gt;You can find the latest region-specific certificates here:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Concepts.General.SSL.html" rel="noopener noreferrer"&gt;Using SSL with Amazon RDS PostgreSQL&lt;/a&gt;&lt;/p&gt;
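
&lt;p&gt;For example, the combined certificate bundle can be fetched with curl (the truststore URL is the one AWS documents for RDS certificates):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Download the global RDS CA bundle to the current directory
curl -O https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;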

&lt;h2&gt;
  
  
  Connecting with SSL
&lt;/h2&gt;

&lt;p&gt;Once you’ve downloaded the certificate, you can connect to your RDS PostgreSQL instance using several methods.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using a GUI Tool (e.g., DBeaver)
&lt;/h3&gt;

&lt;p&gt;In your tool of choice, create a new PostgreSQL connection:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the RDS endpoint, port (5432), database name, and credentials.&lt;/li&gt;
&lt;li&gt;Under SSL settings:

&lt;ul&gt;
&lt;li&gt;Enable SSL (usually a checkbox).&lt;/li&gt;
&lt;li&gt;Set SSL Mode to require or verify-ca.&lt;/li&gt;
&lt;li&gt;Upload the RDS Root CA certificate you previously downloaded.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Using Terminal (psql CLI)
&lt;/h3&gt;

&lt;p&gt;You can also connect securely via the terminal:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;psql "host=mydb.xxxxxx.rds.amazonaws.com port=5432 dbname=mydb user=myuser password=mypass sslmode=verify-full sslrootcert=rds-ca-2019-root.pem"&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;sslmode=verify-full: ensures both certificate and hostname validation.&lt;/li&gt;
&lt;li&gt;sslrootcert: path to the downloaded certificate file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This configuration helps ensure both confidentiality and integrity of the data transmitted.&lt;/p&gt;
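
&lt;p&gt;Once connected, you can confirm the session is actually encrypted by querying PostgreSQL's pg_stat_ssl view (reusing the example connection string from above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Ask the server whether the current backend connection uses SSL
psql "host=mydb.xxxxxx.rds.amazonaws.com port=5432 dbname=mydb user=myuser sslmode=verify-full sslrootcert=rds-ca-2019-root.pem" \
  -c "SELECT ssl, version, cipher FROM pg_stat_ssl WHERE pid = pg_backend_pid();"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;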

&lt;p&gt;For more information and certificate downloads, refer to the official documentation:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL.Concepts.General.SSL.html" rel="noopener noreferrer"&gt;Using SSL with Amazon RDS PostgreSQL&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rds</category>
      <category>postgres</category>
    </item>
    <item>
      <title>Migrating from MongoDB to Amazon DocumentDB</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sat, 17 May 2025 12:22:46 +0000</pubDate>
      <link>https://forem.com/ismailg/migrating-from-mongodb-to-amazon-documentdb-4eo3</link>
      <guid>https://forem.com/ismailg/migrating-from-mongodb-to-amazon-documentdb-4eo3</guid>
      <description>&lt;p&gt;Modern applications today often use document databases. For years, MongoDB has been the preferred choice for developers to build applications using JSON-like document data structures. However, a move to a fully managed service like Amazon DocumentDB is attractive when workloads increase.&lt;/p&gt;

&lt;p&gt;Built from the ground up, Amazon DocumentDB (with MongoDB compatibility) is highly available, robust, and scalable. It supports common MongoDB drivers and tools, making it easy to move teams without changing application code. This article will guide you step-by-step through migrating data from MongoDB to Amazon DocumentDB using the AWS Database Migration Service (DMS).&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin the migration, make sure you have the following:&lt;/p&gt;

&lt;p&gt;An Amazon DocumentDB cluster already created and available in your AWS account.&lt;/p&gt;

&lt;p&gt;We will use AWS Database Migration Service (DMS) to migrate data from a MongoDB database to Amazon DocumentDB. This will work with minimal downtime and will not require the export and import of collections manually.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create an AWS DMS Replication Instance for MongoDB Migration
&lt;/h2&gt;

&lt;p&gt;The replication instance performs the actual migration. It connects to the source and target endpoints, extracts the data, transforms it if necessary, and inserts it into the destination.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Access the AWS DMS Console and navigate to the Replication instances section.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhntl66wungdg50wjzmcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhntl66wungdg50wjzmcr.png" alt=" " width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
Select "Create replication instance."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3r411hdu1jli8375hdu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp3r411hdu1jli8375hdu.png" alt=" " width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pick an instance identifier and select an instance class, such as dms.t3.medium.&lt;/li&gt;
&lt;li&gt;Select the appropriate virtual private cloud (VPC). This must be the same VPC in which your DocumentDB cluster resides.&lt;/li&gt;
&lt;li&gt;Enable Multi-AZ if high availability is required.&lt;/li&gt;
&lt;li&gt;Click Create and wait for the instance to reach the Available status.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create Source and Target Endpoints for MongoDB Migration
&lt;/h2&gt;

&lt;p&gt;Once the replication instance is ready, create source and target endpoints to define where the data will be moved from and to.&lt;/p&gt;

&lt;p&gt;Source Endpoint (MongoDB):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8wn7we4ycdd4icgu1bc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu8wn7we4ycdd4icgu1bc.png" alt=" " width="635" height="774"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Endpoint type: Source&lt;/li&gt;
&lt;li&gt;Engine: MongoDB&lt;/li&gt;
&lt;li&gt;Server name: MongoDB hostname or IP address&lt;/li&gt;
&lt;li&gt;Port: 27017&lt;/li&gt;
&lt;li&gt;Database name: your MongoDB database (e.g., zips-db)&lt;/li&gt;
&lt;li&gt;Authentication mode: default&lt;/li&gt;
&lt;li&gt;Username and password: credentials for your MongoDB instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Target Endpoint (Amazon DocumentDB):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwnd3dshfrokf479xf5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwnd3dshfrokf479xf5g.png" alt=" " width="620" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Endpoint type: Target&lt;/li&gt;
&lt;li&gt;Engine: Amazon DocumentDB (with MongoDB compatibility)&lt;/li&gt;
&lt;li&gt;Server name: your DocumentDB cluster endpoint (e.g., mydocdbcluster.cluster-xxxxxx.docdb.amazonaws.com)&lt;/li&gt;
&lt;li&gt;Port: 27017&lt;/li&gt;
&lt;li&gt;Database name: target database name&lt;/li&gt;
&lt;li&gt;Authentication: enter your DocumentDB admin username and password&lt;/li&gt;
&lt;li&gt;TLS mode: verify-full&lt;/li&gt;
&lt;li&gt;TLS CA file: upload global-bundle.pem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To add the TLS CA file, first download the global CA certificate. You can find the TLS certificate download command directly in the Amazon DocumentDB console under your cluster's Connectivity &amp;amp; security tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj7111fcohizpztwq7eo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcj7111fcohizpztwq7eo.png" alt=" " width="800" height="524"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;After both endpoints are created, test the connections to verify that DMS can reach each database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo750r6xizoquvfggl9q4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo750r6xizoquvfggl9q4.png" alt=" " width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Create and Run a MongoDB Migration Task
&lt;/h2&gt;

&lt;p&gt;Now, you can define and launch the migration task.&lt;/p&gt;

&lt;p&gt;In the DMS Console, go to Database migration tasks and click Create task.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq4nfv97t5i2teef4io5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvq4nfv97t5i2teef4io5.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose a task identifier, and select your previously created replication instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2pwsrplaa5d52s0gavm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs2pwsrplaa5d52s0gavm.png" alt=" " width="661" height="489"&gt;&lt;/a&gt;&lt;br&gt;
For migration type, choose one of the following:&lt;/p&gt;

&lt;p&gt;Migrate existing data only&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3yysjdrkywhtdsp2g5y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3yysjdrkywhtdsp2g5y.png" alt=" " width="667" height="699"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Important: Turn off data validation. This feature is not supported for MongoDB endpoints.&lt;/p&gt;

&lt;p&gt;Under Table mappings, click Add new selection rule (at least one rule is required):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few5cvi99n7jrkh5ujyqh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Few5cvi99n7jrkh5ujyqh.png" alt=" " width="608" height="624"&gt;&lt;/a&gt;&lt;br&gt;
Schema: your MongoDB database name (e.g., zips-db)&lt;/p&gt;

&lt;p&gt;Source table name: % &lt;/p&gt;

&lt;p&gt;Action: Include&lt;/p&gt;
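&lt;p&gt;For reference, the equivalent selection rule in JSON editing mode looks like this (a sketch; zips-db is the example database name from above):&lt;/p&gt;

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-all-collections",
      "object-locator": {
        "schema-name": "zips-db",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```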

&lt;p&gt;Choose to start the task automatically on create.&lt;/p&gt;

&lt;p&gt;Once created, the task will begin migrating your data to Amazon DocumentDB. You can monitor progress via the DMS console and view logs for detailed insights.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzre4377l875fsu784lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzre4377l875fsu784lw.png" alt=" " width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;
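&lt;p&gt;Once the task reports completion, it is worth spot-checking the result. The sketch below compares per-collection document counts between source and target; pymongo, the connection URIs, and the database name are assumptions, not part of the DMS setup above:&lt;/p&gt;

```python
# Hedged sketch: compare per-collection document counts after migration.
def collection_counts(uri, db_name):
    """Return {collection: document count} for one database."""
    from pymongo import MongoClient  # assumed installed: pip install pymongo
    db = MongoClient(uri)[db_name]
    return {c: db[c].estimated_document_count()
            for c in db.list_collection_names()}

def diff_counts(source_counts, target_counts):
    """Return collections whose counts differ, as {name: (source, target)}."""
    return {c: (source_counts.get(c, 0), target_counts.get(c, 0))
            for c in set(source_counts) | set(target_counts)
            if source_counts.get(c, 0) != target_counts.get(c, 0)}

# Example with placeholder URIs:
# src = collection_counts("mongodb://user:pass@mongo-host:27017", "zips-db")
# tgt = collection_counts(
#     "mongodb://user:pass@mydocdbcluster.cluster-xxxxxx.docdb.amazonaws.com:27017"
#     "/?tls=true", "zips-db")
# assert diff_counts(src, tgt) == {}, "counts differ!"
```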

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;AWS DMS makes migrating from MongoDB to Amazon DocumentDB straightforward. Following the steps outlined above helps you reduce downtime and move your document-based workloads to a fully managed environment that grows with your requirements. DocumentDB gives you AWS security, scalability, and reliability benefits without sacrificing MongoDB compatibility.&lt;/p&gt;

&lt;p&gt;If integration with other AWS services, operational simplicity, and long-term maintainability matter to you, this migration can be a natural next step in evolving your architecture.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudcomputing</category>
      <category>mongodb</category>
    </item>
    <item>
      <title>How to Set Up AWS EFS Static Provisioning Across Multiple Kubernetes Namespaces</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Fri, 11 Apr 2025 15:57:48 +0000</pubDate>
      <link>https://forem.com/ismailg/how-to-set-up-aws-efs-static-provisioning-across-multiple-kubernetes-namespaces-58i2</link>
      <guid>https://forem.com/ismailg/how-to-set-up-aws-efs-static-provisioning-across-multiple-kubernetes-namespaces-58i2</guid>
<description>&lt;p&gt;Bitnami PostgreSQL is a widely used container image that runs as a non-root user by default. Persistent storage, however, especially storage shared between environments such as dev and test, can be tricky to get right. In this blog post, I'll walk you through how I used AWS EFS static provisioning to share storage between two namespaces with Bitnami PostgreSQL running on Kubernetes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Static Provisioning?
&lt;/h2&gt;

&lt;p&gt;While dynamic provisioning is easy, static provisioning offers full control. It lets you define a PersistentVolume (PV) by hand that maps to an AWS EFS file system or access point, which is ideal when multiple environments (e.g., dev and test) share the same storage. Since we're using static provisioning, there's no need to define a StorageClass for EFS.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Full control over PersistentVolume (PV) setup.&lt;/li&gt;
&lt;li&gt;A way to reuse the same EFS volume across different namespaces.&lt;/li&gt;
&lt;li&gt;Simpler debugging for permission or access issues.&lt;/li&gt;
&lt;li&gt;No need to define a StorageClass&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What We’re Building
&lt;/h2&gt;

&lt;p&gt;A PostgreSQL setup running in two separate namespaces: dev and test&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both environments mount the same EFS volume&lt;/li&gt;
&lt;li&gt;PostgreSQL data is shared &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabd75wnshl0ms93ru5il.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabd75wnshl0ms93ru5il.png" alt=" " width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you begin:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A running Kubernetes cluster (K3s, EKS, etc.)&lt;/li&gt;
&lt;li&gt;An AWS EFS file system already created&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;

&lt;p&gt;Your repo should look like this:&lt;br&gt;
deployment-files/&lt;br&gt;
├── deployment-dev/&lt;br&gt;
│   └── pv-dev.yml, pvc-dev.yml, postgres.yml&lt;br&gt;
└── deployment-test/&lt;br&gt;
    └── pv-test.yml, pvc-test.yml, postgres*.yml&lt;/p&gt;

&lt;p&gt;My GitLab repo: &lt;a href="https://gitlab.com/samueldeniz80/aws-efs-static-provisioning-bitnami-postgresqls/-/tree/main/deployment-files?ref_type=heads" rel="noopener noreferrer"&gt;https://gitlab.com/samueldeniz80/aws-efs-static-provisioning-bitnami-postgresqls/-/tree/main/deployment-files?ref_type=heads&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Create EFS Access Point
&lt;/h2&gt;

&lt;p&gt;To prevent permission issues when mounting EFS across namespaces, create an Access Point from the AWS Console with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User ID: 1001&lt;/li&gt;
&lt;li&gt;Group ID: 1001&lt;/li&gt;
&lt;li&gt;Permissions: 0775&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvnmwktdycw89v59zupm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvnmwktdycw89v59zupm.png" alt=" " width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Install EFS CSI Driver in your cluster:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.7"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also use Helm for EFS CSI Driver installation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Define the PV and PVC
&lt;/h2&gt;

&lt;p&gt;Set the PV's volumeHandle with both the EFS file system ID and the access point ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volumeHandle: fs-&amp;lt;file-system-id&amp;gt;::fsap-&amp;lt;access-point-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Leave storageClassName empty.&lt;/p&gt;
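&lt;p&gt;As a hedged sketch (the name, capacity, and IDs are placeholders), such a PV might look like:&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv-dev               # placeholder name
spec:
  capacity:
    storage: 5Gi                 # required field; EFS itself is elastic
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""           # empty for static provisioning
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-&amp;lt;file-system-id&amp;gt;::fsap-&amp;lt;access-point-id&amp;gt;
```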

&lt;p&gt;pv-dev.yml:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8tyjyxur4l7zs8ioxnu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8tyjyxur4l7zs8ioxnu.png" alt=" " width="800" height="392"&gt;&lt;/a&gt;&lt;br&gt;
For pvc also leave storageClassName empty.&lt;/p&gt;

&lt;p&gt;pvc-dev.yml:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86gwr6xk4p62i9v068uv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86gwr6xk4p62i9v068uv.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;
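&lt;p&gt;A matching PVC sketch (names are placeholders; volumeName must equal your PV's metadata.name):&lt;/p&gt;

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc-dev              # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""           # empty, matching the PV
  volumeName: efs-pv-dev         # must equal your PV's metadata.name
  resources:
    requests:
      storage: 5Gi
```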

&lt;p&gt;Create the PV and PVC in the same way for the test namespace. The test namespace's PV also points to the same EFS access point.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 3: Configure PostgreSQL Deployment
&lt;/h2&gt;

&lt;p&gt;Make sure the deployment uses fsGroup: 1001 in its securityContext to match the EFS access point permissions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;securityContext:
  fsGroup: 1001
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Step 4: Deploy Namespaces
&lt;/h2&gt;

&lt;p&gt;Deploy to dev:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace dev
kubectl apply -f deployment-files/deployment-dev/ -n dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the logs to verify that the PostgreSQL PV and PVC are bound, and the postgres pod is running.&lt;/p&gt;

&lt;p&gt;Deploy to test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl create namespace test
kubectl apply -f deployment-files/deployment-test/ -n test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the logs to verify that the PostgreSQL PV and PVC are bound, and the postgres pod is running.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hmiv1q1t9qw0yufeybe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hmiv1q1t9qw0yufeybe.png" alt=" " width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Outcome
&lt;/h2&gt;

&lt;p&gt;You now have a shared EFS volume accessed by PostgreSQL pods running in different namespaces.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automate Your Python API with AWS Lambda and EventBridge</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sun, 23 Mar 2025 07:02:54 +0000</pubDate>
      <link>https://forem.com/ismailg/automate-your-python-api-with-aws-lambda-and-eventbridge-1g8i</link>
      <guid>https://forem.com/ismailg/automate-your-python-api-with-aws-lambda-and-eventbridge-1g8i</guid>
<description>&lt;p&gt;Serverless architecture is an excellent option when you want to run scheduled tasks without worrying about infrastructure. In this article, I will demonstrate how I used AWS Lambda and Amazon EventBridge to automate a Python-based API update script that runs on a regular schedule.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use Case
&lt;/h2&gt;

&lt;p&gt;Let's assume you have a website or app that needs to pull information from a third-party API at regular intervals. The information could be prices, exchange rates, weather, or analytics results. You already have a Python script that performs this update.&lt;/p&gt;

&lt;p&gt;The objective is to run this script on a regular schedule without managing any servers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Create Your Lambda Function
&lt;/h2&gt;

&lt;p&gt;The first step is to prepare and package your Python script so that AWS Lambda can run it.&lt;/p&gt;

&lt;p&gt;Your working directory should contain the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── lambda_function.py
├── requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;lambda_function.py:&lt;/strong&gt;&lt;br&gt;
Your main Python script must be named lambda_function.py. This conventional name lets AWS Lambda locate and run your function correctly.&lt;/p&gt;

&lt;p&gt;You must also define lambda_handler(event, context). The lambda_handler() function is the entry point that AWS Lambda calls directly whenever your function is invoked.&lt;/p&gt;
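&lt;p&gt;As a minimal sketch (the API URL and the response handling are placeholders, not my actual update script), lambda_function.py might look like:&lt;/p&gt;

```python
# Minimal lambda_function.py sketch; API_URL is a hypothetical endpoint.
import json
import urllib.request

API_URL = "https://api.example.com/rates"  # placeholder third-party API

def lambda_handler(event, context):
    # Fetch the latest data from the third-party API.
    with urllib.request.urlopen(API_URL, timeout=10) as resp:
        data = json.loads(resp.read())
    # ... push the update to your site or datastore here ...
    return {"statusCode": 200, "body": json.dumps({"fetched": len(data)})}
```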

&lt;p&gt;&lt;strong&gt;requirements.txt:&lt;/strong&gt;&lt;br&gt;
requirements.txt lists all the dependencies that your code needs.&lt;/p&gt;

&lt;p&gt;After you've structured your folder and written your Lambda function code, the next step is to package your Python script along with its dependencies. AWS Lambda does not automatically install Python packages from requirements.txt — you must include all dependencies in your deployment package:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install dependencies locally into your project folder:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install -r requirements.txt -t .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Create a Deployment Package (ZIP):
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zip -r lambda_shopify_update.zip .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Double check that you are zipping up the contents of the folder, not the folder itself. That is important — AWS needs to see the handler file (i.e., the lambda_function.py) in the root of the ZIP file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2. Upload to AWS Lambda
&lt;/h2&gt;

&lt;p&gt;Now go to the AWS Console:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open AWS Lambda&lt;/li&gt;
&lt;li&gt;Click “Create function” &lt;/li&gt;
&lt;li&gt;Choose “Upload from → .zip file”&lt;/li&gt;
&lt;li&gt;Upload your lambda_shopify_update.zip&lt;/li&gt;
&lt;li&gt;Test your lambda function&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9wj6xrqu9kzea58mm31.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9wj6xrqu9kzea58mm31.png" alt=" " width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Automating with EventBridge
&lt;/h2&gt;

&lt;p&gt;Once your Lambda function is working correctly, the next step is to automate its execution using Amazon EventBridge. EventBridge is a serverless scheduler that lets you trigger your Lambda function on a regular schedule.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd8o8pu9fq2gpmnts5qh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbd8o8pu9fq2gpmnts5qh.png" alt=" " width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the Amazon EventBridge section in the AWS Console.&lt;/li&gt;
&lt;li&gt;Click “Create rule” to set up an automatic trigger for your Lambda function.&lt;/li&gt;
&lt;li&gt;Under Rule type, choose “Schedule” to trigger your function based on time (rather than an event).&lt;/li&gt;
&lt;li&gt;Select “Rate-based schedule” to run your Lambda at a regular interval.&lt;/li&gt;
&lt;li&gt;Alternatively, you can use “Cron-based schedule” for more specific timing (e.g., every day at 08:00).&lt;/li&gt;
&lt;/ul&gt;
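&lt;p&gt;For reference, the two schedule styles use expressions like these (example values, not the ones from my setup):&lt;/p&gt;

```
rate(1 hour)        # rate-based: run once every hour
cron(0 8 * * ? *)   # cron-based: run every day at 08:00 UTC
```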

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhb41l0c3qpyc0qo4qyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhb41l0c3qpyc0qo4qyd.png" alt=" " width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In the Target section:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose AWS Lambda.&lt;/li&gt;
&lt;li&gt;Select your deployed Lambda function.&lt;/li&gt;
&lt;li&gt;Click Next.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Review your configuration, and click “Create rule”.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Your Lambda function is now scheduled to run automatically at the interval you defined — no manual execution needed.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>eventbridge</category>
      <category>automation</category>
    </item>
    <item>
      <title>Creating an AWS DMS Migration Task</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Sun, 09 Mar 2025 16:23:54 +0000</pubDate>
      <link>https://forem.com/ismailg/creating-an-aws-dms-migration-task-4ka9</link>
      <guid>https://forem.com/ismailg/creating-an-aws-dms-migration-task-4ka9</guid>
      <description>&lt;p&gt;&lt;strong&gt;Migrating Data from Local SQL Server to AWS RDS PostgreSQL Using AWS DMS - II&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In my last post, I guided you through setting up AWS Database Migration Service (DMS) to move data from an on-premises SQL Server to an AWS RDS PostgreSQL database instance. We went over establishing the AWS environment, configuring the source database, and building the required AWS DMS resources.&lt;/p&gt;

&lt;p&gt;In this article, we will examine the migration process itself. With both source and target endpoints configured, the next step is to create a migration task in AWS DMS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating and Running the AWS DMS Migration Task
&lt;/h2&gt;

&lt;p&gt;With both the source (SQL Server) and target (PostgreSQL) endpoints configured (if not, see &lt;a href="https://dev.to/ismailg/migrating-data-from-local-sql-server-to-aws-rds-postgresql-using-aws-dms-58k4"&gt;Migrating Data from Local SQL Server to AWS RDS PostgreSQL Using AWS DMS - I&lt;/a&gt;), it's time to create and execute a migration task.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating the Migration Task
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Navigate to Database Migration Tasks:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Go to AWS DMS Console &amp;gt; Database migration tasks &amp;gt; Create task.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Configure Task Settings:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F409akffllqelrpr3mosy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F409akffllqelrpr3mosy.png" alt=" " width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Task Identifier:&lt;/strong&gt; Just give a name, for example, sqlserver-to-postgres-migration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replication instance:&lt;/strong&gt; Select the replication instance configured earlier.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Source and target database:&lt;/strong&gt; Choose the previously configured SQL Server source and PostgreSQL target endpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Migration type:&lt;/strong&gt; Select "Migrate existing data" to perform a full load of the existing data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;3. Task Settings:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9sw7705eycqzzzutlea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg9sw7705eycqzzzutlea.png" alt=" " width="800" height="784"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Editing mode:&lt;/strong&gt; Wizard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Target table preparation mode:&lt;/strong&gt; Choose "Drop tables on target" if you want DMS to drop and recreate the tables on the target. Choose "Do nothing" to leave the existing structure and data untouched; DMS then only adds new rows without altering or deleting current data, which is useful when pre-existing data must be preserved.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Include LOB columns in replication:&lt;/strong&gt; Enable this option if your tables contain large object (LOB) data types.​&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable validation:&lt;/strong&gt; When you enable this option, AWS DMS verifies the row counts and checksums of the source and target databases and confirms that they are equal.&lt;/li&gt;
&lt;/ul&gt;
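&lt;p&gt;If you switch the task to JSON editing mode, the same choices map onto task-settings keys such as these (a partial sketch, not the full settings document):&lt;/p&gt;

```json
{
  "FullLoadSettings": {
    "TargetTablePrepMode": "DROP_AND_CREATE"
  },
  "ValidationSettings": {
    "EnableValidation": true
  }
}
```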

&lt;p&gt;&lt;strong&gt;4. Table Mapping and Transformations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddilcvzi4iiwmdohfsvz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddilcvzi4iiwmdohfsvz.png" alt=" " width="800" height="867"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Editing mode:&lt;/strong&gt; Wizard&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Selection rules:&lt;/strong&gt; Define at least one selection rule to specify the schemas and tables you want to be included or excluded from the migration. Use % in the Source table name to include all tables.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transformation rules (optional):&lt;/strong&gt; Set transformation rules, if you need to rename schemas, tables, or columns.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;5. Premigration assessment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4mgmrjifngmjvao9och.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4mgmrjifngmjvao9och.png" alt=" " width="800" height="324"&gt;&lt;/a&gt;&lt;br&gt;
A premigration assessment warns you of potential migration issues before you start your migration task. For this tutorial, I skipped the premigration assessment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Running the Migration Task&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Click Create task, then Start the migration.&lt;/p&gt;

&lt;p&gt;AWS DMS will begin extracting data from SQL Server, transforming it, and loading it into PostgreSQL.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Migrating Data from Local SQL Server to AWS RDS PostgreSQL Using AWS DMS - I</title>
      <dc:creator>Ismail G.</dc:creator>
      <pubDate>Thu, 27 Feb 2025 09:09:24 +0000</pubDate>
      <link>https://forem.com/ismailg/migrating-data-from-local-sql-server-to-aws-rds-postgresql-using-aws-dms-58k4</link>
      <guid>https://forem.com/ismailg/migrating-data-from-local-sql-server-to-aws-rds-postgresql-using-aws-dms-58k4</guid>
<description>&lt;p&gt;Data migration is an essential activity for organizations moving from on-premises database technology to cloud offerings such as AWS RDS. A very helpful service that simplifies this task is AWS Database Migration Service (DMS). With AWS DMS, for example, you can migrate data from an on-premises Microsoft SQL Server database to an AWS RDS PostgreSQL instance. In this post, I will explain how to prepare SQL Server and configure an AWS DMS replication instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparing the SQL Server for AWS DMS
&lt;/h2&gt;

&lt;p&gt;Before setting up AWS DMS, it is essential to configure your local SQL Server to allow external connections and ensure proper networking settings. &lt;/p&gt;

&lt;h3&gt;
  
  
  1. Enabling TCP/IP Connections in SQL Server
&lt;/h3&gt;

&lt;p&gt;By default, SQL Server does not allow remote connections unless TCP/IP is explicitly enabled. Follow these steps to enable TCP/IP connections:&lt;/p&gt;

&lt;p&gt;Open SQL Server Configuration Manager.&lt;/p&gt;

&lt;p&gt;Navigate to SQL Server Network Configuration → Protocols for MSSQLSERVER.&lt;/p&gt;

&lt;p&gt;Locate the TCP/IP protocol and right-click to select Enable.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq2wdubyqrbpujb66o5c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyq2wdubyqrbpujb66o5c.png" alt=" " width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Right-click on TCP/IP, select Properties, and navigate to the IP Addresses tab.&lt;/p&gt;

&lt;p&gt;Under the IPAll section, set TCP Port to 1433 (leave TCP Dynamic Ports blank).&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Configuring Windows Firewall Rules
&lt;/h3&gt;

&lt;p&gt;After enabling TCP/IP, you need to allow inbound connections on port 1433 through Windows Firewall:&lt;/p&gt;

&lt;p&gt;Open Windows Defender Firewall with Advanced Security.&lt;/p&gt;

&lt;p&gt;Navigate to Inbound Rules → Add New Rule.&lt;/p&gt;

&lt;p&gt;Select Port and click Next.&lt;/p&gt;

&lt;p&gt;Choose TCP and enter 1433 in the Specific local ports field.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dj5wjbmku17m23a0wrq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5dj5wjbmku17m23a0wrq.png" alt=" " width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Next.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39fmut6g06ipzo3d2l3a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F39fmut6g06ipzo3d2l3a.png" alt=" " width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose 'Allow the connection'.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fm1lq682oorky6ui628.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fm1lq682oorky6ui628.png" alt=" " width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Apply the rule to Domain, Private, and Public profiles.&lt;/p&gt;

&lt;p&gt;Name the rule and complete the setup.&lt;/p&gt;
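&lt;p&gt;The same rule can also be created from the command line with &lt;code&gt;netsh&lt;/code&gt;, which is handy for scripting the setup on several machines. As a sketch, here is a small Python helper that assembles the command (the rule name is a placeholder; run the resulting command in an elevated prompt on the SQL Server machine):&lt;/p&gt;

```python
def firewall_rule_cmd(name, port):
    # Mirrors the wizard steps above: inbound rule, TCP, a specific
    # local port, action Allow, applied to all profiles.
    return [
        "netsh", "advfirewall", "firewall", "add", "rule",
        "name=" + name, "dir=in", "action=allow",
        "protocol=TCP", "localport=" + str(port), "profile=any",
    ]

cmd = firewall_rule_cmd("SQL Server TCP 1433", 1433)
print(" ".join(cmd))
```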

&lt;h3&gt;
  
  
  3. Restarting SQL Server
&lt;/h3&gt;

&lt;p&gt;For the changes to take effect, restart the SQL Server service:&lt;/p&gt;

&lt;p&gt;Open SQL Server Configuration Manager.&lt;/p&gt;

&lt;p&gt;Select SQL Server Services.&lt;/p&gt;

&lt;p&gt;Right-click SQL Server (MSSQLSERVER) and select Restart.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rr2n97801l1wa9vdlw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rr2n97801l1wa9vdlw9.png" alt=" " width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the SQL Server settings are configured, test the connection from another computer again using:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sqlcmd -S IP-Address -U Username -P Password&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If you can connect successfully, your SQL Server is ready for migration.&lt;/p&gt;
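&lt;p&gt;If &lt;code&gt;sqlcmd&lt;/code&gt; fails, it helps to first check whether port 1433 is reachable at the TCP level, separately from SQL authentication. A minimal Python sketch (the IP address below is a placeholder for your SQL Server's address):&lt;/p&gt;

```python
import socket

def port_reachable(host, port, timeout=3.0):
    # Attempt a plain TCP connection; any OSError (refused, timed out,
    # unreachable) means the port is not reachable from this machine.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_reachable("192.0.2.10", 1433, timeout=1.0))  # placeholder IP
```

If this returns False, revisit the TCP/IP and firewall configuration before troubleshooting credentials.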

&lt;h2&gt;
  
  
  Setting Up AWS DMS for Migration
&lt;/h2&gt;

&lt;p&gt;Follow these steps to configure AWS DMS:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Creating a Replication Instance
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9r4yhhplbekffanuxbua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9r4yhhplbekffanuxbua.png" alt=" " width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam0g66d99yd28y0nyy89.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fam0g66d99yd28y0nyy89.png" alt=" " width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the AWS DMS Console.&lt;/p&gt;

&lt;p&gt;Select Replication Instances and click Create Replication Instance.&lt;/p&gt;

&lt;p&gt;Provide a name and choose an appropriate instance class.&lt;/p&gt;

&lt;p&gt;Make the replication instance publicly accessible, since the source SQL Server is hosted on a local computer outside the VPC.&lt;/p&gt;
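&lt;p&gt;For repeatable setups, the same instance can also be created programmatically. A hedged sketch of the settings, assuming boto3 and valid AWS credentials; the identifier, instance class, and storage size are placeholders:&lt;/p&gt;

```python
# Placeholder settings -- adjust the identifier, class, and storage
# to match your workload.
params = {
    "ReplicationInstanceIdentifier": "sqlserver-to-postgres",
    "ReplicationInstanceClass": "dms.t3.medium",
    "AllocatedStorage": 50,
    # Publicly accessible, because the source SQL Server sits
    # outside the VPC on a local computer.
    "PubliclyAccessible": True,
}

# With credentials configured, the actual call would be:
# import boto3
# boto3.client("dms").create_replication_instance(**params)

print(sorted(params))
```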

&lt;h4&gt;
  
  
  Security Group Configuration
&lt;/h4&gt;

&lt;p&gt;DMS relies on security groups to control inbound and outbound traffic between the replication instance and databases. To properly configure the security group:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to EC2 Security Groups in the AWS Console.&lt;/li&gt;
&lt;li&gt;Locate the security group assigned to the replication instance.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Ensure the following rules allow traffic on the database ports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MSSQL (TCP/1433): your local network's firewall must allow inbound traffic from the replication instance, as configured in the Windows Firewall steps above.&lt;/li&gt;
&lt;li&gt;PostgreSQL (TCP/5432): the security group attached to the target AWS RDS PostgreSQL instance must allow inbound traffic from the replication instance's security group.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Add an Outbound Rule that allows all egress traffic. This lets the replication instance initiate connections to both the source and target database endpoints.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
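&lt;p&gt;For reference, inbound rules like these map to the &lt;code&gt;IpPermissions&lt;/code&gt; structure used by the EC2 API. A sketch in Python; the CIDR blocks are placeholders for your SQL Server's IP and your VPC range:&lt;/p&gt;

```python
def ingress_rule(port, cidr, description):
    # One IpPermissions entry in the shape expected by EC2's
    # authorize_security_group_ingress API.
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": cidr, "Description": description}],
    }

rules = [
    ingress_rule(1433, "203.0.113.10/32", "local SQL Server (placeholder)"),
    ingress_rule(5432, "10.0.0.0/16", "VPC CIDR for RDS (placeholder)"),
]
print([r["FromPort"] for r in rules])  # prints [1433, 5432]
```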

&lt;h3&gt;
  
  
  2. Configuring Source Endpoint (SQL Server)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F339q35zv475zkkye2hcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F339q35zv475zkkye2hcr.png" alt=" " width="627" height="727"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose Source Endpoint and enter SQL Server details.&lt;/p&gt;

&lt;p&gt;Set the Endpoint Type to Source.&lt;/p&gt;

&lt;p&gt;Enter the Server Name (IP Address), Port (1433), Username, and Password.&lt;/p&gt;

&lt;p&gt;Click Test Connection and ensure it succeeds.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Configuring Target Endpoint (AWS RDS PostgreSQL)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4kqzjghyl1v331vtu28.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4kqzjghyl1v331vtu28.png" alt=" " width="622" height="697"&gt;&lt;/a&gt;&lt;br&gt;
Navigate to Endpoints and select Create Endpoint.&lt;/p&gt;

&lt;p&gt;Choose Target Endpoint and enter AWS RDS PostgreSQL details.&lt;/p&gt;

&lt;p&gt;Set the Endpoint Type to Target.&lt;/p&gt;

&lt;p&gt;Enter the RDS Endpoint, Port (5432), Username, and Password.&lt;/p&gt;

&lt;p&gt;Click Test Connection and verify success.&lt;/p&gt;
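&lt;p&gt;Both endpoints can also be defined programmatically through the DMS &lt;code&gt;CreateEndpoint&lt;/code&gt; API. A sketch of the two settings objects, assuming boto3; every connection value below is a placeholder:&lt;/p&gt;

```python
# All connection details below are placeholders -- substitute your own.
source = {
    "EndpointIdentifier": "sqlserver-source",
    "EndpointType": "source",
    "EngineName": "sqlserver",
    "ServerName": "203.0.113.10",  # local SQL Server IP
    "Port": 1433,
    "Username": "dms_user",
    "Password": "change-me",
    "DatabaseName": "MyDatabase",
}

target = {
    "EndpointIdentifier": "postgres-target",
    "EndpointType": "target",
    "EngineName": "postgres",
    "ServerName": "mydb.example.us-east-1.rds.amazonaws.com",
    "Port": 5432,
    "Username": "postgres",
    "Password": "change-me",
    "DatabaseName": "mydatabase",
}

# With credentials configured:
# import boto3
# dms = boto3.client("dms")
# dms.create_endpoint(**source)
# dms.create_endpoint(**target)

print(source["EngineName"], target["EngineName"])  # prints: sqlserver postgres
```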

&lt;h2&gt;
  
  
  Database Migration Tasks
&lt;/h2&gt;

&lt;p&gt;With these settings in place, you are ready to create a database migration task. I will cover that in my next blog post.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>postgres</category>
      <category>sql</category>
    </item>
  </channel>
</rss>
