<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Piotr Popiolek</title>
    <description>The latest articles on Forem by Piotr Popiolek (@posone).</description>
    <link>https://forem.com/posone</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1077150%2F5d0a8e84-0701-4b10-b51f-8c92d08b3984.png</url>
      <title>Forem: Piotr Popiolek</title>
      <link>https://forem.com/posone</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/posone"/>
    <language>en</language>
    <item>
      <title>How AWS Lightsail Limit Forced Us to Rethink Our Lab Infrastructure</title>
      <dc:creator>Piotr Popiolek</dc:creator>
      <pubDate>Wed, 10 Dec 2025 21:26:44 +0000</pubDate>
      <link>https://forem.com/aws-builders/how-aws-lightsail-limit-forced-us-to-rethink-our-lab-infrastructure-29nh</link>
      <guid>https://forem.com/aws-builders/how-aws-lightsail-limit-forced-us-to-rethink-our-lab-infrastructure-29nh</guid>
      <description>&lt;p&gt;During a recent preparation to hands-on lab we ran into a surprising limit: AWS Lightsail Container Service only allows 4 custom domains per account. We needed a separate public instance of our application for each lab participant (we predicted 11 participants), so this quota was a deal-breaker for the container service approach. And we discovered that like one day before the actual lab starts, so we had to act quickly.&lt;/p&gt;

&lt;p&gt;This post describes the practical solution we adopted: run a single Lightsail (Ubuntu) instance, host multiple app containers on it with docker-compose, and expose each container under its own subdomain via a Cloudflare Tunnel. The result: fast deployment, a single server that needs no public IP in DNS, and stable per-user URLs.&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Problem: Lightsail Container Service limits custom domains to 4 → can't create per-user public URLs.&lt;/li&gt;
&lt;li&gt;Solution: Use a Lightsail Instance (Ubuntu) + docker-compose to run multiple containers and Cloudflare Tunnel to map subdomains to localhost ports.&lt;/li&gt;
&lt;li&gt;Result: 11 public subdomains served from one instance without exposing ports or a fixed public IP in DNS.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why This Architecture?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A Lightsail instance (a plain Ubuntu virtual machine) gives full control and has no custom-domain quota.&lt;/li&gt;
&lt;li&gt;docker-compose makes it simple to orchestrate many containers on one host.&lt;/li&gt;
&lt;li&gt;Cloudflare Tunnel provides secure inbound connectivity without opening firewall ports or relying on a static IP: cloudflared accepts connections for specified hostnames and forwards traffic to local services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Architecture (simple view):&lt;/p&gt;

&lt;p&gt;app1.lab.work → cloudflared (tunnel) → localhost:5001 → our-app-1 (container)&lt;br&gt;
app2.lab.work → cloudflared (tunnel) → localhost:5002 → our-app-2 (container)&lt;br&gt;
...&lt;br&gt;
app11.lab.work → cloudflared → localhost:5011 → our-app-11&lt;/p&gt;


&lt;h2&gt;
  
  
  Example docker-compose
&lt;/h2&gt;

&lt;p&gt;This is the lightweight pattern we used. Each service runs the same application image but gets a distinct container name and host port mapping.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;3.8"&lt;/span&gt;
&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;our-app-1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./our-app&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;our-app-1&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5001:5001"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;PORT=5001&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;

  &lt;span class="na"&gt;our-app-2&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./our-app&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;our-app-2&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5002:5001"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;PORT=5001&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;

  &lt;span class="c1"&gt;# ...&lt;/span&gt;
  &lt;span class="na"&gt;our-app-11&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./our-app&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;our-app-11&lt;/span&gt;
    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;5011:5001"&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;PORT=5001&lt;/span&gt;
    &lt;span class="na"&gt;restart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;unless-stopped&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The containers all expose the application on the container's internal port 5001, but we map distinct host ports (5001..5011) so cloudflared can forward to the correct service.&lt;/li&gt;
&lt;li&gt;If your app reads PORT from the environment, adjust its start command so it binds to the intended port. We kept the same internal port (5001) everywhere and mapped different host ports instead.&lt;/li&gt;
&lt;/ul&gt;
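&lt;p&gt;The repeated service stanzas do not have to be written by hand. Here is a small Python sketch (not part of our actual setup) that generates them following the same naming convention:&lt;/p&gt;

```python
# Generate docker-compose service stanzas for N identical app containers.
# The naming (our-app-N, host ports 5001..501N) mirrors this article's setup.

def compose_services(n: int, base_port: int = 5000, internal_port: int = 5001) -> str:
    stanzas = []
    for i in range(1, n + 1):
        stanzas.append(
            f"  our-app-{i}:\n"
            f"    build: ./our-app\n"
            f"    container_name: our-app-{i}\n"
            f"    ports:\n"
            f'      - "{base_port + i}:{internal_port}"\n'
            f"    environment:\n"
            f"      - PORT={internal_port}\n"
            f"    restart: unless-stopped"
        )
    return 'version: "3.8"\nservices:\n' + "\n\n".join(stanzas)

print(compose_services(11))
```

&lt;p&gt;Redirect the output to &lt;code&gt;docker-compose.yml&lt;/code&gt; and adjust the image/build paths to your own project.&lt;/p&gt;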




&lt;h2&gt;
  
  
  deploy.sh (what we actually used)
&lt;/h2&gt;

&lt;p&gt;Below is the deploy script we executed on the Lightsail instance (in /home/ubuntu/lab). The script installs Docker and cloudflared, builds the containers, writes a Cloudflare Tunnel config, and starts the cloudflared service. Important: the config references /etc/cloudflared/lab.json (the tunnel credentials file); creating that file requires a one-time &lt;code&gt;cloudflared tunnel create&lt;/code&gt; step, explained further below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[1/5] Installing Docker &amp;amp; Cloudflared..."&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; docker.io docker-compose wget

&lt;span class="c"&gt;# Install cloudflared&lt;/span&gt;
wget &lt;span class="nt"&gt;-O&lt;/span&gt; cloudflared.deb https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; cloudflared.deb &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nt"&gt;--fix-broken&lt;/span&gt; &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[2/5] Building 11 app containers..."&lt;/span&gt;
&lt;span class="nb"&gt;cd&lt;/span&gt; /home/ubuntu/lab
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker-compose build

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[3/5] Starting containers..."&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[4/5] Setting up Cloudflare tunnel config..."&lt;/span&gt;
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /etc/cloudflared
&lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/cloudflared/config.yml &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;
tunnel: lab
credentials-file: /etc/cloudflared/lab.json
ingress:
  - hostname: app1.lab.work
    service: http://localhost:5001
  - hostname: app2.lab.work
    service: http://localhost:5002
  - hostname: app3.lab.work
    service: http://localhost:5003
  - hostname: app4.lab.work
    service: http://localhost:5004
  - hostname: app5.lab.work
    service: http://localhost:5005
  - hostname: app6.lab.work
    service: http://localhost:5006
  - hostname: app7.lab.work
    service: http://localhost:5007
  - hostname: app8.lab.work
    service: http://localhost:5008
  - hostname: app9.lab.work
    service: http://localhost:5009
  - hostname: app10.lab.work
    service: http://localhost:5010
  - hostname: app11.lab.work
    service: http://localhost:5011
  - service: http_status:404
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"[5/5] Starting Cloudflare tunnel..."&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;cloudflared
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl restart cloudflared

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"=============================="&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;" Your our apps are ready!"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"=============================="&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Public URLs:"&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;i &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;1..11&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"app&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="s2"&gt; → https://app&lt;/span&gt;&lt;span class="nv"&gt;$i&lt;/span&gt;&lt;span class="s2"&gt;.lab.work"&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
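&lt;p&gt;The ingress section in the config above follows a strict pattern as well, so it can be generated rather than typed. A hypothetical helper:&lt;/p&gt;

```python
# Build the cloudflared ingress entries for app1..appN.
# The trailing http_status:404 catch-all rule is required by cloudflared.

def ingress_section(n: int, zone: str = "lab.work", base_port: int = 5000) -> str:
    lines = ["ingress:"]
    for i in range(1, n + 1):
        lines.append(f"  - hostname: app{i}.{zone}")
        lines.append(f"    service: http://localhost:{base_port + i}")
    lines.append("  - service: http_status:404")
    return "\n".join(lines)

print(ingress_section(11))
```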



&lt;p&gt;Important: the &lt;code&gt;credentials-file: /etc/cloudflared/lab.json&lt;/code&gt; value must point to an actual tunnel credentials JSON file created when you run &lt;code&gt;cloudflared tunnel create lab&lt;/code&gt; (covered next). If no credentials-file is present, cloudflared service will fail to start.&lt;/p&gt;




&lt;h2&gt;
  
  
  One-time Cloudflare Tunnel Steps (interactive)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Install &lt;code&gt;cloudflared&lt;/code&gt; locally or on the server (the deploy script above installs it on the server).&lt;/li&gt;
&lt;li&gt;Login and create the tunnel (this requires access to the Cloudflare account for the lab.work zone):

&lt;ul&gt;
&lt;li&gt;Run: &lt;code&gt;cloudflared login&lt;/code&gt;

&lt;ul&gt;
&lt;li&gt;This opens a browser to authenticate your Cloudflare account and grants the cloudflared instance permissions to make DNS records.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Then create the named tunnel:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;cloudflared tunnel create lab&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;This creates a credentials file at &lt;code&gt;~/.cloudflared/&amp;lt;TUNNEL-ID&amp;gt;.json&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Copy the credentials file to /etc/cloudflared/lab.json:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;sudo mkdir -p /etc/cloudflared&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;sudo cp ~/.cloudflared/&amp;lt;TUNNEL-ID&amp;gt;.json /etc/cloudflared/lab.json&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Create DNS routes for each hostname (you can also create DNS records in the Cloudflare dashboard):

&lt;ul&gt;
&lt;li&gt;For each hostname run:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;cloudflared tunnel route dns lab app1.lab.work&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;cloudflared tunnel route dns lab app2.lab.work&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;...&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;This creates Cloudflare DNS records that point the hostnames to the tunnel. You should see them as proxied A/CNAME records in Cloudflare DNS.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Start the tunnel (the deploy script starts the cloudflared system service):

&lt;ul&gt;
&lt;li&gt;Check logs: &lt;code&gt;sudo journalctl -u cloudflared -f&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Test: curl or open &lt;a href="https://app1.lab.work" rel="noopener noreferrer"&gt;https://app1.lab.work&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
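&lt;p&gt;Step 3 repeats the same command eleven times, which can be scripted. A sketch only (the commands still require the interactive login from step 2, so the actual run line is left commented out):&lt;/p&gt;

```python
# Build the repetitive `cloudflared tunnel route dns` calls for app1..appN.
import subprocess

def route_dns_commands(tunnel: str, zone: str, n: int) -> list:
    return [
        ["cloudflared", "tunnel", "route", "dns", tunnel, f"app{i}.{zone}"]
        for i in range(1, n + 1)
    ]

for cmd in route_dns_commands("lab", "lab.work", 11):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually create the routes
```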

&lt;p&gt;Notes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you prefer to manage DNS manually in the Cloudflare UI, create proxied A or CNAME records for each subdomain pointing to the tunnel as instructed by the Cloudflare docs. Using &lt;code&gt;cloudflared tunnel route dns&lt;/code&gt; is convenient because it automates DNS creation.&lt;/li&gt;
&lt;li&gt;The login + tunnel creation step is interactive. For fully automated setups, Cloudflare API tokens and non-interactive flows can be used, but that requires careful management of keys.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Caveats, Security &amp;amp; Gotchas
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The Lightsail instance is a single point of failure; this setup was created specifically for short-lived labs and trainings. For higher availability, run multiple instances behind a load balancer or use Kubernetes/ECS.&lt;/li&gt;
&lt;li&gt;The tunnel credentials JSON must be stored securely (we place it in /etc/cloudflared and restrict file permissions).&lt;/li&gt;
&lt;li&gt;Rate-limits / fair-use: Cloudflare Tunnel is intended for this type of workload, but verify any usage limits for your plan, and ensure you don’t hit Cloudflare or upstream rate-limits during peak lab use.&lt;/li&gt;
&lt;li&gt;DNS propagation: After &lt;code&gt;tunnel route dns&lt;/code&gt; there can be short DNS propagation delays.&lt;/li&gt;
&lt;li&gt;Exposed ports: We never open inbound ports on the VM for the apps — cloudflared handles inbound traffic via the tunnel. You should still harden SSH and instance access.&lt;/li&gt;
&lt;li&gt;Logs &amp;amp; debugging: check &lt;code&gt;sudo journalctl -u cloudflared -f&lt;/code&gt; and &lt;code&gt;docker logs &amp;lt;container&amp;gt;&lt;/code&gt; when things misbehave.&lt;/li&gt;
&lt;li&gt;TLS: Cloudflare provides TLS for your hostnames automatically when using the tunnel and proxied DNS records.&lt;/li&gt;
&lt;/ul&gt;
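&lt;p&gt;When debugging, it also helps to verify each app locally before suspecting the tunnel. A hypothetical smoke test over the mapped host ports (names and ports mirror the compose file above):&lt;/p&gt;

```python
# Probe each locally mapped app port; any failure here means the problem is in
# the container, not in cloudflared.
import urllib.request

def check_local(ports) -> dict:
    results = {}
    for port in ports:
        try:
            with urllib.request.urlopen(f"http://localhost:{port}/", timeout=3) as resp:
                results[port] = resp.status
        except OSError as exc:
            results[port] = str(exc)
    return results

if __name__ == "__main__":
    for port, status in check_local(range(5001, 5012)).items():
        print(port, status)
```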




&lt;h2&gt;
  
  
  Improvements &amp;amp; Alternatives
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Traefik or nginx as an internal reverse-proxy: run one proxy on host that routes subdomains to containers, then point Cloudflare Tunnel to the proxy (single local port). This reduces the number of host ports and centralizes routing logic (and makes scaling containers more flexible).&lt;/li&gt;
&lt;li&gt;Use a wildcard subdomain and generate DNS records programmatically if many labs are expected; Cloudflare supports automation via API tokens.&lt;/li&gt;
&lt;li&gt;If you need thousands of per-user instances, move to container orchestration (k8s, ECS) and either use an Ingress controller + external LB or scale with multi-instance / autoscaling groups.&lt;/li&gt;
&lt;li&gt;Consider ephemeral user environments that spin up and down on-demand to save cost.&lt;/li&gt;
&lt;li&gt;Monitoring and health checks: add health endpoints and simple monitors to restart broken containers automatically.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Cost &amp;amp; operational note
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Lightsail instances are cheap and predictable: an instance with 2 GB of memory, 2 vCPUs, and 3 TB of transfer was enough for us ($12/month).&lt;/li&gt;
&lt;li&gt;Cloudflare Tunnel is free for many small use cases; we already had our domain on Cloudflare and paid nothing for two days of heavy usage.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Switching from Lightsail Container Service to a Lightsail Instance + docker-compose + Cloudflare Tunnel gave us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fast, repeatable lab deployments on a single server&lt;/li&gt;
&lt;li&gt;Individual public subdomains per participant&lt;/li&gt;
&lt;li&gt;A secure inbound setup without opening host ports&lt;/li&gt;
&lt;li&gt;...And most importantly, it saved us from cancelling the event at the last minute. :)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>containers</category>
      <category>docker</category>
    </item>
    <item>
      <title>From Frustration to Automation: Building a Squash Court Availability App</title>
      <dc:creator>Piotr Popiolek</dc:creator>
      <pubDate>Thu, 23 Oct 2025 20:14:28 +0000</pubDate>
      <link>https://forem.com/aws-builders/from-frustration-to-automation-building-a-squash-court-availability-app-54f6</link>
      <guid>https://forem.com/aws-builders/from-frustration-to-automation-building-a-squash-court-availability-app-54f6</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A few months back, I wanted to reserve a squash court, as usual, on a Wednesday evening. Unfortunately, everything was booked in my favorite club, so I started looking for other places. I was on my phone, which wasn’t very convenient, and every website was different — so I quickly got frustrated.&lt;/p&gt;

&lt;p&gt;This situation happened a few times, and then I realized I could make this task easier by creating a small app to check court availability for me. It seemed like a good small project. I have two small kids, so I don’t have much free time, but I decided to spend 15–30 minutes on it whenever I could — and here we are!&lt;/p&gt;

&lt;p&gt;What started as a quick idea to save a few clicks turned into a mini project: a Python + FastAPI app hosted on AWS Lightsail, with full GitHub Actions automation.&lt;br&gt;
In this post, I will share how I built it, automated deployment with CI/CD, and what I learned along the way.&lt;/p&gt;
&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;During my IT career, I spent over 10 years as an Infrastructure Engineer and the next 5 as a DevOps Engineer, so I’m not really a programmer. That’s why my technical choices were mostly about simplicity rather than software design perfection.&lt;/p&gt;

&lt;p&gt;As I mentioned, time wasn’t on my side, so I decided on a simple architecture. Still, I wanted to learn something new, so I promised myself to use at least one AWS service I hadn’t worked with before.&lt;br&gt;
The choice fell on &lt;strong&gt;AWS Lightsail&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Amazon Lightsail offers easy-to-use virtual private servers (VPS), containers, storage, and databases. It’s really intuitive and fits perfectly for my small app.&lt;/p&gt;

&lt;p&gt;I’m quite comfortable with &lt;em&gt;Python&lt;/em&gt;, so the backend is built with &lt;strong&gt;FastAPI&lt;/strong&gt;. For the frontend, I used some simple &lt;strong&gt;HTML and CSS&lt;/strong&gt; - generated mostly by ChatGPT, since I don’t have strong frontend experience.&lt;/p&gt;

&lt;p&gt;To parse the web pages, I used the &lt;strong&gt;BeautifulSoup&lt;/strong&gt; module, which provides methods and Pythonic idioms to navigate, search, and modify the parsed HTML tree.&lt;/p&gt;

&lt;p&gt;Tests are based on pytest and mock, with simple assertions for specific use cases.&lt;/p&gt;
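&lt;p&gt;To illustrate that pattern, the HTTP layer can be injected and replaced with a mock so tests run without network access. The function below is a simplified stand-in for the real fetch logic, not the app’s actual code:&lt;/p&gt;

```python
# Mock the HTTP call so the fetch/parse logic can be asserted offline.
from unittest.mock import MagicMock

def fetch_calendar_html(url: str, http_get) -> str:
    """Simplified stand-in: `http_get` is injected so a test can pass a mock
    instead of requests.get."""
    resp = http_get(url, timeout=10)
    resp.raise_for_status()
    return resp.json().get("html", "")

def test_fetch_calendar_html():
    fake_resp = MagicMock()
    fake_resp.json.return_value = {"html": "Book 18:00"}
    fake_get = MagicMock(return_value=fake_resp)
    assert fetch_calendar_html("https://example.invalid", fake_get) == "Book 18:00"
    fake_get.assert_called_once_with("https://example.invalid", timeout=10)

test_fetch_calendar_html()
```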

&lt;p&gt;The code is hosted on &lt;strong&gt;GitHub&lt;/strong&gt;, and I use &lt;strong&gt;GitHub Actions&lt;/strong&gt; to build, test, and deploy automatically to AWS Lightsail after every push to the main branch.&lt;/p&gt;

&lt;p&gt;Here’s how the overall architecture looks:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qxvdr6o2zk6zjri11za.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7qxvdr6o2zk6zjri11za.jpg" alt="Architecture" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Initial app design
&lt;/h2&gt;

&lt;p&gt;The main goal was to create a lightweight, intuitive web page to check squash court availability in Wroclaw.&lt;br&gt;
The user can choose a date from the calendar (up to one week ahead) and select an hour between 06:00 and 23:00 (full hours only, since most facilities don’t support half-hour bookings).&lt;/p&gt;
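&lt;p&gt;Those constraints are easy to express as a small validation helper; a hypothetical sketch (names are illustrative, not the app’s actual code):&lt;/p&gt;

```python
from datetime import date, timedelta

def valid_query(requested: date, hour: int, today: date) -> bool:
    """Date must fall within the next 7 days; only full hours 06:00-23:00."""
    days_ahead = (requested - today).days
    return days_ahead in range(0, 8) and hour in range(6, 24)
```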

&lt;p&gt;After selecting the date and hour, the user clicks “Sprawdz” (Polish for “Check”), and all sports facilities are listed with availability information. For the biggest club, Hasta La Vista, the app also lists individual courts, since there are 32 of them and players often prefer specific ones.&lt;/p&gt;
&lt;h2&gt;
  
  
  How to pull the data?!
&lt;/h2&gt;

&lt;p&gt;Once I knew what I wanted to achieve, I started figuring out how to do it.&lt;br&gt;
First, I gathered all sports facilities in Wroclaw that offer squash. Then, one by one, I tried to fetch the data I needed. Tools like curl and browser dev tools helped a lot.&lt;/p&gt;

&lt;p&gt;After a few experiments, I wrote my first Python file, &lt;code&gt;hasta.py&lt;/code&gt;. Using &lt;strong&gt;requests&lt;/strong&gt; and &lt;strong&gt;BeautifulSoup&lt;/strong&gt;, I fetched each website’s HTML and parsed the structure based on date and time. It looked like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def check_availability(date_str: str, time_str: str):
    url = f"https://(...)"
    datetime_variants = [
        f"{date_str} {time_str}:00",
        f"{date_str}T{time_str}:00"
    ]
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        data = resp.json()

        if "html" not in data:
            return "❌ Failed to fetch calendar data."

        soup = BeautifulSoup(data["html"], "html.parser")
        all_elements = soup.select("[data-begin]")

        for el in all_elements:
            data_begin = el.get("data-begin")
            if data_begin in datetime_variants:
                text = el.get_text(strip=True)
                if "Book" in text:
                    return (
                        f"✅ Court is available {date_str}  {time_str}"
                        )
                elif "Notify me" in text:
                    return f"❌ Court is not available  ({time_str})  {date_str}"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few evenings, I had an early version of a crawler that could check court availability for all clubs in the city.&lt;br&gt;
In the meantime, I also started working on a simple frontend to connect the crawler with a web interface — again, as I have already mentioned, mostly with ChatGPT’s help.&lt;/p&gt;

&lt;p&gt;The frontend consists of three main files: style.css, index.html, and result.html.&lt;br&gt;
Everything was working well locally, so I decided to deploy it as a container on &lt;strong&gt;AWS Lightsail&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Let's Automate A Few Things
&lt;/h2&gt;

&lt;p&gt;Before deploying anything manually to AWS, I wanted to automate the process as much as possible.&lt;br&gt;
Since the code was already on GitHub, GitHub Actions was a natural choice for CI/CD. I decided to build a simple pipeline that would:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build&lt;/strong&gt; the Docker image&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Run tests&lt;/strong&gt; using pytest&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deploy&lt;/strong&gt; automatically to AWS Lightsail&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also decided to use OIDC (OpenID Connect) for authentication between GitHub and AWS, so I didn’t need to store long-lived access keys (you can check on OIDC Stack in my previous blog post regarding &lt;a href="https://dev.to/aws-builders/aws-serverless-hands-on-part-22-3mcd"&gt;AWS Serverless&lt;/a&gt;). This approach is more secure and follows AWS best practices for CI/CD integration.&lt;/p&gt;

&lt;p&gt;Here’s a simplified version of the GitHub Actions workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT }}:role/github-actions-role
          aws-region: eu-west-1

      - name: Build Docker image
        run: docker build -t court-checker .

      - name: Install lightsailctl
        run: |
          curl "https://s3.us-west-2.amazonaws.com/lightsailctl/latest/linux-amd64/lightsailctl" -o "/usr/local/bin/lightsailctl"
          chmod +x /usr/local/bin/lightsailctl
          /usr/local/bin/lightsailctl --version

      - name: Create Lightsail container service if not exists
        run: |
            set -e
            if ! aws lightsail get-container-services --service-name court-checker &amp;gt;/dev/null 2&amp;gt;&amp;amp;1; then
              echo "Creating Lightsail container service..."
              aws lightsail create-container-service \
                --service-name court-checker \
                --power medium \
                --scale 1
            else
              echo "Lightsail container service already exists."
            fi      

      - name: Push image to Lightsail registry
        id: push
        run: |
          set -euo pipefail
          OUTPUT=$(aws lightsail push-container-image \
            --service-name court-checker \
            --label web \
            --image court-checker:latest)

          echo "$OUTPUT"

          # Extract registryPath from human-readable output
          REGISTRY_PATH=$(echo "$OUTPUT" | grep -oE ':court-checker\.web\.[0-9]+' | head -n 1)

          if [ -z "$REGISTRY_PATH" ]; then
            echo "ERROR: No registryPath found in push output."
            exit 1
          fi
          echo "registry_path=$REGISTRY_PATH" &amp;gt;&amp;gt; $GITHUB_OUTPUT
          echo "Using image: $REGISTRY_PATH"

      - name: Create container config
        run: |
          set -euo pipefail
          cat &amp;gt; container.json &amp;lt;&amp;lt;EOF
          {
            "web": {
              "image": "${{ steps.push.outputs.registry_path }}",
              "ports": {
                "8000": "HTTP"
              }
            }
          }
          EOF
          cat container.json

      - name: Create endpoint config
        run: |
          echo '{
            "containerName": "web",
            "containerPort": 8000,
            "healthCheck": {
              "path": "/health",
              "successCodes": "200-499",
              "timeoutSeconds": 5,
              "intervalSeconds": 10,
              "healthyThreshold": 2,
              "unhealthyThreshold": 2
            }
          }' &amp;gt; endpoint.json

      - name: Deploy to Lightsail
        run: |
          aws lightsail create-container-service-deployment \
            --service-name court-checker \
            --containers file://container.json \
            --public-endpoint file://endpoint.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Release And Share With Others
&lt;/h2&gt;

&lt;p&gt;Once all tests were done and I had used the app myself for a couple of days, I decided to share it with a few of my friends.&lt;br&gt;
They tried it for about a week, gave me some feedback, I fixed a few things, and then decided to share it with a wider group.&lt;/p&gt;

&lt;p&gt;Squash isn’t super popular, but I posted about the app in a local Facebook group with around 3,000 squash enthusiasts from Wroclaw so they could give it a try.&lt;br&gt;
Now I can see there are around 20–50 visits per week — which makes me really happy that at least some people are finding it useful!&lt;/p&gt;

&lt;h2&gt;
  
  
  Costs
&lt;/h2&gt;

&lt;p&gt;Obviously, I wanted to keep costs as low as possible. Fortunately, I’m an &lt;strong&gt;AWS Community Builder&lt;/strong&gt;, so I have some AWS credits, which gave me flexibility during initial testing without worrying too much about cost.&lt;br&gt;
Here’s the rough cost breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Domain registration&lt;/strong&gt;: around $45 for three years + $10 tax (one-time cost)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lightsail container&lt;/strong&gt; (micro tier: 0.25 vCPU, 1GB RAM): $10/month&lt;br&gt;
Each container service includes 500 GB/month data transfer. Extra data costs start at $0.09/GB depending on the region.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GitHub Actions&lt;/strong&gt;: 2,000 free minutes per month (my project used only 103 minutes in August)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Thoughts about AWS Lightsail?
&lt;/h2&gt;

&lt;p&gt;It’s definitely a great and easy way to deploy small web apps with minimal effort.&lt;br&gt;
You get a public DNS by default:&lt;br&gt;
&lt;code&gt;https://&amp;lt;app-name&amp;gt;.&amp;lt;random_digits&amp;gt;.&amp;lt;region&amp;gt;.cs.amazonlightsail.com&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Containers are perfect for small/medium projects or proof-of-concepts. I didn’t use instances, but they seem suitable for larger workloads.&lt;/p&gt;

&lt;p&gt;With GitHub Actions automation, I don’t need to worry about deployment — it happens automatically after every push.&lt;br&gt;
The only downsides I’ve noticed are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Lack of container-level alarms and metrics (available only for instances)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Occasional delays during deployment&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Overall, it’s a neat and simple setup for quick and cost-effective deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion And Key Takeaways
&lt;/h2&gt;

&lt;p&gt;Looking back, this small side project turned out to be much more than I expected.&lt;br&gt;
What started as a simple idea to save time booking a squash court became a fun way to learn, automate, and explore new AWS services. I truly recommend that everyone try something like this and not be afraid to experiment. &lt;br&gt;
I also realized how much can be achieved by dedicating just a little time each day — consistency really does beat intensity.&lt;br&gt;
Here are a few takeaways I’d like to share as a summary:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start small, iterate often&lt;/strong&gt;: Even with just 15–30 minutes a day, you can deliver a working project over time.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;AWS Lightsail is underrated&lt;/strong&gt;: It’s perfect for small, containerized apps — simple setup, predictable cost, and built-in DNS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automation saves time&lt;/strong&gt;: GitHub Actions makes it easy to build, test, and deploy without manual steps or AWS Console clicks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security by design&lt;/strong&gt;: Using OIDC between GitHub and AWS avoids storing long-lived credentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Don’t fear imperfect tools&lt;/strong&gt;: FastAPI, BeautifulSoup, and a bit of ChatGPT-generated frontend were enough to get a solid MVP online.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And if you’d like to take a look at the app yourself, check it out &lt;a href="https://court-checker.com/" rel="noopener noreferrer"&gt;here&lt;/a&gt;! It is in Polish, but it should be intuitive for everyone. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>webdev</category>
      <category>python</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS Serverless hands on part 2/2</title>
      <dc:creator>Piotr Popiolek</dc:creator>
      <pubDate>Tue, 11 Feb 2025 07:47:30 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-serverless-hands-on-part-22-3mcd</link>
      <guid>https://forem.com/aws-builders/aws-serverless-hands-on-part-22-3mcd</guid>
      <description>&lt;p&gt;At the beginning, I would like to emphasize that this is the second part of our journey. Please read &lt;a href="https://dev.to/aws-builders/aws-serverless-hands-on-part-12-33d6"&gt;First Part&lt;/a&gt; before you move forward! &lt;/p&gt;

&lt;h1&gt;
  
  
  Short Intro
&lt;/h1&gt;

&lt;p&gt;I hope you took some time after reading the first part, and now we are ready to continue! We have already learnt about some serverless services and the architecture of what we want to achieve. We also know our main assumptions and which technologies we want to use. Our next step will be to get familiar with the scope of the application and the details of the backend and frontend.&lt;/p&gt;

&lt;h1&gt;
  
  
  Application Description
&lt;/h1&gt;

&lt;p&gt;Coming up with an idea for this application was the most challenging part for me. I didn’t want to simply display an index.html file with "Hello World," but I also struggled to think of a more complex use case. Since I wanted to implement CRUD (Create, Read, Update, Delete) functionality, I decided to use DynamoDB as the database and focus on GET/POST endpoints.&lt;/p&gt;

&lt;p&gt;Eventually, the idea I settled on may not be the most sophisticated, but it serves its purpose well as a hands-on project.&lt;/p&gt;

&lt;p&gt;Our serverless application functions as a kind of guest book. It is hosted on CloudFront with static content stored in S3. The application offers two main options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Add Data" – Allows users to submit information&lt;/li&gt;
&lt;li&gt;"Fetch Data From Backend" – Retrieves stored data based on user input&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Behind the scenes, we have API Gateway with GET and POST endpoints handling the requests. When a user clicks on "Add Data", they are prompted to answer a few mandatory questions: City, Name, and Year of Birth. The City serves as the partition key in DynamoDB.&lt;/p&gt;

&lt;p&gt;To retrieve stored data, users click "Fetch Data From Backend", enter a City, and receive all records associated with that city from the database.&lt;/p&gt;
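&lt;p&gt;To make the lookup concrete, here is a minimal sketch of how such a fetch-by-city query could be expressed with the DynamoDB Document Client. The table and attribute names here are assumptions for illustration, not necessarily the ones used in the repository; the function only builds the &lt;code&gt;QueryCommand&lt;/code&gt; input, so the shape is easy to inspect:&lt;/p&gt;

```javascript
// Hypothetical sketch: build the QueryCommand input for fetching all
// records that share a City partition key. Table/attribute names are
// assumptions for illustration, not taken from the repository.
function buildCityQuery(city) {
  return {
    TableName: 'VisitorsTable',          // assumed table name
    KeyConditionExpression: 'City = :c', // City is the partition key
    ExpressionAttributeValues: { ':c': city },
  };
}

// With the Document Client this input would be passed as:
// docClient.send(new QueryCommand(buildCityQuery('Warsaw')))
```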

&lt;p&gt;The application in the web browser looks as follows: &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F069qeetg2ithqq2me76s.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F069qeetg2ithqq2me76s.gif" alt="alt image" width="480" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Storage and Directory Structure&lt;/strong&gt;&lt;br&gt;
Every value entered into the application is recorded in DynamoDB. In the following sections, we will dive into the details of how the various services work and how they are connected in our setup.&lt;/p&gt;

&lt;p&gt;For now, let's take a look at the Directory Structure. The CDK (Cloud Development Kit) is already initialized, and the tests, stacks, and GitHub workflow are pre-configured. Below is the directory structure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1pitrhp1jg3dzo6k5n8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl1pitrhp1jg3dzo6k5n8.png" alt="Image description" width="588" height="1274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A brief overview of each folder:&lt;br&gt;
&lt;strong&gt;.github/workflows&lt;/strong&gt; - Defines our pipeline for deploying stacks&lt;br&gt;
&lt;strong&gt;deploy/cdk/bin&lt;/strong&gt; - The entry point for CDK with defined stacks&lt;br&gt;
&lt;strong&gt;deploy/cdk/lib&lt;/strong&gt; - Contains the main CDK constructs&lt;br&gt;
&lt;strong&gt;deploy/cdk/frontend&lt;/strong&gt; - Holds the frontend files for our page&lt;br&gt;
&lt;strong&gt;deploy/cdk/lambda&lt;/strong&gt; - Includes the Lambda function triggered by API Gateway&lt;br&gt;
&lt;strong&gt;deploy/cdk/test&lt;/strong&gt; - Contains test files for CDK&lt;br&gt;
&lt;strong&gt;deploy/cdk/{cdk.json,package.json,package-lock.json,jest.config.js}&lt;/strong&gt; - npm/CDK configuration files &lt;/p&gt;

&lt;p&gt;The repository with the code is located here:&lt;br&gt;
&lt;a href="https://github.com/posone/aws-serverless-hands-on-template" rel="noopener noreferrer"&gt;aws-serverless-hands-on-template&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;README.md&lt;/strong&gt; file provides a &lt;strong&gt;tl;dr&lt;/strong&gt; version of the deployment steps. You can simply fork the repo, make a few adjustments, and deploy. In the next section, we will explore each part in more detail.&lt;/p&gt;
&lt;h1&gt;
  
  
  Workstation Requirements
&lt;/h1&gt;

&lt;p&gt;Before we begin the hands-on portion, ensure that the following prerequisites are met:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have an AWS account that you can use&lt;/li&gt;
&lt;li&gt;Your local workstation is properly configured and the AWS CLI is working with your account &lt;a href="https://docs.aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI Docs&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;You have a basic understanding of AWS CDK &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html" rel="noopener noreferrer"&gt;Getting started with CDK&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Node.js and npm are installed on your workstation &lt;a href="https://docs.npmjs.com/downloading-and-installing-node-js-and-npm" rel="noopener noreferrer"&gt;Install Node/NPM&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Your AWS environment has been bootstrapped properly &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/cli.html#cli-bootstrap" rel="noopener noreferrer"&gt;CLI-bootstrap&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once these requirements are met, you're ready to proceed!&lt;/p&gt;
&lt;h1&gt;
  
  
  OIDC Stack
&lt;/h1&gt;

&lt;p&gt;Before we begin with deployment, we need to ensure that GitHub is connected to our AWS account. The first step is to modify the &lt;strong&gt;deploy/cdk/lib/components/oidc.ts&lt;/strong&gt; file. It contains an &lt;code&gt;oidc&lt;/code&gt; class where we specify which GitHub repository can assume the role.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export class oidc{
  constructor(scope: Construct, rolename: string, repo: string, provider: GithubActionsIdentityProvider) {
    const accessSSMRole = new GithubActionsRole(scope, rolename, {
        provider: provider,   
        owner: '&amp;lt;your_github_owner&amp;gt;',
        repo: repo,
        roleName: rolename
    });   accessSSMRole.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('PowerUserAccess'));
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Adjusting the role&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Replace &lt;code&gt;&amp;lt;your_github_owner&amp;gt;&lt;/code&gt; with the owner of your GitHub repository.&lt;/li&gt;
&lt;li&gt;Since this is &lt;strong&gt;test code&lt;/strong&gt;, I have assigned the PowerUserAccess policy to simplify deployment.&lt;/li&gt;
&lt;li&gt;⚠️ This is NOT best practice. If you plan to use this in a production environment, consider implementing a custom policy with the necessary permissions (least privilege approach).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deploying the OIDC Stack&lt;/strong&gt;&lt;br&gt;
Once the role is adjusted, deploy the stack using the following commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd deploy/cdk
npm i -D aws-cdk-github-oidc
npm run cdk deploy OidcStack -- --profile &amp;lt;your_aws_profile&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few seconds, the deployment should be complete, and runners triggered from our GitHub repository should be able to authenticate with AWS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Defining the Role in the Main Stack&lt;/strong&gt;&lt;br&gt;
The role name can be defined in the main stack located in &lt;strong&gt;deploy/cdk/lib/oidc-stack.ts&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new components.oidc(this, "github-actions-role", "aws-serverless-hands-on-template", provider);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to update the role name and repository name accordingly. Additionally, this change must be reflected in the GitHub Actions Workflow, which we will cover in the GitHub Actions Deployment section.&lt;/p&gt;

&lt;h1&gt;
  
  
  Backend Stack
&lt;/h1&gt;

&lt;p&gt;The Backend Stack is responsible for creating the Lambda function, API Gateway, and DynamoDB table. The code for this stack is located in &lt;strong&gt;deploy/cdk/lib/backend-stack.ts&lt;/strong&gt;. &lt;br&gt;
The implementation is straightforward, but one important point to highlight is CORS (Cross-Origin Resource Sharing). Without CORS headers, the API Gateway would block frontend requests from CloudFront. To resolve this, we explicitly define the necessary CORS headers in the API Gateway response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring CORS in API Gateway&lt;/strong&gt;&lt;br&gt;
To allow the frontend to access the backend, we need to modify the API Gateway responses:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    resource.addMethod('GET', lambdaIntegration, {
        methodResponses: [
            {
                statusCode: '200',
                responseParameters: {
                    'method.response.header.Access-Control-Allow-Origin': true,
                    'method.response.header.Access-Control-Allow-Methods': true,
                    'method.response.header.Access-Control-Allow-Headers': true,
                },
            },
        ],
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Additionally, we define CORS options on API resources, ensuring that data can be shared between the backend and frontend:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    resource.addCorsPreflight({
        allowOrigins: ['*'],
        allowMethods: ['GET', 'POST', 'OPTIONS'],
        allowHeaders: ['Content-Type', 'X-Amz-Date', 'Authorization', 'X-Api-Key', 'X-Amz-Security-Token'],
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Exposing API Gateway URL&lt;/strong&gt;&lt;br&gt;
This stack also outputs the API Gateway URL, which is used in the Frontend Stack to automate the deployment process. This prevents the need for manual updates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new CfnOutput(this, 'API_URL', { value: api.url })
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API_URL is later updated dynamically in the frontend configuration via GitHub Actions, which we will cover in the CI/CD section.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lambda Function&lt;/strong&gt;&lt;br&gt;
The Lambda function is located in the &lt;strong&gt;deploy/cdk/lambda/index.js&lt;/strong&gt; file. The handler logic is straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GET request → Reads from DynamoDB. Returns 400 if no value is provided or 200 on success.&lt;/li&gt;
&lt;li&gt;POST request → Writes to DynamoDB. Returns errors for a few predefined conditions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Connecting to DynamoDB &amp;amp; CORS&lt;/strong&gt;&lt;br&gt;
At the beginning of the Lambda function, we initialize the DynamoDB client and define CORS headers to allow cross-origin access:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const dynamoClient = new DynamoDBClient();
const docClient = DynamoDBDocumentClient.from(dynamoClient);

const headers = {
    "Access-Control-Allow-Origin": "*", 
    "Access-Control-Allow-Methods": "OPTIONS,GET,POST",
    "Access-Control-Allow-Headers": "Content-Type",
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures that requests from the frontend can communicate with the backend without CORS restrictions.&lt;/p&gt;
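&lt;p&gt;To make the request flow more concrete, here is a simplified routing sketch of how such a handler could look. It is a sketch under assumptions (the &lt;code&gt;city&lt;/code&gt; field name and response bodies are illustrative), not the actual code from &lt;strong&gt;deploy/cdk/lambda/index.js&lt;/strong&gt;, which additionally talks to DynamoDB:&lt;/p&gt;

```javascript
// Simplified routing sketch (no real DynamoDB calls; field names assumed).
const corsHeaders = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Methods": "OPTIONS,GET,POST",
  "Access-Control-Allow-Headers": "Content-Type",
};

async function handler(event) {
  if (event.httpMethod === "GET") {
    const city = event.queryStringParameters?.city;
    if (!city) {
      // 400 when no value is provided
      return { statusCode: 400, headers: corsHeaders, body: JSON.stringify({ error: "city is required" }) };
    }
    // 200 on success; the real handler returns the DynamoDB query result here
    return { statusCode: 200, headers: corsHeaders, body: JSON.stringify({ items: [] }) };
  }
  if (event.httpMethod === "POST") {
    // the real handler validates the body and writes the record to DynamoDB
    return { statusCode: 200, headers: corsHeaders, body: JSON.stringify({ ok: true }) };
  }
  return { statusCode: 405, headers: corsHeaders, body: "" };
}
```

&lt;p&gt;Note that every branch returns the CORS headers, so even error responses stay readable from the CloudFront-hosted frontend.&lt;/p&gt;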

&lt;h1&gt;
  
  
  Frontend Stack
&lt;/h1&gt;

&lt;p&gt;Before diving into the details, I have to admit that I had no experience working on the frontend side. So, I teamed up with ChatGPT, which helped me generate a simple index.html, style.css, and script.js to make things work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Main Structure (index.html)&lt;/strong&gt;&lt;br&gt;
The core part of the frontend is the index.html file. The key section is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;body&amp;gt;
    &amp;lt;div class="container"&amp;gt;
        &amp;lt;h1&amp;gt;Welcome to the Serverless App&amp;lt;/h1&amp;gt;
        &amp;lt;h2&amp;gt;Please leave the trace and add yourself as visitor. You can then check the data if anyone is from your city!&amp;lt;/h2&amp;gt;
        &amp;lt;button id="fetchVisitors"&amp;gt;Fetch Data from Backend&amp;lt;/button&amp;gt;
        &amp;lt;button id="addVisitor"&amp;gt;Add Data&amp;lt;/button&amp;gt;
        &amp;lt;pre id="output"&amp;gt;&amp;lt;/pre&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;script src="script.js"&amp;gt;&amp;lt;/script&amp;gt;
&amp;lt;/body&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Elements&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The two buttons (fetchVisitors and addVisitor) allow users to interact with the backend.&lt;/li&gt;
&lt;li&gt;The script.js file is referenced at the bottom—it handles the logic when buttons are clicked.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Handling API Calls (script.js)&lt;/strong&gt;&lt;br&gt;
The first line of script.js defines the API_URL, which is dynamically replaced during the GitHub Actions (GHA) deployment step:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;const API_URL = 'PLACEHOLDERdata';&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;During deployment, GitHub Actions replaces &lt;code&gt;PLACEHOLDERdata&lt;/code&gt; with the actual API Gateway URL from the Backend Stack, ensuring that the frontend communicates with the correct backend.&lt;/p&gt;
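&lt;p&gt;The substitution itself is plain text replacement. Sketched in JavaScript (the &lt;code&gt;sed&lt;/code&gt; step in the workflow does the equivalent in place on the file):&lt;/p&gt;

```javascript
// What the sed step in the workflow effectively does: every occurrence
// of PLACEHOLDER is swapped for the real API Gateway URL.
function injectApiUrl(source, apiUrl) {
  return source.split('PLACEHOLDER').join(apiUrl);
}
```

&lt;p&gt;Since the original string is &lt;code&gt;PLACEHOLDERdata&lt;/code&gt;, the trailing &lt;code&gt;data&lt;/code&gt; part survives the substitution and ends up as the resource path appended to the API URL.&lt;/p&gt;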

&lt;p&gt;&lt;strong&gt;Event Listeners for Button Clicks&lt;/strong&gt;&lt;br&gt;
Inside script.js, we listen for the DOMContentLoaded event and define what happens when each button is clicked:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Fetch Data from Backend" button → Sends a GET request to API Gateway.&lt;/li&gt;
&lt;li&gt;"Add Data" button → Sends a POST request with user input to the backend.
The logic is straightforward, so you can go through the code and let me know if you have any questions.&lt;/li&gt;
&lt;/ul&gt;
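&lt;p&gt;As a rough sketch of what the two click handlers boil down to (assuming a hypothetical &lt;code&gt;city&lt;/code&gt; query parameter; check &lt;code&gt;script.js&lt;/code&gt; for the actual shapes), one builds a GET URL and the other a POST request:&lt;/p&gt;

```javascript
// Hypothetical helpers mirroring what the button handlers need to build.
// Parameter and field names are assumptions for illustration.
const API_URL = 'https://example.execute-api.eu-west-1.amazonaws.com/prod/data';

function buildGetUrl(city) {
  // "Fetch Data from Backend": the city travels as a query parameter
  return API_URL + '?city=' + encodeURIComponent(city);
}

function buildPostRequest(visitor) {
  // "Add Data": the visitor record travels as a JSON body
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(visitor),
  };
}

// In the click handlers these feed into fetch(), e.g.
// fetch(buildGetUrl('Warsaw')) or fetch(API_URL, buildPostRequest(data))
```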
&lt;h1&gt;
  
  
  GitHub Actions Deployment
&lt;/h1&gt;

&lt;p&gt;This is the most crucial part of our setup, as it automates the deployment of our stacks through a CI/CD pipeline. The workflow file is located in the &lt;strong&gt;.github/workflows&lt;/strong&gt; folder. Let’s go through it step by step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow Configuration&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy serverless app to AWS
on:
  workflow_dispatch:
  # push:
  #   branches:
  #     - main

permissions:
  id-token: write
  contents: read
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Apart from defining the workflow name, we should remove the # comments from lines 4-6 in the forked repository.&lt;/li&gt;
&lt;li&gt;I commented out those lines in the template repo to prevent automatic pipeline runs.&lt;/li&gt;
&lt;li&gt;The workflow_dispatch event allows us to manually trigger the workflow (when working on the main branch).&lt;/li&gt;
&lt;li&gt;The permissions section is crucial for configuring AWS credentials; it must be included for the workflow to function properly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Job Definition&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCOUNT: '&amp;lt;your_AWS_ACCOUNT&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;We define one job, named deploy.&lt;/li&gt;
&lt;li&gt;The job runs on Ubuntu (ubuntu-latest), a default GitHub runner with pre-installed packages.&lt;/li&gt;
&lt;li&gt;The AWS account number is stored in the AWS_ACCOUNT environment variable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deployment Steps&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Checkout Code&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    steps:
      - name: Checkout Code
        uses: actions/checkout@v4
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The first step is checking out the repository code.&lt;/li&gt;
&lt;li&gt;Instead of writing custom scripts, we use a pre-built action from &lt;a href="https://github.com/marketplace?type=actions" rel="noopener noreferrer"&gt;GitHub Market Place&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Set Up Node.js&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Set Up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 18
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This step ensures Node.js v18 is installed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Install Dependencies &amp;amp; Run Tests&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Install Dependencies &amp;amp; test
        run: |
          pushd /home/runner/work/aws-serverless-hands-on/aws-serverless-hands-on/deploy/cdk
          npm install -g aws-cdk
          npm install
          npm test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Installs AWS CDK, project dependencies, and executes unit tests before proceeding.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Configure AWS Credentials&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ env.AWS_ACCOUNT }}:role/github-actions-role
          aws-region: eu-west-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;This step assumes the IAM role created in the OIDC Stack, allowing GitHub Actions to deploy resources in our AWS account.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Deploy Backend and Frontend&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Deploy Backend Stack&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Deploy Backend
        id: backend
        run: |
          pushd /home/runner/work/aws-serverless-hands-on/aws-serverless-hands-on/deploy/cdk
          cdk deploy BackendStack --require-approval never --outputs-file cdk.outputs.json
          API_URL=$(jq -r '.["BackendStack"]["APIURL"]' cdk.outputs.json)
          echo "API_URL=$API_URL" &amp;gt;&amp;gt; $GITHUB_OUTPUT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The BackendStack is deployed first.&lt;/li&gt;
&lt;li&gt;The API Gateway URL is captured in cdk.outputs.json and stored as API_URL.&lt;/li&gt;
&lt;li&gt;We assign an ID (backend) to this step, making it easier to reference its output later.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Deploy Frontend Stack&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Deploy Frontend
        run: |
          echo ${{ steps.backend.outputs.API_URL }}
          pushd /home/runner/work/aws-serverless-hands-on/aws-serverless-hands-on/deploy/cdk
          sed -i 's|PLACEHOLDER|'${{ steps.backend.outputs.API_URL }}'|g' frontend/script.js
          cdk deploy FrontendStack --require-approval never
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The frontend deployment retrieves the API_URL from the backend deployment step.&lt;/li&gt;
&lt;li&gt;It replaces the PLACEHOLDER in frontend/script.js with the actual API Gateway URL.&lt;/li&gt;
&lt;li&gt;Finally, the FrontendStack is deployed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Checking Deployment Output&lt;/strong&gt;&lt;br&gt;
It might take up to 8-10 minutes. Once completed, the pipeline should be green.&lt;br&gt;
To find the &lt;strong&gt;CloudFront URL&lt;/strong&gt;, click on &lt;strong&gt;Deploy Frontend&lt;/strong&gt;, scroll down a little, and look for the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Outputs:
FrontendStack.DistributionId = d1gmj3c7gkt5q0.cloudfront.net
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It might also be helpful to see how it looks in practice.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy62q8ogyojm080fg4vct.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy62q8ogyojm080fg4vct.gif" alt="alt image" width="480" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Costs and Considerations
&lt;/h1&gt;

&lt;p&gt;We did it! If everything went well, your serverless app should now be up and running. &lt;br&gt;
If you encountered any issues or found some sections unclear, feel free to leave a comment!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Exercise Matters&lt;/strong&gt;&lt;br&gt;
Personally, I believe that hands-on projects like this are an excellent way to gain real-world experience with AWS infrastructure, cloud services, and automation. While going through each stage carefully can be time-consuming, the knowledge and confidence you gain are absolutely worth it. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Costs Breakdown&lt;/strong&gt;&lt;br&gt;
The good news is that this hands-on project is extremely cost-effective.&lt;br&gt;
On my personal AWS account, I did not pay a single dollar, even though my deployed stacks remained active for nearly a month with multiple tests. This is thanks to AWS's Free Tier, which provides generous limits across various services.&lt;br&gt;
Here’s a quick breakdown of the AWS Free Tier benefits relevant to this project:&lt;br&gt;
&lt;strong&gt;Lambda&lt;/strong&gt;: 1 million requests per month are free&lt;br&gt;
&lt;strong&gt;API Gateway&lt;/strong&gt;: 1 million requests per month are also free for the first 12 months; even beyond that, I would say we could spend under $1 if actively used during labs&lt;br&gt;
&lt;strong&gt;S3&lt;/strong&gt;: upon sign-up, new AWS customers receive 5 GB of Amazon S3 storage in the S3 Standard storage class per month for the first 12 months&lt;br&gt;
&lt;strong&gt;DynamoDB&lt;/strong&gt;: 25 GB of storage, along with 25 provisioned Write and 25 provisioned Read Capacity Units (WCU/RCU), enough to handle up to 200M requests per month&lt;br&gt;
&lt;strong&gt;CloudFront&lt;/strong&gt;: 1 TB of data transfer out to the internet and 10,000,000 HTTP or HTTPS requests per month&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc&amp;amp;awsf.Free%20Tier%20Types=*all&amp;amp;awsf.Free%20Tier%20Categories=*all" rel="noopener noreferrer"&gt;Sources - you can find here&lt;/a&gt;&lt;br&gt;
Note: After the free-tier limits expire, costs will vary depending on usage. However, even moderate activity is unlikely to exceed a few dollars per month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
This project also demonstrates how cost efficient serverless architectures can be. With AWS pay-as-you-go pricing and Free Tier, you can experiment and learn without worrying about high costs. &lt;/p&gt;

&lt;p&gt;Thanks a lot for reading!&lt;/p&gt;

&lt;p&gt;If you have any feedback, questions, or issues just let me know.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>devops</category>
      <category>automation</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS Serverless hands on part 1/2</title>
      <dc:creator>Piotr Popiolek</dc:creator>
      <pubDate>Tue, 11 Feb 2025 07:47:08 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-serverless-hands-on-part-12-33d6</link>
      <guid>https://forem.com/aws-builders/aws-serverless-hands-on-part-12-33d6</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;When I started my cloud journey six years ago, serverless architecture was the most difficult concept for me to understand. I transitioned into the DevOps world from an on-prem engineer role (working primarily with PowerVM and VMware), so if someone told me that they were running their app on a serverless model, I could not really get it. &lt;br&gt;
This post aims to make the serverless topic easier to understand by providing both theoretical insights and a hands-on practical approach. The article is split into two parts to separate planning from implementation, making it easier to absorb.&lt;/p&gt;

&lt;h1&gt;
  
  
  Our Action Plan
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Part One:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Main Idea: What we want to achieve&lt;/li&gt;
&lt;li&gt;Architecture of our app&lt;/li&gt;
&lt;li&gt;Assumptions and declarations&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Part Two:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detailed app description&lt;/li&gt;
&lt;li&gt;OIDC implementation&lt;/li&gt;
&lt;li&gt;Backend implementation&lt;/li&gt;
&lt;li&gt;Frontend implementation&lt;/li&gt;
&lt;li&gt;Deployment with GitHub Actions (GHA)&lt;/li&gt;
&lt;li&gt;Considerations&lt;/li&gt;
&lt;li&gt;Costs analysis&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  What Do We Want to Achieve?
&lt;/h1&gt;

&lt;p&gt;In my experience, the best way to learn is by combining theoretical knowledge with practical experimentation. This post will focus on understanding the basics of serverless architecture and then implementing a fully automated &lt;strong&gt;serverless app&lt;/strong&gt; on AWS. &lt;br&gt;
And when we say serverless, we mean a cloud computing model where developers can build and run applications without having to manage the underlying server infrastructure. &lt;br&gt;
Of course the servers are still involved, but they are abstracted away from the user by the cloud provider. This allows developers to focus on writing and deploying code while the cloud provider takes care of server management (like scaling, provisioning, maintenance etc).&lt;br&gt;
We are going to use several AWS services to power our application, and we will rely on automation tools to avoid making manual changes through the AWS Console. I think the plan is ambitious and assumes that readers have a basic understanding of IT concepts and are eager to learn more. &lt;br&gt;
Let's dive in!&lt;/p&gt;

&lt;h1&gt;
  
  
  Architecture and Overview of the Services
&lt;/h1&gt;

&lt;p&gt;Our application will include both a backend and a frontend. Deployment and resource creation will be fully automated. Following DevOps best practices, all changes will be tracked through code, ensuring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Orderly cloud resource management&lt;/li&gt;
&lt;li&gt;Rapid recovery capabilities&lt;/li&gt;
&lt;li&gt;Full transparency of changes&lt;/li&gt;
&lt;li&gt;Minimized human error&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's now go through services we will use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backend Services&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS API Gateway &lt;em&gt;(fully managed service to create/publish/maintain/monitor and secure API at any scale)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;AWS Lambda &lt;em&gt;(serverless compute service that runs the code without provisioning or managing servers, max timeout = 15 mins)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;AWS DynamoDB &lt;em&gt;(fully managed NoSQL database service designed for HA, performance and scalability)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Frontend Services&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CloudFront &lt;em&gt;(content delivery network [known as CDN] service that accelerates the delivery of static and dynamic web content to users globally)&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;AWS S3 bucket &lt;em&gt;(scalable object storage service designed to store and retrieve any amount of data at any time, from anywhere)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;High-Level Architecture Overview&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvf3sf1a5jkke270sdtvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvf3sf1a5jkke270sdtvp.png" alt="Serverless App Architecture" width="800" height="817"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Main Assumptions
&lt;/h1&gt;

&lt;p&gt;All services will be created using Infrastructure as Code (IaC) via the AWS CDK (Cloud Development Kit). AWS CDK is a software development framework that lets you define cloud infrastructure in familiar programming languages (such as TypeScript, Python, or Java). &lt;br&gt;
Unlike declarative templates written in YAML or JSON, CDK uses programming constructs as building blocks. Each stack defined in CDK corresponds to a CloudFormation stack containing all created resources. If you are new to CDK, check out the &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/home.html" rel="noopener noreferrer"&gt;official AWS CDK docs&lt;/a&gt; for more details.&lt;/p&gt;

&lt;p&gt;Our codebase will reside on GitHub, and any changes will trigger deployments via GitHub Actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Note on GitHub Actions (GHA)&lt;/strong&gt;&lt;br&gt;
GitHub Actions is an automation platform integrated into GitHub, allowing developers to automate workflows within repositories. Workflows can handle tasks like building, testing, deploying, and maintaining applications.&lt;/p&gt;

&lt;p&gt;Workflows are triggered by events such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pushes (new code is pushed to a branch)&lt;/li&gt;
&lt;li&gt;Pull Requests (opened/merged or updated)&lt;/li&gt;
&lt;li&gt;Manual triggers (or other GitHub events)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In our case, we’ll create a simple workflow to deploy CDK constructs, ensuring AWS resources remain up to date.&lt;/p&gt;
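&lt;p&gt;Such a workflow might look like the sketch below. The branch name, region, and role ARN are placeholders, not values from this series:&lt;/p&gt;

```yaml
# Sketch of a CDK deployment workflow; ARN, region, and branch are placeholders.
name: deploy
on:
  push:
    branches: [main]
permissions:
  id-token: write   # required for OIDC authentication (covered below)
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-deploy-role  # placeholder
          aws-region: eu-west-1
      - run: npm ci
      - run: npx cdk deploy --all --require-approval never
```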

&lt;p&gt;&lt;strong&gt;Using OIDC for Secure Communication&lt;/strong&gt;&lt;br&gt;
To securely connect AWS with GitHub, we’ll use OpenID Connect (OIDC), a secure authentication protocol. OIDC enables GitHub Actions to authenticate with AWS without requiring long-lived credentials, reducing the risk of credential exposure.&lt;/p&gt;

&lt;p&gt;It works as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions acts as an OIDC identity provider (IdP), allowing AWS to verify workflow identities via a signed token&lt;/li&gt;
&lt;li&gt;On AWS, an IAM role is configured to trust the GitHub OIDC provider, with conditions specifying which repositories and workflows can assume the role&lt;/li&gt;
&lt;li&gt;GitHub Actions uses the OIDC token to assume the trusted IAM role, obtaining temporary credentials to access AWS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can read more about OIDC &lt;a href="https://openid.net/developers/how-connect-works/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  First part summary
&lt;/h1&gt;

&lt;p&gt;In this first part, we’ve covered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Key services that we’ll use in our project&lt;/li&gt;
&lt;li&gt;A high-level overview of our goals and architecture&lt;/li&gt;
&lt;li&gt;An introduction to the tools and technologies involved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope everything is clear so far! Please leave a comment if you have any concerns or need further clarification. When you are ready, move on to the &lt;a href="https://dev.to/aws-builders/aws-serverless-hands-on-part-22-3mcd"&gt;next part&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>cloud</category>
      <category>aws</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
