<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Shireen Bano A</title>
    <description>The latest articles on Forem by Shireen Bano A (@shireen).</description>
    <link>https://forem.com/shireen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F873991%2Fd494993f-5f89-458c-addb-7002fdc939b4.jpeg</url>
      <title>Forem: Shireen Bano A</title>
      <link>https://forem.com/shireen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/shireen"/>
    <language>en</language>
    <item>
      <title>Production-Grade Container Hardening</title>
      <dc:creator>Shireen Bano A</dc:creator>
      <pubDate>Thu, 19 Feb 2026 21:21:11 +0000</pubDate>
      <link>https://forem.com/shireen/beyond-the-dockerfile-a-7-layer-blueprint-for-production-grade-container-hardening-24hk</link>
      <guid>https://forem.com/shireen/beyond-the-dockerfile-a-7-layer-blueprint-for-production-grade-container-hardening-24hk</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1a6sv0rrxpbxy1o5a3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1a6sv0rrxpbxy1o5a3r.png" alt="Docker Security" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Shireenbanu/AI-recipe-finder/blob/main/Dockerfile" rel="noopener noreferrer"&gt;Link to Dockerfile&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In modern DevOps, running containers as &lt;code&gt;root&lt;/code&gt; isn't just sloppy — it's an open invitation. If your application is compromised while running as root, the attacker isn't just inside your app. They own the entire container. Every secret, every mounted volume, every network socket.&lt;/p&gt;

&lt;p&gt;The good news? You can architect containers where a successful exploit lands an attacker in a box with nothing — no shell, no tools, no write access, no privileges. That's what this article is about.&lt;/p&gt;

&lt;p&gt;We're building a &lt;strong&gt;hardened, production-grade container&lt;/strong&gt; designed to run on AWS ECS Fargate, using defense-in-depth at every layer: the image, the process manager, the filesystem, and the task definition itself.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Layer 1: The Multi-Stage Build — Asset Stripping, Not Just Space Saving&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Most developers know multi-stage builds shrink image size. Fewer realize they're also your first line of defense.&lt;br&gt;
The strategy is simple: build dirty, run clean. Your first stage installs compilers, pulls npm packages, runs tests — all the messy work. Your final stage inherits none of it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Stage 1: The dirty build environment
FROM node:20-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci &amp;amp;&amp;amp; npm run build

# Stage 2: The clean runtime — no npm, no git, no source code
FROM nginx:1.25-alpine
COPY --from=builder --chown=appuser:appgroup /app/client/dist /usr/share/nginx/html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice the &lt;code&gt;--chown&lt;/code&gt; flag on the &lt;code&gt;COPY&lt;/code&gt; instruction. Files land with the correct ownership immediately — no root middleman, no &lt;code&gt;chown&lt;/code&gt; dance afterward.&lt;/p&gt;
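&lt;p&gt;A companion guard for the "build dirty, run clean" strategy is a &lt;code&gt;.dockerignore&lt;/code&gt; file, which keeps secrets and repository metadata out of the build context so not even the builder stage sees them. A minimal sketch — entries are illustrative, adjust to your repo:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .dockerignore — keep build-time residue out of the context entirely
.git
node_modules
.env
*.pem
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;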

&lt;h3&gt;
  
  
  &lt;strong&gt;Layer 2: The Ghost Account — Least Privilege as Architecture&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Like most base images, Alpine Linux defaults to running everything as root. We fix that immediately.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;RUN addgroup -S appgroup &amp;amp;&amp;amp; adduser -S appuser -G appgroup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-S&lt;/code&gt; flag creates a system user — no password, no login shell, no home directory with a &lt;code&gt;.bashrc&lt;/code&gt; to backdoor. It's a ghost account: it exists only so the kernel has a non-root identity to assign to your process.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;USER appuser&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;This single line changes everything. From this point forward, every &lt;code&gt;RUN&lt;/code&gt;, &lt;code&gt;CMD&lt;/code&gt;, and &lt;code&gt;ENTRYPOINT&lt;/code&gt; executes as &lt;code&gt;appuser&lt;/code&gt;. The ceiling is enforced by the OS itself.&lt;/p&gt;
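&lt;p&gt;Putting Layers 1 and 2 together, the runtime stage might look like this (a sketch — the Nginx patching from Layer 3 is omitted here):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM nginx:1.25-alpine
RUN addgroup -S appgroup &amp;amp;&amp;amp; adduser -S appuser -G appgroup
COPY --from=builder --chown=appuser:appgroup /app/client/dist /usr/share/nginx/html
USER appuser
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;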

&lt;h3&gt;
  
  
  &lt;strong&gt;Layer 3: Taming Nginx — The Privileged Citizen Problem&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Here's where it gets interesting. Standard Nginx assumes it's running as root. It wants to write its PID file to /var/run/nginx.pid and its logs to /var/log/nginx/. Our appuser is forbidden from touching either of those paths.&lt;br&gt;
Rather than granting extra permissions, we patch Nginx to work within our constraints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Redirect Nginx internals to paths appuser actually owns
RUN sed -i 's|pid /var/run/nginx.pid;|pid /tmp/nginx.pid;|g' /etc/nginx/nginx.conf

# Pre-create the temp paths and hand them to appuser
RUN mkdir -p /tmp/client_body /tmp/proxy_temp /var/cache/nginx \
    &amp;amp;&amp;amp; chown -R appuser:appgroup /tmp /var/cache/nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We're not lowering the security bar to accommodate Nginx — we're forcing Nginx to operate within our security model. The &lt;code&gt;PID file&lt;/code&gt; and all scratch storage move to &lt;code&gt;/tmp&lt;/code&gt;, which we then mount as &lt;code&gt;ephemeral&lt;/code&gt;, hardened &lt;code&gt;tmpfs&lt;/code&gt; volumes in the Fargate task definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;linuxParameters = {
  tmpfs = [
    { containerPath = "/tmp",      size = 128, mountOptions = ["noexec", "nosuid", "nodev"] },
    { containerPath = "/app/logs", size = 64,  mountOptions = ["noexec", "nosuid", "nodev"] },
  ]
  readonlyRootFilesystem = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Those three mount options are doing &lt;strong&gt;serious&lt;/strong&gt; work:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;noexec&lt;/strong&gt; — Nothing in /tmp can be executed. Even if an attacker writes a binary there, it won't run.&lt;br&gt;
&lt;strong&gt;nosuid&lt;/strong&gt; — Blocks privilege escalation via setuid binaries dropped into the volume.&lt;br&gt;
&lt;strong&gt;nodev&lt;/strong&gt; — Device files on the volume are not interpreted, blocking attempts to reach hardware or kernel memory through crafted device nodes.&lt;/p&gt;

&lt;p&gt;And &lt;code&gt;readonlyRootFilesystem = true&lt;/code&gt; is the crown jewel: the entire container filesystem is immutable at runtime. The only writable paths are the explicitly mounted &lt;code&gt;tmpfs&lt;/code&gt; volumes — and those &lt;strong&gt;can't execute&lt;/strong&gt; anything.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Layer 4: Supervisord Without the Crown&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In a traditional setup, a process manager like systemd runs as root. We use Supervisord, and we strip its crown before it starts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[supervisord]
user=appuser
logfile=/tmp/supervisord.log
pidfile=/tmp/supervisord.pid

[program:nginx]
command=nginx -g 'daemon off;'
stdout_logfile=/dev/stdout
stderr_logfile=/dev/stderr
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;user=appuser&lt;/code&gt; means even the manager of processes has no administrative power. It coordinates but cannot escalate.&lt;br&gt;
The &lt;code&gt;stdout_logfile=/dev/stdout&lt;/code&gt; line solves another problem quietly: logs are streamed directly to Docker's logging driver and never written to disk inside the container. No sensitive log data sitting in a writable layer. No persistence for an attacker to mine.&lt;/p&gt;
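&lt;p&gt;For completeness, the image then launches Supervisord in the foreground as PID 1. A sketch — the config path depends on where you copied it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Run in the foreground (-n); the container lives exactly as long as supervisord does
CMD ["supervisord", "-c", "/etc/supervisord.conf", "-n"]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;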
&lt;h3&gt;
  
  
  &lt;strong&gt;Layer 5: Dropping Linux Capabilities — Cutting the Kernel's Leash&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Even a non-root user can hold Linux capabilities — granular kernel permissions like the ability to bind low-numbered ports (&lt;code&gt;NET_BIND_SERVICE&lt;/code&gt;), manipulate network interfaces (&lt;code&gt;NET_ADMIN&lt;/code&gt;), or bypass file permission checks (&lt;code&gt;DAC_OVERRIDE&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Every capability your container holds is an attack surface. The principle is simple: if you don't need it, drop it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;In your Fargate task definition:
linuxParameters = {
  capabilities = {
    drop = ["ALL"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After dropping all capabilities, your container's &lt;code&gt;/proc/self/status&lt;/code&gt; should show:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CapEff: 0000000000000000
CapBnd: 0000000000000000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;CapEff&lt;/code&gt; at zero means the process has no active kernel privileges. &lt;code&gt;CapBnd&lt;/code&gt; at zero means it can never acquire any — capabilities removed from the bounding set cannot be added back. The kernel's leash is cut.&lt;/p&gt;
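&lt;p&gt;You can read both sets straight out of &lt;code&gt;/proc&lt;/code&gt;. This check works in any Linux shell, so it can double as a smoke test inside the running task (e.g. via ECS Exec):&lt;/p&gt;

```shell
# Print the effective and bounding capability sets of the current process.
# All-zero values mean no active kernel privileges and no way to reacquire them.
grep -E '^Cap(Eff|Bnd):' /proc/self/status
```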

&lt;h3&gt;
  
  
  &lt;strong&gt;Layer 6: The Task Definition as a Second Lock&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Your Dockerfile hardens the image. Your Fargate Task Definition hardens the runtime. These are two independent locks on the same door.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;json"containerDefinitions": [
  {
    "privileged": false,
    "user": "appuser",
    "readonlyRootFilesystem": true,
    "linuxParameters": {
      "capabilities": { "drop": ["ALL"] }
    }
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why does this matter if the Dockerfile already sets USER appuser? Because the Task Definition is enforced by the AWS Fargate agent at runtime, independently of what the image contains. Even if someone pushes a misconfigured image that forgot the &lt;code&gt;USER&lt;/code&gt; directive, Fargate will still enforce appuser. Defense-in-depth means each layer protects against the failure of the layer before it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;privileged: false&lt;/code&gt; is the explicit rejection of Docker's &lt;code&gt;--privileged&lt;/code&gt; flag, which would otherwise give the container near-full host access. On Fargate, the "host" is AWS's infrastructure — you definitely don't want that.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Layer 7: Trust But Verify — Container Image Signing with Notation&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Hardening your runtime is only half the story. How do you know the image you're deploying is the one you built? Supply chain attacks — where a malicious image is substituted somewhere between your registry and your cluster — are a growing threat.&lt;/p&gt;

&lt;p&gt;Notation (a CNCF project) lets you cryptographically sign container images and verify those signatures before deployment.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash# Sign after pushing to ECR
notation sign &amp;lt;your-ecr-registry&amp;gt;/your-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify before deploying:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;notation verify &amp;lt;your-ecr-registry&amp;gt;/your-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Integrate this into your CI/CD pipeline: sign on push, verify on deploy. If the signature doesn't match, the deployment doesn't happen. You get cryptographic proof that what's running in Fargate is exactly what your pipeline built — no substitutions, no tampering.&lt;/p&gt;
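&lt;p&gt;Wired into a pipeline, the two commands bracket the deploy step. A sketch — the registry placeholder and job layout are illustrative, not the exact pipeline:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CI job: build, push, then sign exactly what was pushed
docker build -t &amp;lt;your-ecr-registry&amp;gt;/your-app:latest .
docker push &amp;lt;your-ecr-registry&amp;gt;/your-app:latest
notation sign &amp;lt;your-ecr-registry&amp;gt;/your-app:latest

# CD job: refuse to deploy anything that fails verification
notation verify &amp;lt;your-ecr-registry&amp;gt;/your-app:latest || exit 1
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;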

&lt;h2&gt;
  
  
  🛡️ Security Hardening Matrix
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Focus&lt;/th&gt;
&lt;th&gt;Risk&lt;/th&gt;
&lt;th&gt;Threat&lt;/th&gt;
&lt;th&gt;Mitigation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L1&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Multi-Stage Build&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Build-time residue (compilers, &lt;code&gt;.git&lt;/code&gt;, secrets) left in image.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Lateral Movement:&lt;/strong&gt; Attackers use leftover tools to compile malware or pivot deeper into the network.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Build Dirty, Run Clean:&lt;/strong&gt; Separate build and runtime stages; only production assets move to the final image.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L2&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Non-Root Identity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Containers running as &lt;code&gt;root&lt;/code&gt; (UID 0) by default.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Host Escape:&lt;/strong&gt; Exploits (e.g., CVE-2024-21626) allow a root process to break out and control the host.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Ghost Accounts:&lt;/strong&gt; Create a system &lt;code&gt;appuser&lt;/code&gt; with no shell or home directory to enforce a permission "ceiling."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Hardened Nginx&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;App requires root access to write to system paths like &lt;code&gt;/var/run&lt;/code&gt;.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Runtime Tampering:&lt;/strong&gt; Attackers overwrite configs or web files to serve malware or redirect traffic.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;User-Owned Paths:&lt;/strong&gt; Patch Nginx to use &lt;code&gt;/tmp&lt;/code&gt; for PIDs/cache and &lt;code&gt;chown&lt;/code&gt; those paths to the &lt;code&gt;appuser&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L4&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Unprivileged Manager&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Process managers (Supervisord) traditionally running with root "crowns."&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Privilege Escalation:&lt;/strong&gt; A hijacked manager grants the attacker "God Mode" over all managed sub-processes.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Powerless Manager:&lt;/strong&gt; Run &lt;code&gt;supervisord&lt;/code&gt; as &lt;code&gt;appuser&lt;/code&gt; and stream logs to &lt;code&gt;stdout&lt;/code&gt; to prevent local data mining.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Immutable Filesystem&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Writable runtime layers allow attackers to modify the OS environment.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Malware Persistence:&lt;/strong&gt; Attackers download web shells or scripts that survive as long as the container runs.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;The "DVD" Model:&lt;/strong&gt; Enable &lt;code&gt;readonlyRootFilesystem&lt;/code&gt; and use &lt;code&gt;tmpfs&lt;/code&gt; mounts with &lt;code&gt;noexec&lt;/code&gt; to kill execution.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L6&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Kernel Capabilities&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Granular kernel permissions (Capabilities) active by default.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Privilege Jumping:&lt;/strong&gt; Attackers use &lt;code&gt;NET_RAW&lt;/code&gt; or &lt;code&gt;DAC_OVERRIDE&lt;/code&gt; to sniff traffic or bypass file security.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;The Blackout:&lt;/strong&gt; Use &lt;code&gt;drop = ["ALL"]&lt;/code&gt; to zero out &lt;code&gt;CapEff&lt;/code&gt; and &lt;code&gt;CapBnd&lt;/code&gt;, stripping all kernel-level privileges.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;L7&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Image Signing&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Unverified images pulled from an untrusted or compromised registry.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Supply Chain Attack:&lt;/strong&gt; An attacker swaps a legitimate image with a "poisoned" version containing a backdoor.&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Notation (CNCF):&lt;/strong&gt; Cryptographically sign images in CI/CD and verify signatures before every deployment.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>container</category>
      <category>security</category>
      <category>docker</category>
      <category>aws</category>
    </item>
    <item>
      <title>Understanding AWS Autoscaling with Grafana</title>
      <dc:creator>Shireen Bano A</dc:creator>
      <pubDate>Tue, 10 Feb 2026 18:25:00 +0000</pubDate>
      <link>https://forem.com/shireen/understanding-aws-autoscaling-with-grafana-gl8</link>
      <guid>https://forem.com/shireen/understanding-aws-autoscaling-with-grafana-gl8</guid>
      <description>&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/1F6iBje5WDQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture Overview
&lt;/h3&gt;

&lt;p&gt;My application is deployed on AWS as a containerized system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;React frontend served by Nginx&lt;/li&gt;
&lt;li&gt;Node.js backend deployed as a Docker container&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The backend relies heavily on:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;RDS (reads/writes)&lt;/li&gt;
&lt;li&gt;S3 (uploads/downloads)&lt;/li&gt;
&lt;li&gt;Gemini API (LLM inference)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This architecture is intentionally realistic — it represents the type of stack many modern apps use today.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Goal: High-Stress Scaling Through Load Testing
&lt;/h3&gt;

&lt;p&gt;I wanted to validate autoscaling behavior under pressure. Specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can ECS scale out when traffic spikes?&lt;/li&gt;
&lt;li&gt;How fast does it scale?&lt;/li&gt;
&lt;li&gt;Does latency stay stable?&lt;/li&gt;
&lt;li&gt;Does error rate increase under stress?&lt;/li&gt;
&lt;li&gt;Which dependency becomes the bottleneck first?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Load Testing Strategy (k6)&lt;/strong&gt;&lt;br&gt;
To make the test realistic, I didn’t just hit a single endpoint repeatedly. Instead, I created a k6 test with two parallel scenarios:&lt;/p&gt;

&lt;p&gt;1) Backend Load Scenario (Triggers Scaling)&lt;br&gt;
This scenario generates the high traffic volume needed to push the backend and observe ECS behavior.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Warm-up at 20 users&lt;/li&gt;
&lt;li&gt;Spike instantly to 500 users&lt;/li&gt;
&lt;li&gt;Hold for 9 minutes&lt;/li&gt;
&lt;li&gt;Drop back down and observe scale-in behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;2) UI Monitoring Scenario (Real User Flow)&lt;br&gt;
This scenario runs a small number of browser-based users to monitor actual UI behavior while the system is under stress.&lt;/p&gt;

&lt;p&gt;This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;login&lt;/li&gt;
&lt;li&gt;navigation to medical history&lt;/li&gt;
&lt;li&gt;viewing a PDF report&lt;/li&gt;
&lt;li&gt;adding a condition&lt;/li&gt;
&lt;li&gt;requesting recipes&lt;/li&gt;
&lt;li&gt;uploading a report&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This helped validate whether the UI stayed usable during the stress event.&lt;/p&gt;
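&lt;p&gt;The two scenarios above can be declared side by side in the k6 options block. A sketch — executor choices, durations, and exported function names here are assumptions, not the exact script used:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export const options = {
  scenarios: {
    backend_load: {                 // high-volume traffic that triggers scaling
      executor: 'ramping-vus',
      exec: 'backendLoad',
      stages: [
        { duration: '1m',  target: 20 },   // warm-up
        { duration: '10s', target: 500 },  // instant spike
        { duration: '9m',  target: 500 },  // hold
        { duration: '2m',  target: 0 },    // drop back down, observe scale-in
      ],
    },
    ui_monitoring: {                // small pool of real-user flows
      executor: 'constant-vus',
      exec: 'uiFlow',
      vus: 5,
      duration: '12m',
    },
  },
};
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;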

&lt;p&gt;&lt;strong&gt;The First Surprise: 500 VUs Did Not Spike CPU&lt;/strong&gt;&lt;br&gt;
At 500 virtual users, I expected ECS CPU utilization to become the main bottleneck. Instead, the CPU stayed surprisingly low — barely crossing 18%, even while the load test pushed close to 25,000 requests through the system. At first, this felt wrong, and I genuinely questioned whether my load test was working. But the test was fine — my assumption was not. After digging deeper, I realized the application workload simply wasn’t CPU-intensive. Most of the request time was spent waiting on external dependencies like RDS reads/writes, S3 uploads/downloads, and Gemini API responses. This made the system primarily I/O-bound, which explains why CPU-based autoscaling did not react strongly, even under heavy traffic.&lt;/p&gt;

&lt;p&gt;This was one of the biggest lessons from the experiment:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;High traffic does not always mean high CPU.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43wurepmqadpqphwpycz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43wurepmqadpqphwpycz.png" alt="Request Count in Grafana"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the graph above, the test pushed well over 20,000 requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fli8eiob5qbtd7r1wczew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fli8eiob5qbtd7r1wczew.png" alt="Heartbeat"&gt;&lt;/a&gt;&lt;br&gt;
At the same time, here is my CPU and memory utilization graph, hitting no more than 20%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3ox57dnp32n6qiit0vr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3ox57dnp32n6qiit0vr.png" alt="Auto-Scaling policy"&gt;&lt;/a&gt;&lt;br&gt;
According to my autoscaling policy, CPU utilization must cross 70% before CloudWatch triggers the alarm. Since my application isn’t naturally CPU-intensive, I wasn’t sure how else to push CPU high enough to test scaling properly.&lt;/p&gt;

&lt;p&gt;So I manually generated CPU stress inside the running ECS container by launching two busy-wait loops with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecs execute-command --cluster recipe-finder-prod-cluster \
    --task a55518997ca84f24bc2fd614cbc18f20 \
    --container recipe-finder-api \
    --interactive \
    --command "/bin/sh -c 'while true; do :; done &amp;amp; while true; do :; done'"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Within a few minutes, this forced the container CPU to spike aggressively, reaching a consistent ~99% utilization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvl3og637fs5eslbr8wzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvl3og637fs5eslbr8wzt.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the real question becomes: how long does scale-out actually take?&lt;/p&gt;

&lt;p&gt;Based on the autoscaling configuration, CloudWatch evaluates CPU in 60-second datapoints and needs a sustained breach before the alarm enters the ALARM state. Once the alarm is triggered, ECS detects it and begins launching new tasks.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Event&lt;/th&gt;
&lt;th&gt;Timestamp&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CPU crossed 70%&lt;/td&gt;
&lt;td&gt;12:09:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Alarm triggered&lt;/td&gt;
&lt;td&gt;12:13:25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Desired tasks increased&lt;/td&gt;
&lt;td&gt;12:14:00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;New task running&lt;/td&gt;
&lt;td&gt;12:15:00&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;CPU crossed the 70% threshold at 12:09, but the CloudWatch alarm didn’t trigger until 12:13. ECS then increased the desired task count at 12:14, and the new task became fully running by 12:15 — meaning the full scale-out process took roughly 6 minutes from threshold breach to a healthy new task.&lt;/p&gt;

&lt;p&gt;So autoscaling doesn’t react the instant CPU crosses the 70% threshold. CloudWatch evaluates CPU in 1-minute datapoints, and my alarm required &lt;strong&gt;3 breaching datapoints&lt;/strong&gt; within 3 minutes. Only after the alarm entered the ALARM state did ECS trigger scale-out and launch new Fargate tasks.&lt;/p&gt;
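&lt;p&gt;In Terraform, that evaluation rule looks roughly like this (a sketch — resource names and statistics are assumptions):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "ecs-cpu-high"
  namespace           = "AWS/ECS"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 70
  period              = 60   # 1-minute datapoints
  evaluation_periods  = 3    # 3 breaching datapoints required before ALARM
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;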

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9ehr1l484hhloddql20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd9ehr1l484hhloddql20.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's look at the scale-in process:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Event&lt;/th&gt;
&lt;th&gt;Timestamp&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CPU fell below scale-in threshold (&amp;lt;63%)&lt;/td&gt;
&lt;td&gt;12:34&lt;/td&gt;
&lt;td&gt;Based on Grafana&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Low alarm triggered (OK → ALARM)&lt;/td&gt;
&lt;td&gt;12:49&lt;/td&gt;
&lt;td&gt;15-min evaluation period complete&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ECS desired tasks decreased&lt;/td&gt;
&lt;td&gt;12:50&lt;/td&gt;
&lt;td&gt;ECS starts stopping tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Extra task stopped (scale-in complete)&lt;/td&gt;
&lt;td&gt;12:52&lt;/td&gt;
&lt;td&gt;Task fully terminated&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Notice how the alarm now triggers 15 minutes after the CPU fell below the threshold, matching the Low alarm rule of 15 datapoints in 15 minutes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Closing Note
&lt;/h2&gt;

&lt;p&gt;Autoscaling ensures your application can handle spikes, but it comes with temporary performance trade-offs:&lt;/p&gt;

&lt;p&gt;During scale-out: When CPU spikes and new Fargate tasks are being launched, your application may briefly return 5xx errors or slower responses. In our experiment, we did see &lt;strong&gt;5% errors&lt;/strong&gt; for a few minutes during the initial warm-up period before the new tasks fully came online. This “warm-up latency” is an inherent part of reactive autoscaling.&lt;/p&gt;

&lt;p&gt;During scale-in: ECS gradually terminates idle tasks once the Low alarm confirms sustained low CPU. This process is intentionally slow to avoid task flapping, ensuring that users aren’t suddenly impacted if traffic spikes again.&lt;/p&gt;

&lt;p&gt;Observing CPU, alarm state, and task events together helps understand exactly how long users may experience degraded performance during scaling, and informs decisions about pre-warming, thresholds, and evaluation periods to minimize those user-facing impacts.&lt;/p&gt;

&lt;p&gt;Github link: 

&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/Shireenbanu" rel="noopener noreferrer"&gt;
        Shireenbanu
      &lt;/a&gt; / &lt;a href="https://github.com/Shireenbanu/AI-recipe-finder" rel="noopener noreferrer"&gt;
        AI-recipe-finder
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://private-user-images.githubusercontent.com/56209782/546629044-95ae6852-7c09-4d79-aa26-148eb5916770.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NzQ2NDI4MzEsIm5iZiI6MTc3NDY0MjUzMSwicGF0aCI6Ii81NjIwOTc4Mi81NDY2MjkwNDQtOTVhZTY4NTItN2MwOS00ZDc5LWFhMjYtMTQ4ZWI1OTE2NzcwLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNjAzMjclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjYwMzI3VDIwMTUzMVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWU0YzMxOTFhNTc1NDgyZGY4ZDFiZmMzMmY0MDg2MDEyODdkMDI4ZTVlNmEwNTI5NjUwNTI1N2JlMThmYTFjOGUmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.oVPH9Jrqr1PyuEItXBgUbKL205-yAS74k3KAp7Q0eU8"&gt;&lt;img width="807" height="540" alt="Screenshot 2026-02-06 at 6 17 33 PM" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fprivate-user-images.githubusercontent.com%2F56209782%2F546629044-95ae6852-7c09-4d79-aa26-148eb5916770.png%3Fjwt%3DeyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NzQ2NDI4MzEsIm5iZiI6MTc3NDY0MjUzMSwicGF0aCI6Ii81NjIwOTc4Mi81NDY2MjkwNDQtOTVhZTY4NTItN2MwOS00ZDc5LWFhMjYtMTQ4ZWI1OTE2NzcwLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNjAzMjclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjYwMzI3VDIwMTUzMVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWU0YzMxOTFhNTc1NDgyZGY4ZDFiZmMzMmY0MDg2MDEyODdkMDI4ZTVlNmEwNTI5NjUwNTI1N2JlMThmYTFjOGUmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.oVPH9Jrqr1PyuEItXBgUbKL205-yAS74k3KAp7Q0eU8"&gt;&lt;/a&gt;&lt;br&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Application Overview&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This application helps users manage their health by securely storing medical history, lab reports, and personal profile information. Based on a patient’s conditions, it generates personalized healthy recipes using a recommendation engine integrated with the Gemini API. The goal is to provide actionable nutrition guidance while maintaining HIPAA compliance, data privacy, and secure storage. It also caches generated recipes for quick retrieval and seamless user experience.&lt;/p&gt;
&lt;p&gt;How to install:&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;    terraform apply
    terrform destroy  #to destroy the infra
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Recipe Finder Application:&lt;/h2&gt;
&lt;/div&gt;

&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;&lt;strong&gt;1) Profile (CRUD + Database Reads/Writes)&lt;/strong&gt;&lt;/h3&gt;
&lt;/div&gt;

&lt;p&gt;Users can view and update profile information.&lt;/p&gt;

&lt;p&gt;This workflow represents the most typical web-app traffic pattern: &lt;strong&gt;read and write operations to the database&lt;/strong&gt;
&lt;a rel="noopener noreferrer" href="https://private-user-images.githubusercontent.com/56209782/547302770-68aaa74a-f6b0-4138-8669-4266c09342d3.png?jwt=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NzQ2NDI4MzEsIm5iZiI6MTc3NDY0MjUzMSwicGF0aCI6Ii81NjIwOTc4Mi81NDczMDI3NzAtNjhhYWE3NGEtZjZiMC00MTM4LTg2NjktNDI2NmMwOTM0MmQzLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNjAzMjclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjYwMzI3VDIwMTUzMVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWE1NzZiNzJlZDA3OWU5ZjYyNDIwM2RkMDNhODBjZGEwZGU0NzhhNGFkZWVmNWY2YzJiZTA1YWIwZjBiZTY4OTUmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.fPC2lbEl5a5bVmnaCbCQoXQkZgsUsEzKk5YaDYiJkcc"&gt;&lt;img width="2000" height="1154" alt="image" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fprivate-user-images.githubusercontent.com%2F56209782%2F547302770-68aaa74a-f6b0-4138-8669-4266c09342d3.png%3Fjwt%3DeyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NzQ2NDI4MzEsIm5iZiI6MTc3NDY0MjUzMSwicGF0aCI6Ii81NjIwOTc4Mi81NDczMDI3NzAtNjhhYWE3NGEtZjZiMC00MTM4LTg2NjktNDI2NmMwOTM0MmQzLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNjAzMjclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjYwMzI3VDIwMTUzMVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWE1NzZiNzJlZDA3OWU5ZjYyNDIwM2RkMDNhODBjZGEwZGU0NzhhNGFkZWVmNWY2YzJiZTA1YWIwZjBiZTY4OTUmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.fPC2lbEl5a5bVmnaCbCQoXQkZgsUsEzKk5YaDYiJkcc"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h3 class="heading-element"&gt;&lt;strong&gt;2) Medical History (Uploads + Processing + AI Pipeline)&lt;/strong&gt;&lt;/h3&gt;

&lt;/div&gt;
&lt;p&gt;Users can upload lab reports and add medical conditions.&lt;/p&gt;
&lt;p&gt;Once submitted, the backend processes the medical data and sends it to a &lt;strong&gt;recommendation engine&lt;/strong&gt;, which then forwards structured…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/Shireenbanu/AI-recipe-finder" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;




</description>
      <category>aws</category>
      <category>devops</category>
      <category>monitoring</category>
      <category>performance</category>
    </item>
    <item>
      <title>Learn What it Takes to Be a Solid Rails Developer!</title>
      <dc:creator>Shireen Bano A</dc:creator>
      <pubDate>Tue, 24 Jan 2023 14:41:50 +0000</pubDate>
      <link>https://forem.com/shireen/learn-what-it-takes-to-be-a-solid-rails-developer-42of</link>
      <guid>https://forem.com/shireen/learn-what-it-takes-to-be-a-solid-rails-developer-42of</guid>
      <description>&lt;p&gt;Unlock the full potential of Ruby on Rails and build web applications that will leave your peers in awe. But before you can do that, you need to master these essential concepts that are the building blocks of any successful Rails project. Let's dive in and explore them one by one&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Familiarize yourself with Active Record associations and their best practices&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8nGxE4Ji--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z117oasdz5j8czu5tap5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8nGxE4Ji--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z117oasdz5j8czu5tap5.png" alt="Image description" width="369" height="355"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://www.bigbinary.com/books/learn-rubyonrails-book/defining-associations-and-best-practices" rel="noopener noreferrer"&gt;
      bigbinary.com
    &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;I regret not finding this article when I was struggling to learn Active Record associations🥲 (it's so good). It not only explains the various types of associations but also gives a clear overview of how associations interact with the database through foreign key and primary key constructs.&lt;/p&gt;
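&lt;p&gt;Rails generates the association methods for you, but under the hood they boil down to lookups across a primary key/foreign key pair. Here's a plain-Ruby sketch of that mechanic (the &lt;code&gt;Author&lt;/code&gt;/&lt;code&gt;Book&lt;/code&gt; classes are made up for illustration; real Active Record does this with SQL):&lt;/p&gt;

```ruby
# Plain-Ruby sketch of what has_many / belongs_to resolve to:
# the child row stores the parent's primary key as a foreign key,
# and the association methods are just lookups on that key.
Author = Struct.new(:id, :name)
Book   = Struct.new(:id, :title, :author_id) do
  # belongs_to :author  ~  find the row whose primary key matches our FK
  def author(authors)
    authors.find { |a| a.id == author_id }
  end
end

authors = [Author.new(1, "Matz"), Author.new(2, "DHH")]
books   = [
  Book.new(1, "Ruby Basics", 1),
  Book.new(2, "Rails Guide", 2),
  Book.new(3, "More Ruby",   1),
]

# has_many :books  ~  all rows whose FK matches the parent's primary key
def books_for(author, books)
  books.select { |b| b.author_id == author.id }
end

puts books_for(authors[0], books).map { |b| b.title }.inspect
# ["Ruby Basics", "More Ruby"]
```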

&lt;p&gt;&lt;strong&gt;2. Design patterns are the MACHINE GUNS🚀 of your codebase&lt;/strong&gt;&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://www.bacancytechnology.com/blog/design-patterns-in-ruby-on-rails" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--TFF6S1zY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.bacancytechnology.com/blog/wp-content/uploads/2019/12/04-12-2019.jpg" height="480" class="m-0" width="880"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://www.bacancytechnology.com/blog/design-patterns-in-ruby-on-rails" rel="noopener noreferrer" class="c-link"&gt;
          The Best Design Patterns in Ruby On Rails
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Learn how to implement 10 basic design patterns in Ruby on Rails and know how it helps refactor MVC components in Rails.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://res.cloudinary.com/practicaldev/image/fetch/s--_RGXGP_7--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.bacancytechnology.com/blog/wp-content/uploads/2018/04/favicon.png" width="16" height="16"&gt;
        bacancytechnology.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Clean code should never be underestimated: it can save significant time for both you and other developers when debugging. Design patterns are the best practices followed by industry leaders. They not only make your code more readable (e.g., query objects) but also reduce code complexity (e.g., form objects) by separating concerns (e.g., service objects), which makes the code easier to maintain and review.&lt;/p&gt;
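&lt;p&gt;To make the service-object idea concrete, here's a minimal plain-Ruby sketch (the &lt;code&gt;ApplyDiscount&lt;/code&gt; class and its discount rule are hypothetical): one class, one public &lt;code&gt;call&lt;/code&gt; method, one business rule, so controllers stay thin and the rule lives, and is tested, in a single place.&lt;/p&gt;

```ruby
# Minimal service object: one class, one public method, one responsibility.
class ApplyDiscount
  def initialize(total, loyal_customer)
    @total = total
    @loyal = loyal_customer
  end

  # Returns the payable amount; loyal customers get 10% off orders over 100.
  def call
    return @total unless @loyal
    @total > 100 ? (@total * 0.9).round(2) : @total
  end
end

puts ApplyDiscount.new(150.0, true).call   # 135.0
puts ApplyDiscount.new(150.0, false).call  # 150.0
```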

&lt;p&gt;&lt;strong&gt;3. Only Unit testing can save you from Production crashes🔥&lt;/strong&gt;&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://thoughtbot.com/blog/back-to-basics-writing-unit-tests-first" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--A33s6lfq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://images.thoughtbot.com/blog-images/social-share-default.png" height="300" class="m-0" width="300"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://thoughtbot.com/blog/back-to-basics-writing-unit-tests-first" rel="noopener noreferrer" class="c-link"&gt;
          Back to Basics: Writing Unit Tests First
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Step-by-step instructions for learning Test-Driven Development (TDD) in Ruby.
There&amp;rsquo;s nothing to fear!
It&amp;rsquo;s fun.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://res.cloudinary.com/practicaldev/image/fetch/s--ZNlS_Pjj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thoughtbot.com/blog/assets/favicon-c1fa98a84eab9d930e8b09cf6a4dbb1156d1436c25c225e6e14fcc5cc84d1b34.ico" width="48" height="48"&gt;
        thoughtbot.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Testing your code with every single input under the sun after every tiny tweak is like a recurring nightmare, amirite? 😴 That's where unit testing comes to the rescue. It's a smarter way of writing code: you write the tests first, before you even write the code. Then, whenever you make a change, you run the tests and boom👊🏻, any failures show up right on your screen. Check out the above article for more details.&lt;/p&gt;
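&lt;p&gt;Here's what that test-first loop can look like with Minitest, which ships with Ruby (the &lt;code&gt;Cart&lt;/code&gt; class is a made-up example). In real TDD you'd write the &lt;code&gt;describe&lt;/code&gt; block first, watch it fail, then add the smallest implementation that makes it green:&lt;/p&gt;

```ruby
require "minitest/autorun"

# Test-first: the spec below would be written before Cart existed,
# watched fail, and then made green with the smallest implementation.
class Cart
  def initialize
    @items = []
  end

  def add(price)
    @items.push(price)
    self
  end

  def total
    @items.sum
  end
end

describe Cart do
  it "starts empty" do
    _(Cart.new.total).must_equal 0
  end

  it "sums the items added to it" do
    _(Cart.new.add(5).add(7).total).must_equal 12
  end
end
```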


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://www.guru99.com/code-coverage.html" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://res.cloudinary.com/practicaldev/image/fetch/s--z-yhmZ_L--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.guru99.com/images/jsp/030116_0814_LearnStatem1.png" height="67" class="m-0" width="720"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://www.guru99.com/code-coverage.html" rel="noopener noreferrer" class="c-link"&gt;
          Code Coverage Tutorial: Branch, Statement, Decision, FSM
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Code coverage is a measure which describes the degree of which the source code of the program has been tested. Following are major code coverage methods Statement Coverage, Condition Coverage, Branch Coverage, Toggle Coverage, FSM Coverage
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://res.cloudinary.com/practicaldev/image/fetch/s--SlutYfU1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.guru99.com/images/favicon-new-logo.png" width="64" height="62"&gt;
        guru99.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;Just writing tests for the parts of the code you care about doesn't make it production-crash proof. Writing quality tests that provide maximum code coverage is what makes your code crash proof (99.9% of the time😅). The article above breaks down all the ways to make sure you're testing every nook and cranny of your code before it goes live.&lt;/p&gt;
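&lt;p&gt;A tiny example of why statement coverage alone isn't enough (the &lt;code&gt;shipping_fee&lt;/code&gt; method is hypothetical): a single input can execute every line while still leaving a branch untested.&lt;/p&gt;

```ruby
# One test can hit every line yet miss a branch. Calling
# shipping_fee(2, true) executes all three lines (full statement
# coverage), but the express-false path is never exercised.
def shipping_fee(weight, express)
  fee = weight * 2
  fee += 10 if express
  fee
end

# Statement coverage satisfied by a single input...
puts shipping_fee(2, true)    # 14

# ...but branch coverage demands the other path too:
puts shipping_fee(2, false)   # 4
```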

&lt;p&gt;&lt;strong&gt;4. Code optimization is our Ultimate Goal💪🏻&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are a lot of things that can slow down code execution, like consuming too many resources at once, bad design choices, and unoptimized code, but the one thing we fully control is our code. So let's make sure it runs as smooth as butter.&lt;br&gt;
The following are the most common causes of poor application performance.&lt;/p&gt;
&lt;h5&gt;
  Slow view rendering:
&lt;/h5&gt;

&lt;p&gt;Run any view of your Rails application and check the logs in the terminal.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HPxAoAf2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0iwu3l2etibdohea66bj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HPxAoAf2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0iwu3l2etibdohea66bj.png" alt="Image description" width="880" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will be able to find a line that will let you know how long it took for your view to load, how long it took for ActiveRecord to fetch data from the database, and how much time was spent creating new objects. Slow view rendering can be caused by factors such as an excessive amount of logic in the view, a large number of partials, and unoptimized images and other assets.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://teamhq.app/blog/tech/17-how-rendering-partials-can-slow-your-rails-app-to-a-crawl" rel="noopener noreferrer"&gt;
      teamhq.app
    &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;If your view is taking forever to load and you can't figure out why, it might be a good idea to give the above article a quick read.&lt;/p&gt;

&lt;h5&gt;
  N+1 queries:
&lt;/h5&gt;

&lt;p&gt;A common performance issue in Rails is the N+1 query problem, where a separate database query is made for each item in a collection. This causes a flood of unnecessary database queries and slows down view rendering.&lt;br&gt;
To understand what N+1 queries are in Rails, refer to the article below.&lt;br&gt;
    &lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/junko911" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rUpIaX1A--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--pftMnlEp--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/504532/79f9cbd8-1413-4f49-9e2c-3b7fa659a6fc.jpeg" alt="junko911"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/junko911/rails-n-1-queries-and-eager-loading-10eh" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Rails N+1 queries and eager loading&lt;/h2&gt;
      &lt;h3&gt;Junko T. ・ Dec 26 '20 ・ 3 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#rails&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#sql&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#beginners&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#performance&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;
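&lt;p&gt;To see the shape of the problem without a database, here's a toy simulation that just counts queries (the SQL strings are illustrative; in real Rails the fix is eager loading, e.g. &lt;code&gt;Post.includes(:comments)&lt;/code&gt;):&lt;/p&gt;

```ruby
# Toy simulation of the N+1 pattern: each lazy association access
# costs one "query", while eager loading fetches everything in two.
QUERIES = []

def query(sql)
  QUERIES.push(sql)
end

post_ids = [1, 2, 3]

# N+1: one query for the posts, then one per post for its comments.
query("SELECT * FROM posts")
post_ids.each { |id| query("SELECT * FROM comments WHERE post_id = #{id}") }
puts QUERIES.size   # 4  (1 + N)

# Eager loading (Post.includes(:comments) in Rails): two queries total.
QUERIES.clear
query("SELECT * FROM posts")
query("SELECT * FROM comments WHERE post_id IN (1, 2, 3)")
puts QUERIES.size   # 2
```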


&lt;h5&gt;
  Large number of partials:
&lt;/h5&gt;

&lt;p&gt;When you've got too many little chunks of views (partials) hanging around, they can slow down how fast your page loads. It's best to keep that number down, but if your existing application has too many partials that can't be optimized away, it's time to move to ViewComponents. View components group common view logic together, which can simplify your code and improve your application's performance.&lt;/p&gt;

&lt;p&gt;If you're interested in experimenting with it, the article below can serve as a good starting point.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://www.honeybadger.io/blog/ruby-view-components/" rel="noopener noreferrer"&gt;
      honeybadger.io
    &lt;/a&gt;
&lt;/div&gt;


&lt;h5&gt;
  Lack of caching:
&lt;/h5&gt;

&lt;p&gt;Rails ships with caching support built in, exposed through methods such as the &lt;code&gt;cache&lt;/code&gt; view helper and low-level &lt;code&gt;Rails.cache&lt;/code&gt; calls with options like &lt;code&gt;expires_in&lt;/code&gt;, though caching is disabled by default in development. For a production-grade store you typically reach for third-party gems, such as &lt;code&gt;dalli&lt;/code&gt; for Memcached or &lt;code&gt;redis-rails&lt;/code&gt; for Redis.&lt;/p&gt;

&lt;p&gt;There are several gems available that can help identify and solve caching issues in Rails code. These gems can analyze your application and provide recommendations for optimizing caching strategies, such as identifying areas where caching can be added or improved. Some examples of such gems include &lt;code&gt;rails_best_practices&lt;/code&gt;, &lt;code&gt;bullet&lt;/code&gt;, and &lt;code&gt;rack-mini-profiler&lt;/code&gt;.&lt;/p&gt;
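&lt;p&gt;The pattern behind &lt;code&gt;Rails.cache.fetch&lt;/code&gt; is simple enough to sketch in plain Ruby. The &lt;code&gt;TinyCache&lt;/code&gt; class below is a toy in-memory stand-in, not the real implementation, but it shows the fetch-or-compute-with-expiry idea:&lt;/p&gt;

```ruby
# Plain-Ruby sketch of the fetch-with-expiry pattern behind
# Rails.cache.fetch(key, expires_in: ...): return a fresh entry,
# or run the block, store the result, and return it.
class TinyCache
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
  end

  def fetch(key, expires_in)
    entry = @store[key]
    if entry.nil? or Time.now > entry.expires_at
      entry = Entry.new(yield, Time.now + expires_in)
      @store[key] = entry
    end
    entry.value
  end
end

cache = TinyCache.new
calls = 0
2.times { cache.fetch("recipes", 60) { calls += 1; "cached recipes" } }
puts calls   # 1 -- the second fetch was served from the cache
```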

&lt;p&gt;If you're feeling like a mad scientist ready to conduct some experiments with these gems, the below post can be your lab coat and goggles😜.&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="https://medium.com/swlh/fix-it-until-you-make-it-a-simple-ruby-on-rails-performance-guide-4fa95b79df8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8GMez9E8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://miro.medium.com/fit/c/96/96/1%2AMLONGE0U3Aye19EQAN6bMg.jpeg" alt="Jorge Najera"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://medium.com/swlh/fix-it-until-you-make-it-a-simple-ruby-on-rails-performance-guide-4fa95b79df8" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Fix it until you make it, a simple Ruby on Rails Performance Guide | by Jorge Najera | The Startup | Medium&lt;/h2&gt;
      &lt;h3&gt;Jorge Najera ・ &lt;time&gt;Jun 5, 2020&lt;/time&gt; ・ 
      &lt;div class="ltag__link__servicename"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hnDHPsJs--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev.to/assets/medium-f709f79cf29704f9f4c2a83f950b2964e95007a3e311b77f686915c71574fef2.svg" alt="Medium Logo"&gt;
        Medium
      &lt;/div&gt;
    &lt;/h3&gt;
&lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


&lt;p&gt;All in all, clean code practice for a Ruby on Rails developer is a combination of applying OOP principles, adhering to Rails conventions, automated testing, and keeping the codebase organized and well-commented.&lt;/p&gt;

</description>
      <category>rails</category>
      <category>beginners</category>
      <category>cleancode</category>
      <category>devjournal</category>
    </item>
  </channel>
</rss>
