<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Divesh Kumar</title>
    <description>The latest articles on Forem by Divesh Kumar (@diveshkumar).</description>
    <link>https://forem.com/diveshkumar</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2258181%2F3b17e68a-b91c-4805-a2f5-3c301de690eb.png</url>
      <title>Forem: Divesh Kumar</title>
      <link>https://forem.com/diveshkumar</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/diveshkumar"/>
    <language>en</language>
    <item>
      <title>How I Slashed My AWS Bill to $0 While Securing a 46GB Backup (In 1 Command) 🚀</title>
      <dc:creator>Divesh Kumar</dc:creator>
      <pubDate>Mon, 11 May 2026 05:11:58 +0000</pubDate>
      <link>https://forem.com/diveshkumar/how-i-slashed-my-aws-bill-to-0-while-securing-a-46gb-backup-in-1-command-fgd</link>
      <guid>https://forem.com/diveshkumar/how-i-slashed-my-aws-bill-to-0-while-securing-a-46gb-backup-in-1-command-fgd</guid>
      <description>&lt;p&gt;Managing cloud infrastructure often feels like a balancing act between utility and cost. Recently, I faced a challenge: I needed to perform a total shutdown of an AWS account while ensuring a 46GB infrastructure backup was perfectly preserved. &lt;/p&gt;

&lt;p&gt;Doing this manually via the AWS Console is a recipe for missed resources and lingering costs. Instead, I chose the path of automation. This article breaks down how I transformed a complex multi-service environment into a "single-enter" terminal operation.&lt;/p&gt;




&lt;h2&gt;🔍 Phase 1: The Automated Audit&lt;/h2&gt;

&lt;p&gt;Before you can back up, you have to know what you have. A simple look at the EC2 dashboard isn't enough. &lt;/p&gt;

&lt;p&gt;I used scripts to crawl the account, mapping every dependency. One crucial trick was &lt;strong&gt;inspecting Lambda environment variables&lt;/strong&gt;. Often, these variables contain hidden RDS connection strings, MongoDB URIs, or API keys that aren't immediately visible in the service-specific dashboards. &lt;/p&gt;

&lt;p&gt;Mapping these "hidden" connections ensured I backed up not just the code but also every data source that needed extraction.&lt;/p&gt;
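&lt;p&gt;&lt;em&gt;A minimal sketch of that audit step, assuming the AWS CLI is configured with read access (the loop simply visits whatever functions exist in the account):&lt;/em&gt;&lt;/p&gt;

```shell
# List every Lambda function, then print its environment variables --
# this is where hidden RDS strings, MongoDB URIs, and API keys surface.
for fn in $(aws lambda list-functions --query 'Functions[].FunctionName' --output text); do
  echo "== $fn =="
  aws lambda get-function-configuration --function-name "$fn" \
    --query 'Environment.Variables' --output json
done
```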

&lt;h2&gt;📦 Phase 2: Surgical Extraction&lt;/h2&gt;

&lt;p&gt;Once the map was ready, the terminal took over. Here’s the breakdown of the core commands used for each service:&lt;/p&gt;

&lt;h3&gt;1. S3 Sync&lt;/h3&gt;

&lt;p&gt;For high-performance file retrieval, &lt;code&gt;aws s3 sync&lt;/code&gt; is the gold standard. It only copies new or changed files, making it efficient for large buckets.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3 &lt;span class="nb"&gt;sync &lt;/span&gt;s3://my-bucket-name ./backup/s3/my-bucket-name
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;2. Lambda Downloads&lt;/h3&gt;

&lt;p&gt;Retrieving Lambda code isn't as direct as with S3: you first request a temporary signed URL, then use it to download the deployment ZIP.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Get the function details&lt;/span&gt;
aws lambda get-function &lt;span class="nt"&gt;--function-name&lt;/span&gt; MyFunction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Note: Look for the &lt;code&gt;Code.Location&lt;/code&gt; URL in the JSON response—that's your download key!&lt;/em&gt;&lt;/p&gt;
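&lt;p&gt;&lt;em&gt;Putting that note into practice, the download can be scripted in two lines (assuming curl is available; the pre-signed URL expires after about 10 minutes, so fetch it right away):&lt;/em&gt;&lt;/p&gt;

```shell
# Pull the pre-signed URL out of the response, then download the ZIP.
url=$(aws lambda get-function --function-name MyFunction \
  --query 'Code.Location' --output text)
curl -sSL "$url" -o MyFunction.zip
```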

&lt;h3&gt;3. RDS SQL Dumps&lt;/h3&gt;

&lt;p&gt;For local portability and ease of restoration, I bypassed snapshots and went straight for raw &lt;code&gt;.sql&lt;/code&gt; files using &lt;code&gt;pg_dump&lt;/code&gt; (for PostgreSQL) or &lt;code&gt;mysqldump&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pg_dump &lt;span class="nt"&gt;-h&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;endpoint] &lt;span class="nt"&gt;-U&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;user] &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;dbname] &lt;span class="nt"&gt;-f&lt;/span&gt; backup.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;4. DynamoDB JSON Export&lt;/h3&gt;

&lt;p&gt;For NoSQL data, I used full table scans to export everything into portable JSON format.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws dynamodb scan &lt;span class="nt"&gt;--table-name&lt;/span&gt; MyTable &lt;span class="nt"&gt;--output&lt;/span&gt; json &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; MyTable.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;🧹 Phase 3: The "Zero Bill" Cleanup&lt;/h2&gt;

&lt;p&gt;The most satisfying part? Watching the cost counter stop. Once the 46GB backup was verified locally, I ran a cleanup script to target the "silent cost killers":&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;EC2 Instances:&lt;/strong&gt; &lt;code&gt;aws ec2 terminate-instances --instance-ids i-12345...&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Load Balancers:&lt;/strong&gt; &lt;code&gt;aws elbv2 delete-load-balancer --load-balancer-arn ...&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Elastic IPs:&lt;/strong&gt; Often overlooked! Releasing them is vital to avoid hourly idle charges.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws ec2 release-address &lt;span class="nt"&gt;--allocation-id&lt;/span&gt; eipalloc-...
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;RDS Instances:&lt;/strong&gt; Deleted after verifying the local SQL dumps were 100% intact. I skipped the final snapshot to ensure zero remaining storage costs.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws rds delete-db-instance &lt;span class="nt"&gt;--db-instance-identifier&lt;/span&gt; my-db &lt;span class="nt"&gt;--skip-final-snapshot&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;☁️ Phase 4: Final Cloud Redundancy&lt;/h2&gt;

&lt;p&gt;A backup isn't a backup until it's in at least two places. I synced the entire local &lt;code&gt;backup/&lt;/code&gt; folder to &lt;strong&gt;Google Drive&lt;/strong&gt; using &lt;code&gt;rclone&lt;/code&gt;. This provided geo-redundancy and peace of mind, knowing the data was safe outside of the AWS ecosystem.&lt;/p&gt;
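&lt;p&gt;&lt;em&gt;A sketch of that final sync, assuming a Google Drive remote named &lt;code&gt;gdrive&lt;/code&gt; has already been created with &lt;code&gt;rclone config&lt;/code&gt;:&lt;/em&gt;&lt;/p&gt;

```shell
# Mirror the local backup folder to Google Drive. --progress shows live
# transfer status; --checksum compares file checksums rather than
# modification times when deciding what still needs to be copied.
rclone sync ./backup gdrive:aws-backup --progress --checksum
```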




&lt;h2&gt;💡 The Takeaway&lt;/h2&gt;

&lt;p&gt;The result was a comprehensive, documented backup and a clean AWS bill. &lt;/p&gt;

&lt;p&gt;Modern DevOps is about building scripts that handle the manual heavy lifting. By automating the research, extraction, and cleanup, you minimize human error and ensure that "turning off the lights" doesn't leave any expensive bulbs burning in the background.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do you handle your infrastructure decommissioning? Let's discuss in the comments!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;#aws #devops #automation #cloudcomputing #backup&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwanogf87ape7i4abj07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwanogf87ape7i4abj07.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>automation</category>
      <category>cloud</category>
    </item>
    <item>
      <title>🔔 Django Signals: Supercharging Your App with Event-Driven Architecture</title>
      <dc:creator>Divesh Kumar</dc:creator>
      <pubDate>Thu, 04 Sep 2025 04:49:16 +0000</pubDate>
      <link>https://forem.com/diveshkumar/django-signals-supercharging-your-app-with-event-driven-architecture-e5l</link>
      <guid>https://forem.com/diveshkumar/django-signals-supercharging-your-app-with-event-driven-architecture-e5l</guid>
      <description>&lt;p&gt;When building a Django application, you often need to perform actions whenever something specific happens — for example, sending a welcome email after a user registers.&lt;/p&gt;

&lt;p&gt;Instead of writing &lt;strong&gt;extra logic inside your views or models&lt;/strong&gt;, Django gives us a powerful feature: &lt;strong&gt;Signals&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In this post, we’ll break down &lt;strong&gt;what signals are, why you need them, and a real-world example with code.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 What are Django Signals?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Django Signals allow &lt;strong&gt;decoupled applications&lt;/strong&gt; to get notified when certain actions occur.&lt;/p&gt;

&lt;p&gt;Think of them like a &lt;strong&gt;notification system inside Django.&lt;/strong&gt;&lt;br&gt;
    • Event happens → Signal is fired&lt;br&gt;
    • Listener catches it → Executes extra logic&lt;/p&gt;

&lt;p&gt;Examples:&lt;br&gt;
    • A user logs in → Update last login timestamp&lt;br&gt;
    • A new order is created → Send invoice email&lt;br&gt;
    • A profile is saved → Resize uploaded avatar&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🛠️ Common Built-in Signals&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Django comes with some &lt;strong&gt;built-in signals&lt;/strong&gt;:&lt;br&gt;
    • pre_save / post_save → before/after saving a model&lt;br&gt;
    • pre_delete / post_delete → before/after deleting a model&lt;br&gt;
    • m2m_changed → when a ManyToMany relation is updated&lt;br&gt;
    • request_started / request_finished → when a request starts/ends&lt;br&gt;
    • user_logged_in / user_logged_out → authentication-related signals&lt;/p&gt;

&lt;p&gt;You can also create custom signals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧑‍💻 Example: Auto-Creating a Profile When a User Registers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of writing profile creation logic inside the signup view, let’s use &lt;strong&gt;signals&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create a Profile model&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# app/models.py
from django.db import models
from django.contrib.auth.models import User

class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    bio = models.TextField(blank=True)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return f"{self.user.username}'s profile"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Write a Signal Receiver&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# app/signals.py
from django.db.models.signals import post_save
from django.dispatch import receiver
from django.contrib.auth.models import User
from .models import Profile

@receiver(post_save, sender=User)
def create_user_profile(sender, instance, created, **kwargs):
    if created:
        Profile.objects.create(user=instance)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s what happens:&lt;br&gt;
    • When a new User is created → post_save signal is fired.&lt;br&gt;
    • Our create_user_profile function catches it.&lt;br&gt;
    • A Profile is automatically created for the new user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Connect the Signal in apps.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# app/apps.py
from django.apps import AppConfig

class MyAppConfig(AppConfig):  # don't shadow the imported AppConfig base class
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'app'

    def ready(self):
        import app.signals
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, whenever a new user registers, Django automatically creates a profile without extra code in the view.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚡ Why Use Signals?&lt;/strong&gt;&lt;br&gt;
✅ Keeps code clean &amp;amp; decoupled&lt;br&gt;
✅ Avoids duplicate logic across views&lt;br&gt;
✅ Makes your app event-driven&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚠️ When NOT to Use Signals?&lt;/strong&gt;&lt;br&gt;
    • When the logic is &lt;strong&gt;very specific to one place&lt;/strong&gt; (better to keep it in the view).&lt;br&gt;
    • When debugging complex chains (signals can make code harder to trace).&lt;br&gt;
    • If overused → can create “hidden logic” that’s difficult for teams to maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🎯 Final Thoughts&lt;/strong&gt;&lt;br&gt;
Django Signals are a game-changer for clean and modular code. Use them wisely to automate side-effects like profile creation, notifications, and logging.&lt;/p&gt;

&lt;p&gt;👉 If you found this helpful, drop a ❤️ or share your thoughts in the comments.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>django</category>
      <category>python</category>
      <category>djangocms</category>
    </item>
  </channel>
</rss>
