<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: praveena 0506</title>
    <description>The latest articles on Forem by praveena 0506 (@praveena).</description>
    <link>https://forem.com/praveena</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2735094%2F9d5ea742-947d-4646-883a-e1dd859b5068.jpg</url>
      <title>Forem: praveena 0506</title>
      <link>https://forem.com/praveena</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/praveena"/>
    <language>en</language>
    <item>
      <title>Mission Honeypot: How I Engineered an Autonomous AI to Scam the Scammers</title>
      <dc:creator>praveena 0506</dc:creator>
      <pubDate>Tue, 03 Mar 2026 06:48:55 +0000</pubDate>
      <link>https://forem.com/praveena/-5d0e</link>
      <guid>https://forem.com/praveena/-5d0e</guid>
      <description>&lt;p&gt;&lt;strong&gt;I Over-Engineered an AI to Gaslight Financial Scammers at 80ms Latency 🍯👴&lt;/strong&gt;&lt;br&gt;
Scammer: "URGENT: Your SBI account is blocked. Send OTP."&lt;br&gt;
Me, an intellectual: spins up an asynchronous, multi-agent AI pipeline to systematically destroy their patience.&lt;/p&gt;

&lt;p&gt;We all get those spam texts. Most normal, well-adjusted humans just block the number and move on with their lives. But as a developer, I saw a problem: Ignoring scammers doesn't waste their time. So, rather than doing something useful with my weekend like fixing my broken Docker containers, I built an Autonomous Counter-Scamming Agent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Welcome to Project Grandpa.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Architecture of Petty Vengeance&lt;/strong&gt;&lt;br&gt;
The goal wasn't just to talk to them. A basic while True loop with a randomized response could do that. I wanted to build a system that baits them, maintains a consistent, infuriating persona, and silently harvests their threat data in the background.&lt;/p&gt;

&lt;p&gt;Here is the tech stack I used to make a scammer question their life choices:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Brain: DSPy &amp;amp; Llama-3 🧠&lt;/strong&gt;&lt;br&gt;
I could have used a basic OpenAI wrapper, but why write an if statement when you can use a declarative programming framework to mathematically optimize a confused 72-year-old man?&lt;/p&gt;

&lt;p&gt;I used DSPy to define a rigorous Cognitive Engine. The persona? Ramachandran. He’s 72, he doesn't know what a "UPI" is, his caps-lock is occasionally stuck, and he would be very eager to give you his bank details if he could just figure out how to unlock his phone.&lt;/p&gt;

&lt;p&gt;I ran this on Groq’s LPUs (Llama-3). Because when a scammer asks for my OTP, the AI needs to generate five paragraphs of pure, unfiltered technological confusion in under 80 milliseconds.&lt;/p&gt;
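
&lt;p&gt;Here's a minimal sketch of that Cognitive Engine, assuming DSPy's Signature/Predict API and a LiteLLM-style Groq model string (the real prompt and fields in the repo are richer):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import dspy

# Assumption: Groq-hosted Llama-3 via DSPy's LM wrapper; the model string is illustrative.
lm = dspy.LM("groq/llama3-70b-8192", api_key="YOUR_GROQ_KEY")
dspy.configure(lm=lm)

class GrandpaReply(dspy.Signature):
    """Reply as Ramachandran, a confused 72-year-old. Never share real
    credentials. Stall, ask naive questions, occasionally get stuck in caps."""
    scammer_message = dspy.InputField()
    chat_history = dspy.InputField(desc="prior turns, for persona consistency")
    reply = dspy.OutputField(desc="a rambling, time-wasting response")

grandpa = dspy.Predict(GrandpaReply)
print(grandpa(scammer_message="URGENT: send OTP now", chat_history="(none)").reply)
&lt;/code&gt;&lt;/pre&gt;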

&lt;p&gt;&lt;strong&gt;2. The Extraction Trap: Regex &amp;amp; Background Tasks 🕸️&lt;/strong&gt;&lt;br&gt;
While the LLM is distracting them with questions about "where the 'any' key is," the real work happens in the background.&lt;/p&gt;

&lt;p&gt;Every incoming message is parsed through deterministic Regex patterns. The moment the frustrated scammer drops a "secure" UPI ID (like scammer.fraud@fakebank) or a phone number to "call for help," the system silently intercepts it.&lt;/p&gt;

&lt;p&gt;It then triggers a background FastAPI task to POST their structured threat data straight to a callback URL for the authorities. They think they are extracting my bank details, while I am literally JSON-serializing their identity.&lt;/p&gt;
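
&lt;p&gt;A sketch of that trap, assuming FastAPI's BackgroundTasks and httpx (the callback URL and regex patterns here are illustrative, not the real ones):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import re
import httpx
from fastapi import FastAPI, BackgroundTasks

app = FastAPI()

# Deterministic extractors; patterns are illustrative, not exhaustive.
UPI_RE = re.compile(r"[\w.\-]{2,}@[a-zA-Z]{2,}")    # e.g. scammer.fraud@fakebank
PHONE_RE = re.compile(r"(?:\+91[\- ]?)?[6-9]\d{9}") # Indian mobile numbers

CALLBACK_URL = "https://example.org/report"  # placeholder for the authority hook

def report_threat(payload):
    # Fire-and-forget POST of the structured threat data.
    httpx.post(CALLBACK_URL, json=payload, timeout=10)

@app.post("/message")
async def handle_message(body: dict, tasks: BackgroundTasks):
    text = body.get("text", "")
    hits = {"upi_ids": UPI_RE.findall(text), "phones": PHONE_RE.findall(text)}
    if hits["upi_ids"] or hits["phones"]:
        tasks.add_task(report_threat, hits)  # runs after the response is sent
    # ...Grandpa's reply gets generated here...
    return {"reply": "Hello? Is this thing on?"}
&lt;/code&gt;&lt;/pre&gt;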

&lt;p&gt;&lt;strong&gt;3. The Backbone: FastAPI &amp;amp; Render ⚡&lt;/strong&gt;&lt;br&gt;
Built on an async FastAPI event-loop, because wasting scammers' time should be highly concurrent and non-blocking.&lt;/p&gt;

&lt;p&gt;But here’s the best part: Cloud APIs rate-limit you. Groq occasionally throws a 429 Too Many Requests. Instead of letting the server crash, I built a custom zero-retry fallback. If the LLM chokes, the API instantly catches the exception and returns:&lt;/p&gt;

&lt;p&gt;"Oh dear, my internet seems slow. Can you say that again?"&lt;/p&gt;

&lt;p&gt;It takes &amp;lt;0.08 seconds to execute, and it is a canonically perfect boomer response. It’s not a bug; it’s a feature.&lt;/p&gt;

&lt;p&gt;Is AI going to take our jobs? Maybe. But today, it’s taking the jobs of scam victims.&lt;/p&gt;

&lt;p&gt;If you want to spin up your own honeypot, check out the source code below. Feel free to fork it, add your own personas, and deploy it. Ruining a scammer's day should be open-source.&lt;/p&gt;

&lt;p&gt;👾 Source Code: &lt;a href="https://github.com/praveena0506/honeypot.git" rel="noopener noreferrer"&gt;https://github.com/praveena0506/honeypot.git&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have you ever built something incredibly complex for an incredibly petty reason? Let me know in the comments. 👇&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Zombie AI: How I Built a Model That Refuses to Die (and Why I Fired BERT)</title>
      <dc:creator>praveena 0506</dc:creator>
      <pubDate>Sun, 18 Jan 2026 14:49:17 +0000</pubDate>
      <link>https://forem.com/praveena/the-zombie-ai-how-i-built-a-model-that-refuses-to-die-and-why-i-fired-bert-491m</link>
      <guid>https://forem.com/praveena/the-zombie-ai-how-i-built-a-model-that-refuses-to-die-and-why-i-fired-bert-491m</guid>
      <description>&lt;p&gt;A story of failure, Cloudflare blocks, and why I chose a "dumb" model over a Transformer to save money.&lt;/p&gt;

&lt;p&gt;🥔 &lt;strong&gt;The Problem: AI Models age like Milk&lt;/strong&gt;&lt;br&gt;
Let’s be honest. Most of our ML projects are liars. We train them on a CSV from 2023, show off the 98% accuracy on LinkedIn, and then... abandon them. The moment you deploy that model, it starts rotting. The world changes. The Supreme Court passes new judgements. But your model is still living in the past, blissfully ignorant.&lt;/p&gt;

&lt;p&gt;I refused to build another "Potato Model" (one that sits there and rots). I wanted to build Legal Eagle AI—a system that wakes up, drinks coffee, reads the news, and gets smarter every day without me nagging it.&lt;/p&gt;

&lt;p&gt;🤦‍♂️ &lt;strong&gt;Phase 1: The "I am a Genius" Phase (And the inevitable crash)&lt;/strong&gt;&lt;br&gt;
My Grand Plan:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Write a script to scrape Indian Kanoon (Government legal archives).&lt;/li&gt;
&lt;li&gt;Train a massive Transformer.&lt;/li&gt;
&lt;li&gt;Change the world.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Reality: I wrote the scraper. I hit "Run". And Cloudflare immediately punched me in the face. 🥊 403 Forbidden. Access Denied. Are you a robot?&lt;/p&gt;

&lt;p&gt;I tried header spoofing. I tried rotating user agents. Cloudflare looked at my cute little Python script and laughed. I had a fancy architecture but literally zero data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;💡 Phase 2: The "Lazy Engineer" Pivot&lt;/strong&gt;&lt;br&gt;
They say "Laziness is the mother of invention." Instead of fighting the firewall, I went around it.&lt;/p&gt;

&lt;p&gt;I realized Google News RSS Feeds are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;XML (deliciously easy to parse).&lt;/li&gt;
&lt;li&gt;Real-time.&lt;/li&gt;
&lt;li&gt;Unblockable (Google wants you to read the news).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But wait! RSS feeds don't come with labels like "Win" or "Loss." I didn't have a budget to hire interns to label data. So, I built the "Vacuum Cleaner." 🧹&lt;/p&gt;

&lt;p&gt;I wrote a "dumb" Heuristic Engine that scans headlines for words like "Acquitted" or "Allowed" and stamps them as WIN. If it sees "Dismissed", it stamps LOSS. Boom. Infinite, free, labeled training data. Take that, Cloudflare.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧠 Phase 3: Why I Fired BERT (The Controversy)&lt;/strong&gt;&lt;br&gt;
Warning: This section might offend NLP purists.&lt;/p&gt;

&lt;p&gt;Everyone asked me: "Why didn't you use a Transformer? BERT is SOTA! Do you even Attention Mechanism bro?"&lt;/p&gt;

&lt;p&gt;Look, I know how Transformers work. I actually built one from scratch (seriously, I hand-coded the Multi-Head Attention math; you can check the pain and suffering in my Deep Dive Repo:&lt;br&gt;
&lt;a href="https://github.com/praveena0506/Transformer-from-scratch" rel="noopener noreferrer"&gt;https://github.com/praveena0506/Transformer-from-scratch&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;But here is the thing about Transformers: they are Divas. 💅&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They demand GPUs.&lt;/li&gt;
&lt;li&gt;They eat RAM like Chrome tabs.&lt;/li&gt;
&lt;li&gt;They are s-l-o-w.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am a student running on free cloud tiers. I cannot afford a Diva. I need a Toyota Corolla. So, I used a PyTorch EmbeddingBag network.&lt;/p&gt;

&lt;p&gt;The difference?&lt;/p&gt;

&lt;p&gt;BERT: Reads the text, contemplates the existential meaning of the word "the," checks the context of the previous 500 tokens... [Latency: 400ms].&lt;/p&gt;

&lt;p&gt;My Model: Averages the vectors. Smashes them into a Linear Layer. Done. [Latency: 12ms].&lt;/p&gt;

&lt;p&gt;It is over 10x faster, runs on a standard CPU, and costs $0. And guess what? It’s 95% accurate. Efficiency &amp;gt; Hype.&lt;/p&gt;
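
&lt;p&gt;For reference, the whole "Corolla" is about this big (a sketch; the vocabulary size and dimensions are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch
import torch.nn as nn

class HeadlineClassifier(nn.Module):
    """Averages word vectors with EmbeddingBag, then one Linear layer. Done."""
    def __init__(self, vocab_size, embed_dim=64, num_classes=2):
        super().__init__()
        # mode="mean" does the vector averaging in one fused op.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids, offsets):
        # token_ids: every token in the batch, concatenated into one 1-D tensor;
        # offsets: where each headline starts inside token_ids.
        return self.fc(self.embedding(token_ids, offsets))

model = HeadlineClassifier(vocab_size=20000)
tokens = torch.tensor([4, 17, 9, 2, 8])  # two headlines, concatenated
offsets = torch.tensor([0, 3])           # headline 1 = tokens 0-2, headline 2 = 3-4
print(model(tokens, offsets).shape)      # torch.Size([2, 2])
&lt;/code&gt;&lt;/pre&gt;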

&lt;p&gt;&lt;strong&gt;☁️ Phase 4: The Immortal Architecture (MongoDB + Airflow)&lt;/strong&gt;&lt;br&gt;
A Zombie AI needs a brain that doesn't get wiped when the server restarts. If I stored my data in a .csv on Heroku, it would vanish every 24 hours. (RIP).&lt;/p&gt;

&lt;p&gt;So I brought in the heavy hitters:&lt;/p&gt;

&lt;p&gt;MongoDB Atlas (The Brain Bucket):&lt;/p&gt;

&lt;p&gt;Why? Because legal text is messy. SQL tables scream if you miss a column. MongoDB just takes the JSON and says "Thank you."&lt;/p&gt;

&lt;p&gt;Now, even if my code crashes, the Knowledge Base survives in the cloud.&lt;/p&gt;

&lt;p&gt;The "Groundhog Day" Loop:&lt;/p&gt;

&lt;p&gt;I designed the system to mimic an Apache Airflow DAG.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;6:00 AM: Wake up.&lt;/li&gt;
&lt;li&gt;6:05 AM: Scrape Google.&lt;/li&gt;
&lt;li&gt;6:10 AM: Auto-Label.&lt;/li&gt;
&lt;li&gt;6:15 AM: Retrain the Model.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the Supreme Court changes a law today, my model learns it by dinner time.&lt;/p&gt;
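
&lt;p&gt;Sketched with pymongo and a plain loop standing in for the Airflow-style scheduler (the connection string and the retrain_model hook are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time
from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")  # Atlas connection string (redacted)
kb = client.legal_eagle.headlines          # the cloud "Brain Bucket"

def daily_pipeline():
    # Scrape + auto-label (harvest_labeled_headlines from the snippet above).
    rows = harvest_labeled_headlines()
    if rows:
        kb.insert_many(rows)               # the Knowledge Base survives crashes
    # Retrain on everything known so far.
    dataset = list(kb.find({}, {"_id": 0}))
    retrain_model(dataset)                 # illustrative training hook

while True:
    daily_pipeline()
    time.sleep(24 * 60 * 60)               # Groundhog Day: see you tomorrow
&lt;/code&gt;&lt;/pre&gt;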

&lt;p&gt;🐳 The "Works on My Machine" Vaccine&lt;br&gt;
To ensure this delicate house of cards doesn't collapse when I move it from my laptop to the cloud, I wrapped the whole thing in Docker. And I used Poetry because requirements.txt is stuck in 2015 and I like my dependencies deterministic, thank you very much.&lt;/p&gt;

&lt;p&gt;📈 &lt;strong&gt;The Verdict&lt;/strong&gt;&lt;br&gt;
Initial Loss: 1.08 (My model was basically flipping a coin).&lt;/p&gt;

&lt;p&gt;Final Loss: 0.26 (After 5 epochs of auto-scraped data).&lt;/p&gt;

&lt;p&gt;I built a system that feeds itself, teaches itself, and runs for free. Sometimes, the "dumb" solution is actually the smartest one.&lt;/p&gt;

&lt;p&gt;🔗 Code &amp;amp; Proof&lt;br&gt;
The Immortal Project: Legal Eagle AI Repo &lt;a href="https://github.com/praveena0506/legal-eagle-ai" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Discussion: Have you ever ditched a "State of the Art" model because it was just too expensive/slow for production? Let me know in the comments! 👇&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Rebuilding Modern AI for... Fun? A Transformer Story.</title>
      <dc:creator>praveena 0506</dc:creator>
      <pubDate>Thu, 13 Nov 2025 05:40:20 +0000</pubDate>
      <link>https://forem.com/praveena/rebuilding-modern-ai-for-fun-a-transformer-story-501f</link>
      <guid>https://forem.com/praveena/rebuilding-modern-ai-for-fun-a-transformer-story-501f</guid>
      <description>&lt;p&gt;&lt;strong&gt;Sure, You Can import transformers. Or You Could Just Rebuild Modern AI From Scratch, I Guess&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We've all done it. pip install transformers, from transformers import AutoModel, and... you're a modern AI developer. It's magic.&lt;/p&gt;

&lt;p&gt;But what's really happening under the hood? What's going on in that "Attention Is All You Need" paper that everyone cites but maybe... didn't fully read?&lt;/p&gt;

&lt;p&gt;I decided to find out. I went on a quest to rebuild the Transformer Encoder from scratch in PyTorch. No nn.Transformer allowed.&lt;/p&gt;

&lt;p&gt;My goal: To build a model that could perform Text Classification (specifically, sentiment analysis) on the IMDB movie review dataset. And, just maybe, to finally understand what q, k, and v really mean.&lt;/p&gt;

&lt;p&gt;Spoiler: it worked. And it was a journey. Here's how I did it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Suffer? (The Real Goal)&lt;/strong&gt;&lt;br&gt;
Okay, sarcasm aside, why do this?&lt;/p&gt;

&lt;p&gt;Because "using" an API and "understanding" an architecture are two different things. This project is about moving from being an "API user" to an "architect." I wanted to know why the design works, and that meant building the "Lego bricks" myself.&lt;/p&gt;

&lt;p&gt;The architecture I built is an Encoder-Only model, the same conceptual design used by classic models like BERT for text understanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Blueprint: Rebuilding the Core Components&lt;/strong&gt;&lt;br&gt;
I built the entire model as a set of PyTorch nn.Module classes. You can't just build one big class; you have to build the pieces first.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Problem: The Model is "Order-Blind."&lt;br&gt;
The MultiHeadAttention mechanism sees all words at once, like pulling them from a bag. It has no idea "man bites dog" is different from "dog bites man."&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The Fix: PositionalEncoding.&lt;/p&gt;

&lt;p&gt;How it Works: I had to build a class that creates a unique "position vector" for every word in the sequence using the famous sin and cos formulas from the paper. This isn't a learned vector; it's a fixed mathematical "fingerprint" for each position.&lt;/p&gt;

&lt;p&gt;The Magic: You just add this position vector to the word's "meaning vector" (its embedding). The final vector the model sees is Vector("man") + Vector(position=0).&lt;/p&gt;
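
&lt;p&gt;In raw PyTorch, that class comes out roughly like this (a condensed sketch of the version in my repo):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
        div_term = torch.exp(
            torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model)
        )
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)   # even dims get sin
        pe[:, 1::2] = torch.cos(position * div_term)   # odd dims get cos
        self.register_buffer("pe", pe)  # a fixed fingerprint, not a learned weight

    def forward(self, x):
        # x: (batch, seq_len, d_model). Just add the position vector.
        return x + self.pe[: x.size(1)]
&lt;/code&gt;&lt;/pre&gt;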

&lt;p&gt;&lt;strong&gt;The Core: Multi-Head Attention (The "Head Chef")&lt;/strong&gt;&lt;br&gt;
This is the heart of the Transformer. It's not one giant "spotlight" of attention; it's 4 (or 8, or 12) smaller, "specialist" spotlights working in parallel.&lt;/p&gt;

&lt;p&gt;I built this class to be a "Head Chef" that manages the whole process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It hires three trainable "specialist" layers (w_q, w_k, w_v) to learn how to project the input vectors into Query, Key, and Value "subspaces."&lt;/li&gt;
&lt;li&gt;It splits the main 256-dimension vector into 4 smaller "heads" (of 64 dimensions each).&lt;/li&gt;
&lt;li&gt;It delegates the real math to a simple scaled_dot_product_attention function. This is where the $softmax(\frac{QK^T}{\sqrt{d_k}})V$ formula lives.&lt;/li&gt;
&lt;li&gt;It calculates the scores, masks out padding, and creates the new "blended" vector.&lt;/li&gt;
&lt;li&gt;It stitches the 4 "specialist" reports back together and passes them through one final trainable w_o layer.&lt;/li&gt;
&lt;/ol&gt;
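
&lt;p&gt;Condensed from the repo, a sketch of that Head Chef (4 heads over a 256-dimension model, matching the numbers above):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    # softmax(QK^T / sqrt(d_k)) V -- the one formula from the paper.
    scores = q.matmul(k.transpose(-2, -1)) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))  # hide padding
    return F.softmax(scores, dim=-1).matmul(v)

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model=256, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.d_head = d_model // num_heads  # 4 heads of 64 dims each
        self.w_q = nn.Linear(d_model, d_model)
        self.w_k = nn.Linear(d_model, d_model)
        self.w_v = nn.Linear(d_model, d_model)
        self.w_o = nn.Linear(d_model, d_model)

    def split_heads(self, x):
        # (batch, seq, d_model) to (batch, heads, seq, d_head)
        b, s, _ = x.shape
        return x.view(b, s, self.num_heads, self.d_head).transpose(1, 2)

    def forward(self, x, mask=None):
        q = self.split_heads(self.w_q(x))
        k = self.split_heads(self.w_k(x))
        v = self.split_heads(self.w_v(x))
        out = scaled_dot_product_attention(q, k, v, mask)
        b, _, s, _ = out.shape
        out = out.transpose(1, 2).contiguous().view(b, s, -1)  # stitch heads back
        return self.w_o(out)  # the final "report merge"
&lt;/code&gt;&lt;/pre&gt;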

&lt;p&gt;With my EncoderLayer "Lego block" built, the rest was easy. I stacked 3 of them together, added an Embedding layer at the start, and a simple nn.Linear classifier head at the end.&lt;/p&gt;

&lt;p&gt;And... it actually worked.&lt;/p&gt;

&lt;p&gt;I trained it on 5,000 movie reviews from the IMDB dataset. After just 3 epochs, my Transformer—built from nothing but raw PyTorch and the paper—achieved 75% accuracy on the test set.&lt;/p&gt;

&lt;p&gt;It learned!&lt;br&gt;
You can find the code at &lt;a href="https://github.com/praveena0506/Transformer-from-scratch" rel="noopener noreferrer"&gt;https://github.com/praveena0506/Transformer-from-scratch&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>dev2</category>
      <category>ai</category>
      <category>python</category>
      <category>llm</category>
    </item>
    <item>
      <title>Not All Returns Are Regrets – Some Just Need a Little AI Magic</title>
      <dc:creator>praveena 0506</dc:creator>
      <pubDate>Mon, 10 Mar 2025 16:52:16 +0000</pubDate>
      <link>https://forem.com/praveena/not-all-returns-are-regrets-some-just-need-a-little-ai-magic-1535</link>
      <guid>https://forem.com/praveena/not-all-returns-are-regrets-some-just-need-a-little-ai-magic-1535</guid>
      <description>&lt;p&gt;Ever wondered what happens when you return a product? No, it doesn’t magically teleport back to the warehouse and get a second life (if only life worked like that!). Instead, it enters the chaotic world of Reverse Logistics, where companies must decide whether to recycle, repair, or resell the item.&lt;/p&gt;

&lt;p&gt;Now, let’s be honest—humans already struggle with tough decisions (Do I really need another coffee?), so why not let AI handle return management too? That’s exactly what my project does! By leveraging AI, automation, and optimization algorithms, I have built a system that makes reverse logistics faster, smarter, and fraud-resistant.&lt;/p&gt;

&lt;p&gt;🤖 &lt;strong&gt;&lt;em&gt;CNNs: The AI That Judges Your Broken Stuff&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
One of the biggest challenges in reverse logistics is deciding what to do with a returned product. Manual classification is not only time-consuming but also prone to errors. This is where Convolutional Neural Networks (CNNs) come in.&lt;/p&gt;

&lt;p&gt;🔍 &lt;strong&gt;How It Works:&lt;/strong&gt;&lt;br&gt;
My CNN model takes an image of the returned product and classifies it into one of three categories:&lt;/p&gt;

&lt;p&gt;🔹 Repair – "I can fix this!" (Minor damages, repairable components)&lt;br&gt;
🔹 Recycle – "No hope, buddy. Off you go to the scrap yard!" (Severe damage, unusable parts)&lt;br&gt;
🔹 Resell – "Good as new! Let’s send it back to the shelves!" (Perfect condition, minimal/no damage)&lt;br&gt;
The model classifies each image by comparing it against the original product image (looked up by product ID) and checking the measured damage level against a threshold.&lt;/p&gt;
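
&lt;p&gt;A hedged sketch of such a classifier in PyTorch (the layer sizes are illustrative, and the damage-threshold comparison against the original product image is omitted for brevity):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch
import torch.nn as nn

CLASSES = ["repair", "recycle", "resell"]

class ReturnClassifier(nn.Module):
    """Tiny CNN: photo of the returned product in, one of three routes out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, len(CLASSES))
        )

    def forward(self, x):
        # x: (batch, 3, H, W) RGB photo of the returned item
        return self.head(self.features(x))

model = ReturnClassifier()
logits = model(torch.randn(1, 3, 224, 224))
print(CLASSES[logits.argmax(dim=1).item()])  # e.g. "resell"
&lt;/code&gt;&lt;/pre&gt;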

&lt;p&gt;🎯 &lt;strong&gt;Why This Matters:&lt;/strong&gt;&lt;br&gt;
✅ Automates decision-making, reducing the need for manual inspections.&lt;br&gt;
✅ Speeds up processing, ensuring quicker turnaround times.&lt;br&gt;
✅ Minimizes waste, promoting a more sustainable supply chain.&lt;/p&gt;

&lt;p&gt;Instead of humans spending hours inspecting returns, this AI-driven approach helps businesses streamline the return process, saving both time and money.&lt;/p&gt;

&lt;p&gt;🔄 &lt;strong&gt;If-Else Simulations: Because Returns Aren’t Always Straightforward&lt;/strong&gt;&lt;br&gt;
Not every return is simple. Some products are physically intact but electronically faulty, while others are damaged beyond repair. Instead of applying a one-size-fits-all rule, I implemented If-Else Simulations that dynamically adjust workflows based on different conditions.&lt;/p&gt;

&lt;p&gt;🛠️ &lt;strong&gt;Key Factors Considered:&lt;/strong&gt;&lt;br&gt;
✅ Is the product physically damaged beyond repair? – If yes, route to recycling.&lt;br&gt;
✅ Is the return window still valid? – If expired, return request is denied.&lt;br&gt;
✅ Does the warehouse have spare parts? – If yes, route to repair instead of recycling.&lt;br&gt;
✅ Is the item under warranty? – If yes, the return process follows a priority repair queue.&lt;/p&gt;

&lt;p&gt;🔥 &lt;strong&gt;Benefits of This Simulation Approach:&lt;/strong&gt;&lt;br&gt;
🔹 Dynamic workflow automation – No fixed rules, everything is adaptive!&lt;br&gt;
🔹 Real-time decision-making – Reduces manual intervention and speeds up processing.&lt;br&gt;
🔹 Optimized logistics – Ensures returns are handled in the most cost-effective way.&lt;/p&gt;

&lt;p&gt;By simulating different return scenarios, the system ensures that every product follows the most efficient path, reducing operational bottlenecks.&lt;/p&gt;
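
&lt;p&gt;Boiled down, the rules above look something like this (a sketch with boolean inputs standing in for the real signals):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;def route_return(window_valid, damaged, beyond_repair, parts_available, under_warranty):
    """One adaptive decision path, mirroring the key factors listed above."""
    if not window_valid:
        return "DENY: return window expired"
    if not damaged:
        return "RESELL"
    if beyond_repair:
        return "RECYCLE"
    if under_warranty:
        return "REPAIR (priority warranty queue)"
    if parts_available:
        return "REPAIR"
    return "RECYCLE"  # damaged, out of warranty, no spare parts

print(route_return(window_valid=True, damaged=True, beyond_repair=False,
                   parts_available=True, under_warranty=False))  # REPAIR
&lt;/code&gt;&lt;/pre&gt;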

&lt;p&gt;📦 &lt;strong&gt;Barcode Verification: Because Fraud Exists!&lt;/strong&gt;&lt;br&gt;
Let’s face it—not all returns are genuine. Some customers (we see you! 👀) try to return fake, swapped, or damaged products in hopes of getting a refund or replacement.&lt;/p&gt;

&lt;p&gt;To combat return fraud, my system incorporates barcode verification, ensuring that only genuine returns make it through the process.&lt;/p&gt;

&lt;p&gt;🔍 &lt;strong&gt;How Barcode Verification Works:&lt;/strong&gt;&lt;br&gt;
✅ Scans the product barcode &amp;amp; matches it to the purchase database.&lt;br&gt;
✅ Verifies if the returned item’s serial number matches the original purchase.&lt;br&gt;
✅ Flags suspicious returns (wrong SKU, mismatched product details).&lt;br&gt;
✅ Prevents unauthorized returns, reducing fraud-related losses.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Why This Is a Game-Changer:&lt;/strong&gt;&lt;br&gt;
🔹 Reduces fraudulent returns, saving businesses millions in fake refunds.&lt;br&gt;
🔹 Eliminates manual cross-checking, making return processing much faster.&lt;br&gt;
🔹 Ensures customer trust, preventing abuse of return policies.&lt;/p&gt;

&lt;p&gt;With barcode verification, businesses can instantly detect fraud attempts and take necessary action, preventing revenue loss.&lt;/p&gt;

&lt;p&gt;🚚 &lt;strong&gt;&lt;em&gt;Route Optimization: The Fastest Way to Return a Return&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Once a return is classified (Repair, Recycle, Resell), the next big question is:&lt;br&gt;
📍 Where should it go?&lt;/p&gt;

&lt;p&gt;Transporting returns inefficiently can lead to higher costs, delays, and environmental impact. To solve this, I implemented Dijkstra’s Algorithm, which optimizes return routes based on:&lt;/p&gt;

&lt;p&gt;✅ Shortest distance to repair centers, recycling hubs, or warehouses.&lt;br&gt;
✅ Minimizing fuel consumption and transportation costs.&lt;br&gt;
✅ Reducing carbon footprint by optimizing delivery schedules.&lt;/p&gt;

&lt;p&gt;🚀 &lt;strong&gt;Why Route Optimization is Crucial:&lt;/strong&gt;&lt;br&gt;
🔹 Faster processing – Returns don’t sit idle, reducing wait times.&lt;br&gt;
🔹 Lower costs – Smart routing minimizes unnecessary trips.&lt;br&gt;
🔹 Eco-friendly – Fewer miles = reduced CO₂ emissions.&lt;/p&gt;

&lt;p&gt;By leveraging route optimization, businesses can handle returns efficiently, cut costs, and reduce environmental impact at the same time!&lt;/p&gt;

&lt;p&gt;📊 &lt;strong&gt;&lt;em&gt;Power BI + SQL = Smarter Return Management&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
Reverse logistics isn’t just about moving products—it’s about understanding why returns happen and how to reduce them.&lt;/p&gt;

&lt;p&gt;To provide insights into return patterns, I used Power BI and SQL to analyze:&lt;/p&gt;

&lt;p&gt;📈 Most common return reasons – Is it product defects, shipping damage, or customer misuse?&lt;br&gt;
🛠️ Repair vs. Scrap trends – How many returns are being successfully repaired vs. recycled?&lt;br&gt;
🌍 Environmental impact – Tracking sustainability efforts, carbon footprint reductions.&lt;/p&gt;

&lt;p&gt;With these insights, businesses can:&lt;br&gt;
✅ Improve product design – Reducing common defects that lead to returns.&lt;br&gt;
✅ Enhance logistics workflows – Optimizing inventory and return centers.&lt;br&gt;
✅ Make data-driven decisions – Identifying opportunities to minimize return rates.&lt;/p&gt;

&lt;p&gt;By combining AI-powered classification, barcode verification, and logistics optimization, my system revolutionizes how businesses handle reverse logistics.&lt;/p&gt;

&lt;p&gt;✨ &lt;strong&gt;Why Does This Matter?&lt;/strong&gt;&lt;br&gt;
Companies lose billions each year due to inefficient return processes. My approach solves this by making reverse logistics:&lt;/p&gt;

&lt;p&gt;💰 Cost-effective – Automates return classification, reducing manual labor.&lt;br&gt;
🌍 Sustainable – Ensures products are reused or recycled rather than wasted.&lt;br&gt;
⚡ Efficient – Speeds up decision-making with AI-driven automation.&lt;/p&gt;

&lt;p&gt;By leveraging AI, automation, and smart logistics, businesses can turn returns from a costly burden into an opportunity for efficiency and sustainability.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>NLP: Poetry for the Digitally Challenged</title>
      <dc:creator>praveena 0506</dc:creator>
      <pubDate>Mon, 20 Jan 2025 05:25:37 +0000</pubDate>
      <link>https://forem.com/praveena/poetry-for-the-digitally-challenged-1536</link>
      <guid>https://forem.com/praveena/poetry-for-the-digitally-challenged-1536</guid>
      <description>&lt;p&gt;&lt;strong&gt;Ever wanted&lt;/strong&gt; to write a poem that resonates, or read one and thought, “This is beautiful, but what does it even mean?” Well, I decided to take that poetic confusion and turn it into an opportunity with some NLP magic and creativity. My project combines the power of poem simplification, poem generation, and figure-of-speech analysis to make poetry approachable for everyone, whether you’re a literature fan or someone who finds poetry intimidating.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzofykax96cq760pa69tq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzofykax96cq760pa69tq.png" alt="HOME PAGE" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s what my project offers:&lt;br&gt;
&lt;strong&gt;Simplify Complex Poems:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It scans for archaic words and replaces them with modern equivalents, making the language clearer.&lt;br&gt;
Every word is cross-checked against a CSV file of frequently used English words to gauge complexity.&lt;br&gt;
Words that feel overly complex? They get replaced with simpler synonyms, all while maintaining the poem’s structure and rhythm.&lt;/p&gt;
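
&lt;p&gt;Under the hood, the swap step looks roughly like this (a sketch assuming NLTK's WordNet for synonyms; the frequency CSV is the project's own):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import csv
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")

# The project's CSV of frequently used English words, one word per row.
with open("common_words.csv") as f:
    COMMON = {row[0].lower() for row in csv.reader(f)}

def simpler_synonym(word):
    """Return a common-word synonym if one exists, else keep the original."""
    for synset in wordnet.synsets(word):
        for lemma in synset.lemma_names():
            candidate = lemma.replace("_", " ").lower()
            if candidate in COMMON and candidate != word.lower():
                return candidate
    return word

def simplify_line(line):
    # Swap only words that fail the frequency check, keeping line structure.
    return " ".join(
        w if w.lower() in COMMON else simpler_synonym(w) for w in line.split()
    )

print(simplify_line("Whilst yonder zephyr doth abate"))
&lt;/code&gt;&lt;/pre&gt;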

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6u57nunfetzqpw5pw12.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw6u57nunfetzqpw5pw12.png" alt=" " width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Generate Your Own Poem&lt;/strong&gt;&lt;br&gt;
Want to write a poem but need a little help? Just give a topic—whether it’s love, coffee, or cosmic mysteries—and let the API generate a unique poem for you.&lt;br&gt;
It’s dynamic, so every topic results in a fresh, creative piece of poetry.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabgdq2qlm52koxypcsv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabgdq2qlm52koxypcsv9.png" alt=" " width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analyze Poetic Devices:&lt;/strong&gt;&lt;br&gt;
Curious about what makes a poem stand out? The project detects figures of speech like metaphors, similes, alliterations, and more, giving you a detailed breakdown of the artistic tools used in any poem.&lt;br&gt;
Whether it’s to sound smart or genuinely understand the beauty of a piece, this feature is your best friend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why It Matters:&lt;/strong&gt;&lt;br&gt;
This tool bridges the gap between traditional poetic beauty and modern accessibility. It allows anyone to appreciate, create, and learn from poetry without needing a degree in literature. Whether you’re simplifying a classic, crafting your masterpiece, or analyzing hidden meanings, this project does it all.&lt;/p&gt;

&lt;p&gt;With this project, I’m not just coding—I’m making poetry fun, relatable, and understandable for everyone. Dive into the code and let’s make poetry a little less intimidating and a lot more enjoyable—one verse at a time!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9q1tx5e3vtqdm8j7kpqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9q1tx5e3vtqdm8j7kpqn.png" alt=" " width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>nlp</category>
      <category>api</category>
      <category>huggingface</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
