<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Mudassir Khan</title>
    <description>The latest articles on Forem by Mudassir Khan (@mudassirworks).</description>
    <link>https://forem.com/mudassirworks</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3483324%2F754da73d-4b09-4721-97c9-0d51d8483c1f.png</url>
      <title>Forem: Mudassir Khan</title>
      <link>https://forem.com/mudassirworks</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mudassirworks"/>
    <language>en</language>
    <item>
      <title>CLaRa: Fixing RAG’s Broken Retrieval–Generation Pipeline With Shared-Space Learning</title>
      <dc:creator>Mudassir Khan</dc:creator>
      <pubDate>Tue, 09 Dec 2025 11:00:13 +0000</pubDate>
      <link>https://forem.com/mudassirworks/clara-fixing-rags-broken-retrieval-generation-pipeline-with-shared-space-learning-1448</link>
      <guid>https://forem.com/mudassirworks/clara-fixing-rags-broken-retrieval-generation-pipeline-with-shared-space-learning-1448</guid>
      <description>&lt;p&gt;Retrieval-Augmented Generation &lt;strong&gt;(RAG)&lt;/strong&gt; has become the default solution for grounding LLM outputs in external knowledge. But the classical &lt;strong&gt;RAG&lt;/strong&gt; setup still carries a major architectural flaw: the retriever and generator learn in isolation. This separation quietly sabotages accuracy, increases hallucinations, and prevents genuine end-to-end optimization.&lt;/p&gt;

&lt;p&gt;CLaRa (Closed-Loop Retrieval and Augmentation) introduces a fundamentally different approach — one that actually allows the retriever to learn from what the generator gets wrong.&lt;/p&gt;

&lt;p&gt;Let’s break down why that matters.&lt;/p&gt;

&lt;h2&gt;1. The Core Problem: RAG Is Optimizing Two Brains That Never Talk&lt;/h2&gt;

&lt;p&gt;Traditional &lt;strong&gt;RAG&lt;/strong&gt; pipelines train two components separately:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retriever&lt;/strong&gt; → picks documents using similarity search (dense or sparse).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generator (LLM)&lt;/strong&gt; → takes raw text and tries to answer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The failure point?&lt;br&gt;
There is no gradient flow between these two components.&lt;/p&gt;

&lt;p&gt;The retriever has no idea whether the documents it selected actually helped the generator produce the correct answer. It only optimizes for similarity—not usefulness.&lt;/p&gt;

&lt;p&gt;This leads to:&lt;/p&gt;

&lt;p&gt;"Close but wrong" retrieved documents&lt;/p&gt;

&lt;p&gt;Irrelevant context passed to the LLM&lt;/p&gt;

&lt;p&gt;Weak factual grounding because retrieval can't learn from generation errors&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;RAG&lt;/strong&gt; keeps trying harder at the wrong task.&lt;/p&gt;
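&lt;p&gt;A minimal sketch of that disconnect (toy bag-of-words scoring, invented for illustration, not any real retriever): the ranking below optimizes overlap only, and nothing the generator does ever flows back into it.&lt;/p&gt;

```python
# Toy classical RAG retriever: ranks by lexical similarity alone.
# All names here are illustrative, not from any real library.

def embed(text):
    # Toy bag-of-words "embedding": a word-count dictionary.
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

def similarity(a, b):
    # Overlap score between two bag-of-words vectors.
    return sum(min(a.get(w, 0), b.get(w, 0)) for w in a)

def retrieve(query, corpus, k=1):
    # Pure similarity ranking -- no signal about downstream usefulness.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: similarity(q, embed(doc)),
                    reverse=True)
    return ranked[:k]

corpus = [
    "the eiffel tower is in paris",
    "paris is a city known for the eiffel tower gift shop",
]
docs = retrieve("where is the eiffel tower", corpus)
# Both documents score almost identically here; whichever the retriever
# surfaces, the generator's success or failure on it never feeds back
# into the ranking function.
```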

&lt;h2&gt;2. CLaRa’s Fix: A Shared Continuous Representation Space&lt;/h2&gt;

&lt;p&gt;CLaRa solves the broken gradient issue by mapping both queries and documents into a shared representation space.&lt;/p&gt;

&lt;p&gt;This changes everything.&lt;/p&gt;

&lt;p&gt;How the shared space helps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document embeddings and query embeddings coexist in the same vector space&lt;/li&gt;
&lt;li&gt;The generator’s final answer loss backpropagates through the retriever&lt;/li&gt;
&lt;li&gt;The retriever learns what actually helps answer a query&lt;/li&gt;
&lt;li&gt;Retrieval stops being a similarity contest and becomes a relevance optimization loop&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This feedback loop is the missing piece in traditional &lt;strong&gt;RAG&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The result:&lt;br&gt;
Your retriever becomes intelligent — not just associative.&lt;/p&gt;
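&lt;p&gt;Here is a deliberately tiny numerical sketch of that feedback loop (made-up numbers, finite differences standing in for real backpropagation): retrieval is a softmax over document scores, the answer likelihood is marginalized over documents, so the answer loss has a gradient with respect to the retriever's scores.&lt;/p&gt;

```python
import math

def softmax(scores):
    # Numerically stable softmax over retrieval scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def answer_loss(scores, p_answer_given_doc):
    # -log sum_d p(d|q) * p(answer | q, d): the generator's loss,
    # marginalized over which document was retrieved.
    probs = softmax(scores)
    marginal = sum(p * a for p, a in zip(probs, p_answer_given_doc))
    return -math.log(marginal)

def retriever_gradient(scores, p_answer_given_doc, eps=1e-5):
    # Finite-difference gradient: how each retrieval score should move
    # to make the generator's answer more likely.
    grads = []
    for i in range(len(scores)):
        bumped = scores[:]
        bumped[i] += eps
        grads.append((answer_loss(bumped, p_answer_given_doc)
                      - answer_loss(scores, p_answer_given_doc)) / eps)
    return grads

# Doc 0 is "similar" (high score) but useless; doc 1 actually answers.
scores = [2.0, 1.0]
helpfulness = [0.05, 0.9]
grads = retriever_gradient(scores, helpfulness)
# The gradient is positive on the unhelpful doc's score and negative on
# the helpful one: descent pushes retrieval toward usefulness, not
# surface similarity.
```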

&lt;h2&gt;3. Document Compression: Retrieval Without Text Bloat&lt;/h2&gt;

&lt;p&gt;One of CLaRa’s most practical innovations is how it handles documents:&lt;/p&gt;

&lt;p&gt;It never retrieves raw text. It retrieves compressed memory tokens.&lt;/p&gt;

&lt;p&gt;These are compact, dense vector representations that summarize meaning, not wording.&lt;/p&gt;

&lt;p&gt;How it works:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document → compressed memory tokens (embeddings)&lt;/li&gt;
&lt;li&gt;Retriever fetches tokens instead of full text&lt;/li&gt;
&lt;li&gt;Generator consumes tokens directly&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why this matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Context length shrinks dramatically&lt;/li&gt;
&lt;li&gt;You can process more documents without hitting LLM token limits&lt;/li&gt;
&lt;li&gt;Computation cost drops&lt;/li&gt;
&lt;li&gt;Throughput increases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn’t just more accurate — it’s more efficient.&lt;/p&gt;
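&lt;p&gt;To make the idea concrete, here is a toy compression step (hash-based stand-in embeddings and mean-pooling, purely illustrative; CLaRa's compressor is learned, not pooled): a many-token document collapses into a fixed, small number of dense vectors.&lt;/p&gt;

```python
# Toy "memory token" compression: N token vectors -> k pooled vectors.

def token_embeddings(text, dim=4):
    # Deterministic-per-process toy embedding per token (hash-based,
    # illustration only -- not stable across runs).
    vecs = []
    for tok in text.lower().split():
        h = hash(tok)
        vecs.append([((h >> (8 * i)) & 0xFF) / 255.0 for i in range(dim)])
    return vecs

def compress(text, num_memory_tokens=2, dim=4):
    # Split the token vectors into chunks and mean-pool each chunk into
    # one dense memory token: context length shrinks from len(tokens)
    # down to num_memory_tokens.
    vecs = token_embeddings(text, dim)
    chunk = max(1, -(-len(vecs) // num_memory_tokens))  # ceil division
    memory = []
    for start in range(0, len(vecs), chunk):
        group = vecs[start:start + chunk]
        memory.append([sum(v[i] for v in group) / len(group)
                       for i in range(dim)])
    return memory[:num_memory_tokens]

doc = ("retrieval augmented generation grounds large language models "
       "in external knowledge")
mem = compress(doc)
# 11 tokens in, 2 memory vectors out -- the generator consumes the
# vectors directly instead of re-reading raw text.
```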

&lt;h2&gt;4. SCP: Training the Compressor to Capture Meaning, Not Noise&lt;/h2&gt;

&lt;p&gt;CLaRa doesn’t trust standard compression to produce semantically meaningful vectors (and rightly so).&lt;br&gt;
So it introduces Salient Compressor Pre-training &lt;strong&gt;(SCP)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Goal of SCP:&lt;/p&gt;

&lt;p&gt;Make compressed representations focus on meaning, not superficial text features.&lt;/p&gt;

&lt;p&gt;How &lt;strong&gt;SCP&lt;/strong&gt; trains the compressor:&lt;/p&gt;

&lt;p&gt;The system uses synthetic data generated by an LLM:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple QA pairs&lt;/li&gt;
&lt;li&gt;Complex QA tasks&lt;/li&gt;
&lt;li&gt;Paraphrased document sets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The compressor is trained to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generate embeddings that can answer these questions&lt;/li&gt;
&lt;li&gt;Reconstruct paraphrased meaning (not the exact text)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This forces the vectors to internalize the semantic core of the document.&lt;/p&gt;

&lt;p&gt;By the time end-to-end training starts, the compressor already knows how to distill content into high-information embeddings.&lt;/p&gt;
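&lt;p&gt;A structure-only sketch of such an objective (the scoring functions and the weighting are made up; the real SCP training is far more involved): the compressor is rewarded when its embeddings support QA and stay close to a paraphrase's embeddings, not when they memorize surface text.&lt;/p&gt;

```python
# Hypothetical two-part pre-training loss in the spirit of SCP.

def qa_loss(memory_vectors, qa_pairs, answer_score):
    # Penalize memory that cannot answer synthetic questions.
    # answer_score is assumed to return a value in [0, 1].
    return sum(1.0 - answer_score(memory_vectors, q, a) for q, a in qa_pairs)

def paraphrase_loss(memory_vectors, paraphrase_vectors):
    # Penalize distance between the embeddings of a document and of its
    # paraphrase: this pushes toward meaning-level, not wording-level,
    # representations.
    return sum((m - p) ** 2
               for mv, pv in zip(memory_vectors, paraphrase_vectors)
               for m, p in zip(mv, pv))

def scp_loss(memory, paraphrase_memory, qa_pairs, answer_score, alpha=0.5):
    # Weighted combination of the two signals (alpha is arbitrary here).
    return (alpha * qa_loss(memory, qa_pairs, answer_score)
            + (1 - alpha) * paraphrase_loss(memory, paraphrase_memory))

# Toy usage with a dummy scorer that "answers" at confidence 0.8:
memory = [[0.1, 0.2], [0.3, 0.4]]
para = [[0.1, 0.25], [0.3, 0.35]]
pairs = [("what is rag?", "retrieval-augmented generation")]
loss = scp_loss(memory, para, pairs, lambda m, q, a: 0.8)
```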

&lt;h2&gt;5. Why CLaRa Matters&lt;/h2&gt;

&lt;p&gt;CLaRa isn't just a tweak — it’s a structural correction to how &lt;strong&gt;RAG&lt;/strong&gt; should work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The retriever learns from generator errors&lt;/li&gt;
&lt;li&gt;Vector-based compressed memory beats raw-text retrieval&lt;/li&gt;
&lt;li&gt;End-to-end gradients reconnect the entire pipeline&lt;/li&gt;
&lt;li&gt;Accuracy improves without inflating compute&lt;/li&gt;
&lt;li&gt;Embeddings become meaning-first, not token-first&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the kind of architecture shift that will define the next generation of knowledge-augmented LLM systems.&lt;/p&gt;

</description>
      <category>rag</category>
      <category>llm</category>
      <category>architecture</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>The Rise of Prompt-Driven Development: Why the Future of Software May Be Written in Prompts</title>
      <dc:creator>Mudassir Khan</dc:creator>
      <pubDate>Wed, 24 Sep 2025 03:36:03 +0000</pubDate>
      <link>https://forem.com/mudassirworks/the-rise-of-prompt-driven-development-why-the-future-of-software-may-be-written-in-prompts-27g0</link>
      <guid>https://forem.com/mudassirworks/the-rise-of-prompt-driven-development-why-the-future-of-software-may-be-written-in-prompts-27g0</guid>
      <description>&lt;p&gt;f someone told you a few years ago that software engineers would spend more time writing prompts than code, you probably would have laughed. Yet, here we are, standing right on the edge of that reality.&lt;/p&gt;

&lt;p&gt;Think about it. The last decade was all about cloud computing, DevOps, and agile transformations. Today, the shift is happening all over again—this time towards Prompt-Driven Development (PDD) and Governed Prompt Software (GPS) Engineering.&lt;/p&gt;

&lt;h2&gt;
  
  
  📖 A Little Story: From Code to Prompts
&lt;/h2&gt;

&lt;p&gt;Imagine you're building a chatbot for a healthcare startup.&lt;/p&gt;

&lt;p&gt;The Old Way: You'd spend weeks coding dialogue trees, managing APIs, and writing endless if/else statements.&lt;/p&gt;

&lt;p&gt;The New Way: You simply design structured prompts: "If a patient reports chest pain, escalate immediately to a human doctor and provide emergency guidelines."&lt;/p&gt;

&lt;p&gt;The code is still there, of course. But most of the intelligence now comes from how you craft prompts, set rules, and govern the AI's behavior.&lt;/p&gt;

&lt;p&gt;It feels less like writing code… and more like writing instructions for a very smart but unpredictable intern. 😅&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 What is Prompt-Driven Development (PDD)?
&lt;/h2&gt;

&lt;p&gt;PDD treats prompts the same way traditional software treats code. Instead of just focusing on syntax and logic, you are now responsible for designing prompt workflows, testing them, and documenting your design choices.&lt;/p&gt;

&lt;p&gt;Some of its main building blocks include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Prompt Requests (PRs):&lt;/strong&gt; Think of them as reusable functions, but for prompts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architectural Decision Records (ADRs):&lt;/strong&gt; Why did you phrase the prompt this way? Why was this workflow better than another?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt History Records (PHRs):&lt;/strong&gt; A changelog of how your prompts evolve over time, just like version control for code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing &amp;amp; Validation:&lt;/strong&gt; Running prompts against test cases (almost like TDD, but for language models).&lt;/li&gt;
&lt;/ul&gt;
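&lt;p&gt;A minimal sketch of what prompt-level testing could look like (hypothetical structure; the keyword-routing &lt;code&gt;run_prompt&lt;/code&gt; below is a stand-in you would swap for a real LLM client):&lt;/p&gt;

```python
# Toy prompt regression harness: cases are (input, predicate) pairs.

def run_prompt(prompt, patient_message):
    # Stand-in for an LLM call: trivial keyword routing for the demo.
    if "chest pain" in patient_message.lower():
        return "ESCALATE: connecting you to a human doctor."
    return "Noted. How else can I help?"

PROMPT = ("If a patient reports chest pain, escalate immediately "
          "to a human doctor and provide emergency guidelines.")

TEST_CASES = [
    ("I have chest pain", lambda out: out.startswith("ESCALATE")),
    ("I need to reschedule", lambda out: "ESCALATE" not in out),
]

def validate(prompt, cases):
    # Run every case and collect the failures, like a unit-test suite.
    return [msg for msg, check in cases if not check(run_prompt(prompt, msg))]

failures = validate(PROMPT, TEST_CASES)
# An empty failure list gates the prompt change, just as green tests
# gate a code change.
```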

&lt;h2&gt;
  
  
  🛡️ And What About GPS (Governed Prompt Software)?
&lt;/h2&gt;

&lt;p&gt;If PDD is about building the system, GPS is about keeping it safe.&lt;/p&gt;

&lt;p&gt;AI is incredibly powerful, but it can also be unpredictable. GPS Engineering ensures that your prompts don't just "work," but that they also follow governance rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prevent biased or harmful outputs.&lt;/li&gt;
&lt;li&gt;Ensure compliance with safety standards.&lt;/li&gt;
&lt;li&gt;Maintain accountability (who wrote this prompt, and why?).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it as DevSecOps for prompts—a safety layer that ensures your AI systems are trustworthy.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 Why Does All This Matter?
&lt;/h2&gt;

&lt;p&gt;Let's look at a few real-world scenarios:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Finance:&lt;/strong&gt; Imagine tokenizing ETFs (like what BlackRock is experimenting with). Instead of traders managing everything manually, prompts can govern transactions, risk checks, and reporting in real time, 24/7.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcare:&lt;/strong&gt; A digital nurse agent could run entirely on prompt workflows, escalating only when human intervention is truly required.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Education:&lt;/strong&gt; Personalized tutors, driven by carefully crafted prompts, could adapt teaching styles to each student in ways no static curriculum ever could.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In all of these cases, the prompts themselves become the real intellectual property (IP).&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚡ The Big Shift: From Coder to Prompt Architect
&lt;/h2&gt;

&lt;p&gt;Just as the rise of DevOps created new roles like SREs and Platform Engineers, this new era is already giving birth to new roles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt Architects&lt;/li&gt;
&lt;li&gt;AI Workflow Engineers&lt;/li&gt;
&lt;li&gt;Governance Leads for AI Systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The engineers of tomorrow may not just write Python or Java—they will design conversational logic, governance frameworks, and agent orchestration strategies.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚠️ A Reality Check
&lt;/h2&gt;

&lt;p&gt;We are still in the very early innings of this journey. Every company has its own approach, and the standards are still evolving. But remember: agile, DevOps, and cloud computing all started this way—as small experiments that eventually reshaped the entire industry.&lt;/p&gt;

&lt;p&gt;Prompt-Driven Development and GPS Engineering could be the next major transformation.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧭 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;The future of software might not be written line by line in code, but designed prompt by prompt, governed with care, and orchestrated at scale.&lt;/p&gt;

&lt;p&gt;The real question is: are you ready to evolve from just a developer into a prompt architect?&lt;/p&gt;

&lt;p&gt;Because the next generation of apps won't just be coded. They'll be prompted, governed, and trusted.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;What do you think?&lt;/strong&gt; Let me know in the comments! Will prompts ever truly replace coding workflows, or will they simply add another powerful layer to the software stack?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>softwaredevelopment</category>
      <category>code</category>
    </item>
    <item>
      <title>I Had a Fight With My Toaster. It Made Me Realize Everything About the Future of AI.</title>
      <dc:creator>Mudassir Khan</dc:creator>
      <pubDate>Tue, 23 Sep 2025 06:43:40 +0000</pubDate>
      <link>https://forem.com/mudassirworks/i-had-a-fight-with-my-toaster-it-made-me-realize-everything-about-the-future-of-ai-48n4</link>
      <guid>https://forem.com/mudassirworks/i-had-a-fight-with-my-toaster-it-made-me-realize-everything-about-the-future-of-ai-48n4</guid>
      <description>&lt;p&gt;It was 7 AM. The coffee was brewing, the sun was streaming in, and I was arguing with a toaster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You had one job&lt;/strong&gt;, I mumbled, scraping the blackened edges of my toast into the sink. The toaster, a "smart" one, was supposed to know my preferences. Yet, here we were. This little gadget, a tiny island of supposed intelligence, was completely disconnected from the rest of my morning. It didn't know I was running late. It didn't know the coffee machine had just finished a dark roast, which pairs terribly with burnt bread. It just knew &lt;strong&gt;toast&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This frustratingly common experience is a symptom of a larger problem. We've built millions of "smart" things, but they're not wise. They're isolated, they follow rigid rules, and they don't talk to each other.&lt;/p&gt;

&lt;p&gt;But what if they did? What if we're on the verge of a world that isn't just smart, but truly alive? I recently stumbled upon a vision for the future that gave this idea a name: &lt;strong&gt;Agentia World&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;From Rigid Commands to Living Conversations&lt;/h2&gt;

&lt;p&gt;Before we dive in, let's talk about how things work now. Almost every digital interaction you have is governed by APIs (Application Programming Interfaces). Think of an API as a strict restaurant menu. You can order item #3 with a side of #B, and the kitchen knows exactly what to do. It's efficient, but rigid. You can't say, "I'm feeling something light and spicy today." The system doesn't understand intent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentia World&lt;/strong&gt; imagines a future that scraps this menu. Instead, our devices—our "agents"—will have intelligent dialogues.&lt;/p&gt;

&lt;p&gt;Imagine your car agent talking to your home agent. It wouldn't send a rigid API call like home.gate.open(). It would have a conversation:&lt;/p&gt;

&lt;p&gt;Car Agent: "Hey, I'm about five minutes away with the owner. It's been a long day."&lt;/p&gt;

&lt;p&gt;Home Agent: "Understood. Preparing for arrival. I'll open the garage, turn on the hallway lights, and start the evening focus playlist on the speakers."&lt;/p&gt;

&lt;p&gt;This isn't just a command; it's a collaborative exchange based on understanding the goal. The home doesn't just obey; it anticipates.&lt;/p&gt;
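&lt;p&gt;The contrast can be sketched in a few lines of code (the message format and agent names are invented for illustration): instead of a rigid &lt;code&gt;home.gate.open()&lt;/code&gt; call, the car agent sends a goal, and the home agent decides the concrete actions.&lt;/p&gt;

```python
# Toy intent-level agent exchange: the home interprets a goal and plans
# actions, rather than executing one hard-coded command.

def home_agent(message):
    actions = []
    if message.get("intent") == "arriving_home":
        actions.append("open_garage")
        actions.append("lights_hallway_on")
        if message.get("mood") == "tired":
            # Anticipation, not obedience: extra context changes the plan.
            actions.append("play_evening_focus_playlist")
    return actions

car_message = {
    "intent": "arriving_home",
    "eta_minutes": 5,
    "mood": "tired",
}
plan = home_agent(car_message)
# "It's been a long day" becomes mood context, and the home adds the
# calming playlist on its own initiative.
```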

&lt;h2&gt;A World Where Everything is an Agent&lt;/h2&gt;

&lt;p&gt;This is where the vision gets really wild. It proposes that everything becomes an AI agent. Not just your phone and your car, but the mundane, everyday objects.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your coffee machine becomes a personal barista, checking your calendar for early meetings and your health tracker for your caffeine limits.&lt;/li&gt;
&lt;li&gt;Your houseplants have agents that negotiate with the window blinds for the perfect amount of sunlight.&lt;/li&gt;
&lt;li&gt;An entire city becomes a macro-agent. The traffic light agents talk to public transport agents, which talk to the power grid agents, all working in a seamless, living network to eliminate traffic jams and save energy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This creates a &lt;strong&gt;living network&lt;/strong&gt; that's constantly learning and adapting. It's a system that's both digital (the AI making decisions) and physical (the agents controlling real-world objects).&lt;/p&gt;

&lt;h2&gt;So, What Does This Mean for Us?&lt;/h2&gt;

&lt;p&gt;For the average person, it means a world that simply works. A world where the technology disappears into the background, seamlessly orchestrating our lives for the better. The constant micro-management of our digital lives fades away.&lt;/p&gt;

&lt;p&gt;For us developers, it represents the next frontier. We'll move from building isolated apps and services to designing collaborative agents that can negotiate, learn, and act within a massive, decentralized ecosystem. The challenge will shift from writing rigid code to teaching intelligent systems.&lt;/p&gt;

&lt;p&gt;The fight with my toaster was a reminder of how far we still have to go. We don't just need smarter devices; we need a wiser world. And that's the promise of a future where everything, from our toasters to our cities, is part of one intelligent, living conversation.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>futurechallenge</category>
      <category>webdev</category>
      <category>discuss</category>
    </item>
    <item>
      <title>From Full-Stack to AI + Web3: Why I’m Re-Entering Tech in 2025</title>
      <dc:creator>Mudassir Khan</dc:creator>
      <pubDate>Sat, 06 Sep 2025 13:29:05 +0000</pubDate>
      <link>https://forem.com/mudassirworks/from-full-stack-to-ai-web3-why-im-re-entering-tech-in-2025-935</link>
      <guid>https://forem.com/mudassirworks/from-full-stack-to-ai-web3-why-im-re-entering-tech-in-2025-935</guid>
      <description>&lt;p&gt;Hey DEV! I’m &lt;strong&gt;Mudassir Khan&lt;/strong&gt;—a full-stack builder getting back to hands-on coding with a sharper focus on &lt;strong&gt;AI automation&lt;/strong&gt; and &lt;strong&gt;blockchain&lt;/strong&gt;. I’ve run products and client work for a while; now I’m doubling down on building in public, sharing what actually works, and collaborating with the community.&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Full-stack with &lt;strong&gt;Next.js, React, Node.js, TypeScript&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Automations with &lt;strong&gt;Python, n8n, CrewAI&lt;/strong&gt;, a bit of &lt;strong&gt;PyTorch&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Exploring &lt;strong&gt;Web3&lt;/strong&gt; (Solidity + practical dApps)
&lt;/li&gt;
&lt;li&gt;Open to &lt;strong&gt;open-source&lt;/strong&gt; &amp;amp; &lt;strong&gt;startup collaborations&lt;/strong&gt; (remote)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Why I’m back to building (and sharing)
&lt;/h2&gt;

&lt;p&gt;I missed the craft: shipping features, deleting code that doesn’t pull its weight, and watching users find value. I’m here to write short, practical posts—the kind you can skim in 3–5 minutes and apply the same day.&lt;/p&gt;




&lt;h2&gt;
  
  
  What you can expect here
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1) Small, repeatable wins&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Short tutorials on things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;turning messy scripts into reliable &lt;strong&gt;n8n&lt;/strong&gt; flows
&lt;/li&gt;
&lt;li&gt;using &lt;strong&gt;CrewAI&lt;/strong&gt; patterns for agent hand-offs
&lt;/li&gt;
&lt;li&gt;cutting build times, faster previews, cleaner DX&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2) Open-source first&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
I’ll publish small utilities and examples. If a snippet helps you, PRs are welcome. If you want a maintainer buddy, ping me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) Honest notes from the trenches&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
What failed, what shipped, and what I’d do differently. No buzzword bingo—just notes that save you time.&lt;/p&gt;




&lt;h2&gt;
  
  
  Tech I use (and enjoy)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; React, Next.js, Tailwind
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Node.js/Express, TypeScript, Python
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data:&lt;/strong&gt; SQL, MongoDB
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI/Automation:&lt;/strong&gt; n8n, CrewAI, PyTorch (light), agents &amp;amp; orchestration
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DevOps:&lt;/strong&gt; Docker
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web3:&lt;/strong&gt; Solidity basics, dApp prototypes&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Recent tiny win: a CLI to batch-convert assets to modern formats and auto-rewrite imports in Next.js repos—helped cut bundle size and image payloads.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  2025 focus
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AI agents for real developer workflows (not demos)
&lt;/li&gt;
&lt;li&gt;Developer utilities that reduce toil
&lt;/li&gt;
&lt;li&gt;Web3 experiments where decentralization makes sense
&lt;/li&gt;
&lt;li&gt;Writing and shipping in public, consistently&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Let’s collaborate
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Building an OSS tool and need a teammate?
&lt;/li&gt;
&lt;li&gt;Startup founder who needs a pragmatic full-stack dev?
&lt;/li&gt;
&lt;li&gt;Have an idea around &lt;strong&gt;AI × automation × Web3&lt;/strong&gt;?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Comment, DM, or drop your GitHub—happy to jam on ideas, review PRs, or pair on a weekend hack.&lt;/p&gt;

&lt;p&gt;Thanks for reading. If any of this resonates, hit &lt;strong&gt;Follow&lt;/strong&gt;—the next post will be a 5-minute automation guide you can copy, tweak, and ship. 🚀&lt;/p&gt;

&lt;p&gt;— &lt;strong&gt;Mudassir Khan&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Muddi00seven" rel="noopener noreferrer"&gt;GitHub: Muddi00seven&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
