<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Narayan</title>
    <description>The latest articles on Forem by Narayan (@narayan_f8f2c91c99dfd33e6).</description>
    <link>https://forem.com/narayan_f8f2c91c99dfd33e6</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3418325%2Fc3cf8111-96aa-48a5-9eb6-2113a4f3256b.png</url>
      <title>Forem: Narayan</title>
      <link>https://forem.com/narayan_f8f2c91c99dfd33e6</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/narayan_f8f2c91c99dfd33e6"/>
    <language>en</language>
    <item>
      <title>🔥 Single Biggest Idea Behind Polars Isn't Rust — It's LAZY 🔥 Part(2/5)</title>
      <dc:creator>Narayan</dc:creator>
      <pubDate>Fri, 07 Nov 2025 05:41:15 +0000</pubDate>
      <link>https://forem.com/narayan_f8f2c91c99dfd33e6/single-biggest-idea-behind-polars-isnt-rust-its-lazy-part25-3o8i</link>
      <guid>https://forem.com/narayan_f8f2c91c99dfd33e6/single-biggest-idea-behind-polars-isnt-rust-its-lazy-part25-3o8i</guid>
      <description>&lt;p&gt;If you're still processing data in sequential steps (Pandas-style), you're missing out on 90% of Polars' performance gains.&lt;/p&gt;

&lt;p&gt;This is the core difference: Eager vs. Lazy. Understanding this makes the Expression API click.&lt;/p&gt;

&lt;p&gt;❌ &lt;strong&gt;𝐓𝐇𝐄 𝐄𝐀𝐆𝐄𝐑 (𝐏𝐀𝐍𝐃𝐀𝐒) 𝐖𝐀𝐘: 𝐄𝐱𝐞𝐜𝐮𝐭𝐞 𝐈𝐦𝐦𝐞𝐝𝐢𝐚𝐭𝐞𝐥𝐲&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every line runs instantly, creating a new DataFrame in memory at each step.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import pandas as pd

df = pd.read_csv("large_file.csv")  # 1. Loads ALL columns into memory
df["doubled"] = df["new_val"] * 2   # 2. Eagerly computes and materializes a new column
df = df.groupby("category").sum()   # 3. Final compute
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;🚨 Result: Huge memory footprint, wasted I/O, no query optimization.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;𝐓𝐇𝐄 𝐋𝐀𝐙𝐘 (𝐏𝐎𝐋𝐀𝐑𝐒) 𝐖𝐀𝐘: 𝐏𝐥𝐚𝐧 𝐅𝐢𝐫𝐬𝐭, 𝐄𝐱𝐞𝐜𝐮𝐭𝐞 𝐎𝐧𝐜𝐞&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Polars records all operations, builds an optimized plan, and only runs when you call &lt;code&gt;.collect()&lt;/code&gt;. This unlocks &lt;strong&gt;QUERY OPTIMIZATION&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;🎯 The Two-Step Dance:&lt;/p&gt;

&lt;p&gt;Step 1: Define the Plan (LazyFrame). Nothing runs yet.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import polars as pl

q = (
    pl.scan_csv("large_file.csv")
    .filter(pl.col("value") &amp;gt; 100)
    .with_columns(
        (pl.col("new_val") * 2).alias("doubled")
    )
    .group_by("category")
    .agg(pl.col("doubled").sum())
)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Step 2: Execute the Optimized Plan.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;result = q.collect()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;🧠 &lt;strong&gt;𝐖𝐇𝐀𝐓 𝐓𝐇𝐄 𝐐𝐔𝐄𝐑𝐘 𝐎𝐏𝐓𝐈𝐌𝐈𝐙𝐄𝐑 𝐃𝐎𝐄𝐒&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Polars applies transformations to your plan:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Projection Pushdown:&lt;/strong&gt; Only read the columns you use.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Predicate Pushdown:&lt;/strong&gt; Filter rows while reading the CSV (skip rows at the source).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Expression Fusion:&lt;/strong&gt; Combine multiple operations into a single, efficient kernel (no intermediate copies).&lt;/li&gt;
&lt;/ol&gt;
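&lt;p&gt;To make the idea concrete, here is a toy sketch in plain Python (my own illustration, not Polars internals): operations are recorded as data first, so the plan can filter rows as they are read and run every recorded column computation in a single pass.&lt;/p&gt;

```python
# Toy illustration of lazy evaluation. NOT Polars internals:
# operations are recorded first, then everything runs once in collect().

class LazyPlan:
    def __init__(self, rows):
        self.rows = rows     # pretend data source
        self.filters = []    # recorded predicates
        self.maps = []       # recorded column computations

    def filter(self, pred):
        self.filters.append(pred)
        return self          # nothing runs yet

    def with_column(self, name, fn):
        self.maps.append((name, fn))
        return self          # still nothing runs

    def collect(self):
        out = []
        for row in self.rows:                      # single pass over the source
            if all(p(row) for p in self.filters):  # "predicate pushdown": filter at read time
                row = dict(row)
                for name, fn in self.maps:         # "fusion": all maps in one loop
                    row[name] = fn(row)
                out.append(row)
        return out

q = (
    LazyPlan([{"category": "a", "value": 150, "new_val": 3},
              {"category": "b", "value": 50, "new_val": 4}])
    .filter(lambda r: r["value"] > 100)
    .with_column("doubled", lambda r: r["new_val"] * 2)
)
print(q.collect())   # only now does any work happen
```

&lt;p&gt;Polars' real optimizer works on a full query plan with many more rewrites, but the shape is the same: record, optimize, then execute once on &lt;code&gt;.collect()&lt;/code&gt;.&lt;/p&gt;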

&lt;p&gt;💰 &lt;strong&gt;𝐑𝐄𝐀𝐋-𝐖𝐎𝐑𝐋𝐃 𝐈𝐌𝐏𝐀𝐂𝐓 (𝟏𝟎𝐆𝐁 𝐂𝐒𝐕 𝐁𝐞𝐧𝐜𝐡𝐦𝐚𝐫𝐤)&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Pandas (Eager)&lt;/th&gt;
&lt;th&gt;Polars (Lazy)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Time&lt;/td&gt;
&lt;td&gt;~8 minutes&lt;/td&gt;
&lt;td&gt;~45 seconds (&lt;strong&gt;10x faster&lt;/strong&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Memory&lt;/td&gt;
&lt;td&gt;12GB peak&lt;/td&gt;
&lt;td&gt;2GB peak (&lt;strong&gt;6x less&lt;/strong&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Why the difference?&lt;/strong&gt; Polars only loaded what it needed, filtered while reading, and fused operations.&lt;/p&gt;

&lt;p&gt;🔑 &lt;strong&gt;𝐊𝐄𝐘 𝐓𝐀𝐊𝐄𝐀𝐖𝐀𝐘&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lazy evaluation is why Polars is fast. The speedups come from:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Loading only what you need.&lt;/li&gt;
&lt;li&gt;Filtering at the source.&lt;/li&gt;
&lt;li&gt;Fusing operations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;#DataEngineering #Polars #Python #DataScience #DataAnalytics&lt;/p&gt;

</description>
    </item>
    <item>
      <title>💥 Polars vs. Pandas: Why Your Next ETL Pipeline Should Run on Rust (Part 1/5)</title>
      <dc:creator>Narayan</dc:creator>
      <pubDate>Tue, 14 Oct 2025 20:57:10 +0000</pubDate>
      <link>https://forem.com/narayan_f8f2c91c99dfd33e6/polars-vs-pandas-why-your-next-etl-pipeline-should-run-on-rust-part-15-536o</link>
      <guid>https://forem.com/narayan_f8f2c91c99dfd33e6/polars-vs-pandas-why-your-next-etl-pipeline-should-run-on-rust-part-15-536o</guid>
      <description>&lt;p&gt;If you're a Data Engineer, you've seen the struggle: Pandas is brilliant for analysis, but when you hit the scaling, multi-threading, or memory wall in production, it falls short.&lt;/p&gt;

&lt;p&gt;I've been doing a deep dive into Polars as part of my own "learning in public" journey. It's not just "faster Pandas"; it's a complete shift in how data processing is handled, built on a Rust core to solve our toughest ETL problems.&lt;/p&gt;

&lt;p&gt;This post shares my findings and conviction that Polars is the future of performant, single-node data engineering.&lt;/p&gt;

&lt;p&gt;🚀 &lt;strong&gt;The Core Difference: Rust and Arrow&lt;/strong&gt;&lt;br&gt;
Polars' superior performance isn't magic—it's architecture. It leverages the best features of modern systems engineering:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Engine Core:&lt;/strong&gt; Rust, a blazingly fast and memory-safe systems language. This is where the speed comes from, allowing Polars to execute code efficiently and in parallel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory Model:&lt;/strong&gt; Apache Arrow, which uses a columnar format. This means Polars only loads the columns it needs, uses less memory overall, and enables zero-copy sharing between processes.&lt;/p&gt;
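&lt;p&gt;A minimal sketch of that difference (plain Python dictionaries and lists, not Arrow itself): with a columnar layout, scanning one field touches a single contiguous array instead of every record.&lt;/p&gt;

```python
# Row vs columnar layout, sketched with plain Python containers.

# Row-oriented: each record is stored together, so reading one field
# still means walking through every whole record.
rows = [
    {"id": 1, "name": "ada", "score": 91},
    {"id": 2, "name": "bob", "score": 78},
]

# Column-oriented (Arrow-style): each column is one contiguous array.
columns = {
    "id": [1, 2],
    "name": ["ada", "bob"],
    "score": [91, 78],
}

# Same query, two layouts: average of "score".
row_scan = sum(r["score"] for r in rows) / len(rows)       # visits whole records
col_scan = sum(columns["score"]) / len(columns["score"])   # touches one array only

print(row_scan, col_scan)
```

&lt;p&gt;Real Arrow buffers add fixed-width typed memory on top of this, which is what makes the zero-copy sharing between processes possible.&lt;/p&gt;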

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ikm3pz1r7iwqtdy8i4k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ikm3pz1r7iwqtdy8i4k.png" alt=" " width="800" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Code Simplicity: The Functional API&lt;/strong&gt;&lt;br&gt;
Polars encourages a clean, functional style that reduces bugs and improves readability.&lt;/p&gt;

&lt;p&gt;In Pandas, you often mutate the DataFrame (&lt;code&gt;df[col] = ...&lt;/code&gt;). In Polars, you build an execution plan using chained methods and expressions, which is key to its optimization engine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyder5jgmw7l57mpdb8k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyder5jgmw7l57mpdb8k.png" alt=" " width="723" height="44"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Polars method is a declarative instruction that the Rust optimizer can rearrange and fuse for maximum performance. This is critical for maintainable, high-speed pipelines.&lt;/p&gt;

&lt;p&gt;This is Part 1 of a 5-part miniseries documenting my deep dive and journey into Polars for scalable ETL.&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;Question:&lt;/strong&gt; What size dataset (rows/GBs) was the tipping point that made you start looking beyond Pandas? Share your experience below!&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>rust</category>
      <category>python</category>
      <category>performance</category>
    </item>
    <item>
      <title>Two Years of Microsoft Fabric: Game Changer or Still Leveling Up? 🚀</title>
      <dc:creator>Narayan</dc:creator>
      <pubDate>Wed, 03 Sep 2025 21:59:41 +0000</pubDate>
      <link>https://forem.com/narayan_f8f2c91c99dfd33e6/two-years-of-microsoft-fabric-game-changer-or-still-leveling-up-165e</link>
      <guid>https://forem.com/narayan_f8f2c91c99dfd33e6/two-years-of-microsoft-fabric-game-changer-or-still-leveling-up-165e</guid>
      <description>&lt;p&gt;Microsoft Fabric has been making waves for two years now, and after diving deep with more than 6 months of hands-on experience, I’ve got some thoughts. Is it the unified data dream, or does it still have some growing pains? Let’s break it down.&lt;/p&gt;

&lt;p&gt;✅ The Superpowers: What Fabric Nails&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Unified Platform &amp;amp; OneLake 🤝&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is Fabric’s biggest win. OneLake as a "single pane of glass" isn't just marketing hype; it genuinely cuts down on data duplication and massively improves data quality. Imagine all your data, from raw to refined, living in one place, accessible by all your tools.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Efficiency Gains ⚡️&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Remember the days of juggling Azure Data Factory, Synapse, and Power BI separately? With Fabric, those days are gone. The integration is seamless, making the entire data workflow feel like one continuous, efficient process. It’s a huge time-saver.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;BI + AI Integration 🧠&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With Copilot in Power BI, turning natural language into analytics feels incredibly powerful. It makes the entire analytics and data engineering process more fluid and accessible, bringing BI and AI closer together than ever before.&lt;/p&gt;

&lt;p&gt;⚠️ The Road Ahead: What's Still Evolving&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Scale 📈&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Fabric handles big volumes, but under extreme, highly concurrent workloads, I've seen some hiccups. It's not a deal-breaker for most, but it’s something to watch and plan for, especially with mission-critical pipelines.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Governance 🛡️&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While the basics are there, the fine-grained controls and enterprise-level audit trails that large organizations need are still maturing. For a company with strict regulatory requirements, this is a key area that needs more polish.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Adoption 🧭&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Momentum is building, but most orgs are still in pilot mode or in limited production. Fully embracing Fabric requires a significant cultural shift and a reskilling effort for data teams, which naturally slows down widespread adoption.&lt;/p&gt;

&lt;p&gt;👉 My Take: Early Growth, Powerful Potential&lt;br&gt;
Fabric is a game-changer for teams already deep in the Microsoft ecosystem. However, it's not yet the “one platform to rule them all.” It’s powerful, but still in its early growth phase. The potential is massive, but the journey to full enterprise maturity is still underway.&lt;/p&gt;

&lt;p&gt;💬 What's your take? Let's discuss!&lt;br&gt;
Would you trust Fabric with production-grade pipelines today, or wait for more polish? Share your experience below! 👇&lt;/p&gt;

</description>
      <category>azure</category>
      <category>dataengineering</category>
      <category>cloud</category>
      <category>microsoftfabric</category>
    </item>
    <item>
      <title>🚀 Synthetic Data: The Next Frontier for Data Engineers</title>
      <dc:creator>Narayan</dc:creator>
      <pubDate>Wed, 20 Aug 2025 15:57:02 +0000</pubDate>
      <link>https://forem.com/narayan_f8f2c91c99dfd33e6/synthetic-data-the-next-frontier-for-data-engineers-52ig</link>
      <guid>https://forem.com/narayan_f8f2c91c99dfd33e6/synthetic-data-the-next-frontier-for-data-engineers-52ig</guid>
      <description>&lt;p&gt;Hey data community, let’s talk about a topic that's quickly becoming essential. We all know the struggle: a world hungry for data and AI, but with privacy rules (HIPAA, GDPR) that make it tough to get what we need.&lt;/p&gt;

&lt;p&gt;That's where synthetic data comes in. Forget about "fake data"—think of it as a meticulously engineered copycat of the real thing. It looks, feels, and acts just like production data, but without any of the sensitive info. This lets us build, test, and innovate without the usual legal headaches.&lt;/p&gt;

&lt;p&gt;📊 What Are We Actually Using This For?&lt;/p&gt;

&lt;p&gt;If you're wondering where the real work is happening, here’s a quick look at where data teams are putting their effort:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filxyhtdufph0czdpc5x7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Filxyhtdufph0czdpc5x7.png" alt="A bar chart showing percentages of effort spend " width="731" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Testing Pipelines&lt;/strong&gt;: This is a big one. We can now run our ETL jobs on massive, realistic datasets to catch bugs and fine-tune performance before they ever touch live data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Training AI&lt;/strong&gt;: AI models need tons of data, and synthetic data provides an unlimited, ethical source. It's how we're building better, less-biased models for drug discovery and other critical tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Running Simulations&lt;/strong&gt;: Ever wanted to test a new business strategy? Now we can run realistic simulations in a secure environment, saving tons of time and money.&lt;/p&gt;

&lt;p&gt;💻 A Peek Under the Hood&lt;/p&gt;

&lt;p&gt;This isn't just theory. We’re using powerful tools to get it done. Libraries like SDV are perfect for learning the statistical DNA of a dataset and generating a twin. Then, for enterprise-level scaling, platforms like Gretel.ai and Mostly AI take it to the next level.&lt;/p&gt;

&lt;p&gt;Take a look at this quick example, with data generated using Faker. See how the synthetic data isn't just random: it captures the relationships in the original data, like how age might correlate with a medical diagnosis. That's the real magic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbzfcxkdlrkfth0bwk3w.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbzfcxkdlrkfth0bwk3w.PNG" alt="A dataset of patient and medical diagnosis generated using Faker" width="601" height="191"&gt;&lt;/a&gt;&lt;/p&gt;
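&lt;p&gt;As a simplified, stdlib-only sketch (using Python's &lt;code&gt;random&lt;/code&gt; module rather than Faker, and with illustrative diagnosis weights I made up), here is how that kind of age-to-diagnosis relationship can be encoded:&lt;/p&gt;

```python
import random

random.seed(7)  # reproducible synthetic cohort

def synth_patient():
    """One synthetic patient record. Illustrative weights, not clinical data."""
    age = random.randint(18, 90)
    # Encode a simple age/diagnosis relationship so the synthetic data
    # mimics a real-world correlation instead of pure noise.
    if age > 60:
        diagnosis = random.choices(
            ["hypertension", "arthritis", "healthy"], weights=[5, 3, 2])[0]
    else:
        diagnosis = random.choices(
            ["healthy", "asthma", "hypertension"], weights=[6, 2, 2])[0]
    return {"patient_id": random.randint(10000, 99999),
            "age": age,
            "diagnosis": diagnosis}

cohort = [synth_patient() for _ in range(5)]
for p in cohort:
    print(p)
```

&lt;p&gt;Tools like SDV do this properly by learning the joint distribution from real data; the sketch just shows the goal: synthetic records whose fields relate to each other the way real ones do.&lt;/p&gt;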

&lt;p&gt;This isn’t just a cool new toy. The ability to build, manage, and scale these pipelines is quickly becoming a core skill for any data engineer. It's a fundamental shift in our field, and it’s happening now.&lt;/p&gt;

&lt;p&gt;So, for those of you out there who have run a proof-of-concept with synthetic data, what was your biggest win or a lesson you learned the hard way? Share your thoughts below! 👇&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>syntheticdata</category>
      <category>clinicalresearch</category>
      <category>ai</category>
    </item>
    <item>
      <title>🌟 Back After 3 Years: My First Full Stack App in 3 Days</title>
      <dc:creator>Narayan</dc:creator>
      <pubDate>Thu, 07 Aug 2025 04:26:14 +0000</pubDate>
      <link>https://forem.com/narayan_f8f2c91c99dfd33e6/back-after-2-years-my-first-full-stack-app-in-3-days-12mc</link>
      <guid>https://forem.com/narayan_f8f2c91c99dfd33e6/back-after-2-years-my-first-full-stack-app-in-3-days-12mc</guid>
      <description>&lt;p&gt;It’s been more than 3 years since I last posted, and I’m quietly excited to share a big step: I built my first full-stack app, MediTrack, in just three days during my company’s Vibe Coding challenge. As a data engineer with no app dev experience, this was a big deal for me, and I’m eager to tell you about it.&lt;/p&gt;

&lt;p&gt;What’s Vibe Coding? 🛠️&lt;br&gt;
It’s a short, creative coding event where developers explore AI-assisted tools like Cursor, Replit, Gemini Code Assist, GitHub Copilot, and Windsurf. The goal is to learn, build something useful, and share it with others.&lt;/p&gt;

&lt;p&gt;I used Windsurf, which made building an app feel surprisingly doable.&lt;/p&gt;

&lt;p&gt;💊 Why MediTrack?&lt;/p&gt;

&lt;p&gt;Missed a pill lately? I built MediTrack to help people—seniors, caregivers, or anyone—stay on top of their medication schedules with ease.&lt;/p&gt;

&lt;p&gt;⚙️ My Tech Stack&lt;/p&gt;

&lt;p&gt;I built MediTrack in Windsurf, an AI-assisted editor that made full-stack work beginner-friendly, on a stack with:&lt;/p&gt;

&lt;p&gt;Next.js 📱: For seamless frontend and backend.&lt;/p&gt;

&lt;p&gt;Prisma 📊: Easy database setup.&lt;/p&gt;

&lt;p&gt;TailwindCSS + shadcn/ui 🎨: For a clean, simple design.&lt;/p&gt;

&lt;p&gt;Clerk 🔒: For secure logins.&lt;/p&gt;

&lt;p&gt;Windsurf handled the complex parts, letting me focus on features.&lt;/p&gt;

&lt;p&gt;🚀 What I Built in 3 Days&lt;/p&gt;

&lt;p&gt;MediTrack is packed with practical features:&lt;/p&gt;

&lt;p&gt;Secure Logins 🔐: Dashboards for patients or caregivers.&lt;/p&gt;

&lt;p&gt;Manage Meds ✍️: Add, edit, or delete meds with number-only fields for quantity, refills, and prescribed amounts.&lt;/p&gt;

&lt;p&gt;Smart Suggestions 💡: Auto-suggests tablets, capsules, or syrups for common meds via dropdowns.&lt;/p&gt;

&lt;p&gt;Auto-Formatting 🖌️: Medicine names in Title Case, dosage times as HH:MM.&lt;/p&gt;

&lt;p&gt;Confirm Changes ✅: Prompts when editing names or switching Taken/Missed status.&lt;/p&gt;

&lt;p&gt;Track Doses 📋: Monitor quantity, refills, and prescriptions.&lt;/p&gt;

&lt;p&gt;Weekly Reports 📧: Summaries sent via SMS or email.&lt;/p&gt;

&lt;p&gt;User-Friendly 😊: Clear interface for first-timers.&lt;/p&gt;

&lt;p&gt;Mobile Support 📲: Works great on phones and tablets.&lt;/p&gt;

&lt;p&gt;📚 What I Learned&lt;/p&gt;

&lt;p&gt;This project taught me a lot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You don't need to be a design expert to build a solid app.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tools like Windsurf simplify complex tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AI tools help, but your ideas make the difference.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🌱 Why This Matters&lt;/p&gt;

&lt;p&gt;After three years away, building MediTrack was a personal win. Vibe Coding pushed me to step out of my data engineering comfort zone, and I’m proud of what I created. It’s rewarding to make something that could help people.&lt;/p&gt;

&lt;p&gt;Thinking of trying Windsurf or Copilot? Start with a small project—it’s worth it. Got an idea you’re working on? I’d love to hear about it in the comments! 💬&lt;/p&gt;

&lt;p&gt;Disclaimer: This was part of my company’s Vibe Coding challenge. All thoughts are my own.&lt;/p&gt;

</description>
      <category>vibecoding</category>
      <category>windsurf</category>
      <category>buildinpublic</category>
      <category>fullstack</category>
    </item>
  </channel>
</rss>
