<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Navin Varma</title>
    <description>The latest articles on Forem by Navin Varma (@navinvarma).</description>
    <link>https://forem.com/navinvarma</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3763587%2F02be705d-2046-4ddf-8869-0afd51ae26e9.jpeg</url>
      <title>Forem: Navin Varma</title>
      <link>https://forem.com/navinvarma</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/navinvarma"/>
    <language>en</language>
    <item>
      <title>We've Been Here Before: Leading Engineers Through the AI Wave</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Mon, 13 Apr 2026 12:01:40 +0000</pubDate>
      <link>https://forem.com/navinvarma/weve-been-here-before-leading-engineers-through-the-ai-wave-14cg</link>
      <guid>https://forem.com/navinvarma/weve-been-here-before-leading-engineers-through-the-ai-wave-14cg</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Originally published on &lt;a href="https://www.nvarma.com/blog/2026-04-12-leading-engineers-through-ai-wave/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;These are my personal thoughts, experiences, and opinions, and they do not reflect the views of the company I work for.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Last month I was reading a Hacker News thread about a blog post called &lt;a href="https://news.ycombinator.com/item?id=47272734" rel="noopener noreferrer"&gt;"We might all be AI engineers now"&lt;/a&gt;. The comments were honest in a way you don't get from your favorite LLM chatbot.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Not a day goes by that a fellow engineer doesn't text me a screenshot of something stupid an AI did in their codebase. But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write."&lt;/p&gt;

&lt;p&gt;"The code it generates is locally ok, but globally kind of bad...LLMs will always get worse as your codebase grows."&lt;/p&gt;

&lt;p&gt;"All I'm saying is you're gonna have to figure out how to do this with an agent."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Between that thread and my own X &amp;amp; LinkedIn feeds, the terms keep piling up. Forward Deployment Engineering. Agentic Engineering. Token Cost Optimization. Context Engineering. AI Safety Engineering. If I'd shown this thread to the version of me from 2021, I'd have assumed it was science fiction.&lt;/p&gt;

&lt;p&gt;Now that I reflect on this strange sense of deja vu, I realize I've had this exact feeling at various points in my career. In 2001, I was a student in India living through the &lt;a href="https://en.wikipedia.org/wiki/Dot-com_bubble" rel="noopener noreferrer"&gt;dot-com bubble&lt;/a&gt; burst from across the ocean. In 2007, I was an early-career software engineer in India's IT services industry, on the receiving end of the &lt;a href="https://en.wikipedia.org/wiki/Follow-the-sun" rel="noopener noreferrer"&gt;follow-the-sun model&lt;/a&gt;, watching global delivery models get stitched together in real time. By 2010, the mobile app boom had everyone convinced that if you didn't have an app, you didn't have a business. By 2014, every company was scrambling to get their infrastructure "into the cloud." The vocabulary changed each time, the anxiety was real each time, and the people who came out ahead weren't the ones who predicted the future. &lt;strong&gt;They were the ones who recognized the pattern.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I wrote about this feeling of acceleration back in January in &lt;a href="https://www.nvarma.com/blog/2026-01-31-ai-saturation-makes-me-sad/" rel="noopener noreferrer"&gt;AI Saturation&lt;/a&gt; and touched on it again in &lt;a href="https://www.nvarma.com/blog/2026-01-20-software-engineering-as-craft/" rel="noopener noreferrer"&gt;Software Engineering as a Craft&lt;/a&gt;. Three months later, I want to reflect on how, time and again, we're reminded of how humans react to changing winds. The more I witness the evolution of AI, the more I realize the playbook for leading through it isn't new. It was written by leaders who navigated transitions just as pathbreaking as this one. This post is my attempt to pull some lessons from the past while keeping an eye on how we evolve into the future.&lt;/p&gt;

&lt;h2&gt;The Internet: When it was cool to be on the "Web"&lt;/h2&gt;

&lt;p&gt;I remember the late '90s as a teenager growing up in the fledgling world of software services in India. You could sense the rush into the Engineering colleges proliferating all over India, especially across South India. Students scoring the top ranks all chose Electronics and Communication Engineering (ECE) or Computer Science and Engineering (CSE). The more "core subject" minded chose Electrical and Electronics Engineering (EEE), or something more niche like Chemical Engineering or Textile Engineering. However, the &lt;a href="https://en.wikipedia.org/wiki/Year_2000_problem" rel="noopener noreferrer"&gt;Y2K problem&lt;/a&gt; had sent demand for computer-literate workers through the roof in India, and suddenly everyone was enrolling in &lt;a href="https://en.wikipedia.org/wiki/NIIT" rel="noopener noreferrer"&gt;NIIT&lt;/a&gt; evening classes the same way the generation before had lined up for typewriting courses. Over in the US, every business needed a website, a "web strategy," and a "webmaster" (remember that title?). Entire consulting practices sprang up overnight to help brick-and-mortar businesses figure out HTML. Pretty much everyone with a non-CS degree still pivoted to an IT job in software factories that used labor arbitrage to their advantage. In retrospect, we've come a long way since then.&lt;/p&gt;

&lt;p&gt;Most of those web companies failed, not because the technology was bad, but because they'd confused having a website with having a business. The ones that survived (Amazon selling books, Google organizing information, Yahoo providing email and news) solved a real problem for real people. The technology was the enabler, not the product in and of itself. The leaders I admired during that time shared a common trait: &lt;strong&gt;they asked "why are we building this?" or "what problem does this solve?" before they asked "how does this technology work?"&lt;/strong&gt; They paid attention to the new tools to learn how to support their teams through transformation. They refused to let the tools drive the business strategy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbs7gc3ysztgqj36j2ts.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbs7gc3ysztgqj36j2ts.jpg" alt="Twilight over a campus in Mysore, India, February 2007. A large geodesic dome silhouetted against a pink and blue sunset sky with a football field in the foreground." width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A scene from a campus in Mysore, February 2007. From my personal archives, taken during my short stint there. This was the epicenter of India's IT boom, where thousands of fresh graduates were being minted into software engineers every quarter.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;Global Delivery: When your work spanned Time Zones&lt;/h2&gt;

&lt;p&gt;The global delivery model (aka offshore development) wave of the mid-2000s was different because it wasn't really about technology at all. It was about operating models, about economies of scale, tapping into labor arbitrage and financial macroeconomics. Your team wasn't down the hall anymore. They were in Dublin or Bangalore or Hyderabad or Manila, and depending on which time zone got priority, your meetings were either at 7 AM or 9 PM.&lt;/p&gt;

&lt;p&gt;I have lived this from both sides, which gives me a perspective I don't think everyone has. If it wasn't clear by now, I grew up in India during the outsourcing boom of the mid-90s to mid-2000s. I watched relatives and friends build careers in IT services companies that were scaling people at breakneck speed. The company I first worked for is currently at employee number ~353,500. I was ~71,500 about 20 years ago. Later I moved to the US and experienced the other side of it, working on teams that were figuring out how to collaborate across 12-hour time zone gaps. The anxiety on both sides was something else. I still remember getting a DM from someone in Atlanta at 7:30am their time, when I was closing out my day at 5pm after my evening tea. People in the US focused on getting the project completed with the same quality as they would have achieved in-house. People in India worried about being treated as interchangeable if the project did not go well, or about cost overruns threatening future contracts. And managers everywhere were dealing with communication overhead, quality concerns, and the dreaded "&lt;a href="https://en.wikipedia.org/wiki/Follow-the-sun" rel="noopener noreferrer"&gt;follow the sun&lt;/a&gt;" model that sounded brilliant in a boardroom but was complicated in practice.&lt;/p&gt;

&lt;p&gt;The good leaders I worked with during that time understood something important. &lt;strong&gt;They invested in communication infrastructure before they invested in headcount.&lt;/strong&gt; Shared documentation, clear ownership, regular face time, even if it meant someone was on a video call at an unreasonable hour. They treated distributed teams as a design constraint, not a cost optimization. The leaders who treated offshoring purely as a cost-cutting exercise got exactly what they paid for. The ones who saw it as a way to access global talent and build something more resilient? Those organizations are still thriving. It is funny that the age of AI takes us back to the same principles. I still remember an off-the-cuff remark from a trainer in my new hire training: "document everything, so if you get hit by a bus, the project continues". That's not what a starry-eyed new grad wants to hear, but it was the reality of operating in a world where humans are replaceable cogs in a larger system that chases the bottom line.&lt;/p&gt;

&lt;h2&gt;Mobile Apps: Adjusting to a whole new era of smartphone apps&lt;/h2&gt;

&lt;p&gt;Apple launched the iPhone in 2007 and the App Store in 2008, which triggered the smartphone app wave. It felt like the dot-com boom all over again but in your pocket. Every business needed a mobile app. It didn't matter if you were a restaurant, a dry cleaner, or a bank. "There's an app for that" became the mantra and entire companies were built around the idea that mobile was going to eat everything.&lt;/p&gt;

&lt;p&gt;I remember the explosion of new titles here too. "Mobile Developer" became its own specialization overnight. You had iOS developers, Android developers, and then cross-platform developers arguing about whether PhoneGap or Titanium or later React Native was the right way to build. App marketplaces created a whole new distribution model that startups could ride without needing a sales team. And the gold rush of apps, most of which were terrible, reminded me of all those dot-com websites that existed just to exist. I still remember seeing my favorite Snake game on Nokia get replaced by Tic Tac Toe or Chess as the default game to play. The concept of games never went away, but the mode of delivery and interactivity birthed a new frontier for businesses wanting in on the action.&lt;/p&gt;

&lt;p&gt;What struck me most was how the best companies during that era didn't just build an app and call it done. They rethought their product for the new form factor. The ones who just shrunk their website into an app got mediocre results. The ones who understood that mobile was a different context (different attention span, different interaction and data models) built things that actually worked. Not every country had access to these devices, and internet penetration didn't grow that quickly everywhere, so you had to adapt how you tracked metrics and build for your audience. It's a lesson I keep coming back to now, because a lot of what I see with AI integration today feels like the "just shrink the website into an app" phase.&lt;/p&gt;

&lt;h2&gt;Cloud Migration: When "Just move it to AWS" was the popular answer&lt;/h2&gt;

&lt;p&gt;The cloud migration wave of the mid-2010s is the one I experienced most directly, first as a hands-on engineer and then as a team leader working on K-12 compliance software. "Lift and shift." "Cloud-native." "Serverless." Customers were wary of the move to the cloud; it meant a loss of control, or doubts about trusting a third party with their most important data. Eventually, they came back to the same question: "Why isn't this in the cloud yet?" When an industry faces a technological transformation, the first reaction is to distrust the new thing. Eventually, the new normal is to expect everything to follow the new pattern of work.&lt;/p&gt;

&lt;p&gt;That era had its own explosion of new titles. Centralized DevOps teams popped up everywhere. "DevOps Engineer" became a real job title that hadn't existed a few years prior. AWS certifications became the new resume standard. Companies were hiring cloud architects before they'd even decided what to migrate. I remember a specific conversation with a VP of Engineering: "We need to provision AWS accounts as soon as possible and move everything to it". I was skeptical about the team's experience with cloud infrastructure. It was marketed as an opportunity to learn. It reminded me of buying a brand new Mercedes-Benz for your first behind-the-wheel driving lesson. Or in my case, of the $100 guitar I bought to learn how to play (P.S. I've probably restarted that start/stop learning cycle 50 times).&lt;/p&gt;

&lt;p&gt;Coming back to that cloud migration, I moved on before I could see how that particular story ended. I recall every company playing whack-a-mole with massive on-prem to cloud migrations. Meanwhile, cloud-native companies entered the fray without any legacy baggage, and acquisitions and mergers drove a lot of the strategy for incumbents. Cloud data centers continued to be built for the day when the great on-prem migration would come. Entire data center industries sprang up, with companies operating private clouds by renting out server space. It became an inflection point where build vs. buy conversations started driving capital expenditure in ways they hadn't before.&lt;/p&gt;

&lt;p&gt;This feels like the same story in a different decade. Cloud computing really was better for most workloads, but the genuine shift got distorted by urgency and hype into a mandate without a plan. &lt;strong&gt;The leaders who got it right treated migration as a multi-step transformational journey, not one quarter's OKR.&lt;/strong&gt; They ran experiments, picked the right workloads first, invested in training before migration tooling, and were okay with some things staying on-prem. However, doing nothing and sticking to old ways led to eventual disintegration as the industry moved to newer frontiers. While you were still discussing and debating strategy, the market transformed around you.&lt;/p&gt;

&lt;h2&gt;AI in 2026: Where we are now&lt;/h2&gt;

&lt;p&gt;I'm looking at my feeds right now and it's a fire hose. AI Safety Engineering. Local LLMs you can run on a Mac Mini (I've been &lt;a href="https://www.nvarma.com/blog/2026-02-14-comfyui-mac-mini-laymans-guide/" rel="noopener noreferrer"&gt;doing this myself&lt;/a&gt;). Cloud API providers for every conceivable AI service. Token cost optimization as an emerging discipline. Agentic use case prioritization frameworks. Agentic engineering as a job family. Forward Deployment Engineers who sit between the AI model and the customer. Every week there's a new piece of infrastructure, a new framework, a new model release, a new attempt to pivot cloud offerings into the AI era. My LinkedIn feed looks like a word cloud of terms. The irony is that generative AI itself is driving the saturation across LinkedIn, X, Medium, and so many other places. But the abundance of information is also an opportunity: filtering the signal from the noise is becoming a skill in its own right.&lt;/p&gt;

&lt;p&gt;This feels familiar because &lt;strong&gt;the pattern is always the same.&lt;/strong&gt; A real technological shift creates genuine new capabilities, which creates a gold rush of new companies and job titles, which creates the FOMO feeling about falling behind, which creates pressure to adopt everything at once. The leaders who navigate this well are the ones who resist that last part.&lt;/p&gt;

&lt;h2&gt;Thoughts on navigating a new era&lt;/h2&gt;

&lt;p&gt;I don't have it all figured out. I didn't live through the pre-Internet computing world; that era may offer its own lessons for students of history, and it's something I'd like to read up on (comments welcome on any reading material!). But having watched four of these cycles play out in my career, I'm thinking of a few things to keep me and the people around me grounded.&lt;/p&gt;

&lt;p&gt;I'm treating AI in my daily workflows as a strategy, and baking in evals at every step of the way. This is the same idea as distributed teams being a strategy with clear contracts, or cloud infrastructure being one with metrics around costs driving decisions. The question I keep coming back to is "what problem are we solving, and does AI actually help us solve it better?". Sometimes the answer is yes. Sometimes being clear about what you want to build into the spec or finding an off-the-shelf library gets you further.&lt;/p&gt;

&lt;p&gt;I'm advocating for experimentation time for my team: I try to keep inspiring them to try things out, and I block dedicated time for self-learning. One of the biggest mistakes I saw leaders make during the cloud migration era was not giving their teams time to learn before expecting them to deliver. I'm trying not to repeat that, but it gets harder as the days go by. Sometimes sounding the alarm seems too heavy-handed, but I recognize that lifelong learning is a skill by itself. I wrote about this in &lt;a href="https://www.nvarma.com/blog/2026-01-20-software-engineering-as-craft/" rel="noopener noreferrer"&gt;my post on the craft of engineering&lt;/a&gt;; I really like the idea that everyone starts at zero.&lt;/p&gt;

&lt;p&gt;On the topic of risks, I'm being deliberate about which ones to take. During the cloud migration wave, companies that ignored their core user base and pivoted 100% to the new technology ended up doing badly at both. I'm wary of falling into that same trap. It seems wise to pick specific use cases where AI adds clear value and get good at those before chasing every new thing that comes out this week. In certain industries, deterministic outcomes matter more than probabilistic ones. Moving with a sense of urgency is good, and I'm thriving in a world where the constraint is no longer the ability to produce something; making it really useful is the interesting challenge to live through.&lt;/p&gt;

&lt;p&gt;I keep watching for what I think of as the naive "&lt;a href="https://en.wikipedia.org/wiki/Follow-the-sun" rel="noopener noreferrer"&gt;follow the sun&lt;/a&gt;" trap. Follow the sun is a legitimate operating model, and many global companies run it successfully today. But the early versions assumed you could hand work off across time zones without losing context, which often led to things breaking. The companies that made it work invested heavily in documentation, handoff rituals, and shared tooling. The ones that just split epics across time zones and hoped for the best ended up slower, with more bugs. I see some interesting parallels with how teams are choosing between traditional software delivery and embedding agents into workflows. As of today, human-in-the-loop tasks requiring judgment, context, or institutional knowledge are being revisited to see how they should evolve. The companies that invest in the scaffolding around this (clear specs, guardrails, human review loops) will get the value. The ones that skip that work will end up in the same predicament many did during the cloud migration.&lt;/p&gt;

&lt;h2&gt;The dust always settles&lt;/h2&gt;

&lt;p&gt;In January, I wrote that I thought we had about two years before the dust settles in this evolution of our software engineering craft. I still believe the world has changed for the better and that we're discovering new ways of working, even if the end of those two years still feels very far away. Every quarter seems more exciting than the last. Every major industry shift I've lived through followed a similar pattern: an explosion of activity, followed by a period of consolidation with winners and losers in technology, processes, and patterns, and eventually a new steady state where the real value becomes clear and the noise fades.&lt;/p&gt;

&lt;p&gt;The pioneers of new technology often displaced some of the old players; others adapted to survive in the new world. The shift in the landscape, though, was permanent. The path from "everything is changing" to "here's how we actually work now" did not take days. It took months, if not years, and even the breakneck pace of innovation took a solid 12 months to settle into a formalized set of learnings.&lt;/p&gt;

&lt;p&gt;I don't know what the steady state looks like for AI as it transforms the world. Nobody does, and I'm skeptical of anyone who claims to. But I do know the playbook for getting there: filter the noise, invest in your people, manage your risks deliberately, and give yourself permission to not adopt everything at once.&lt;/p&gt;

&lt;p&gt;We've been here before, we're living through it again now, and I'm pretty sure we'll be here again in the future. The specific technologies change, but the leadership challenge stays remarkably consistent: &lt;strong&gt;stay curious enough to learn, disciplined enough to focus, and patient enough to let the dust settle.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you're an engineering leader feeling overwhelmed by the pace of change right now, give yourself time to take a breath. The moment feels similar to what leaders went through in 1999, 2007, 2010, and 2014. It means something real is happening. It also means the most important thing you can do right now is think clearly to navigate a changing landscape. I was a practitioner during the past transformational shifts; in this one, I'm a leader of people and technical strategy. The path forward is bright for those who see light in the possibilities. How you see the world and how you navigate change may define how you lead your teams. What an exciting time to be building!&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.nvarma.com/blog/2026-04-12-leading-engineers-through-ai-wave/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;. Follow me there for more on software architecture, engineering leadership, and the craft of building things that last.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>ai</category>
      <category>reflections</category>
      <category>software</category>
    </item>
    <item>
      <title>Vibecoding a Video Editing Pipeline</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Mon, 06 Apr 2026 12:01:29 +0000</pubDate>
      <link>https://forem.com/navinvarma/vibecoding-a-video-editing-pipeline-33d9</link>
      <guid>https://forem.com/navinvarma/vibecoding-a-video-editing-pipeline-33d9</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Originally published on &lt;a href="https://www.nvarma.com/blog/2026-04-05-vibecoding-a-video-editing-pipeline/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmxyt8snnkt2ad5okmer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmxyt8snnkt2ad5okmer.png" alt="AI-generated collage of Golden Gate Bridge, crashing waves, and sea lions on a coastal cliff at sunset" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Canva's AI-generated hero image for this post. It crammed every location into one impossible geography. CLIP would score this a 10.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;These are my personal thoughts, experiences, and opinions, and they do not reflect the views of the company I work for.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Last week I wrote about &lt;a href="https://www.nvarma.com/blog/2026-03-25-california-coast-letting-go-to-find-clarity/" rel="noopener noreferrer"&gt;driving the California coast and finding clarity&lt;/a&gt;. That was a reflective post. This one is about nerding out this weekend on all things AI video editing, LLMs, and iMovie.&lt;/p&gt;

&lt;p&gt;I came home from that trip with 27 video files across two cameras, about 17.5 GB of footage. My main driver phone is a Samsung Galaxy S24 Ultra, and last Thanksgiving I got a Xtra Muse vlog camera for 4K scenic shots. During that trip, our first stop was my favorite Pier 39 sea lions, after which we captured a couple of different views of the Golden Gate Bridge on a rare fog-free day. The next day we went on to 17-Mile Drive and Pebble Beach. This was the California Coast in all its glory waiting to be explored, and my opportunity to capture it unedited in some of the most amazing scenery I have seen.&lt;/p&gt;

&lt;p&gt;I'd been mulling over how many people are using AI tools to build these content creation pipelines, and since I finally had footage to experiment with, my curiosity got the better of me. My requirement was a simple 90-second highlight reel and a couple of YouTube Shorts. Since I'd been prototyping local AI for edge apps, my guess was that editing these videos would be quite a breeze. I don't have OpenClaw just yet, so this was also my deep dive into learning traditional video editing in case things didn't work out. I've done sound recording and mixing semi-professionally, so how hard can video be? It turns out, it's quite the skill to learn.&lt;/p&gt;
&lt;h2&gt;Defaulting to ComfyUI&lt;/h2&gt;

&lt;p&gt;My first instinct was ComfyUI. I've &lt;a href="https://www.nvarma.com/blog/2026-02-14-comfyui-mac-mini-laymans-guide/" rel="noopener noreferrer"&gt;written about using it recently&lt;/a&gt; and it felt like the right tool. I thought I could use image-to-video workflows to enhance the footage, or at least use it for some creative transitions.&lt;/p&gt;

&lt;p&gt;I opened Google AI search and started brainstorming. I realized quickly that this wasn't a generation problem. I didn't need to create new footage. I needed to sift through 17 GB of existing footage and find the good parts. ComfyUI is a Swiss Army knife for image and video generation, but for selection and editing? It was the wrong tool.&lt;/p&gt;
&lt;h2&gt;Next Up: Gemma 4 and TurboQuant&lt;/h2&gt;

&lt;p&gt;Google had just released &lt;a href="https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/" rel="noopener noreferrer"&gt;TurboQuant&lt;/a&gt; alongside &lt;a href="https://deepmind.google/models/gemma/gemma-4/" rel="noopener noreferrer"&gt;Gemma 4&lt;/a&gt;, and I was itching to try local multimodal inference. The idea I had was to extract frames from every video into a folder, feed them through Gemma 4's vision model locally on my Mac Mini, and have it pick the most scenic shots.&lt;/p&gt;

&lt;p&gt;I spent an evening chatting with Gemini about this approach. I even wrote a &lt;code&gt;director.py&lt;/code&gt; script that loaded &lt;code&gt;gemma-4-E4B-it-4bit&lt;/code&gt; through &lt;code&gt;mlx-vlm&lt;/code&gt;, fed it batches of 35 frames resized to 112x112, and asked it to pick the best ones. It sort of worked. But "sort of" is a generous way to describe a script that was slow, hallucinated filenames, and couldn't reliably distinguish a parking lot from Pebble Beach at thumbnail resolution.&lt;/p&gt;

&lt;p&gt;The core problem was that &lt;strong&gt;I was using a vision language model for a task that didn't need language&lt;/strong&gt;. I didn't need the model to describe what it saw. I needed it to score how scenic each frame was. That's a similarity-matching problem, not a conversation problem.&lt;/p&gt;
&lt;h2&gt;Pivoting to a Claude Code project&lt;/h2&gt;

&lt;p&gt;I haven't really tried out Google's Vertex AI in depth, or explored Antigravity's capabilities. I'd been going back and forth with a Gemini 3.1 Pro chat, learning a lot about Gemma 4, quantization, and multimodal inference. But I kept running into hurdles. There were model loading quirks. There was a 35-image batch limit beyond which my Mac Mini M4 Pro (24GB) would go bust. The 112px resolution was killing all the detail I needed. After a couple hours of this, I decided to pause and rethink.&lt;/p&gt;

&lt;p&gt;What if I just pivoted to load Claude Code in the folder of raw videos and described what I wanted?&lt;/p&gt;

&lt;p&gt;I opened a new Claude Code session with Opus 4.6, gave it access to &lt;code&gt;~/Movies/SFO_PebbleBeach/raw/&lt;/code&gt;, and explained my goal. Within the first exchange, Claude corrected a typo of mine as I was describing what I needed. It pointed out that &lt;code&gt;mlx-lm&lt;/code&gt; is text-only, that for vision I needed &lt;code&gt;mlx-vlm&lt;/code&gt;, and that for "find nature/scenery frames" &lt;strong&gt;&lt;a href="https://openai.com/index/clip/" rel="noopener noreferrer"&gt;CLIP&lt;/a&gt; is the right primary tool, not a VLM&lt;/strong&gt;. "CLIP (Contrastive Language–Image Pre-training) builds on a large body of work on zero-shot transfer, natural language supervision, and multimodal learning."&lt;/p&gt;

&lt;p&gt;That was the moment I discovered a lot about prior art (pun intended). I'd dabbled in the early days of this AI wave, but I always went to cloud AI providers to experiment; this was the first time I had a real-world use case to run through it.&lt;/p&gt;
&lt;h2&gt;Discovering CLIP&lt;/h2&gt;

&lt;p&gt;If you haven't used CLIP for something like this, it's quite simple: give it a frame and a text prompt, and it tells you how similar they are.&lt;/p&gt;

&lt;p&gt;So instead of asking a language model "is this scenic?", you encode the frame and compare it against prompts like &lt;code&gt;"dramatic Pacific Ocean cliffs"&lt;/code&gt; and &lt;code&gt;"Monterey cypress trees on rocky shore"&lt;/code&gt;, the same kind of thing I'd add as positive prompts in ComfyUI LoRA nodes. Then you compare it against negative prompts like &lt;code&gt;"blurry photo"&lt;/code&gt; and &lt;code&gt;"car dashboard"&lt;/code&gt;. The difference between the positive and negative similarity scores gives you a single number: how scenic is this frame?&lt;/p&gt;

&lt;p&gt;Running on Apple Silicon MPS, CLIP processes frames so fast it makes VLM inference look like it's stuck in an infinite loop (at least on my machine).&lt;/p&gt;
&lt;h2&gt;
  
  
  Custom-Rigging a 10-Stage Pipeline
&lt;/h2&gt;

&lt;p&gt;Claude didn't just suggest CLIP. It designed a full 10-stage pipeline and wrote every script. I vibed all the way, nodding like I knew what I was doing (I did not :&amp;gt;|). I was just describing what I wanted and iterating on the output.&lt;/p&gt;

&lt;p&gt;Here's what Claude listed as the pipeline's ten stages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Probe&lt;/strong&gt; every raw file with &lt;code&gt;ffprobe&lt;/code&gt; to inventory codecs, resolutions, and framerates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Detect shots&lt;/strong&gt; using PySceneDetect's adaptive detector, so clips align with actual camera cuts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extract 3 representative frames&lt;/strong&gt; per shot (10%, 50%, 90%) instead of the brute-force 1-fps approach&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Score frames&lt;/strong&gt; with CLIP against scenic prompts, plus classical CV signals (Laplacian sharpness, HSV colorfulness, brightness, face detection)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Filter for shake and blur&lt;/strong&gt; using optical flow variance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tag locations&lt;/strong&gt; using GPS from the Samsung files and CLIP zero-shot for the Xtra Muse (which had no GPS)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Select clips&lt;/strong&gt; for a 90s reel and two 30s shorts, balancing score, location diversity, and chronological order&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Normalize&lt;/strong&gt; each clip to a common format (4K30 HEVC for the reel, 1080x1920 H.264 for shorts)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concatenate&lt;/strong&gt; into final outputs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify&lt;/strong&gt; specs, durations, and that the raw folder was never modified&lt;/li&gt;
&lt;/ol&gt;
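&lt;p&gt;Stage 3 is worth a closer look, since it's what keeps the frame count manageable. A minimal sketch of picking the 10%/50%/90% frames from a detected shot (the function name is mine, not the pipeline's):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def representative_frames(start, end, fractions=(0.10, 0.50, 0.90)):
    # Given a shot spanning [start, end) in frame numbers, return the
    # frame indices at 10%, 50%, and 90% of the way through the shot.
    length = end - start
    return [start + int(round(f * length)) for f in fractions]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For a 300-frame shot starting at frame 1200 this yields frames 1230, 1350, and 1470: three probes per shot instead of a brute-force grab every second.&lt;/p&gt;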

&lt;p&gt;The scoring formula was described as a weighted combination:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;frame_score = 0.35 * clip_scenic
            + 0.20 * sharpness
            + 0.15 * colorfulness
            + 0.15 * nature_ratio
            + 0.10 * brightness
            - 0.40 * has_face * (1 - nature_ratio)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That last term is clever. It penalizes faces, but waives the penalty when the scenery dominates. So a wide shot of the Golden Gate Bridge with a few tourists in the corner keeps its score. A selfie gets rejected.&lt;/p&gt;
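&lt;p&gt;The waiver behavior is easy to verify if you write the formula out as a function (same weights as the formula above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def frame_score(clip_scenic, sharpness, colorfulness,
                nature_ratio, brightness, has_face):
    # Weighted combination from the pipeline; has_face is 0 or 1.
    # The face penalty shrinks to zero as nature_ratio approaches 1.
    return (0.35 * clip_scenic
            + 0.20 * sharpness
            + 0.15 * colorfulness
            + 0.15 * nature_ratio
            + 0.10 * brightness
            - 0.40 * has_face * (1 - nature_ratio))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A face in a frame that is all scenery (nature_ratio near 1) costs almost nothing; a face filling a non-scenic frame eats the full 0.40 penalty.&lt;/p&gt;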

&lt;h2&gt;
  
  
  Some surprising learnings
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Python's scientific ecosystem is really good for this.&lt;/strong&gt; I didn't need any exotic AI tooling for most of the pipeline. Claude chose OpenCV's Laplacian variance for blur detection, Hasler-Süsstrunk colorfulness, HSV color masks for nature detection, Farneback optical flow for shake. These are computer vision techniques from the 2000s and they worked perfectly. The AI involvement (CLIP) was just one stage out of ten.&lt;/p&gt;
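&lt;p&gt;To give a flavor of how simple these classical signals are, here's the Hasler–Süsstrunk colorfulness measure in plain Python. The pipeline would compute it over OpenCV/NumPy arrays, but the formula is identical:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import math
from statistics import fmean, pstdev

def colorfulness(pixels):
    # Hasler–Süsstrunk metric over an iterable of (r, g, b) tuples:
    # opponent channels rg and yb, then std + 0.3 * mean magnitude.
    rg = [r - g for r, g, b in pixels]
    yb = [0.5 * (r + g) - b for r, g, b in pixels]
    std_root = math.hypot(pstdev(rg), pstdev(yb))
    mean_root = math.hypot(fmean(rg), fmean(yb))
    return std_root + 0.3 * mean_root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A grayscale frame (r = g = b everywhere) scores exactly zero; saturated coastline blues and greens score high. A 2003 paper, still doing honest work.&lt;/p&gt;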

&lt;p&gt;&lt;strong&gt;Location tagging with CLIP zero-shot actually worked.&lt;/strong&gt; The Xtra Muse camera had no GPS data, so Claude suggested using CLIP to match frames against location-specific prompts like "Golden Gate Bridge" and "Pebble Beach golf coastline". It wasn't perfect (Pier 39 and Pier 41 look similar from the water), but it was good enough to ensure location diversity in the reel.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The EXIF correction was its own mini-adventure.&lt;/strong&gt; The Xtra Muse had timestamps from June 2000 instead of March 2026. Off by 25 years, 9 months, and 6 days. Claude walked me through the &lt;code&gt;exiftool&lt;/code&gt; date shift, since I'd forgotten the exact format after spending most of my college days tagging photos on my MacBook Pro. It caught me when I accidentally ran a write instead of a dry run, and helped me recover from the backup. Somehow my trip footage is from the future and the past simultaneously.&lt;/p&gt;

&lt;h2&gt;
  
  
  The final edit involving a human and a robot
&lt;/h2&gt;

&lt;p&gt;The pipeline output a 90-second reel and two shorts. That's quite impressive for a vibe-coded adventure. It had its flaws, though, and I definitely wanted to edit it a bit more.&lt;/p&gt;

&lt;p&gt;I imported everything into iMovie for the final pass. I trimmed a couple of clips where the pipeline chose a good scenic shot but caught the tail end of a camera movement. I removed one clip of the Golden Gate Bridge that was technically scenic but felt out of place in the flow. This is the kind of editorial judgment that's hard to score numerically.&lt;/p&gt;

&lt;p&gt;For the soundtrack, I turned to Gemini. My last YouTube video (&lt;a href="https://www.youtube.com/watch?v=mhmUS5-utCc" rel="noopener noreferrer"&gt;a Waymo ride through SF&lt;/a&gt;) got hit with a copyright notice for the music, and I didn't want to deal with that again. Gemini generated "The Road and the Sea", a track that fit the mood without any licensing headaches.&lt;/p&gt;

&lt;p&gt;The final step was to put it all together, upload to YouTube, and declare victory (yay!).&lt;/p&gt;

&lt;h2&gt;
  
  
  Learning through pain
&lt;/h2&gt;

&lt;p&gt;I went into this wanting to edit a video. I came out thinking differently about when to reach for which AI tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gemma 4 and local VLMs are impressive, but they're conversational tools.&lt;/strong&gt; I found that they are great at describing and reasoning about images but terrible at scoring thousands of them quickly. I spent an evening learning this the hard way, and I don't regret it. I understand quantization and multimodal inference better now than I did before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CLIP is the right tool when you need similarity, not understanding.&lt;/strong&gt; If your question is "does this match my description?", CLIP answers it faster and more reliably than any chat model. I think a lot of people reach for an LLM when a simpler model would do the job better. I know I did.&lt;/p&gt;

&lt;p&gt;The other thing that surprised me was how much of the pipeline didn't need AI at all. Laplacian blur detection, optical flow for shake, HSV color masks for nature. These are well-established computer vision techniques, and they were perfect for this job. &lt;strong&gt;The right answer often isn't the newest model.&lt;/strong&gt; Sometimes it's just knowing the prior art that came before all the new hotness.&lt;/p&gt;

&lt;p&gt;Vibecoding works for weekend-project pipelines. I described what I wanted; Claude built the stages, which I reviewed, iterated on, and ran until I got my output: ten Python scripts, an orchestrator, CLIP prompts, and ffmpeg encoding flags, all from a single session. I couldn't have written optical flow shake detection from scratch that quickly. But I could describe what a good highlight reel looks like, and honestly, that was enough.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=GjoOt8SQe3w" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=GjoOt8SQe3w&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The video's up on YouTube now. Here I present almost ninety seconds of California coastline, some of it selected by math, some by neural nets, and the rest hand-finished by a human. It's a bit like mixing a track in GarageBand. The tools lay down the structure, but you still have to trust your ear for the final cut. I'm already thinking about what to automate next time.&lt;/p&gt;

&lt;p&gt;This was not a perfect journey, but it was satisfying to enjoy it and finish on the high of completing it. Just like my ride down Pebble Beach on a clear, sunny March weekend.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.nvarma.com/blog/2026-04-05-vibecoding-a-video-editing-pipeline/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;. Follow me there for more on software architecture, engineering leadership, and the craft of building things that last.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claudecode</category>
      <category>aicodingtools</category>
      <category>tools</category>
    </item>
    <item>
      <title>The Constant Coastline</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Thu, 26 Mar 2026 12:01:22 +0000</pubDate>
      <link>https://forem.com/navinvarma/the-constant-coastline-dif</link>
      <guid>https://forem.com/navinvarma/the-constant-coastline-dif</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Originally published on &lt;a href="https://www.nvarma.com/blog/2026-03-25-california-coast-letting-go-to-find-clarity/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;These are my personal thoughts, experiences, and opinions, and they do not reflect the views of the company I work for.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This past weekend, I packed up the family and drove to San Francisco. No agenda, no itinerary synced to a calendar, and almost no Slack notifications. Okay, there was one I was a little too eager to respond to. But mostly, it was just the open road, some good music, and a few days with the people I love most.&lt;/p&gt;

&lt;p&gt;We hit the Golden Gate Bridge on a rare fog-free day. We had ice cream at Ghirardelli Square. We drove the 17-Mile Drive through Pebble Beach and picked up dark chocolate treats from All About the Chocolate in Carmel-by-the-Sea. We stopped at the Mystery Spot in Santa Cruz, because honestly, some days you need a place where gravity doesn't make sense to remind you that the world is more playful than your inbox suggests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcoojgketrrzpv73xrss7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcoojgketrrzpv73xrss7.jpg" alt="The Golden Gate Bridge on a clear, fog-free day with green Marin hills in the background" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The Golden Gate Bridge, completely fog-free. If you know San Francisco, you know how rare this is.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;There's a lot of uncertainty in the world right now. In tech, in geopolitics, in the economy. If you're in a leadership role, you feel the weight of it constantly. The decisions that need to be made, the ambiguity of what's next, the responsibility of guiding people when you have an idea of the path but are leaning on your best judgement to do what's right for everyone around you, knowing full well you can't please all the people all of the time. It builds up. Quietly, persistently, until one morning you realize you've been carrying tension you didn't even name.&lt;/p&gt;

&lt;p&gt;I've learned over the years that one of the most important things I can do for myself, and by extension for my team, is to step away. Not escape. Step away. There's a difference. Escaping is avoidance. Stepping away is choosing to create space so you can come back with clarity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Constant Coastline
&lt;/h2&gt;

&lt;p&gt;The first time I drove to Pebble Beach was in 2004, and the Pacific Ocean was just a stunning backdrop to a day trip with family. I came back in 2010 as a grad student, slightly older, slightly more aware of the world. In 2019, I visited again, and the photo from that trip is still the one on my &lt;a href="https://www.nvarma.com/" rel="noopener noreferrer"&gt;homepage&lt;/a&gt; today. Since then, I've returned every time family or friends visit from out of town. It's become one of those places I keep coming back to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xvtd2orzbw8g8y5ww54.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xvtd2orzbw8g8y5ww54.jpg" alt="Me at Pebble Beach in 2010 as a grad student, sitting on the rocky shoreline with waves in the background" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Pebble Beach, July 2010. Grad student, web developer, part-time DJ, no gray or balding hair. Same rocks, same waves.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here's what gets me every time: &lt;strong&gt;the coastline hasn't changed, but I've grown and evolved.&lt;/strong&gt; The same rocks, the same waves, the same Lone Cypress clinging to its cliff. But the person standing there looking at it has gone from college kid to grad student to engineer to engineering leader, through job changes, cross-country moves, losing loved ones, navigating a pandemic, and now leading teams through a significant transformation. Twenty-two years of evolution. The rocks didn't move an inch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxytovtbj9vo2lhn8rs6c.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxytovtbj9vo2lhn8rs6c.jpg" alt="The Lone Cypress tree at Pebble Beach perched on a rocky outcrop overlooking the Pacific Ocean" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The Lone Cypress at Pebble Beach. There's something about a tree that's been standing alone on a rock for 250 years that puts your quarterly review worries into perspective.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I think about this a lot. We spend so much energy keeping pace with change. New frameworks, new org structures, new strategies, new market conditions. And that's exciting, honestly, because it means we're building in a time where the possibilities keep expanding. But it also means we rarely pause. Standing at the cliffs of Pebble Beach, I'm reminded that the ocean doesn't know what quarter it is. Those rocks were there before your company existed and will be there long after. &lt;strong&gt;It's a scale reset.&lt;/strong&gt; It recalibrates what "urgent" actually means.&lt;/p&gt;

&lt;h2&gt;
  
  
  Simple Things, Big Returns
&lt;/h2&gt;

&lt;p&gt;I don't think you need to fly to Hawaii or book a silent retreat to find this kind of reset. For me, it's always been simple things. A long drive with music playing, the kind of songs that pull you back to a specific year and a specific feeling. Visiting places that hold memories from earlier chapters of my life. Making new memories of artisan chocolates or a fancy restaurant you've never been to. Showing someone a place you love and watching them see it for the first time. These are small acts, but they reconnect you to a version of yourself that existed before the job title, the responsibilities, and the constant hum of notifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp8s1ubi2soqre7at0l9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmp8s1ubi2soqre7at0l9.jpg" alt="Waves crashing against rocks along the Pebble Beach coastline on a clear blue day" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The coastline along 17-Mile Drive. No filter, no edit. Just the Pacific doing what it's been doing for millions of years.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This heat wave weekend was one of those rare Northern California days where the sky was perfectly clear and the air was warm. We drove with the sunroof open. We spotted sea lions on the docks at Pier 39. My wife pointed out wildflowers on the cliffs. And for a few hours, I wasn't thinking about roadmaps or deadlines or the latest industry disruption. I was just present. The world felt enormous and still and exactly as it should be, even as headlines everywhere insisted otherwise.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pause Is a Leadership Skill
&lt;/h2&gt;

&lt;p&gt;We talk a lot about resilience, about grit, about pushing through. And those are real. But I've found that some of my best decisions as a leader came not from pushing harder, but from stepping back far enough to see the full picture. It's hard to evaluate a situation clearly when you're inside the pressure of it. Distance, even a weekend's worth, changes the perspective.&lt;/p&gt;

&lt;p&gt;What does that look like in practice? It means not checking email on the drive. Not "just quickly" responding to a message at every stop. Actually disconnecting. Actually being in the place you drove to, with the people you brought along. It means trusting your team to hold things down, and trusting yourself to come back sharper for having been away.&lt;/p&gt;

&lt;p&gt;I don't know if everyone feels this way when they stand at the edge of the Pacific and look out at the horizon. But I suspect many of you do. We just don't say it out loud often enough, especially in professional circles where "always on" is still treated like a badge of honor.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Brought Back from the Coast
&lt;/h2&gt;

&lt;p&gt;The world will keep spinning with its uncertainty and noise. Your inbox will be there on Monday. The hard decisions won't go away. But you'll face them differently when you've stood at the foot of the Golden Gate Bridge on a clear day and remembered that the world, for all its chaos, is still breathtakingly beautiful. &lt;strong&gt;The things that evolve fastest need you the most. The things that never change ground you the most.&lt;/strong&gt; Knowing the difference is what keeps you steady.&lt;/p&gt;

&lt;p&gt;Remember to take a step back from time to time: play your favorite music loud, and let your favorite places remind you what being steady feels like.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.nvarma.com/blog/2026-03-25-california-coast-letting-go-to-find-clarity/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;. Follow me there for more on software architecture, engineering leadership, and the craft of building things that last.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>life</category>
      <category>reflections</category>
      <category>personal</category>
    </item>
    <item>
      <title>How to Build Free Local AI with Ollama for Small Businesses in March 2026</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Mon, 16 Mar 2026 05:30:24 +0000</pubDate>
      <link>https://forem.com/navinvarma/how-to-build-free-local-ai-with-ollama-for-small-businesses-in-march-2026-ci9</link>
      <guid>https://forem.com/navinvarma/how-to-build-free-local-ai-with-ollama-for-small-businesses-in-march-2026-ci9</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Originally published on &lt;a href="https://www.nvarma.com/blog/2026-03-15-build-your-own-ai-assistant-ollama-small-business/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;My wife is a freelancer looking to start her own home lifestyle business. She'd been using the free tier of ChatGPT to help with things like summarizing research and drafting emails, and kept hitting the limits. "Can I buy the subscription?" she asked.&lt;/p&gt;

&lt;p&gt;Now, as a good husband, I should have just bought her the subscription. But as an even better husband, I taught her how to set up her own chatbot locally on her Dell laptop. 16 GB of RAM, no fancy GPU, and it cost her exactly zero dollars in software.&lt;/p&gt;

&lt;p&gt;No subscription. No data leaving the machine. Each model has its own limits on how much text it can take in, and you figure that out by trial and error, but there's no one charging you per question.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stop, this post is not for you if:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You're a developer or technical user who already knows what a local LLM is&lt;/li&gt;
&lt;li&gt;You're comfortable with Docker, APIs, and model quantization&lt;/li&gt;
&lt;li&gt;You're looking for an advanced fine-tuning or deployment guide&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;(But if you're technical, skip to the end. There's something there for you.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This post is for everyone else.&lt;/strong&gt; People who have less technical knowledge but want to start using AI on their own data without uploading it somewhere else. Small business owners, freelancers, anyone who cares about privacy and doesn't want to pay a monthly subscription just to summarize emails or draft replies. We're going to install Ollama, pick the right model, and build a working email summarizer, all in about 30 minutes.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: The model recommendations and comparisons in this post are based on what's available on Ollama as of March 2026. This space moves fast, so newer models may be available by the time you read this.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What this will cost you
&lt;/h2&gt;

&lt;p&gt;Let's start with the math.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Ollama (local)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;ChatGPT Plus&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Claude Pro&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Gemini Advanced&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Monthly cost&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$20/user&lt;/td&gt;
&lt;td&gt;$20/user&lt;/td&gt;
&lt;td&gt;$20/user&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Annual cost (3 users)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$720&lt;/td&gt;
&lt;td&gt;$720&lt;/td&gt;
&lt;td&gt;$720&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Annual cost (5 users)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$0&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$1,200&lt;/td&gt;
&lt;td&gt;$1,200&lt;/td&gt;
&lt;td&gt;$1,200&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data privacy&lt;/td&gt;
&lt;td&gt;Everything stays on your machine&lt;/td&gt;
&lt;td&gt;Sent to OpenAI's servers&lt;/td&gt;
&lt;td&gt;Sent to Anthropic's servers&lt;/td&gt;
&lt;td&gt;Sent to Google's servers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Internet required&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Quality&lt;/td&gt;
&lt;td&gt;Very good for everyday tasks&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Ollama is free, open-source software that runs AI models directly on your computer. The tradeoff is that local models aren't as powerful as the frontier models from the big providers, but for summarizing emails, answering questions about your data, and drafting replies, they're more than good enough.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Think of it as building your own rig.&lt;/strong&gt; You explore what's possible locally, figure out what you actually need, and then decide if paying for a cloud service makes sense for the tasks that require more horsepower.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why people are concerned about uploading their data
&lt;/h2&gt;

&lt;p&gt;This is the other reason to run AI locally, and it's worth spelling out.&lt;/p&gt;

&lt;p&gt;When you paste text into a cloud AI service, that text travels over the internet to someone else's servers. Even if the provider promises not to train on your data, the data still leaves your building. For some businesses, that's a problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Accountants and bookkeepers&lt;/strong&gt; handle client financial data like tax returns, bank statements, and payroll. Sending that to a third party, even encrypted, introduces risk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Law firms&lt;/strong&gt; are bound by attorney-client privilege. Pasting case details into a cloud AI arguably breaches that duty of confidentiality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcare practices&lt;/strong&gt; deal with HIPAA. Most cloud AI tools are not HIPAA compliant, and violations carry criminal penalties.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Any business&lt;/strong&gt; with proprietary information like pricing strategies, customer lists, or internal communications should think twice about where that data goes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Ollama, the answer is simple: &lt;strong&gt;your data never leaves your machine.&lt;/strong&gt; There's no cloud, no API key, no account, no telemetry. The model runs in memory on your computer and disappears when you close it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you need (and don't need)
&lt;/h2&gt;

&lt;p&gt;Here's the part that surprises most people: &lt;strong&gt;you don't need a fancy GPU.&lt;/strong&gt; Modern AI models run on regular CPUs and standard laptop RAM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mac (Apple Silicon: M1, M2, M3, M4)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RAM:&lt;/strong&gt; 8 GB works (16 GB is better)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU:&lt;/strong&gt; Not needed. Apple's unified memory handles it automatically&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage:&lt;/strong&gt; ~10 GB free for the app and one model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance:&lt;/strong&gt; Expect 15–25 words per second on 8 GB, faster on 16 GB&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Mac (Intel)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RAM:&lt;/strong&gt; 16 GB recommended&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU:&lt;/strong&gt; Not needed, but responses will be slower (3–8 words per second)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Practical limit:&lt;/strong&gt; Stick to smaller models (7B parameters or less)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Windows PC
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;RAM:&lt;/strong&gt; 8 GB minimum, 16 GB recommended&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CPU:&lt;/strong&gt; 4 cores or more, made in the last 5–6 years&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GPU:&lt;/strong&gt; Not needed. If you happen to have an NVIDIA graphics card, it'll speed things up, but it's not required&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage:&lt;/strong&gt; ~10 GB free&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Linux
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Same requirements as Windows. Ollama runs natively.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The bottom line:&lt;/strong&gt; if your computer was made after 2020 and has at least 8 GB of RAM, you can run a local AI model right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Picking the right model
&lt;/h2&gt;

&lt;p&gt;This is where most tutorials lose people. Ollama has dozens of models available and the names mean nothing to a normal person. Here's what you actually need to know.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Models are measured in "parameters" (the B stands for billions).&lt;/strong&gt; More parameters generally means smarter but slower and hungrier for RAM. For a small business, you want the sweet spot: smart enough to be useful, small enough to run on your hardware.&lt;/p&gt;

&lt;h3&gt;
  
  
  Recommended models by use case
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;th&gt;RAM needed&lt;/th&gt;
&lt;th&gt;Best for&lt;/th&gt;
&lt;th&gt;Speed on 8 GB RAM&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Phi-3 Mini&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;3.8B&lt;/td&gt;
&lt;td&gt;~3 GB&lt;/td&gt;
&lt;td&gt;Quick summaries, simple Q&amp;amp;A, drafting short replies&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Llama 3.1&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;8B&lt;/td&gt;
&lt;td&gt;~5 GB&lt;/td&gt;
&lt;td&gt;Email summarization, longer writing, general assistant&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Mistral&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;7B&lt;/td&gt;
&lt;td&gt;~5 GB&lt;/td&gt;
&lt;td&gt;Processing lots of text quickly&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemma 3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;4B&lt;/td&gt;
&lt;td&gt;~3 GB&lt;/td&gt;
&lt;td&gt;Conversational Q&amp;amp;A, customer-facing tone&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Qwen 3&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;4B&lt;/td&gt;
&lt;td&gt;~3 GB&lt;/td&gt;
&lt;td&gt;Multilingual support, good with non-English text&lt;/td&gt;
&lt;td&gt;Fast&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;DeepSeek-R1&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;8B&lt;/td&gt;
&lt;td&gt;~5 GB&lt;/td&gt;
&lt;td&gt;Complex reasoning, technical questions&lt;/td&gt;
&lt;td&gt;Moderate&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Models for searching your data (RAG)
&lt;/h3&gt;

&lt;p&gt;If you want to build a "chat with your data" system, you'll also need an embedding model. This is a small, fast model that helps the system find relevant passages in your files.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Size&lt;/th&gt;
&lt;th&gt;RAM needed&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;nomic-embed-text&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;137M&lt;/td&gt;
&lt;td&gt;~274 MB&lt;/td&gt;
&lt;td&gt;Converts your documents into searchable vectors&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;mxbai-embed-large&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;335M&lt;/td&gt;
&lt;td&gt;~500 MB&lt;/td&gt;
&lt;td&gt;Higher quality embeddings, slightly more RAM&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
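&lt;p&gt;Under the hood, "searching your data" is just ranking chunks of text by how close their vectors are to your question's vector. Here's a minimal sketch of that retrieval step, with plain lists standing in for the embeddings a model like nomic-embed-text would produce:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import math

def top_k(query_emb, chunk_embs, k=3):
    # Return the indices of the k document chunks most similar to
    # the query, ranked by cosine similarity.
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(x * x for x in b)))
    ranked = sorted(range(len(chunk_embs)),
                    key=lambda i: cos(query_emb, chunk_embs[i]),
                    reverse=True)
    return ranked[:k]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The no-code apps covered later in this post do exactly this for you: embed your documents once, embed each question, fetch the closest chunks, and hand them to the chat model as context.&lt;/p&gt;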

&lt;h3&gt;
  
  
  My recommendation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Start with Llama 3.1 (8B).&lt;/strong&gt; It's the most popular model on Ollama for a reason. It handles summarization, Q&amp;amp;A, and drafting well, runs on 8 GB RAM, and has a massive 128K context window (meaning it can process very long emails or documents in one go). If your machine struggles, drop down to Phi-3 Mini or Gemma 3 4B.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installing Ollama
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Mac
&lt;/h3&gt;

&lt;p&gt;Open your browser, go to &lt;a href="https://ollama.com" rel="noopener noreferrer"&gt;ollama.com&lt;/a&gt;, and download the Mac app. Open it. That's it.&lt;/p&gt;

&lt;p&gt;To verify it worked, open Terminal (search for "Terminal" in Spotlight) and type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see a version number. Now pull your first model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull llama3.1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This downloads about 4.7 GB. Wait for it to finish, then test it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama run llama3.1 &lt;span class="s2"&gt;"Summarize this in one sentence: The quarterly revenue report shows a 12% increase in recurring subscriptions, though hardware sales declined by 8%. The board recommends increasing marketing spend in Q3."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get a clean one-sentence summary back. No internet needed after the model is downloaded. No account. No API key.&lt;/p&gt;

&lt;h3&gt;
  
  
  Windows
&lt;/h3&gt;

&lt;p&gt;Download the installer from &lt;a href="https://ollama.com" rel="noopener noreferrer"&gt;ollama.com&lt;/a&gt;. Run it. Open Command Prompt or PowerShell and use the same commands as above.&lt;/p&gt;

&lt;h3&gt;
  
  
  Linux
&lt;/h3&gt;

&lt;p&gt;One command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Don't want to write code? Start here instead
&lt;/h2&gt;

&lt;p&gt;If the idea of opening a terminal and writing Python makes you want to close this tab, there are two free desktop apps that give you a full ChatGPT-like interface on top of Ollama. No coding required.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://openwebui.com/" rel="noopener noreferrer"&gt;Open WebUI&lt;/a&gt;&lt;/strong&gt; is a browser-based chat interface that connects to Ollama. You can upload documents and ask questions about them. It looks and feels like ChatGPT, but everything runs locally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://anythingllm.com/" rel="noopener noreferrer"&gt;AnythingLLM&lt;/a&gt;&lt;/strong&gt; is a desktop app with drag-and-drop document ingestion. Point it at a folder of files, and it builds a searchable knowledge base you can chat with.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both are free and open source. They make money through optional enterprise features and hosted versions. The local desktop app costs nothing. Install Ollama first (section above), then install either of these, and you have a working local AI assistant without writing a single line of code.&lt;/p&gt;

&lt;p&gt;If that's all you need, you can stop here. But if you want to build something more tailored to your workflow, like an email summarizer that does exactly what you want, keep reading.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the email summarizer
&lt;/h2&gt;

&lt;p&gt;We're going to build a simple web app where you paste in an email (or a batch of emails) and get back a clean summary with action items. The app runs entirely on your computer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install Python and Streamlit
&lt;/h3&gt;

&lt;p&gt;If you don't have Python installed, download it from &lt;a href="https://www.python.org/downloads/" rel="noopener noreferrer"&gt;python.org&lt;/a&gt;. Make sure to check "Add Python to PATH" during installation on Windows.&lt;/p&gt;

&lt;p&gt;Then open your terminal and install the libraries we need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;streamlit ollama
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it: two packages. Streamlit gives us a browser-based interface, and the &lt;code&gt;ollama&lt;/code&gt; package lets Python talk to the Ollama server running on your machine.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Create the app
&lt;/h3&gt;

&lt;p&gt;Create a new file called &lt;code&gt;summarizer.py&lt;/code&gt; and paste this in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_page_config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page_title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Email Summarizer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;page_icon&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;📧&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Email Summarizer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;caption&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Paste an email below and get a summary with action items. Everything runs locally on your machine.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Let the user pick which model to use
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;selectbox&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llama3.1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;phi3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemma3:4b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mistral&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;email_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text_area&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Paste your email here&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;height&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;placeholder&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Dear Mr. Johnson, ...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;button&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarize&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;primary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;email_text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;spinner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Reading...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ollama&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
                        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful assistant for a small business. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarize the following text in 2-3 sentences. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Then list any action items as bullet points. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Be concise and professional.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                    &lt;span class="p"&gt;),&lt;/span&gt;
                &lt;span class="p"&gt;},&lt;/span&gt;
                &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;email_text&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;summary&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subheader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;markdown&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's about 30 lines. You can tailor the system prompt to whatever your assistant should focus on: tell it to respond in bullet points, focus on deadlines, extract dollar amounts, whatever fits your workflow.&lt;/p&gt;
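&lt;p&gt;For example, a prompt tuned for an invoice-heavy inbox might look like this. The extra instruction and the &lt;code&gt;build_messages&lt;/code&gt; helper are just illustrations of the customization, not part of the app above; adapt the wording to taste:&lt;/p&gt;

```python
# A customized system prompt: same shape as the one in summarizer.py,
# with an extra instruction for deadlines and dollar amounts.
SYSTEM_PROMPT = (
    "You are a helpful assistant for a small business. "
    "Summarize the following text in 2-3 sentences. "
    "Then list any action items as bullet points. "
    "Call out every deadline and dollar amount explicitly. "
    "Be concise and professional."
)

def build_messages(email_text):
    # The same messages structure the app passes to ollama.chat().
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": email_text},
    ]
```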

&lt;h3&gt;
  
  
  Step 3: Run it
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;streamlit run summarizer.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your browser will open to &lt;code&gt;localhost:8501&lt;/code&gt; with a clean interface. Paste an email, click Summarize, and watch your local AI go to work. No data leaves your machine. Close the browser tab and the terminal when you're done.&lt;/p&gt;

&lt;h3&gt;
  
  
  Making it handle multiple emails
&lt;/h3&gt;

&lt;p&gt;Want to summarize a batch? Add a file uploader alongside the text area. Here's an enhanced version with tabs for pasting a single email or uploading &lt;code&gt;.txt&lt;/code&gt; files:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;
&lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_page_config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page_title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Email Summarizer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;page_icon&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;📧&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;title&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Email Summarizer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;selectbox&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llama3.1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;phi3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemma3:4b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mistral&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;tab_paste&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tab_upload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;tabs&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Paste&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Upload files&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tab_paste&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;email_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;text_area&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Paste your email here&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;height&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;button&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarize&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;primary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;paste&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;email_text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;spinner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Reading...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ollama&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
                    &lt;span class="p"&gt;{&lt;/span&gt;
                        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
                            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful assistant for a small business. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarize the following text in 2-3 sentences. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Then list any action items as bullet points. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Be concise and professional.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                        &lt;span class="p"&gt;),&lt;/span&gt;
                    &lt;span class="p"&gt;},&lt;/span&gt;
                    &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;email_text&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
                &lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subheader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;markdown&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tab_upload&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;files&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;file_uploader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Upload email files (.txt)&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;txt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;accept_multiple_files&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;button&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarize all&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;primary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;upload&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;spinner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarizing &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ollama&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
                        &lt;span class="p"&gt;{&lt;/span&gt;
                            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
                                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful assistant for a small business. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Summarize the following text in 2-3 sentences. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Then list any action items as bullet points. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Be concise and professional.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                            &lt;span class="p"&gt;),&lt;/span&gt;
                        &lt;span class="p"&gt;},&lt;/span&gt;
                        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
                    &lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subheader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;markdown&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
            &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;divider&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  What's next: chat with your data
&lt;/h2&gt;

&lt;p&gt;Once you're comfortable with the email summarizer, the natural next step is building a local knowledge base. Point it at a folder of your business data (manuals, contracts, FAQs, policies) and ask questions in plain English.&lt;/p&gt;

&lt;p&gt;This uses a technique called RAG (Retrieval-Augmented Generation), and the stack is straightforward: Ollama for the AI brain, ChromaDB for searching your files, and Streamlit for the interface. Open WebUI and AnythingLLM (mentioned earlier) both support this out of the box if you want to try it without code.&lt;/p&gt;
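&lt;p&gt;To make the retrieve-then-generate idea concrete, here's a toy sketch of the loop. Real RAG compares embedding vectors (that's what ChromaDB stores); this sketch substitutes naive keyword overlap so it runs with nothing installed, and stops just short of the final &lt;code&gt;ollama.chat()&lt;/code&gt; call:&lt;/p&gt;

```python
import re

def words(text):
    # Lowercase word set with punctuation stripped.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(question, passage):
    # Toy retrieval: count shared words. Real RAG compares embedding
    # vectors (e.g. from nomic-embed-text) instead of raw words.
    return len(words(question) & words(passage))

documents = [
    "Refunds are accepted within 30 days with a receipt.",
    "The store is closed on public holidays.",
]

question = "What is the refund policy for returns within 30 days?"

# Step 1: retrieve the most relevant passage.
best_passage = max(documents, key=lambda d: score(question, d))

# Step 2: augment the prompt with what was retrieved. This string is
# what you would hand to the model as the user message.
prompt = (
    "Answer using only this context:\n"
    f"{best_passage}\n\n"
    f"Question: {question}"
)
```

&lt;p&gt;Swap the keyword scoring for embeddings and a vector store, and you have the same pipeline Open WebUI and AnythingLLM run behind the scenes.&lt;/p&gt;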

&lt;p&gt;Want to figure out the steps yourself? Try chatting with DeepSeek-R1 (&lt;code&gt;ollama run deepseek-r1&lt;/code&gt;). It's great at reasoning through technical problems and can walk you through setting up a RAG pipeline. If you want it to write the code for you, try Qwen Coder (&lt;code&gt;ollama run qwen2.5-coder&lt;/code&gt;). It's specifically trained for code generation and can scaffold a working app from a plain-English description.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is this going to replace ChatGPT?
&lt;/h2&gt;

&lt;p&gt;No. Let's be honest about the tradeoffs.&lt;/p&gt;

&lt;p&gt;Local models running on consumer hardware are not as smart as the frontier models from the big providers. They're smaller, they hallucinate more, and they don't have access to the internet for real-time information. If you need to analyze a complex legal contract or write a nuanced marketing strategy, a cloud service will do it better.&lt;/p&gt;

&lt;p&gt;But that's not the point. &lt;strong&gt;The point is that 80% of what small businesses use AI for (summarizing, drafting, extracting key points, answering routine questions) works just fine locally.&lt;/strong&gt; And for those tasks, you get privacy, zero ongoing cost, and no dependency on someone else's service.&lt;/p&gt;

&lt;h2&gt;
  
  
  If you're the technical one, there's an opportunity here
&lt;/h2&gt;

&lt;p&gt;I told you at the top to skip this post if you're a developer. But if you read it anyway and you're thinking "I could set this up for every small business I know," you're not wrong.&lt;/p&gt;

&lt;p&gt;The AI consulting market is projected to grow from &lt;a href="https://colorwhistle.com/ai-consultation-statistics/" rel="noopener noreferrer"&gt;$11 billion in 2026 to $91 billion by 2035&lt;/a&gt;. McKinsey reports that &lt;a href="https://authorityai.ai/the-rise-of-ai-business-consulting/" rel="noopener noreferrer"&gt;over 70% of U.S. companies plan to adopt AI automation by 2026&lt;/a&gt;, but most small and mid-sized firms don't have anyone in-house who can set it up. The &lt;a href="https://www.uschamber.com/co/run/technology/ai-powered-growth-engines" rel="noopener noreferrer"&gt;U.S. Chamber of Commerce notes&lt;/a&gt; that SMB investment in AI has jumped 58% in two years, and &lt;a href="https://www.icsc.com/news-and-views/icsc-exchange/why-2026-is-the-year-small-businesses-finally-make-ai-work-for-them" rel="noopener noreferrer"&gt;ICSC calls 2026 "the year small businesses finally make AI work for them."&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The people who can bridge that gap, who can walk a small business owner through exactly what this post covers, are going to be in demand.&lt;/strong&gt; You don't need to be a machine learning researcher. You need to know how to install Ollama, pick the right model, and build something that solves a real problem. That's it.&lt;/p&gt;

&lt;p&gt;My wife's been playing around with her local setup for 2 weeks now. She still switches between ChatGPT, Claude, and Gemini for different things, but she also has data locally that she chats with using Ollama. Stuff she'd rather not upload anywhere. She never did ask about that subscription again.&lt;/p&gt;

&lt;p&gt;Start here. See what's possible. If you outgrow it, you'll know exactly what you're paying for when you upgrade.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;These are my personal thoughts and experiences, and they do not reflect the views of the company I work for.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.nvarma.com/blog/2026-03-15-build-your-own-ai-assistant-ollama-small-business/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;. Follow me there for more on software architecture, engineering leadership, and the craft of building things that last.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ollama</category>
      <category>privacy</category>
      <category>smallbusiness</category>
    </item>
    <item>
      <title>Building Radar: A Google Sheets Hack for My Link Feed</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Sat, 07 Mar 2026 04:00:50 +0000</pubDate>
      <link>https://forem.com/navinvarma/building-radar-a-google-sheets-hack-for-my-link-feed-4jjl</link>
      <guid>https://forem.com/navinvarma/building-radar-a-google-sheets-hack-for-my-link-feed-4jjl</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Originally published on &lt;a href="https://www.nvarma.com/blog/2026-03-05-building-radar-google-sheets-link-feed/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'd just finished setting up a newsletter with Kit and tried webmentions (a waste of time, as it turned out: no backlinks ever showed up). I was thinking about what would actually make this site more useful for anyone who visits. I read a LOT of articles, blogs, Hacker News, Reddit, random LinkedIn/X/Substack posts, emerging tech news, and so much more. I had this idea of a curated "what I'm reading" page: a place where I could share links and have them show up on my website automatically.&lt;/p&gt;

&lt;p&gt;The requirement was simple: click "Share" on an article on my phone, add a quick comment, and have it show up on my site. The name "Radar" came from a list of options (Signal, Bookmarks, Feed, Picks) but "on my radar" felt right for what this is.&lt;/p&gt;

&lt;h2&gt;
  
  
  First attempt: social platforms
&lt;/h2&gt;

&lt;p&gt;My first thought was a public WhatsApp channel. Share a link from my phone, have my site pull it in. Offload the scale problem to a third party. Except WhatsApp has no public API for channels. So that was a no-go.&lt;/p&gt;

&lt;p&gt;I looked at alternatives. Telegram had a great bot API; Raindrop.io was clean but another tool to maintain. I tried a couple of social platforms that had open public APIs. No API keys, no OAuth, no rate limit tokens. Just a URL you can hit and get a user's posts back as JSON.&lt;/p&gt;

&lt;p&gt;I built the first version of Radar using one of these (hint: Bluesky). It worked great: fetch my posts at build time, render them as a feed, done. No JavaScript on the client, no live API calls. Just a fetch during the Astro build and some HTML.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I moved on
&lt;/h2&gt;

&lt;p&gt;Then I paused and thought about what I was actually doing. I was embedding a social platform into my personal website, not just linking to it, but making it a core data source. Whatever platform I picked, I'd be associating my site with it. And every social platform today carries baggage.&lt;/p&gt;

&lt;p&gt;I don't want my website to signal anything political. I'm not interested in debates, I'm not picking sides. This site is about what I'm building, reading, and thinking about — not which corner of the internet I post from. It also made me realize I need to be careful about what I share on my public website, so there's that.&lt;/p&gt;

&lt;p&gt;When I looked into it, every social platform had something: a reputation, a lean, a controversy. It's not that any of these platforms are bad. Millions of people use them and that's their choice. But I want my site to be a neutral space. So I started evaluating alternatives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mastodon&lt;/strong&gt;, &lt;strong&gt;Telegram&lt;/strong&gt;, &lt;strong&gt;Micro.blog&lt;/strong&gt; — All had usable APIs, but still social platforms. Micro.blog was also $5/month.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supabase&lt;/strong&gt; — Free Postgres with a REST API. Overkill for a list of links.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub repo/issues&lt;/strong&gt; — Free and reliable, but the workflow from phone is clunky.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Notion database&lt;/strong&gt; — Public API, but rate-limited and requires an integration token.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Raindrop.io&lt;/strong&gt; — Beautiful bookmarking tool, but another account to manage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloudflare Workers KV&lt;/strong&gt; — Free tier, but I'd need to build a small API to write to it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Everything was either too heavy, too dependent on a startup's goodwill, or carried associations I wanted to avoid.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Google Sheets hack
&lt;/h2&gt;

&lt;p&gt;I did some hypothesizing with my friend Claude Code, and the simplest option won. Google Forms + Google Sheets.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Google Form&lt;/strong&gt; with two fields: &lt;code&gt;url&lt;/code&gt; (short answer) and &lt;code&gt;comment&lt;/code&gt; (paragraph)&lt;/li&gt;
&lt;li&gt;Form responses automatically populate a &lt;strong&gt;Google Sheet&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Publish the sheet to the web as &lt;strong&gt;CSV&lt;/strong&gt;, which gives you a public URL that returns the sheet data as plain CSV, no API key needed&lt;/li&gt;
&lt;li&gt;Astro fetches the CSV at build time, parses it, and renders the feed&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F535j5lacj1m31yyduw8v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F535j5lacj1m31yyduw8v.png" alt="Google Sheets " width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The CSV URL looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https://docs.google.com/spreadsheets/d/e/.../pub?gid=...&amp;amp;single=true&amp;amp;output=csv
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No auth tokens. No API keys. No rate limits. No startup that might pivot or shut down. It's Google Sheets — a plain, boring data store. Nobody looks at a website and infers anything about the person behind it because they used a spreadsheet. It's invisible infrastructure, like using Gmail or Google Drive. That's exactly what I wanted.&lt;/p&gt;
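&lt;p&gt;The build-time fetch in step 4 is only a few lines. Here's a rough sketch (the column layout matches the form fields; the naive split-based parser assumes no commas or newlines inside fields, so a real build should use a proper CSV library):&lt;/p&gt;

```typescript
// Fetch the published CSV at build time and turn each row into a link.
// Naive parsing: assumes no commas or newlines inside fields.
type RadarLink = { url: string; comment: string };

function parseRadarCsv(csv: string): RadarLink[] {
  const rows = csv.trim().split("\n").slice(1); // drop the header row
  return rows.map((row) => {
    const [, url, comment] = row.split(","); // Timestamp, url, comment
    return { url, comment };
  });
}

async function fetchRadar(csvUrl: string) {
  const res = await fetch(csvUrl); // the publish-to-web URL, no auth needed
  return parseRadarCsv(await res.text());
}
```

&lt;p&gt;Because this runs at build time, the published page stays plain HTML with no client-side calls.&lt;/p&gt;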

&lt;p&gt;The workflow from my phone: I added a shortcut to the Google Form on my home screen. I see an article I like, tap the shortcut, paste the URL, add a quick comment, submit. On my Mac, I created a shortcut app called "My Radar" and pinned it to the dock. After I submit my link and comment, the next site rebuild (currently three times a day) picks it up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupfz15sudgsgdjac7m81.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fupfz15sudgsgdjac7m81.png" alt="My Radar showing up as a Mac app in Spotlight" width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For each URL, I also fetch Open Graph metadata at build time so each link shows a rich preview card with the article's actual headline and image, not just a raw URL.&lt;/p&gt;
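&lt;p&gt;The Open Graph lookup can be sketched the same way. This regex version is a simplification (it assumes the property attribute appears before content in each meta tag; an HTML parser is the safer choice in a real build):&lt;/p&gt;

```typescript
// Sketch: pull an Open Graph value (og:title, og:image, ...) out of a
// page's fetched HTML. Assumes property comes before content in the tag;
// a real build should use an HTML parser instead of a regex.
function extractOg(html: string, prop: string) {
  const re = new RegExp('property="og:' + prop + '"[^>]*content="([^"]*)"');
  const m = html.match(re);
  return m ? m[1] : null; // null when the page has no such tag
}
```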

&lt;p&gt;The &lt;a href="https://www.nvarma.com/radar" rel="noopener noreferrer"&gt;Radar page&lt;/a&gt; is live. Go check it out if you want to see what's on my radar. Someday I'd love to build a "Navin's List" that aggregates everything I share across LinkedIn, X, and this blog into one feed. For now, Google Forms isn't glamorous, but it's free, it's reliable, it works from my phone, and nobody's going to read subtext into a spreadsheet.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.nvarma.com/blog/2026-03-05-building-radar-google-sheets-link-feed/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;. Follow me there for more on software architecture, engineering leadership, and the craft of building things that last.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>astro</category>
      <category>buildinginpublic</category>
      <category>googlesheets</category>
    </item>
    <item>
      <title>How I Structure My Thinking in 2026: Learning New Things, Spec Driven Development for Responsible AI</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Sun, 01 Mar 2026 22:45:13 +0000</pubDate>
      <link>https://forem.com/navinvarma/how-i-structure-my-thinking-in-2026-learning-new-things-spec-driven-development-for-responsible-ai-4pdl</link>
      <guid>https://forem.com/navinvarma/how-i-structure-my-thinking-in-2026-learning-new-things-spec-driven-development-for-responsible-ai-4pdl</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Originally published on &lt;a href="https://www.nvarma.com/blog/2026-03-01-spec-driven-development-claude-code/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdd0672tkp8vxpgs22oc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdd0672tkp8vxpgs22oc.jpg" alt="A cherry plum tree in full bloom in my backyard, pink blossoms against a spring sky — February 2026" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The cherry plum tree in my backyard, as we go into the first week of spring 2026. Every year it starts from bare branches and turns into this. It felt like a fitting image for where I am right now — new skills blooming from scratch, a lot of growth happening all at once, and the optimism about what's ahead is the fruit that comes next.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;My mode of learning is a mix of reading high-level documentation, thinking of some interesting problem to solve, and just getting my hands dirty. I've always been that way. I need to break something before I understand how it works. Only after that point do I seek out more structured forms of learning: courses, video walkthroughs, and certifications.&lt;/p&gt;

&lt;p&gt;I started experimenting with spec-driven development in winter 2025, spending weekends learning new things. Getting hands-on with my personal Claude Code Max plan when Opus 4.5 dropped made me realize how much our &lt;a href="https://www.nvarma.com/blog/2026-01-20-software-engineering-as-craft/" rel="noopener noreferrer"&gt;craft may be evolving&lt;/a&gt;. It was exciting to feel the &lt;a href="https://www.nvarma.com/blog/2026-02-09-manager-ic-pendulum/" rel="noopener noreferrer"&gt;builder awaken&lt;/a&gt; in me again. GitHub published &lt;a href="https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/" rel="noopener noreferrer"&gt;a great piece on spec-driven development with AI&lt;/a&gt; that resonated with where my head was at the time. A lot has changed since then. Markdown has become the de facto way to store context between coding sessions, and things are evolving fast. If you're reading this months from now, some of what I reference may already look different, but the underlying patterns of how to structure your thinking for AI-assisted work are the ones I expect to hold up.&lt;/p&gt;
&lt;h2&gt;
  
  
  How We Learn Is Changing
&lt;/h2&gt;

&lt;p&gt;All of this brought me back to something fundamental: how we learn. Structured plans, reading, video walkthroughs, trial and error. We all gravitate toward different modes. What's new is this: &lt;strong&gt;the key to workflow automation with agentic AI is now applying those same formal learning methods.&lt;/strong&gt; You write your thoughts down, reason about what these AI collaborators actually produce, and guide them. Dan Koe's piece on &lt;a href="https://x.com/thedankoe/status/2016200242690195509" rel="noopener noreferrer"&gt;strategic thinking&lt;/a&gt; resonates here. Your ability to think across dimensions determines the outcome, and learning to think clearly is the actual skill being tested right now.&lt;/p&gt;

&lt;p&gt;One workflow that clicked for me was &lt;a href="https://x.com/filippkowalski/status/2010317972774994085" rel="noopener noreferrer"&gt;Filip Kowalski's approach&lt;/a&gt;: provide a feature spec, then have Claude interview you about implementation details and tradeoffs &lt;em&gt;before&lt;/em&gt; writing a single line of code. The thinking happens first. The code follows. This pattern is now built into the tools themselves. Claude Code has a &lt;a href="https://code.claude.com/docs/en/common-workflows" rel="noopener noreferrer"&gt;plan mode&lt;/a&gt; that enforces exactly this, restricting the agent to read-only exploration and clarifying questions until you approve a plan. The difference from traditional design documents is that the spec isn't a gate. It's a conversation with an agent that actually consumes it and acts on it in real time.&lt;/p&gt;
&lt;h2&gt;
  
  
  Where I'm Learning This
&lt;/h2&gt;

&lt;p&gt;I found spec-driven development and skills by following practitioners who share their workflows in real time on X. Those workflows often take weeks to appear as polished blog articles. &lt;strong&gt;The learning loop has outpaced the teaching loop.&lt;/strong&gt; By the time something makes it into a structured course or certification, the next thing has already landed.&lt;/p&gt;

&lt;p&gt;I've been using a personal Claude Code plan for a while now. Anthropic's January 2026 &lt;a href="https://resources.anthropic.com/hubfs/The-Complete-Guide-to-Building-Skill-for-Claude.pdf" rel="noopener noreferrer"&gt;Complete Guide to Building Skills for Claude&lt;/a&gt; was a turning point in my opinion. I had started building my own skills using examples from their published repos, but that guide helped me refine them into something more structured. In late February, Thariq from the Claude Code team wrote &lt;a href="https://x.com/trq212/status/2027463795355095314" rel="noopener noreferrer"&gt;"Seeing like an Agent"&lt;/a&gt; about building tools by asking what the model actually needs, not what a human would want. That shaped how I recently built my own harness. You are teaching the agent how &lt;em&gt;you&lt;/em&gt; think about quality.&lt;/p&gt;
&lt;h2&gt;
  
  
  Structuring Thinking Before Structuring Code
&lt;/h2&gt;

&lt;p&gt;The shift for me hasn't been in the tools alone — it has also been in how I think before I even open them. Early on I noticed something when I was experimenting with side projects: the projects where I wrote down what I was trying to build, the goals, the non-goals, the constraints, those went well. The projects where I just started prompting and iterating turned into a mess I eventually rewrote or abandoned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I’m realizing spec-driven development isn’t about generating better code. It’s about forcing myself to think clearly about what I actually want.&lt;/strong&gt; The spec is a thinking tool for planning your work; the code that results from clear input has better outcomes. When I write "Non-Goals: This is not a production-grade platform" for something I was experimenting with, that's not just a note for Claude. It's a constraint I'm imposing on &lt;em&gt;myself&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This is also where responsible AI use starts for me. When I give an AI agent unconstrained access to my codebase and just say "build me a feature," I'm letting the agent decide what good looks like. The spec, the architecture doc, the guardrails: those are how I stay in the loop. I am still the architect. The agent is the builder. But the builder needs blueprints, and if I don't provide them, it'll improvise. Sometimes brilliantly, sometimes in ways that are hard to undo.&lt;/p&gt;

&lt;p&gt;This applies beyond code too. I recently built a skill for reviewing blog posts before I publish them. I write my posts verbatim from my head, every word typed by me. After writing about The Builder's Guilt earlier this year, that's a vow I take seriously. But writing from your head means you're also editing from your head, and that's where things can go wrong: conflict of interest with my employer, phrasing that could be taken out of context, disclosures I should have included. Is it contradictory to write everything yourself but then have AI review it? I don't think so. It's the difference between having AI &lt;em&gt;write&lt;/em&gt; for you and having AI &lt;em&gt;watch your back&lt;/em&gt;. A spell checker doesn't make the writing less yours. Neither does a reviewer that catches the things you're too close to see.&lt;/p&gt;

&lt;p&gt;I write my judgment as well as my taste into a reusable workflow once, and the agent applies it consistently every time. &lt;strong&gt;That layering of safety checks into everything I produce, while keeping the writing entirely human, is what feels genuinely new about working with AI.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Putting It Into Practice
&lt;/h2&gt;

&lt;p&gt;Let me walk through what that looks like in practice. I've built skills for work and for side projects, but I'll use the blog post reviewer as a simple, relatable example.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Skill Definition as Your Spec
&lt;/h3&gt;

&lt;p&gt;A Claude Code skill starts with a YAML frontmatter that describes &lt;em&gt;when&lt;/em&gt; to trigger and &lt;em&gt;what&lt;/em&gt; it does. Here's the opening of mine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;blog-post-reviewer&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;gt;"&lt;/span&gt;
  &lt;span class="s"&gt;Comprehensive blog post review for professionals publishing on&lt;/span&gt;
  &lt;span class="s"&gt;LinkedIn, personal blogs, or Medium. Use this skill whenever a&lt;/span&gt;
  &lt;span class="s"&gt;user asks you to review, vet, critique, or give feedback on a&lt;/span&gt;
  &lt;span class="s"&gt;blog post... Trigger on phrases like "review my blog post",&lt;/span&gt;
  &lt;span class="s"&gt;"check for conflicts of interest", "is this safe to publish",&lt;/span&gt;
  &lt;span class="s"&gt;...&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That description isn't just documentation. It tells Claude &lt;em&gt;when&lt;/em&gt; to activate the skill and what to expect from the conversation. Get the triggers right, and it shows up exactly when you need it.&lt;/p&gt;

&lt;p&gt;Below the frontmatter, the skill walks through a five-step review workflow: gather context, check for conflicts of interest, assess public perception, evaluate writing quality, and deliver the findings. Each step has specific checklists. For conflict of interest, every issue gets classified as a &lt;strong&gt;hard blocker&lt;/strong&gt; (must fix before publishing), a &lt;strong&gt;recommended change&lt;/strong&gt; (should fix, but the post survives without it), or an &lt;strong&gt;awareness item&lt;/strong&gt; (nothing wrong in the text, but something to watch for). I wrote these classifications once based on what I'd actually worry about if I were reviewing my own work. Now the agent applies them every time, without the "it's probably fine" bias I'd have at 11pm on a Sunday.&lt;/p&gt;

&lt;p&gt;I also baked in my professional context (employer, role, adjacency areas to watch) and sections for my writing voice, so the feedback is calibrated to how I actually write rather than generic advice.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Compound Effect
&lt;/h3&gt;

&lt;p&gt;None of these layers is revolutionary on its own. A skill definition is just a trigger. Professional context is a profile. The writing voice stuff is basically a style guide. But put together, they create something that feels different. &lt;strong&gt;It's not that the AI becomes smarter. It's that I've given it the right context to be effective.&lt;/strong&gt; The agent operates within the context I would if I were reviewing my own work.&lt;/p&gt;

&lt;p&gt;Every project teaches me something new about what to encode and what to leave flexible. If you are out there experimenting with these workflows, I'd love to hear how you are approaching it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where the Tools Are Converging
&lt;/h2&gt;

&lt;p&gt;This pattern of encoding your thinking into structured files isn't unique to Claude Code. The major AI coding tools are converging on a similar architectural pattern — instruction files, composable workflows, and extensibility layers — though the maturity and ergonomics differ significantly:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Claude Code&lt;/th&gt;
&lt;th&gt;Cursor&lt;/th&gt;
&lt;th&gt;Codex CLI&lt;/th&gt;
&lt;th&gt;Gemini CLI&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Instructions file&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CLAUDE.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;.cursor/rules/*.mdc&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;AGENTS.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;GEMINI.md&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Custom workflows&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Skills (&lt;code&gt;SKILL.md&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Commands (&lt;code&gt;.cursor/commands/&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Agent Skills (&lt;code&gt;SKILL.md&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;Agent Skills (&lt;code&gt;SKILL.md&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Bundled distribution&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Plugins&lt;/td&gt;
&lt;td&gt;Plugins (Feb 2026)&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;Extensions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Structured thinking&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Plan mode&lt;/td&gt;
&lt;td&gt;Plan mode&lt;/td&gt;
&lt;td&gt;Plan mode&lt;/td&gt;
&lt;td&gt;Plan mode (experimental)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The instruction file row is the most consistent — every tool has some version of "put a markdown file in your repo and the agent reads it." Custom workflows vary more: Claude's skills are the most formalized, Cursor's commands are lighter-weight, and the Codex/Gemini abstractions are still evolving. Bundled distribution is where the tools diverge most. Claude Code's plugins package skills, agents, hooks, MCP servers, and LSP servers into a single installable unit. Cursor's plugin marketplace launched in February 2026. Gemini leans on extensions. Codex CLI hasn't emphasized bundled distribution in the same way yet. &lt;strong&gt;The pattern is the same across tools, and the fact that they are all heading in the same direction tells you where this is going.&lt;/strong&gt; If you're investing time in learning how to structure your thinking for one of these tools, that skill transfers. The real learning is not in picking a tool. It’s in learning how to apply these concepts regardless of the tool.&lt;/p&gt;

&lt;p&gt;The purple leaf plum tree in my backyard — cherry plum, if you want to get technical — went from bare branches to full bloom in about two weeks. That's how this feels right now — a lot of growth happening fast, and I'm just trying to write it down before the next thing lands.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;These are my personal thoughts and experiences from side projects and weekend experiments, and they do not reflect the views of my employer.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.nvarma.com/blog/2026-03-01-spec-driven-development-claude-code/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;. Follow me there for more on software architecture, engineering leadership, and the craft of building things that last.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>specdrivendevelopment</category>
      <category>aicodingtools</category>
      <category>responsibleai</category>
    </item>
    <item>
      <title>AI Almost Sold Me a Subscription I Didn't Need</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Mon, 23 Feb 2026 04:00:23 +0000</pubDate>
      <link>https://forem.com/navinvarma/ai-almost-sold-me-a-subscription-i-didnt-need-1gg8</link>
      <guid>https://forem.com/navinvarma/ai-almost-sold-me-a-subscription-i-didnt-need-1gg8</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Originally published on &lt;a href="https://www.nvarma.com/blog/2026-02-22-ai-almost-sold-me-a-subscription/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I lead distributed engineering teams across multiple time zones, which means my work calendar can look strange on any given day — early morning calls to sync with Europe, midday blocks for other US offices, and the occasional odd gap where I've shifted my schedule around. Outside of work, I enjoy mentoring people on engineering leadership, connecting with folks to learn from each other, career counseling, or just jamming on music. I wanted to make it easy for people to book time with me — a quick breakfast or lunch during the week, or a longer session on the weekend. One link, respects my calendar, done.&lt;/p&gt;

&lt;p&gt;It was not trivially easy. Or rather, it was — I just didn't find the easy answer first.&lt;/p&gt;

&lt;h2&gt;
  
  
  The split-calendar problem
&lt;/h2&gt;

&lt;p&gt;My work runs on Outlook, locked down behind corporate IT. My personal life runs on Gmail. Both calendars sync to my Android phone, so &lt;em&gt;I&lt;/em&gt; can see everything in one view. The problem is letting other people see when I'm available without giving them access to either calendar directly. If you're on iPhone with iCloud or you already pay for &lt;a href="https://calendly.com/" rel="noopener noreferrer"&gt;Calendly&lt;/a&gt;, this is old news. But I got curious about why it felt so hard for everyone else, and it turns out &lt;a href="https://gs.statcounter.com/os-market-share/mobile/worldwide" rel="noopener noreferrer"&gt;Android sits at about 70% of the global mobile market&lt;/a&gt; as of January 2026.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfvkjio5fle9ie53zp4w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwfvkjio5fle9ie53zp4w.png" alt="StatCounter Mobile OS Market Share Worldwide - January 2026, showing Android at 70.36% and iOS at 29.25%" width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Seven out of ten smartphone users are on Android, and the scheduling tool ecosystem largely assumes you can authenticate directly against a cloud calendar provider — the exact thing enterprise IT blocks. So this isn't just my problem.&lt;/p&gt;

&lt;p&gt;I suspected Google Calendar could handle this natively. But instead of poking around in settings for five minutes, I did what a lot of us do now: I opened up &lt;a href="https://gemini.google.com/" rel="noopener noreferrer"&gt;Gemini&lt;/a&gt; and &lt;a href="https://claude.ai/" rel="noopener noreferrer"&gt;Claude&lt;/a&gt; and asked for help.&lt;/p&gt;

&lt;h2&gt;
  
  
  The rabbit hole
&lt;/h2&gt;

&lt;p&gt;Gemini 3 Fast went &lt;em&gt;deep&lt;/em&gt;. I'm talking five rounds of back-and-forth. It walked me through shadow calendars (where you duplicate every work meeting into a separate calendar by hand), OAuth token flows, Android apps like &lt;a href="https://play.google.com/store/apps/details?id=me.sync.syncai" rel="noopener noreferrer"&gt;Calendar.AI&lt;/a&gt; and &lt;a href="https://www.syncgene.com/" rel="noopener noreferrer"&gt;SyncGene&lt;/a&gt; that bridge your phone's local calendar data to the cloud, and free-tier comparisons across multiple scheduling platforms. It even produced a table comparing sync frequency and cost. I genuinely appreciated the thoroughness; every single recommendation was technically valid.&lt;/p&gt;

&lt;p&gt;Claude took a different path but landed in the same place. It suggested &lt;a href="https://calendly.com/" rel="noopener noreferrer"&gt;Calendly&lt;/a&gt;, &lt;a href="https://cal.com/" rel="noopener noreferrer"&gt;Cal.com&lt;/a&gt; (open source), and &lt;a href="https://savvycal.com/" rel="noopener noreferrer"&gt;SavvyCal&lt;/a&gt; as products I could subscribe to. When I explained that my work calendar was behind corporate SSO and I couldn't use my personal computer to link calendars, it pivoted to bridging apps like Sync My Calendar and &lt;a href="https://www.davx5.com/" rel="noopener noreferrer"&gt;DAVx5&lt;/a&gt; to push data from my Android to the cloud. It did mention Google Calendar's Booking Pages in passing, early in the conversation, but as one bullet among several rather than the obvious answer.&lt;/p&gt;

&lt;p&gt;This was striking to me: across both conversations, every recommendation involved installing something new or creating an account somewhere. Neither tool paused to ask what I already had. They pattern-matched toward a new purchase rather than a native solution.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I actually ended up doing
&lt;/h2&gt;

&lt;p&gt;Since both calendars already synced to my phone, I checked my Android Calendar settings and made sure that data was flowing up to my Google account. One toggle I'd apparently never flipped. Once my work busy times appeared in Google Calendar alongside my personal events, I set up &lt;a href="https://support.google.com/calendar/answer/10729749" rel="noopener noreferrer"&gt;Booking Pages&lt;/a&gt; — a built-in Google Calendar feature that checks all your visible calendars for conflicts, including subscribed ones, and generates a shareable booking page.&lt;/p&gt;

&lt;p&gt;No new apps. No subscriptions. No OAuth dance. Took me less time than either AI conversation did, which is a little ironic.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tech Literacy Tax
&lt;/h2&gt;

&lt;p&gt;I keep coming back to what would have happened if I didn't have a strong technical understanding of how calendar technology works — the difference between local and cloud calendars, what an ICS feed is, why enterprise auth blocks third-party integrations. If this had been years ago, before I gained all this experience, I would have stopped at the first confident answer from my search tool, installed one of those bridging apps, maybe signed up for &lt;a href="https://cal.com/" rel="noopener noreferrer"&gt;Cal.com&lt;/a&gt;'s free tier, hit the plan limit in a month, and upgraded to a paid subscription. Ten dollars a month for something one of my existing tools already does out of the box.&lt;/p&gt;

&lt;p&gt;I think of this as a &lt;strong&gt;tech literacy tax&lt;/strong&gt;: the money you end up spending on tools, apps, and subscriptions for things your existing software already handles, simply because you didn't know to look. Nobody is being dishonest here. These models learned from a web that is &lt;em&gt;saturated&lt;/em&gt; with "10 best tools" listicles, affiliate reviews, and product marketing pages. When you ask an AI how to solve a problem, it pattern-matches toward the products that exist to solve that problem, because that's what the training data talks about. Nobody writes a listicle about the thing you already have.&lt;/p&gt;

&lt;h2&gt;
  
  
  These models are still learning
&lt;/h2&gt;

&lt;p&gt;There's a reasonable case that this gets better over time. These models improve through reinforcement learning from human feedback — essentially, when enough people flag inaccurate output, the model adjusts its weights and learns to surface native solutions first. But we're not there yet, and in the meantime, the tax is real — especially for the non-technical majority who take the first answer at face value.&lt;/p&gt;

&lt;p&gt;It also makes me wonder about the scheduling tool market more broadly. There is a massive, legitimate need at enterprise scale — the kind of complex, multi-stakeholder workflows that organizations handle every day, spanning billions of actions across tightly integrated software products and data flows. In those environments, a robust, unified platform is the only way to manage that level of complexity. But for the basic personal use case — letting someone pick a time slot without a dozen emails — the "native" answer is often already sitting in your pocket. As core platforms continue to integrate features that once required a separate login, the niche market for standalone solutions will need to keep innovating to stay ahead of the game.&lt;/p&gt;

&lt;p&gt;Before you install anything new or ask an AI what tool to use, it's worth spending five minutes asking a simpler question: does the thing I already have do this?&lt;/p&gt;

&lt;p&gt;In my experience, it usually does.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you're curious how this turned out, the "Book a Meeting" link on my &lt;a href="https://www.nvarma.com/connect" rel="noopener noreferrer"&gt;Connect&lt;/a&gt; page is the result of this weekend rabbit hole.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This was a personal project, entirely unrelated to my day job. These are my own thoughts and opinions.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.nvarma.com/blog/2026-02-22-ai-almost-sold-me-a-subscription/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;. Follow me there for more on software architecture, engineering leadership, and the craft of building things that last.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>googlecalendar</category>
      <category>scheduling</category>
    </item>
    <item>
      <title>Making Images, Music, and More with AI on a Mac Mini: One Idea, Many Uses</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Sat, 14 Feb 2026 09:04:41 +0000</pubDate>
      <link>https://forem.com/navinvarma/making-images-music-and-more-with-ai-on-a-mac-mini-one-idea-many-uses-4pj5</link>
      <guid>https://forem.com/navinvarma/making-images-music-and-more-with-ai-on-a-mac-mini-one-idea-many-uses-4pj5</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Originally published on &lt;a href="https://www.nvarma.com/blog/2026-02-14-comfyui-mac-mini-laymans-guide/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you’ve wondered how AI image or music generation actually works—or whether you can run it on a Mac Mini—this is a short, plain-language guide. The main idea: &lt;strong&gt;the same pattern applies whether you’re making an image, a track, or a clip.&lt;/strong&gt; You pick a “model” (a large file trained to create that kind of output), describe what you want, let it run through many small steps, and get a result. I’ll outline that pattern, then walk through running ComfyUI for images on a Mac Mini.&lt;/p&gt;

&lt;h2&gt;
  
  
  One idea for images, music, and video
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Choose a model&lt;/strong&gt; — a big file trained on lots of data to make that type of output.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Describe what you want&lt;/strong&gt; — e.g. “a cat on a sofa” or “upbeat piano, rainy day.”&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run it&lt;/strong&gt; — the model refines things step by step (more steps = often better, but slower; on a Mini, a few minutes per image or track is normal).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use the result&lt;/strong&gt; — an image, audio, or video file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Different tools, same idea. I use ComfyUI for images and the same flow for music: pick model, describe, run, get output.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why run it yourself?
&lt;/h2&gt;

&lt;p&gt;You choose the model (no lock-in to one website). No usage caps. Your prompts and outputs can stay on your machine. The tradeoff: some setup time, and on a Mac Mini each image or clip can take a few minutes instead of seconds.&lt;/p&gt;

&lt;h2&gt;
  
  
  What you need
&lt;/h2&gt;

&lt;p&gt;Mac Mini with Apple Silicon (M1–M4). Plan for at least 10–15GB free (app + one image model). ComfyUI Desktop for Mac only runs on Apple Silicon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install and first run
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Download&lt;/strong&gt; the Apple Silicon build from the &lt;a href="https://docs.comfy.org/installation/desktop/macos" rel="noopener noreferrer"&gt;ComfyUI Desktop – MacOS guide&lt;/a&gt; (ARM64 DMG).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install&lt;/strong&gt; — open the DMG and drag ComfyUI into Applications. The guide has a screenshot of this step.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;First launch&lt;/strong&gt; — open ComfyUI from Applications or Spotlight. When asked how to use your Mac’s graphics, choose &lt;strong&gt;MPS&lt;/strong&gt; (correct for Apple Silicon). Pick a folder with several GB free for its files. Let it finish; it may download Python and other bits (can take a while). If something fails, the &lt;a href="https://docs.comfy.org/installation/desktop/macos" rel="noopener noreferrer"&gt;MacOS guide&lt;/a&gt; has troubleshooting and log locations.&lt;/li&gt;
&lt;li&gt;You’ll see a canvas with boxes and lines (nodes). You don’t build from scratch—you load a ready-made “workflow” and type your prompt. The &lt;a href="https://docs.comfy.org/get_started/first_generation" rel="noopener noreferrer"&gt;Getting Started with AI Image Generation&lt;/a&gt; guide has a screenshot of the interface.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Load a workflow and generate an image
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;In the app: &lt;strong&gt;Workflows → Browse example workflows&lt;/strong&gt; (or the folder icon), then select the default &lt;strong&gt;Image Generation&lt;/strong&gt; workflow. Use &lt;strong&gt;Fit View&lt;/strong&gt; if it doesn’t fit the screen.&lt;/li&gt;
&lt;li&gt;The workflow needs an &lt;strong&gt;image model&lt;/strong&gt;. The &lt;a href="https://docs.comfy.org/get_started/first_generation" rel="noopener noreferrer"&gt;first generation guide&lt;/a&gt; suggests one and shows what to do if it’s missing (often a &lt;strong&gt;Download&lt;/strong&gt; button; the file can be several GB). If you add a model from elsewhere, put it in ComfyUI’s “checkpoints” folder, then select it in the “load model” node.&lt;/li&gt;
&lt;li&gt;In the text box (e.g. “CLIP Text Encode”), type what you want—e.g. “a cat on a sofa”—and optionally what to avoid. Click &lt;strong&gt;Queue&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;ComfyUI runs left to right. The “drawing” step takes most of the time on a Mini (one to several minutes). When it’s done, the image appears in the “save image” node; right-click to save.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Why it’s slow:&lt;/strong&gt; The model refines a noisy starting point step by step until it matches your words. More “steps” (often 20–30) = better quality, more time. On a Mini, use smaller sizes (512×512 or 768×768) and fewer steps (15–20) while learning. One image at a time keeps things stable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Music, video, and text: same pattern
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Music:&lt;/strong&gt; ComfyUI now supports audio generation natively with &lt;a href="https://blog.comfy.org/p/ace-step-15-is-now-available-in-comfyui" rel="noopener noreferrer"&gt;ACE-Step 1.5&lt;/a&gt;. Go to &lt;strong&gt;Workflows → Browse Templates → Audio&lt;/strong&gt; and load the ACE-Step workflow. Type a style ("upbeat piano, rainy day") and optional lyrics, hit Queue, and get an audio file. Full songs generate in seconds on a decent GPU.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Video:&lt;/strong&gt; Wan 2.1/2.2/2.6 models run natively in ComfyUI—go to &lt;strong&gt;Workflows → Browse Templates → Video&lt;/strong&gt; for ready-made text-to-video and image-to-video workflows. Bigger files, longer runs than images, but the same "model + prompt + run = result" idea.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Text:&lt;/strong&gt; For text generation (chatbots, writing), &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; lets you run models like Llama, Gemma, and DeepSeek locally. Install it, run &lt;code&gt;ollama run llama3.2&lt;/code&gt; in Terminal, and chat. Apple Silicon's unified memory makes Mac Mini surprisingly capable here.&lt;/p&gt;
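&lt;p&gt;If you want to try this on the Mini, the whole setup is a few Terminal commands. A sketch assuming you use Homebrew (you can also just download the installer from &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;ollama.com&lt;/a&gt;, which bundles the server):&lt;/p&gt;

```shell
# Install the Ollama CLI
brew install ollama

# The CLI needs the server running in the background;
# the desktop app handles this automatically
brew services start ollama

# First run downloads the model weights (about 2 GB for llama3.2),
# then drops you into an interactive chat; type /bye to exit
ollama run llama3.2

# Ollama also serves a local HTTP API on port 11434, handy for scripts
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Explain diffusion models in one line.", "stream": false}'
```

&lt;p&gt;Same pattern again: pick a model, describe what you want, run it, get output.&lt;/p&gt;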

&lt;p&gt;Once the pattern clicks, you can switch between images, music, video, and text without relearning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Going further
&lt;/h2&gt;

&lt;p&gt;For more depth and troubleshooting: &lt;a href="https://docs.comfy.org/" rel="noopener noreferrer"&gt;ComfyUI official docs&lt;/a&gt;, &lt;a href="https://stable-diffusion-art.com/comfyui/" rel="noopener noreferrer"&gt;Stable Diffusion Art – ComfyUI&lt;/a&gt;, and &lt;a href="https://github.com/comfyanonymous/ComfyUI" rel="noopener noreferrer"&gt;ComfyUI on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Video tutorials&lt;/strong&gt; — image, video, audio, and text generation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Image generation&lt;/strong&gt; — Sebastian Kamph's complete ComfyUI beginner's guide (install, nodes, workflows, and first image):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=23VkGD-4uwk" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=23VkGD-4uwk&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Video generation (Wan 2.6 in ComfyUI)&lt;/strong&gt; — latest Wan video model with reference-to-video, by Sebastian Kamph:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=dGDIpQz_l-E" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=dGDIpQz_l-E&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Video generation (Wan 2.2 in ComfyUI)&lt;/strong&gt; — character animation, lip-sync, and video workflows:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=woCP1Q_Htwo" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=woCP1Q_Htwo&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audio/music generation&lt;/strong&gt; — the &lt;a href="https://blog.comfy.org/p/ace-step-15-is-now-available-in-comfyui" rel="noopener noreferrer"&gt;ACE-Step 1.5 announcement&lt;/a&gt; covers the built-in music workflow described earlier; the exact template is &lt;strong&gt;Workflows → Browse Templates → Audio → "ACE-Step 1.5 Music Generation AIO"&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Text generation&lt;/strong&gt; — NetworkChuck's walkthrough on running LLMs locally with Ollama and Open WebUI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=Wjrdr0NU4Sk" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=Wjrdr0NU4Sk&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.nvarma.com/blog/2026-02-14-comfyui-mac-mini-laymans-guide/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;. Follow me there for more on software architecture, engineering leadership, and the craft of building things that last.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>ai</category>
      <category>comfyui</category>
      <category>mac</category>
    </item>
    <item>
      <title>Blog Syndication: Cross-Publishing Blog Posts to Dev.to, Hashnode, and Medium</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Wed, 11 Feb 2026 04:36:21 +0000</pubDate>
      <link>https://forem.com/navinvarma/blog-syndication-cross-publishing-blog-posts-to-devto-hashnode-and-medium-1a5d</link>
      <guid>https://forem.com/navinvarma/blog-syndication-cross-publishing-blog-posts-to-devto-hashnode-and-medium-1a5d</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Originally published on &lt;a href="https://www.nvarma.com/blog/2026-02-10-cross-publishing-blog-posts-devto-hashnode-medium/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I recently migrated to a self-hosted, Astro-based blog. Most developers have a presence on Dev.to, Hashnode, and Medium, so I wanted to syndicate my posts there too and was curious what automation exists in this space today.&lt;/p&gt;

&lt;p&gt;So I built a small pipeline that handles it automatically. Push a new post to my Astro site, and GitHub Actions cross-publishes it to Dev.to and Hashnode with the canonical URL pointing back to my site. Medium is a different story, which I'll get to.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why canonical URLs matter
&lt;/h2&gt;

&lt;p&gt;Before getting into the code, this is the one thing you should care about if you cross-publish anything. Every platform lets you set a canonical URL — &lt;code&gt;canonical_url&lt;/code&gt; on Dev.to, &lt;code&gt;originalArticleURL&lt;/code&gt; on Hashnode. It's basically a pointer that says "the original lives on my site." If you don't set it, Google sees three copies and will probably rank the platform version higher than yours.&lt;/p&gt;

&lt;p&gt;Set the canonical URL. Every time. No exceptions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dev.to has a straightforward REST API
&lt;/h2&gt;

&lt;p&gt;Dev.to is the simplest one. You generate an API key at &lt;a href="https://dev.to/settings/extensions" rel="noopener noreferrer"&gt;dev.to/settings/extensions&lt;/a&gt;, and then it's just a POST request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;payload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;article&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;body_markdown&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;published&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;[^&lt;/span&gt;&lt;span class="sr"&gt;a-z0-9&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;/g&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="na"&gt;canonical_url&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;canonicalUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://dev.to/api/articles&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;api-key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DEVTO_API_KEY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;payload&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Dev.to, you need to make sure your posts are limited to 4 tags: all lowercase, no special characters. You should also space out requests so you don't hit their rate limit (HTTP 429). The response payload has the article ID and URL. I store this in a tracking file so I don't publish the same post twice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hashnode uses GraphQL
&lt;/h2&gt;

&lt;p&gt;Hashnode's API is GraphQL-based. You need a Personal Access Token from &lt;a href="https://hashnode.com/settings/developer" rel="noopener noreferrer"&gt;hashnode.com/settings/developer&lt;/a&gt; and your publication ID. If you know your blog URL, you can get the publication ID without even logging in:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://gql.hashnode.com &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"query":"{ publication(host:\"yourblog.hashnode.dev\") { id title } }"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The publish mutation looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mutation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`
  mutation PublishPost($input: PublishPostInput!) {
    publishPost(input: $input) {
      post { id url }
    }
  }
`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;variables&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;contentMarkdown&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;publicationId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;tags&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;slug&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="se"&gt;[^&lt;/span&gt;&lt;span class="sr"&gt;a-z0-9&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;+/g&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;-&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;})),&lt;/span&gt;
    &lt;span class="na"&gt;originalArticleURL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;canonicalUrl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;slug&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hashnode tags are objects with both a &lt;code&gt;name&lt;/code&gt; and a &lt;code&gt;slug&lt;/code&gt;, which is a little more involved than Dev.to's plain strings. The &lt;code&gt;originalArticleURL&lt;/code&gt; field is their version of the canonical URL.&lt;/p&gt;
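&lt;p&gt;Wiring the mutation up is one &lt;code&gt;fetch&lt;/code&gt; call, but there's a gotcha worth a sketch: GraphQL endpoints return HTTP 200 even when the mutation fails, so you have to check the &lt;code&gt;errors&lt;/code&gt; array yourself. (The &lt;code&gt;HASHNODE_TOKEN&lt;/code&gt; variable name here is my own convention.)&lt;/p&gt;

```javascript
// Build the PublishPostInput variables; pure, so it's easy to test.
function hashnodeVariables({ title, body, publicationId, tags, canonicalUrl, slug }) {
  return {
    input: {
      title,
      contentMarkdown: body,
      publicationId,
      tags: tags.map(t => ({
        name: t,
        slug: t.toLowerCase().replace(/[^a-z0-9]+/g, '-'),
      })),
      originalArticleURL: canonicalUrl,
      slug,
    },
  };
}

// Send the mutation. Needs Node 18+ for the built-in fetch.
async function publishToHashnode(mutation, variables) {
  const res = await fetch('https://gql.hashnode.com', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: process.env.HASHNODE_TOKEN,
    },
    body: JSON.stringify({ query: mutation, variables }),
  });
  const json = await res.json();
  // HTTP 200 does not mean success in GraphQL; check the errors array.
  if (json.errors) throw new Error(json.errors.map(e => e.message).join('; '));
  return json.data.publishPost.post; // { id, url }
}
```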

&lt;h2&gt;
  
  
  Medium dropped support for API tokens
&lt;/h2&gt;

&lt;p&gt;You can't programmatically publish to Medium anymore. At least not through any official channel.&lt;/p&gt;

&lt;p&gt;But you can still get your posts on Medium manually with the canonical URL intact. Medium has an "Import a story" feature that does exactly this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Go to &lt;a href="https://medium.com/me/stories" rel="noopener noreferrer"&gt;medium.com/me/stories&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Click the &lt;strong&gt;Import a story&lt;/strong&gt; button.&lt;/li&gt;
&lt;li&gt;Paste your post's URL (e.g. &lt;code&gt;https://www.nvarma.com/blog/your-post-slug/&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Medium will import the content and automatically set the canonical URL to point back to your blog post.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That last part is the important bit. Medium links to the original URL when you use the import tool, so search engines still know where the post originated. While this manual part sucks, I'm glad they still have a way to do this less painfully.&lt;/p&gt;

&lt;p&gt;For older posts you want to bring over, it's the same process: import each one by URL. Medium will pull the content, preserve your formatting reasonably well, and set the canonical link. You might need to clean up some formatting afterward, so proofread each one before publishing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sanitizing MDX for other platforms
&lt;/h2&gt;

&lt;p&gt;Some of my posts are written in MDX and use custom image components. MDX-generated content doesn't render on Dev.to or Hashnode.&lt;/p&gt;

&lt;p&gt;I wrote a sanitizer that transforms the content into portable markdown:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Strip MDX imports&lt;/span&gt;
&lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/^import&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sr"&gt;+.*$/gm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Convert &amp;lt;figure&amp;gt;/&amp;lt;img&amp;gt;/&amp;lt;figcaption&amp;gt; to markdown&lt;/span&gt;
&lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="sr"&gt;/&amp;lt;figure&lt;/span&gt;&lt;span class="se"&gt;[^&lt;/span&gt;&lt;span class="sr"&gt;&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;*&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sr"&gt;*&amp;lt;img&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sr"&gt;+src="&lt;/span&gt;&lt;span class="se"&gt;([^&lt;/span&gt;&lt;span class="sr"&gt;"&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sr"&gt;+alt="&lt;/span&gt;&lt;span class="se"&gt;([^&lt;/span&gt;&lt;span class="sr"&gt;"&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;)&lt;/span&gt;&lt;span class="sr"&gt;"&lt;/span&gt;&lt;span class="se"&gt;[^&lt;/span&gt;&lt;span class="sr"&gt;&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;\/?&lt;/span&gt;&lt;span class="sr"&gt;&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;\s&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;(?:&lt;/span&gt;&lt;span class="sr"&gt;&amp;lt;figcaption&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;([\s\S]&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;?)&lt;/span&gt;&lt;span class="sr"&gt;&amp;lt;&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;figcaption&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;)?\s&lt;/span&gt;&lt;span class="sr"&gt;*&amp;lt;&lt;/span&gt;&lt;span class="se"&gt;\/&lt;/span&gt;&lt;span class="sr"&gt;figure&amp;gt;/g&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;_match&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;caption&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`![&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;](https://www.nvarma.com/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nf"&gt;resolveUrl&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;&lt;span class="s2"&gt;)`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;caption&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="s2"&gt;`\n*&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;caption&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;*`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Replace Astro components with "see original" links&lt;/span&gt;
&lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="sr"&gt;/&amp;lt;&lt;/span&gt;&lt;span class="se"&gt;([&lt;/span&gt;&lt;span class="sr"&gt;A-Z&lt;/span&gt;&lt;span class="se"&gt;][&lt;/span&gt;&lt;span class="sr"&gt;A-Za-z&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)[^&lt;/span&gt;&lt;span class="sr"&gt;&amp;gt;&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;\/?&lt;/span&gt;&lt;span class="sr"&gt;&amp;gt;/g&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;_match&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;componentName&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`*[Interactive &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;componentName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; — see original post](https://www.nvarma.com/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;canonicalUrl&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;)*`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Resolve relative paths to absolute URLs&lt;/span&gt;
&lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="sr"&gt;/!&lt;/span&gt;&lt;span class="se"&gt;\[([^\]]&lt;/span&gt;&lt;span class="sr"&gt;*&lt;/span&gt;&lt;span class="se"&gt;)\]\((?!&lt;/span&gt;&lt;span class="sr"&gt;https&lt;/span&gt;&lt;span class="se"&gt;?&lt;/span&gt;&lt;span class="sr"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\/\/)([^&lt;/span&gt;&lt;span class="sr"&gt;)&lt;/span&gt;&lt;span class="se"&gt;]&lt;/span&gt;&lt;span class="sr"&gt;+&lt;/span&gt;&lt;span class="se"&gt;)\)&lt;/span&gt;&lt;span class="sr"&gt;/g&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;_match&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`![&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;alt&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;](https://www.nvarma.com/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;SITE_URL&lt;/span&gt;&lt;span class="p"&gt;}${&lt;/span&gt;&lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;)`&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Astro component replacement is my favorite part. If I have a &lt;code&gt;&amp;lt;BeforeAfterCarousel /&amp;gt;&lt;/code&gt; component in my Astro rebuild post, the cross-published version gets a link that says "Interactive BeforeAfterCarousel - see original post" instead of broken HTML. Not perfect, but it's honest and sends people to the real thing.&lt;/p&gt;

&lt;p&gt;I also prepend each post with an "Originally published on nvarma.com" header and append a footer with a link back. A little self-promotional, but that's kind of the whole point of cross-publishing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating it with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;The workflow triggers whenever I push changes to &lt;code&gt;src/content/blog/**&lt;/code&gt; on main:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Cross-Publish Blog Posts&lt;/span&gt;

&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;main&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;src/content/blog/**'&lt;/span&gt;
  &lt;span class="na"&gt;workflow_dispatch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;post_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Specific&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;post&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;ID&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;to&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;publish&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;(filename&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;without&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;extension)'&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;workflow_dispatch&lt;/code&gt; trigger lets me manually publish a specific post if I need to. The script reads all blog posts, checks a tracking JSON file to see what's already been published, and only processes new posts. It also skips posts older than 30 days so it doesn't flood the syndication sites with old content.&lt;/p&gt;

&lt;p&gt;The tracking file gets committed back to the repo automatically, so there's a record of what went where:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"2026-02-09-manager-ic-pendulum"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The Manager-IC Pendulum..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"firstPublishedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-10T03:33:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"platforms"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"devto"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"3245645"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://dev.to/navinvarma/the-manager-ic-pendulum..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"publishedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-10T03:33:00Z"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"hashnode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"abc123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://navinvarma.hashnode.dev/the-manager-ic-pendulum..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"publishedAt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-02-10T04:00:00Z"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
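Merging a successful publish into that structure is a small update; a sketch (the function name and the shape of the API result are my assumptions, but the output matches the tracking file above):

```javascript
// Sketch: record one platform's publish result under a post's entry
// in the tracking object, creating the entry on first publish.
function recordPublish(tracking, postId, title, platform, result) {
  const entry = tracking[postId] || {
    title,
    firstPublishedAt: result.publishedAt, // first platform wins
    platforms: {},
  };
  entry.platforms[platform] = {
    id: result.id,
    url: result.url,
    publishedAt: result.publishedAt,
  };
  tracking[postId] = entry;
  return tracking;
}

module.exports = { recordPublish };
```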



&lt;h2&gt;
  
  
  Setting it up yourself
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Dev.to&lt;/strong&gt;: Generate an API key at Settings &amp;gt; Extensions. Store it as &lt;code&gt;DEVTO_API_KEY&lt;/code&gt; in your repo's GitHub Actions secrets.&lt;/p&gt;
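With that key in place, the publish call itself is one POST. A sketch, assuming Node 18+ for the global `fetch`; the `POST /api/articles` endpoint and `api-key` header are from the Forem API docs, while the helper names and post shape are my own:

```javascript
// Sketch: build the Dev.to article payload and POST it.
function buildDevtoArticle(post) {
  return {
    article: {
      title: post.title,
      body_markdown: post.body,
      published: true,
      canonical_url: post.canonicalUrl, // points back to the original post
      tags: post.tags,
    },
  };
}

async function publishToDevto(post, apiKey) {
  const res = await fetch('https://dev.to/api/articles', {
    method: 'POST',
    headers: { 'api-key': apiKey, 'Content-Type': 'application/json' },
    body: JSON.stringify(buildDevtoArticle(post)),
  });
  if (!res.ok) throw new Error(`Dev.to publish failed: ${res.status}`);
  return res.json(); // the response includes the article id and url
}

module.exports = { buildDevtoArticle, publishToDevto };
```

Setting `canonical_url` is the important bit: it tells search engines the copy on Dev.to is syndicated, not the original.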

&lt;p&gt;&lt;strong&gt;Hashnode&lt;/strong&gt;: Get a Personal Access Token from Settings &amp;gt; Developer. Look up your publication ID with the curl command from above. Store them as &lt;code&gt;HASHNODE_PAT&lt;/code&gt; and &lt;code&gt;HASHNODE_PUBLICATION_ID&lt;/code&gt; respectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Medium&lt;/strong&gt;: Import stories manually using Medium's import tool. Paste the canonical URL and Medium imports the content of your post.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Actions secrets&lt;/strong&gt;: Go to your repo Settings &amp;gt; Secrets and variables &amp;gt; Actions, and add each one. The workflow only runs when blog content changes, so it won't burn through your Actions minutes.&lt;/p&gt;

&lt;p&gt;If you already have posts on a platform (like I did with Dev.to), make sure the tracking JSON has those entries before your first run. Otherwise the script will try to republish them, and the APIs will either reject the requests or create duplicates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reflections
&lt;/h2&gt;

&lt;p&gt;Setting this up took an evening. Not too much work, but you need to know your way around Astro builds and have some general knowledge of GitHub Actions, APIs &amp;amp; integrations.&lt;/p&gt;

&lt;p&gt;The nice part is that I now have a single workflow: write in markdown, push to git, and my post shows up on three platforms with proper canonical URLs. Medium requires a manual import but at least the process is simple and the canonical URL is preserved.&lt;/p&gt;

&lt;p&gt;I'll probably add more platforms later if they have decent APIs. This was a fun little project that automated away some of my toil, and I hope you find it useful too.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://www.nvarma.com/blog/2026-02-10-cross-publishing-blog-posts-devto-hashnode-medium/" rel="noopener noreferrer"&gt;nvarma.com&lt;/a&gt;. Follow me there for more on software architecture, engineering leadership, and the craft of building things that last.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tech</category>
      <category>webdev</category>
      <category>software</category>
      <category>ai</category>
    </item>
    <item>
      <title>The Manager–IC Pendulum and the Rise of the “Builder with Taste”</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Tue, 10 Feb 2026 03:33:00 +0000</pubDate>
      <link>https://forem.com/navinvarma/the-manager-ic-pendulum-and-the-rise-of-the-builder-with-taste-1e7b</link>
      <guid>https://forem.com/navinvarma/the-manager-ic-pendulum-and-the-rise-of-the-builder-with-taste-1e7b</guid>
      <description>&lt;p&gt;*These are my personal thoughts, experiences, and opinions, and they do not reflect the views of the company I work for.*&lt;/p&gt;

&lt;p&gt;I've been reflecting a lot lately on the pendulum swing between Engineering Manager and Individual Contributor. For those who have not read them, I would suggest reading &lt;a href="https://charity.wtf/" rel="noopener noreferrer"&gt;Charity Majors&lt;/a&gt;' &lt;a href="https://charity.wtf/2017/05/11/the-engineer-manager-pendulum/" rel="noopener noreferrer"&gt;multiple&lt;/a&gt; &lt;a href="https://charity.wtf/2019/01/04/engineering-management-the-pendulum-or-the-ladder/" rel="noopener noreferrer"&gt;posts&lt;/a&gt; on this &lt;a href="https://charity.wtf/tag/pendulum/" rel="noopener noreferrer"&gt;topic&lt;/a&gt; before coming back here. It's an emotional space I've been in for the last five years as a hands-on manager, but the last twelve months have completely changed the physics of that swing.&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>management</category>
      <category>engineering</category>
      <category>ai</category>
    </item>
    <item>
      <title>The Builder's Guilt: AI saturation makes me sad</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Sat, 31 Jan 2026 21:00:00 +0000</pubDate>
      <link>https://forem.com/navinvarma/the-builders-guilt-ai-saturation-makes-me-sad-5d70</link>
      <guid>https://forem.com/navinvarma/the-builders-guilt-ai-saturation-makes-me-sad-5d70</guid>
      <description>&lt;p&gt;One of my plans in 2026 is to generate more original content.&lt;/p&gt;

&lt;p&gt;In the age of AI saturation, original content is something that takes you back in time - nostalgic, almost. It's like when you're a kid growing up and you have TV, radio and so many other forms of entertainment, but you still write your daily journal at night or listen to your favorite songs on cassette tapes. I did that for a while, until I moved on to something else that made me happy. The memory of that daily journal habit is still very fresh: sitting down with a diary and writing what was on my mind to this unknown person. The frustrations, the joy, the doubt and the dreams of better days ahead.&lt;/p&gt;

</description>
      <category>life</category>
      <category>tech</category>
      <category>ai</category>
    </item>
    <item>
      <title>Adding Social Share Buttons to Your Astro Blog</title>
      <dc:creator>Navin Varma</dc:creator>
      <pubDate>Mon, 26 Jan 2026 09:00:00 +0000</pubDate>
      <link>https://forem.com/navinvarma/adding-social-share-buttons-to-your-astro-blog-2hj5</link>
      <guid>https://forem.com/navinvarma/adding-social-share-buttons-to-your-astro-blog-2hj5</guid>
      <description>&lt;p&gt;I tried to share one of my posts on X after my website rewrite this week and... my face showed up as the preview image. Not the article, not a nice graphic - just my profile photo staring back at everyone. Embarrassing, honestly. And then I noticed there's no way for anyone reading my new blog to share it without copying the URL like it's early days of the internet.&lt;/p&gt;

&lt;p&gt;Both of these bugged me enough that I spent a Sunday night fixing them.&lt;/p&gt;

</description>
      <category>tech</category>
      <category>webdev</category>
      <category>software</category>
    </item>
  </channel>
</rss>
