<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Pratik Pathak</title>
    <description>The latest articles on Forem by Pratik Pathak (@pratikpathak).</description>
    <link>https://forem.com/pratikpathak</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F602830%2F664eea36-3e68-40f5-b284-c40d635debd5.jpg</url>
      <title>Forem: Pratik Pathak</title>
      <link>https://forem.com/pratikpathak</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/pratikpathak"/>
    <language>en</language>
    <item>
      <title>This Trick Boosts AI Agent Memory Retrieval by 78% With No Third-Party Tools</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Thu, 09 Apr 2026 11:23:09 +0000</pubDate>
      <link>https://forem.com/pratikpathak/this-trick-boosts-ai-agent-memory-retrieval-by-78-with-no-third-party-tools-4on9</link>
      <guid>https://forem.com/pratikpathak/this-trick-boosts-ai-agent-memory-retrieval-by-78-with-no-third-party-tools-4on9</guid>
      <description>&lt;p&gt;Andrej Karpathy, co-founder of OpenAI and former Director of AI at Tesla, recently open-sourced a fascinating concept: building a “Personal Wikipedia” using AI agents. Rather than treating an LLM like a search engine that retrieves chunks of data and immediately forgets them, this system compiles your raw notes, articles, and screenshots into an interlinked, living markdown wiki.&lt;/p&gt;

&lt;h2&gt;The Problem with Traditional Note-Taking (and RAG)&lt;/h2&gt;

&lt;p&gt;Most note-taking apps are built for humans to browse. If you use Notion, Obsidian, or Apple Notes, the burden of organizing, tagging, and cross-referencing falls entirely on you. Even when using Retrieval-Augmented Generation (RAG) like NotebookLM or ChatGPT uploads, the AI retrieves context for a single answer, but it doesn’t “learn” or organize your knowledge base permanently.&lt;/p&gt;

&lt;p&gt;Karpathy’s LLM Wiki flips this paradigm. It’s optimized for the &lt;strong&gt;AI to read and write on your behalf&lt;/strong&gt;. The knowledge is compiled once, updated incrementally, and gets richer over time. One new source can ripple through 10 to 15 existing pages, flagging contradictions and linking concepts.&lt;/p&gt;

&lt;h2&gt;How the LLM Wiki Works&lt;/h2&gt;

&lt;p&gt;The architecture is surprisingly simple and doesn’t require a complex vector database or embedding pipeline. It relies on three core layers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Immutable Raw Sources:&lt;/strong&gt; A directory where you dump raw materials—PDFs, diaries, Apple Notes, iMessage threads, or screenshots.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wiki Directory:&lt;/strong&gt; The folder the AI agent owns and maintains, filled with markdown files representing different entities and concepts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Index.md:&lt;/strong&gt; A catalog file that contains one-line summaries of every page. The agent reads this first to navigate the wiki like a file system.&lt;/li&gt;
&lt;/ul&gt;
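&lt;p&gt;To make the three layers concrete, here is a minimal Python sketch (my own illustration, not code from Karpathy’s gist; the file layout is an assumption) that regenerates the &lt;code&gt;index.md&lt;/code&gt; catalog from the first heading of every wiki page:&lt;/p&gt;

```python
from pathlib import Path

def build_index(wiki_dir: str) -> str:
    """Regenerate index.md: one line per page, summarized by its first heading."""
    lines = ["# Index"]
    for page in sorted(Path(wiki_dir).glob("*.md")):
        if page.name == "index.md":
            continue  # don't index the index itself
        text = page.read_text(encoding="utf-8")
        # Use the first line (minus any leading '#') as the one-line summary
        summary = text.splitlines()[0].lstrip("# ").strip() if text.strip() else "(empty page)"
        lines.append(f"- [{page.stem}]({page.name}): {summary}")
    index = "\n".join(lines) + "\n"
    (Path(wiki_dir) / "index.md").write_text(index, encoding="utf-8")
    return index
```

&lt;p&gt;The agent reads this catalog first, then opens only the pages it needs, which is what lets it navigate the wiki like a file system instead of embedding everything.&lt;/p&gt;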

&lt;p&gt;A great example of this in the wild is Farzapedia, built by Farza on X. He fed 2,500 entries of personal data into an LLM, which spat out 400 interconnected articles. The agent now uses this wiki as a persistent memory bank, surfing through cross-referenced entries to surface inspiration for his landing pages and projects.&lt;/p&gt;

&lt;h2&gt;Installation Instructions&lt;/h2&gt;

&lt;p&gt;If you want to set up Karpathy’s LLM Wiki locally, you’ll use Claude Code (Anthropic’s terminal-based coding agent) pointing to a folder of Markdown files (like an Obsidian Vault).&lt;/p&gt;

&lt;h3&gt;Mac, Windows, and Linux Setup&lt;/h3&gt;

&lt;p&gt;The setup is identical across all platforms since it relies on Node.js.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Install Node.js:&lt;/strong&gt; Ensure you have Node.js installed on your system.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create your Vault:&lt;/strong&gt; Create a folder anywhere on your computer (e.g., &lt;code&gt;~/Documents/llm-wiki&lt;/code&gt;) to act as your knowledge base. You can use &lt;a href="https://obsidian.md/" rel="noopener noreferrer"&gt;Obsidian&lt;/a&gt; to view the markdown files visually.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install Claude Code:&lt;/strong&gt; Run the following command in your terminal to install Anthropic’s local CLI agent globally:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; @anthropic-ai/claude-code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, authenticate with your Anthropic account, navigate to your wiki folder, and launch the agent:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; ~/Documents/llm-wiki
claude
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can ask the agent to organize your raw notes, create summaries, or find cross-references across your entire vault.&lt;/p&gt;

&lt;h2&gt;Why This is the Future of Personal Knowledge&lt;/h2&gt;

&lt;p&gt;This workflow solves the “knowledge rot” problem. Instead of your notes becoming a messy graveyard of forgotten ideas, the AI agent does the boring bookkeeping. It creates pages, links them together, and treats your explorations as compounding knowledge.&lt;/p&gt;

&lt;p&gt;You can read Karpathy’s official “idea file” and architecture guide directly on his GitHub Gist.&lt;/p&gt;

&lt;p&gt;Check out the full implementation details here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f" rel="noopener noreferrer"&gt;Karpathy's LLM Wiki GitHub Gist&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Related Reading&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://pratikpathak.com/the-infinite-loop-trap-how-my-multi-agent-system-burned-200-overnight-and-how-to-fix-it/" rel="noopener noreferrer"&gt;The ‘Infinite Loop’ Trap: How My Multi-Agent System Burned $200 Overnight (And How to Fix It)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pratikpathak.com/stop-overpaying-for-rag-how-we-cut-azure-openai-costs-by-40-with-one-architecture-tweak/" rel="noopener noreferrer"&gt;Stop Overpaying for RAG: How We Cut Azure OpenAI Costs by 40% with One Architecture Tweak&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pratikpathak.com/caveman-claude-how-a-prehistoric-coding-style-cuts-ai-token-costs-by-75/" rel="noopener noreferrer"&gt;Just by changing this one setting I reduced my token usage by 75%&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>azure</category>
      <category>aiagents</category>
      <category>aiarchitecture</category>
      <category>aiworkspace</category>
    </item>
    <item>
      <title>Just by changing this one setting I reduced my token usage by 75%</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Tue, 07 Apr 2026 13:21:36 +0000</pubDate>
      <link>https://forem.com/pratikpathak/just-by-changing-this-one-setting-i-reduced-my-token-usage-by-75-428b</link>
      <guid>https://forem.com/pratikpathak/just-by-changing-this-one-setting-i-reduced-my-token-usage-by-75-428b</guid>
      <description>&lt;p&gt;In the world of AI orchestration and agentic workflows, token consumption is the ultimate hidden tax. Every time Claude, GPT-4, or Gemini responds with “Certainly! I’d be happy to help you understand this function,” you are paying for pleasantries. Over thousands of automated API calls, this conversational padding quickly spirals into massive overhead.&lt;/p&gt;

&lt;p&gt;But recently, the developer community on Hacker News and Reddit exploded over a brilliantly simple solution: the &lt;strong&gt;CAVEMAN skill&lt;/strong&gt;. By instructing large language models (LLMs) to communicate like prehistoric cavemen, developers are successfully reducing their output token usage by an astonishing 60% to 75%.&lt;/p&gt;

&lt;h2&gt;What is the Caveman Skill?&lt;/h2&gt;

&lt;p&gt;Created by GitHub user JuliusBrussee, the &lt;code&gt;caveman&lt;/code&gt; skill is an open-source system prompt (and Claude Code skill) designed to drastically compress AI output. The philosophy is straightforward: use the absolute minimum number of words necessary to convey the exact same technical meaning.&lt;/p&gt;

&lt;p&gt;Rather than relying on vague instructions like “be concise” (which models often forget mid-generation), the skill forces the AI into a strict “caveman” persona. This naturally strips out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Articles (a, an, the)&lt;/li&gt;
&lt;li&gt;Pleasantries and apologies&lt;/li&gt;
&lt;li&gt;Hedging language (“It might be worth considering…”)&lt;/li&gt;
&lt;li&gt;Verbose explanations and conjunctions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; Instead of “The function begins by taking the user input and returning a sorted list using quicksort,” the AI outputs: “Function take input. Return sorted list. Use quicksort. Fast. Done.”&lt;/p&gt;
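&lt;p&gt;The real skill achieves this purely through the system prompt, but a toy post-processor makes the mechanics tangible. The sketch below is illustrative only (it is not part of the &lt;code&gt;caveman&lt;/code&gt; repository); it strips articles and common filler words from a sentence to show how much of a typical answer is padding:&lt;/p&gt;

```python
import re

# Articles and filler words the caveman persona drops (illustrative subset)
FILLER = re.compile(
    r"\b(the|a|an|certainly|please|perhaps|simply|just|really)\b ?",
    re.IGNORECASE,
)

def cavemanize(text: str) -> str:
    """Toy filter: drop articles and filler words. The actual skill does this
    via the system prompt at generation time, not via post-processing."""
    stripped = FILLER.sub("", text)
    return re.sub(r"\s{2,}", " ", stripped).strip()

before = "The function begins by taking the user input and returning a sorted list."
after = cavemanize(before)
```

&lt;p&gt;Even this crude word-dropping shortens the sentence noticeably; prompting the model to generate tersely in the first place saves the tokens before they are ever billed.&lt;/p&gt;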

&lt;h2&gt;The Economics of Token Reduction&lt;/h2&gt;

&lt;p&gt;Why go to such lengths? AI pricing models charge per token (roughly 3/4 of a word). While input tokens are relatively cheap, output tokens can be extremely expensive.&lt;/p&gt;

&lt;p&gt;For an enterprise or a developer running continuous integration pipelines, automated code reviews, or log analysis with agents, the math adds up. A standard response might consume 200 output tokens. In Caveman mode, that drops to 50 tokens. If you’re running 10,000 queries a day, you can slash your output costs by up to 75%, saving thousands of dollars annually.&lt;/p&gt;
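&lt;p&gt;A quick back-of-envelope calculation shows the scale. The per-token price below is an assumed figure for illustration, not a quote from any provider:&lt;/p&gt;

```python
# Back-of-envelope savings estimate for the 200-token -> 50-token scenario.
PRICE_PER_OUTPUT_TOKEN = 15 / 1_000_000  # ASSUMED: $15 per 1M output tokens

queries_per_day = 10_000
verbose_tokens = 200   # typical conversational answer
caveman_tokens = 50    # same answer, compressed

daily_saving_tokens = queries_per_day * (verbose_tokens - caveman_tokens)
daily_saving_usd = daily_saving_tokens * PRICE_PER_OUTPUT_TOKEN
reduction = 1 - caveman_tokens / verbose_tokens  # fraction of output tokens saved
```

&lt;p&gt;At these assumed numbers, that is 1.5M output tokens saved per day, a 75% reduction, which over a year lands in the thousands of dollars.&lt;/p&gt;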

&lt;h3&gt;Setting Up Caveman Mode&lt;/h3&gt;

&lt;p&gt;Implementing this in your own applications or within Claude Code is surprisingly simple. You just need to inject the persona into your system prompt.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;{
  "skill_name": "caveman_mode",
  "description": "Respond with minimal tokens using primitive communication style",
  "activation_phrase": "caveman:",
  "system_injection": "Switch to caveman speak. Short. Direct. No filler. Essential info only. Grunt-level clarity. Maintain exact code blocks and technical terms."
}&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Pro Tip:&lt;/strong&gt; Ensure your prompt explicitly instructs the model to preserve code blocks, variables, and error messages &lt;em&gt;exactly&lt;/em&gt; as they are. Caveman mode should only affect the natural language explanations!&lt;/p&gt;

&lt;h2&gt;The Trade-Offs: Does it Make the AI “Dumber”?&lt;/h2&gt;

&lt;p&gt;There is an ongoing debate among machine learning practitioners regarding this technique. Modern autoregressive models use generated tokens as a form of “computational scratchpad” or Chain of Thought (CoT). In theory, forcing a model to generate fewer tokens could hamstring its ability to reason through complex problems.&lt;/p&gt;

&lt;p&gt;However, tests show that if you separate the reasoning phase (e.g., using hidden &lt;code&gt;&amp;lt;think&amp;gt;&lt;/code&gt; tags) from the final output phase, you get the best of both worlds. The model “thinks” normally in the background, but only “speaks” in caveman when delivering the final payload to the user.&lt;/p&gt;
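&lt;p&gt;In practice this separation is a few lines of post-processing. The sketch below assumes the model emits its scratchpad inside literal &lt;code&gt;&amp;lt;think&amp;gt;&lt;/code&gt; tags (an assumption; tag names vary by model) and strips the reasoning before the payload is forwarded downstream:&lt;/p&gt;

```python
import re

# Hidden reasoning block, assumed to be delimited by <think>...</think> tags
THINK_BLOCK = re.compile(r"<think>.*?</think>", re.DOTALL)

def final_payload(raw_completion: str) -> str:
    """Drop the hidden reasoning block; forward only the terse final answer."""
    return THINK_BLOCK.sub("", raw_completion).strip()

raw = "<think>List is sorted, so binary search applies.</think>Use binary search. O(log n). Done."
```

&lt;p&gt;The model still spends reasoning tokens, but only the compressed payload propagates into logs, downstream prompts, and user-visible output.&lt;/p&gt;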

&lt;p&gt;&lt;strong&gt;Warning:&lt;/strong&gt; Avoid using Caveman mode in user-facing applications like customer service chatbots. The abrasive, primitive tone is strictly for developer tools, automated pipelines, and back-end logic where human readability is secondary to efficiency.&lt;/p&gt;

&lt;h2&gt;How to Install the Caveman Skill on Any Platform&lt;/h2&gt;

&lt;p&gt;While the original &lt;code&gt;caveman&lt;/code&gt; skill was designed specifically as a drop-in for Claude Code, the underlying mechanism is just a well-crafted system prompt. You can inject this behavior into almost any AI agent framework.&lt;/p&gt;

&lt;h3&gt;1. Claude Code&lt;/h3&gt;

&lt;p&gt;If you are using Anthropic’s official Claude Code CLI, installing the skill is natively supported. Simply download the JSON configuration from the open-source repository and load it into your session:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Download the raw skill JSON
curl -O https://raw.githubusercontent.com/JuliusBrussee/caveman/main/caveman.json

# Load the skill into your Claude Code session
claude skill load caveman.json&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;2. OpenClaw&lt;/h3&gt;

&lt;p&gt;For users orchestrating agents via the open-source &lt;a href="https://github.com/openclaw/openclaw" rel="noopener noreferrer"&gt;OpenClaw&lt;/a&gt; framework, you can add this as an AgentSkill. Create a &lt;code&gt;SKILL.md&lt;/code&gt; file inside your workspace skills directory (e.g., &lt;code&gt;~/.openclaw/workspace/skills/caveman/SKILL.md&lt;/code&gt;):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;---
name: caveman
description: Forces the AI to use minimal output tokens by adopting a primitive communication style.
---

# Caveman Mode

When executing tasks under this skill, you must strictly adhere to the following rules:
- No pleasantries or filler words.
- Short, direct sentences (Subject-verb-object).
- Grunt-level clarity.
- Maintain exact code blocks, parameters, and technical terms.&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;3. LangChain / Custom Python Orchestrators&lt;/h3&gt;

&lt;p&gt;If you are building your own multi-agent orchestrator in Python using LangChain or Semantic Kernel, simply prepend the caveman rules to the &lt;code&gt;SystemMessage&lt;/code&gt; passed to your chat model.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from langchain_core.messages import SystemMessage, HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")

caveman_prompt = SystemMessage(
    content="Switch to caveman speak. Short. Direct. No filler. Essential info only. Maintain exact code blocks."
)

messages = [
    caveman_prompt,
    HumanMessage(content="Explain how a linked list works.")
]

response = llm.invoke(messages)
print(response.content)
# Output: "Linked list store data in nodes. Node have value and pointer to next node. Fast insert. Slow search."&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;4. Cursor and AI IDEs&lt;/h3&gt;

&lt;p&gt;If you’re using an AI-powered IDE like Cursor or Windsurf, or a CLI agent like Aider, you can enforce this behavior universally by modifying your project’s &lt;code&gt;.cursorrules&lt;/code&gt; or custom system prompt files. Simply add the Caveman directive to the top of your workspace rules:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Communication Style
You are operating in Caveman Mode.
- Speak like caveman.
- Short. Direct. No filler.
- Essential info only. Grunt-level clarity.
- Keep all code output and syntax perfectly intact.&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Conclusion: Embrace the Grunt&lt;/h2&gt;

&lt;p&gt;The Caveman skill is a testament to the ingenuity of the open-source developer community. By recognizing that AI models are tuned for human approval rather than computational efficiency, developers have found a way to bypass the “fluff tax.”&lt;/p&gt;

&lt;p&gt;If you are building multi-agent systems on Azure, utilizing Claude for massive refactoring tasks, or just tired of scrolling through paragraphs of polite filler, it might be time to unleash your inner caveman.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Ug. Token saved. Good.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>azure</category>
      <category>aiagentworkflows</category>
      <category>aicodereview</category>
      <category>aicostmanagement</category>
    </item>
    <item>
      <title>I Saved 80k Tokens a Day Just By Changing How My AI Agents Talk to Each Other</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Tue, 07 Apr 2026 04:30:00 +0000</pubDate>
      <link>https://forem.com/pratikpathak/i-saved-80k-tokens-a-day-just-by-changing-how-my-ai-agents-talk-to-each-other-237i</link>
      <guid>https://forem.com/pratikpathak/i-saved-80k-tokens-a-day-just-by-changing-how-my-ai-agents-talk-to-each-other-237i</guid>
      <description>&lt;p&gt;When you first build a multi-agent system, the initial architectural focus is usually on logic: getting the Researcher Agent to talk to the Writer Agent, and having the Reviewer Agent approve the output. You watch the terminal logs scroll by as your autonomous agents converse and solve complex problems. It feels like magic.&lt;/p&gt;

&lt;p&gt;Then, you look at your Azure OpenAI billing dashboard at the end of the week, and the magic immediately fades. You are burning through tokens at an alarming rate, and upon closer inspection, &lt;strong&gt;you are paying for your AI agents to be polite to each other.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this deep dive, I’m going to share the exact architectural changes I made to a production LangGraph orchestration system that reduced our daily token consumption by over 80,000 tokens—slashing our LLM costs without degrading the intelligence of the system.&lt;/p&gt;

&lt;h2&gt;The Problem: Conversational Filler in Agentic Systems&lt;/h2&gt;

&lt;p&gt;Most developers build multi-agent systems using chat-completion endpoints (like &lt;code&gt;gpt-4o&lt;/code&gt;) that are inherently fine-tuned for conversational, human-like interaction. When you instruct Agent A to pass a summary to Agent B, the default behavior of the LLM is to wrap the data in pleasantries and conversational filler.&lt;/p&gt;

&lt;p&gt;Consider a standard handoff. A Researcher Agent might output:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;“Here is the summary of the financial data you requested. As you can see, Q3 revenue was up by 15%, but operating costs increased significantly due to the cloud migration. Let me know if you need any further analysis before you write the final report!”&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is extremely inefficient for LLM-to-LLM communication. Agent B (the Writer) does not care about the pleasantries. It only needs the raw data. Furthermore, because multi-agent orchestrators like LangGraph pass the &lt;em&gt;entire message history&lt;/em&gt; into the context window for every subsequent turn, that conversational filler gets re-processed, re-tokenized, and billed on every single hop of the graph.&lt;/p&gt;
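&lt;p&gt;To see how quickly this compounds, consider a simplified model in which hop &lt;em&gt;k&lt;/em&gt; re-reads all &lt;em&gt;k&lt;/em&gt; prior messages in the history. The token counts below are illustrative, not measurements from the production system:&lt;/p&gt;

```python
def total_input_tokens(tokens_per_message: int, hops: int) -> int:
    """Each hop re-reads the full history: hop k pays for k prior messages."""
    return sum(tokens_per_message * k for k in range(1, hops + 1))

# Illustrative: 200 tokens of actual data, plus ~40 tokens of pleasantries
verbose = total_input_tokens(240, 10)  # filler re-billed on every hop
terse = total_input_tokens(200, 10)    # same data, filler stripped
wasted = verbose - terse
```

&lt;p&gt;Forty tokens of filler per message turns into thousands of wasted input tokens over a single ten-hop run, because the history is re-tokenized at every step.&lt;/p&gt;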

&lt;h2&gt;The Fix: Forcing Structured JSON Interfaces&lt;/h2&gt;

&lt;p&gt;The solution is to fundamentally change how agents perceive their communication channels. Agents should not “chat” with each other; they should invoke APIs. By forcing agents to output strict JSON schemas and completely stripping conversational dialogue, you instantly compress the token payload.&lt;/p&gt;

&lt;h2&gt;Implementation: Pydantic &amp;amp; OpenAI Structured Outputs&lt;/h2&gt;

&lt;p&gt;Using OpenAI’s Native Structured Outputs (or LangChain’s &lt;code&gt;with_structured_output&lt;/code&gt;), you can define a rigid Pydantic model for the handoff between agents:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Field&lt;/span&gt;
&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ResearcherOutput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;span class="n"&gt;revenue_growth_q3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Percentage growth in Q3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;key_cost_drivers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;List of primary expenses&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;requires_follow_up&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Flag if data is incomplete&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# Bind the schema to the LLM
&lt;/span&gt;&lt;span class="n"&gt;structured_llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;with_structured_output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ResearcherOutput&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="c1"&gt;# The agent now outputs raw JSON, no filler:
# {"revenue_growth_q3": 15.0, "key_cost_drivers": ["cloud migration"], "requires_follow_up": false}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;The Architectural Impact: Why This Saves Tokens&lt;/h2&gt;

&lt;p&gt;Switching from conversational text to strict JSON payloads yields three compounding benefits across your orchestration graph:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;1. Reduced Output Tokens:&lt;/strong&gt; The agent no longer generates the 20-40 tokens of “Here is the data…” or “Let me know if…”. Over hundreds of loops, this alone saves thousands of output tokens (which are typically more expensive than input tokens).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;2. Massive Input Token Compression:&lt;/strong&gt; Because the message history is smaller, every subsequent agent in the chain inherits a dramatically lighter context window. If Agent C needs to review the work of Agent A and Agent B, it only parses pure data, avoiding the token inflation of reading a simulated conversation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3. Deterministic Routing:&lt;/strong&gt; You no longer need to use an LLM to parse intent (e.g., asking an LLM “Did the researcher find everything?”). Because the output is a rigid schema with boolean flags like &lt;code&gt;requires_follow_up&lt;/code&gt;, you can use standard Python &lt;code&gt;if/else&lt;/code&gt; statements in your LangGraph routing nodes, completely bypassing LLM inference for control flow.&lt;/li&gt;
&lt;/ul&gt;
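&lt;p&gt;For example, the routing node for the &lt;code&gt;ResearcherOutput&lt;/code&gt; schema above reduces to plain Python. This is a sketch of the pattern (node names are illustrative, not the exact production router):&lt;/p&gt;

```python
def route_after_research(payload: dict) -> str:
    """Deterministic LangGraph-style router: the next node is decided by the
    boolean flag in the structured payload, with no LLM inference."""
    if payload["requires_follow_up"]:
        return "researcher"      # loop back for more data
    if payload["revenue_growth_q3"] is None:
        return "error_handler"   # incomplete payload, bail out
    return "writer"              # hand off to the Writer agent

next_node = route_after_research(
    {"revenue_growth_q3": 15.0,
     "key_cost_drivers": ["cloud migration"],
     "requires_follow_up": False}
)
```

&lt;p&gt;Every routing decision made this way is a branch that costs zero tokens, instead of another round trip to the model.&lt;/p&gt;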

&lt;h2&gt;The Results: From 110k to 30k Tokens Daily&lt;/h2&gt;

&lt;p&gt;By enforcing strict JSON handoffs and replacing LLM-based intent parsing with deterministic Python routing logic on the structured payloads, the daily token burn of the workflow plummeted from ~110,000 tokens to under 30,000 tokens.&lt;/p&gt;

&lt;p&gt;The agents didn’t get dumber—in fact, they became more reliable because the structured data prevented hallucinated context drift. When building multi-agent systems, always remember: your agents are microservices, not colleagues. Make them communicate via APIs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/langchain-ai/langchain" rel="noopener noreferrer"&gt;Explore LangChain Structured Outputs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Related Reading:&lt;/strong&gt; Once your agents are communicating efficiently, you need to ensure their state is stored durably. Check out my architectural comparison on &lt;a href="https://pratikpathak.com/managing-state-in-multi-agent-workflows-redis-vs-cosmos-db-in-production/" rel="noopener noreferrer"&gt;Managing State in Multi-Agent Workflows: Redis vs Cosmos DB&lt;/a&gt;, and learn how to orchestrate them at scale in &lt;a href="https://pratikpathak.com/orchestrating-ai-agents-langgraph-vs-azure-ai-agents/" rel="noopener noreferrer"&gt;Orchestrating AI Agents: LangGraph vs Azure AI Agents&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>azure</category>
      <category>errorwithlangchainch</category>
      <category>accessazureblobstora</category>
      <category>agentorchestration</category>
    </item>
    <item>
      <title>Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project]</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Thu, 26 Mar 2026 09:09:19 +0000</pubDate>
      <link>https://forem.com/pratikpathak/top-25-go-projects-for-beginners-with-source-code-github-2026-latest-project-439h</link>
      <guid>https://forem.com/pratikpathak/top-25-go-projects-for-beginners-with-source-code-github-2026-latest-project-439h</guid>
      <description>&lt;h2&gt;Table of Contents&lt;/h2&gt;

&lt;p&gt;Hey Everyone! Today I have curated a massive list of 25+ Go projects for beginners, complete with source code. Go (or Golang) is known for its simplicity, concurrency, and blazing-fast performance. Whether you are just starting out or looking to build a robust portfolio, these projects are up to date and available on GitHub.&lt;/p&gt;

&lt;p&gt;If you want to master Go, the best way is to get your hands dirty with real-world applications. You can check out my GitHub for more curated lists and projects. If you want to contribute, feel free to open a Pull Request!&lt;/p&gt;

&lt;p&gt;Without any further ado, let’s start! 🚀&lt;/p&gt;

&lt;p&gt;The 25+ Go Projects for beginners with source code are –&lt;/p&gt;

&lt;h2&gt;1. Command-Line Guessing Game&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_miguelcrwz_guess.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_miguelcrwz_guess.png" alt="Command-Line Guessing Game" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 1"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 21&lt;/p&gt;

&lt;p&gt;A command-line guessing game written in Go. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/miguelcrwz/guess" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;2. CLI Calculator&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_google_mtail.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_google_mtail.png" alt="CLI Calculator" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 2"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 22&lt;/p&gt;

&lt;p&gt;A tool to extract internal monitoring data from application logs for collection in a time-series database. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/google/mtail" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;3. To-Do List CLI App&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_thewhitetulip_Tasks.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_thewhitetulip_Tasks.png" alt="To-Do List CLI App" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 3"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 23&lt;/p&gt;

&lt;p&gt;A simplistic to-do list manager written in Go. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/thewhitetulip/Tasks" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;4. Weather Fetcher API&lt;/h2&gt;

&lt;p&gt;Weather via the command line. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/genuinetools/weather" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;5. URL Shortener&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_utkusen_urlhunter.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_utkusen_urlhunter.png" alt="URL Shortener" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 4"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 24&lt;/p&gt;

&lt;p&gt;A recon tool that allows searching on URLs that are exposed via shortener services. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/utkusen/urlhunter" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;6. Markdown to HTML Converter&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_yuin_goldmark.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_yuin_goldmark.png" alt="Markdown to HTML Converter" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 5"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 25&lt;/p&gt;

&lt;p&gt;🏆 A markdown parser written in Go. Easy to extend, standard (CommonMark) compliant, and well structured. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/yuin/goldmark" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;7. Password Generator&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_1Password_spg.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_1Password_spg.png" alt="Password Generator" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 6"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 26&lt;/p&gt;

&lt;p&gt;1Password’s Strong Password Generator, published as a Go package. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/1Password/spg" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;8. Bulk File Renamer&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_Vein05_nomnom.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_Vein05_nomnom.png" alt="Bulk File Renamer" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 7"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 27&lt;/p&gt;

&lt;p&gt;A Go CLI tool for bulk renaming and organizing files with genAI. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/Vein05/nomnom" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;9. Pomodoro Timer&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_ayoisaiah_focus.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_ayoisaiah_focus.png" alt="Pomodoro Timer" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 8"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 28&lt;/p&gt;

&lt;p&gt;A fully featured productivity timer for the command line, based on the Pomodoro Technique. Supports Linux, Windows, and macOS. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ayoisaiah/focus" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;10. Simple Web Server&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_gin-gonic_gin.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_gin-gonic_gin.png" alt="Simple Web Server" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 9"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 29&lt;/p&gt;

&lt;p&gt;Gin is a high-performance HTTP web framework written in Go. It provides a Martini-like API but with significantly better performance—up to 40 times faster—thanks to httprouter. Gin is designed for building REST APIs, web applications, and microservices. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/gin-gonic/gin" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;11. Typing Speed Test (TUI)&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_joebyjo_Typing-Speed-TUI.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_joebyjo_Typing-Speed-TUI.png" alt="11. Typing Speed Test (TUI)" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 10"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 30&lt;/p&gt;

&lt;p&gt;A simple and efficient typing speed test program built with Python and using only built-in libraries. This console-based application tracks your performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/joebyjo/Typing-Speed-TUI" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;12. TCP Port Scanner&lt;/h2&gt;

&lt;p&gt;A TCP port scanner that takes target IP addresses whose ports need to be scanned. It can parse a single IP address as well as CIDR notation, which can be used for host discovery.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/shahnitav/Go-Port-Scanner" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;13. Discord Bot&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_bwmarrin_discordgo.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_bwmarrin_discordgo.png" alt="13. Discord Bot" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 11"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 31&lt;/p&gt;

&lt;p&gt;DiscordGo is a Go package that provides low-level bindings to the Discord chat client API, with nearly complete support for the Discord API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/bwmarrin/discordgo" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;14. System Monitor (Task Manager)&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_ivan-penchev_system-monitor-tui.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_ivan-penchev_system-monitor-tui.png" alt="14. System Monitor (Task Manager)" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 12"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 32&lt;/p&gt;

&lt;p&gt;System Monitor is a Text User Interface (TUI) application written in Go using the bubbletea TUI framework. The software was developed as a learning project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ivan-penchev/system-monitor-tui" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;15. QR Code Generator&lt;/h2&gt;

&lt;p&gt;go-qrcode helps gophers generate QR codes with customized styles, such as color, block size, block shape, and icon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/yeqown/go-qrcode" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;16. Mini Static Site Generator&lt;/h2&gt;

&lt;p&gt;Hugo is a static site generator written in Go, optimized for speed and designed for flexibility. With its advanced templating system and fast asset pipelines, it can build even large sites in seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/gohugoio/hugo" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;17. Snake Game (Terminal)&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_topics_snake-game.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="420" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_topics_snake-game.png" alt="17. Snake Game (Terminal)" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 13"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 33&lt;/p&gt;

&lt;p&gt;This game clones all the core features of Slither.io, including mouse-following controls, snake collisions, food, snake growth, eyes, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/topics/snake-game" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;18. Tic-Tac-Toe (CLI)&lt;/h2&gt;

&lt;p&gt;A fun tic-tac-toe game written in Go.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/neocotic/go-tic-tac-toe" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;19. Terminal Text Editor&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_grindlemire_go-tui.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_grindlemire_go-tui.png" alt="19. Terminal Text Editor" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 14"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 34&lt;/p&gt;

&lt;p&gt;The tui lsp language server works with any editor that speaks JSON-RPC over stdio. It proxies Go-specific features through gopls with .gsx ↔ .go source mapping.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/grindlemire/go-tui" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;20. Simple JSON Parser&lt;/h2&gt;

&lt;p&gt;jsonparser is up to 10 times faster than the standard encoding/json package (depending on payload size and usage) and allocates no memory.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/buger/jsonparser" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;21. Base64 Encoder/Decoder&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_golang_go.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_golang_go.png" alt="21. Base64 Encoder/Decoder" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 15"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 35&lt;/p&gt;

&lt;p&gt;The Go standard library’s base64 package implements base64 encoding as specified by RFC 4648.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/golang/go" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;22. Key-Value In-Memory Database&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_olric-data_olric.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_olric-data_olric.png" alt="22. Key-Value In-Memory Database" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 16"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 36&lt;/p&gt;

&lt;p&gt;It’s a distributed, in-memory key/value store and cache, written entirely in Go and designed specifically for distributed environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/olric-data/olric" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;23. Simple Regex Engine&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_dlclark_regexp2.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_dlclark_regexp2.png" alt="23. Simple Regex Engine" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 17"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 37&lt;/p&gt;

&lt;p&gt;Regexp2 is a feature-rich regular expression engine for Go. It doesn’t have the constant-time guarantees of the built-in regexp package, but it allows backtracking and supports features the standard engine omits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/dlclark/regexp2" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;24. TCP Chat Application&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_liaotxcn_Weave.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_liaotxcn_Weave.png" alt="TCP Chat Application" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 18"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 38&lt;/p&gt;

&lt;p&gt;A highly efficient, secure, and stable application development platform with excellent performance, easy scalability, and deep integration of AI capabilities such as LLM, AI Chat, RAG, and Agents. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/liaotxcn/Weave" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;25. Web Scraper&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_go-rod_rod.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_go-rod_rod.png" alt="Web Scraper" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 19"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 39&lt;/p&gt;

&lt;p&gt;A Chrome DevTools Protocol driver for web automation and scraping. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/go-rod/rod" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;26. Image Resizer CLI&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_imgproxy_imgproxy.png" class="article-body-image-wrapper"&gt;&lt;img width="800" height="400" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2026%2F03%2Fgo_project_imgproxy_imgproxy.png" alt="Image Resizer CLI" title="Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 20"&gt;&lt;/a&gt;Top 25+ Go Projects for Beginners with Source Code Github [2026 Latest Project] 40&lt;/p&gt;

&lt;p&gt;A fast and secure standalone server for resizing, processing, and converting images on the fly. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/imgproxy/imgproxy" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;27. Mini Git Clone&lt;/h2&gt;

&lt;p&gt;s3git: git for Cloud Storage. Distributed Version Control for Data. Create decentralized and versioned repos that scale infinitely to 100s of millions of files. Clone huge PB-scale repos on your local SSD to make changes, commit and push back. Oh yeah, it dedupes too and offers directory versioning. This project is a great way to learn Go’s concurrency model, standard library, and efficient syntax.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/s3git/s3git" rel="noopener noreferrer"&gt;Source Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;This is the ultimate list of 25+ updated Go projects for beginners. Whether you want to build blazing-fast CLI tools, backend APIs, or terminal games, Go gives you the performance and safety you need. If you want to dive deeper, grab a project, read its source code on GitHub, and start building!&lt;/p&gt;

&lt;p&gt;If you found this list helpful, feel free to share it or open a Pull Request to add your own project to my repository.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>intelligence</category>
      <category>bestgolangprojectsfo</category>
      <category>buildawebserveringo</category>
    </item>
    <item>
      <title>I Completely Moved from Google Cloud AI to Azure OpenAI Service Because of This One Feature</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Thu, 08 Jan 2026 15:31:44 +0000</pubDate>
      <link>https://forem.com/pratikpathak/i-completely-moved-from-google-cloud-ai-to-azure-openai-service-because-of-this-one-feature-3dfj</link>
      <guid>https://forem.com/pratikpathak/i-completely-moved-from-google-cloud-ai-to-azure-openai-service-because-of-this-one-feature-3dfj</guid>
      <description>&lt;p&gt;I need to have a conversation with myself about what happened last year. About why I ripped out months of Google Cloud AI integration work and moved everything to Azure OpenAI Service. And about the one feature that made that painful decision surprisingly easy.&lt;/p&gt;

&lt;p&gt;Let me back up a bit.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Google Cloud AI Chapter
&lt;/h2&gt;

&lt;p&gt;I was all in on Google Cloud. Vertex AI, PaLM 2, the whole ecosystem. For a startup building AI-powered applications, Google seemed like the obvious choice. They invented the transformer architecture, after all. They had the research pedigree.&lt;/p&gt;

&lt;p&gt;The initial integration was smooth enough. The APIs worked. The models were capable. We built features, shipped products, and things were moving.&lt;/p&gt;

&lt;p&gt;Then the enterprise clients started calling.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Enterprise Knocks
&lt;/h3&gt;

&lt;p&gt;Our first big enterprise deal came with a 47-page security questionnaire. And that’s when things started to unravel.&lt;/p&gt;

&lt;p&gt;“Where is our data processed?”&lt;/p&gt;

&lt;p&gt;Well, it depends on which model and which region, and honestly, the documentation wasn’t always clear.&lt;/p&gt;

&lt;p&gt;“Can we ensure data never leaves our geographic region?”&lt;/p&gt;

&lt;p&gt;Technically yes, but the configuration was complex and the guarantees weren’t always explicit.&lt;/p&gt;

&lt;p&gt;“How do you prevent the model from generating inappropriate content?”&lt;/p&gt;

&lt;p&gt;We had built our own content filtering layer. It worked, mostly. But it was another thing we had to maintain, monitor, and explain to auditors.&lt;/p&gt;

&lt;p&gt;“Can you connect this to our existing Azure Active Directory?”&lt;/p&gt;

&lt;p&gt;That’s when I realized we had a problem.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Authentication Nightmare
&lt;/h3&gt;

&lt;p&gt;Our enterprise clients lived in Microsoft land. Azure AD for identity. Microsoft 365 for collaboration. Azure for their infrastructure. And here I was, trying to convince them that Google Cloud AI was the right choice.&lt;/p&gt;

&lt;p&gt;Every integration became a bridge-building exercise. OAuth tokens here, service accounts there, custom middleware everywhere. We were spending more time on authentication plumbing than on actual AI features.&lt;/p&gt;

&lt;p&gt;I kept telling myself it was fine. This is just what enterprise software looks like. Complexity is normal.&lt;/p&gt;

&lt;p&gt;But it wasn’t fine.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Discovery
&lt;/h2&gt;

&lt;p&gt;I was at a conference, half-listening to a talk about Azure OpenAI Service, when something caught my attention.&lt;/p&gt;

&lt;p&gt;“Native Azure AD integration. Private endpoints. Content filtering built in. Data never leaves your tenant.”&lt;/p&gt;

&lt;p&gt;Wait, what?&lt;/p&gt;

&lt;p&gt;I started digging. And what I found changed everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Game-Changing Feature: Enterprise-Grade AI with Native Azure Integration
&lt;/h2&gt;

&lt;p&gt;Here’s what Azure OpenAI Service offers that I couldn’t replicate with Google Cloud, no matter how much custom code I wrote:&lt;/p&gt;

&lt;h3&gt;
  
  
  Native Azure Active Directory Integration
&lt;/h3&gt;

&lt;p&gt;This is the feature. The one that made me move everything.&lt;/p&gt;

&lt;p&gt;With Azure OpenAI Service, authentication is just… Azure AD. The same identity system my enterprise clients already use. The same conditional access policies. The same security controls.&lt;/p&gt;

&lt;p&gt;No more OAuth dance. No more service account juggling. No more explaining why we need a separate identity system for AI features.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from azure.identity import DefaultAzureCredential
from openai import AzureOpenAI

# That's it. That's the authentication.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com/",
    azure_ad_token=token.token,
    api_version="2024-02-15-preview"
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compare that to the authentication wrapper code I had built for Google Cloud. The Azure version is maybe 10% of the complexity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Private Endpoints and VNet Integration
&lt;/h3&gt;

&lt;p&gt;Enterprise clients don’t want their data traveling over the public internet. With Azure OpenAI Service, I can deploy the endpoint inside a Virtual Network. The API calls never leave the private network.&lt;/p&gt;

&lt;p&gt;This isn’t just a security checkbox. It’s a fundamental architecture difference. My clients’ data stays in their network. Full stop.&lt;/p&gt;

&lt;p&gt;With Google Cloud, I was routing traffic through Cloud Armor, setting up VPC Service Controls, and still having conversations with security teams about data flow. With Azure, I show them the private endpoint configuration and the conversation is over.&lt;/p&gt;

&lt;h3&gt;
  
  
  Built-in Content Filtering
&lt;/h3&gt;

&lt;p&gt;Remember that content filtering layer I mentioned? The one we built ourselves?&lt;/p&gt;

&lt;p&gt;Azure OpenAI Service has content filtering built in. Not as an add-on. Not as a separate service. Built directly into the API.&lt;/p&gt;

&lt;p&gt;The system automatically detects and filters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hate speech and discrimination&lt;/li&gt;
&lt;li&gt;Violence and self-harm content&lt;/li&gt;
&lt;li&gt;Sexual content&lt;/li&gt;
&lt;li&gt;Jailbreak attempts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And here’s the beautiful part: it’s configurable. Enterprise clients can adjust the sensitivity levels based on their use case. A healthcare application might need different settings than a creative writing tool.&lt;/p&gt;

&lt;p&gt;I deleted about 3,000 lines of content filtering code. Three thousand lines of regex patterns, ML classifiers, and edge case handling. Gone.&lt;/p&gt;

&lt;h3&gt;
  
  
  Regional Data Residency
&lt;/h3&gt;

&lt;p&gt;“Can you guarantee our data stays in the EU?”&lt;/p&gt;

&lt;p&gt;With Azure OpenAI Service: “Yes. The resource is deployed in West Europe. Data is processed in West Europe. Here’s the documentation.”&lt;/p&gt;

&lt;p&gt;That conversation used to take weeks of back-and-forth with Google Cloud support, custom BAAs, and legal review. Now it takes five minutes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Provisioned Throughput Units (PTUs)
&lt;/h3&gt;

&lt;p&gt;For production workloads, Azure offers PTUs. You reserve capacity, you get guaranteed throughput, and your costs become predictable.&lt;/p&gt;

&lt;p&gt;This matters more than you might think. Enterprise procurement teams want to know exactly what they’re paying. “It depends on usage” doesn’t fly in annual budget planning. PTUs give me a fixed monthly cost I can put in a contract.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Migration Journey
&lt;/h2&gt;

&lt;p&gt;Let me be honest: the migration wasn’t trivial. But it was easier than I expected.&lt;/p&gt;

&lt;h3&gt;
  
  
  Week 1-2: Proof of Concept
&lt;/h3&gt;

&lt;p&gt;I picked our simplest AI feature and rebuilt it on Azure OpenAI Service. Same prompts, same logic, different backend. The results were identical. GPT-4 is GPT-4, regardless of which cloud serves it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Week 3-4: Authentication Refactor
&lt;/h3&gt;

&lt;p&gt;This was actually the biggest win. I ripped out our entire custom authentication layer and replaced it with Azure AD integration. The codebase got smaller. The security posture got better. The clients got happier.&lt;/p&gt;

&lt;h3&gt;
  
  
  Week 5-6: Enterprise Features
&lt;/h3&gt;

&lt;p&gt;Private endpoints. Content filtering configuration. Diagnostic logging to Azure Monitor. All the enterprise features that used to require custom code were now just configuration.&lt;/p&gt;

&lt;h3&gt;
  
  
  Week 7-8: Production Cutover
&lt;/h3&gt;

&lt;p&gt;We ran both systems in parallel for two weeks, comparing results and performance. Then we flipped the switch. No drama. No data loss. No angry clients.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Results
&lt;/h2&gt;

&lt;p&gt;Let me quantify what changed:&lt;/p&gt;

&lt;h3&gt;
  
  
  Passed Security Audits
&lt;/h3&gt;

&lt;p&gt;Our next enterprise security assessment took half the time. The auditors understood Azure. They trusted Microsoft’s compliance certifications. We weren’t explaining custom authentication schemes anymore.&lt;/p&gt;

&lt;h3&gt;
  
  
  40% Cost Reduction
&lt;/h3&gt;

&lt;p&gt;This surprised me. Between eliminating our content filtering infrastructure, simplifying our authentication layer, and using PTUs for predictable pricing, our total AI costs dropped significantly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Faster Time-to-Production
&lt;/h3&gt;

&lt;p&gt;New enterprise clients used to take 6-8 weeks to onboard, mostly spent on security review and custom integration. Now it’s 2-3 weeks. Same clients, same requirements, less friction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better Compliance Posture
&lt;/h3&gt;

&lt;p&gt;Azure OpenAI Service is covered under Microsoft’s compliance certifications: SOC 2, ISO 27001, HIPAA BAA, GDPR. I’m not arguing about our custom implementation anymore. I’m pointing to Microsoft’s audit reports.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is Azure OpenAI Perfect?
&lt;/h2&gt;

&lt;p&gt;No. Let me be balanced here.&lt;/p&gt;

&lt;p&gt;The model availability can lag behind OpenAI’s direct API. When a new model drops, Azure might be a few weeks behind. For cutting-edge research, that matters.&lt;/p&gt;

&lt;p&gt;The pricing for low-volume usage isn’t always competitive. If you’re building a side project with minimal traffic, the pay-as-you-go rates might be higher than alternatives.&lt;/p&gt;

&lt;p&gt;And if you’re not already in the Microsoft ecosystem, the Azure learning curve is real. I was lucky that my enterprise clients pulled me in that direction anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;I’m talking to myself here, processing a decision I made under pressure that turned out to be exactly right.&lt;/p&gt;

&lt;p&gt;The feature that changed everything wasn’t a model capability. It wasn’t benchmark performance. It was enterprise integration done properly.&lt;/p&gt;

&lt;p&gt;Azure OpenAI Service understood something Google Cloud didn’t: enterprise AI isn’t just about the models. It’s about how those models fit into existing security, identity, and compliance frameworks.&lt;/p&gt;

&lt;p&gt;For anyone building AI applications for enterprise clients, struggling with the same authentication headaches and compliance conversations I was having, take a hard look at Azure OpenAI Service.&lt;/p&gt;

&lt;p&gt;The grass isn’t just greener. It’s enterprise-grade green. And that makes all the difference.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>azure</category>
      <category>gcp</category>
      <category>openai</category>
    </item>
    <item>
      <title>I Completely Moved from AWS Lambda to Azure Functions Because of This One Feature</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Sat, 20 Dec 2025 18:27:03 +0000</pubDate>
      <link>https://forem.com/pratikpathak/i-completely-moved-from-aws-lambda-to-azure-functions-because-of-this-one-feature-262b</link>
      <guid>https://forem.com/pratikpathak/i-completely-moved-from-aws-lambda-to-azure-functions-because-of-this-one-feature-262b</guid>
      <description>&lt;p&gt;You know that moment when you’re deep into building something and you realize you’ve been doing it the hard way all along? That was me, staring at my AWS console at 2 AM, managing yet another collection of Lambda functions stitched together with Step Functions, SQS queues, and DynamoDB tables just to handle what should have been a straightforward workflow.&lt;/p&gt;

&lt;p&gt;I’m talking to myself here, really. Because if past-me had known what I know now about Azure Durable Functions, I would have saved myself months of architectural headaches.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AWS Lambda Reality Check
&lt;/h2&gt;

&lt;p&gt;Let me paint you a picture of what my life looked like before the switch.&lt;/p&gt;

&lt;p&gt;I was building an order processing system. Nothing too crazy—receive an order, validate inventory, process payment, update the database, send notifications. The kind of stuff that sounds simple until you try to make it reliable.&lt;/p&gt;

&lt;p&gt;With AWS Lambda, every function is stateless. That’s by design, and for simple use cases, it’s actually fine. But here’s where things got messy:&lt;/p&gt;

&lt;h3&gt;
  
  
  The State Management Nightmare
&lt;/h3&gt;

&lt;p&gt;To track where an order was in the pipeline, I needed DynamoDB. To pass data between functions, I needed SQS. To orchestrate the whole thing, I needed Step Functions. Suddenly, my "simple" workflow involved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;5 Lambda functions&lt;/li&gt;
&lt;li&gt;2 DynamoDB tables (one for state, one for dead-letter tracking)&lt;/li&gt;
&lt;li&gt;3 SQS queues&lt;/li&gt;
&lt;li&gt;1 Step Functions state machine&lt;/li&gt;
&lt;li&gt;A partridge in a pear tree (okay, maybe not that last one)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every time something failed, I had to trace through logs across multiple services. Every time I wanted to add a step, I had to update the Step Functions definition, which used its own JSON-based Amazon States Language. It felt like I was building infrastructure instead of features.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Human Interaction Problem
&lt;/h3&gt;

&lt;p&gt;Here’s where it really fell apart. We needed approval workflows. A manager had to approve orders over a certain amount before they could be processed.&lt;/p&gt;

&lt;p&gt;In Lambda land, this meant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store the pending state somewhere&lt;/li&gt;
&lt;li&gt;Set up an API Gateway endpoint to receive the approval&lt;/li&gt;
&lt;li&gt;Poll or use callbacks to resume the workflow&lt;/li&gt;
&lt;li&gt;Handle timeouts (what if they never respond?)&lt;/li&gt;
&lt;li&gt;Deal with all the edge cases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I wrote about 400 lines of code just for this one feature. And honestly? It was fragile. I was always worried something would break.&lt;/p&gt;

&lt;h2&gt;
  
  
  Discovering Durable Functions (The Moment Everything Changed)
&lt;/h2&gt;

&lt;p&gt;I stumbled onto Azure Durable Functions while researching alternatives. At first, I was skeptical. Another vendor promising magic? Sure.&lt;/p&gt;

&lt;p&gt;But then I saw the code examples. And I’m not exaggerating when I say my jaw dropped.&lt;/p&gt;

&lt;p&gt;Here’s what a human interaction workflow looks like in Durable Functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[FunctionName("ApprovalWorkflow")]
public static async Task&amp;lt;bool&amp;gt; RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var order = context.GetInput&amp;lt;Order&amp;gt;();

    // Check if approval is needed
    if (order.Amount &amp;gt; 1000)
    {
        // Wait for external event - could be hours or days
        var approved = await context.WaitForExternalEvent&amp;lt;bool&amp;gt;("ApprovalReceived");

        if (!approved)
        {
            return false;
        }
    }

    // Continue with processing
    await context.CallActivityAsync("ProcessOrder", order);
    await context.CallActivityAsync("SendNotification", order);

    return true;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it. The framework handles the state. It handles the timeout. It handles persistence. If the function host restarts, it picks up exactly where it left off.&lt;/p&gt;

&lt;p&gt;400 lines of AWS code became 20 lines of Azure code. I’m not making this up.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Five Patterns That Sold Me
&lt;/h2&gt;

&lt;p&gt;Durable Functions supports five key patterns that address exactly the pain points I was experiencing:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Function Chaining
&lt;/h3&gt;

&lt;p&gt;Call functions in sequence, passing the output of one to the input of the next. The orchestrator manages the state between calls automatically.&lt;/p&gt;
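
&lt;p&gt;In code, chaining is just sequential awaits. A minimal sketch (the activity names are invented for the example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[FunctionName("ChainingOrchestrator")]
public static async Task&amp;lt;string&amp;gt; RunOrchestrator(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    // Each activity's output feeds the next; the framework
    // checkpoints the intermediate state between calls.
    var validated = await context.CallActivityAsync&amp;lt;string&amp;gt;("ValidateOrder", context.GetInput&amp;lt;string&amp;gt;());
    var charged = await context.CallActivityAsync&amp;lt;string&amp;gt;("ChargePayment", validated);
    return await context.CallActivityAsync&amp;lt;string&amp;gt;("RecordResult", charged);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the host crashes between "ChargePayment" and "RecordResult", the orchestrator replays, skips the completed activities using the stored history, and resumes at the right step.&lt;/p&gt;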

&lt;h3&gt;
  
  
  2. Fan-Out/Fan-In
&lt;/h3&gt;

&lt;p&gt;Need to process 100 items in parallel and then aggregate the results? Durable Functions does this natively. No managing queues, no tracking completion states manually.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var tasks = new List&amp;lt;Task&amp;lt;int&amp;gt;&amp;gt;();
foreach (var item in items)
{
    tasks.Add(context.CallActivityAsync&amp;lt;int&amp;gt;("ProcessItem", item));
}

var results = await Task.WhenAll(tasks);
var total = results.Sum();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Async HTTP APIs
&lt;/h3&gt;

&lt;p&gt;Long-running operations with status polling built right in. The framework gives you status endpoints automatically.&lt;/p&gt;
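
&lt;p&gt;Here’s roughly what the HTTP starter looks like; treat it as a sketch rather than copy-paste (the &lt;code&gt;Order&lt;/code&gt; type is from my workflow above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[FunctionName("HttpStart")]
public static async Task&amp;lt;HttpResponseMessage&amp;gt; HttpStart(
    [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
    [DurableClient] IDurableOrchestrationClient starter)
{
    var order = await req.Content.ReadAsAsync&amp;lt;Order&amp;gt;();
    string instanceId = await starter.StartNewAsync("ApprovalWorkflow", order);

    // 202 Accepted plus auto-generated status-query and
    // terminate URLs for this orchestration instance
    return starter.CreateCheckStatusResponse(req, instanceId);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;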

&lt;h3&gt;
  
  
  4. Monitor Pattern
&lt;/h3&gt;

&lt;p&gt;Polling workflows that check conditions periodically until something happens. Perfect for waiting for external systems.&lt;/p&gt;
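
&lt;p&gt;A monitor is basically a polling loop built on durable timers, so nothing is running (or billed) while it sleeps. A sketch, with the activity name invented:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[FunctionName("MonitorShipment")]
public static async Task RunMonitor(
    [OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var orderId = context.GetInput&amp;lt;string&amp;gt;();
    var expiry = context.CurrentUtcDateTime.AddHours(6);

    while (context.CurrentUtcDateTime &amp;lt; expiry)
    {
        var shipped = await context.CallActivityAsync&amp;lt;bool&amp;gt;("CheckShipmentStatus", orderId);
        if (shipped)
        {
            break;
        }

        // Durable timer: the orchestrator is unloaded until it fires
        await context.CreateTimer(context.CurrentUtcDateTime.AddMinutes(5), CancellationToken.None);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;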

&lt;h3&gt;
  
  
  5. Human Interaction
&lt;/h3&gt;

&lt;p&gt;Wait for external events with timeouts and escalation built in. This is the feature that changed everything for me.&lt;/p&gt;
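
&lt;p&gt;And the timeout story is just as short. &lt;code&gt;WaitForExternalEvent&lt;/code&gt; has an overload that takes a timeout and a default value, so the wait-or-give-up logic collapses to one call (the 72-hour window and the escalation activity are made up for the example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Wait up to 72 hours for a manager to respond; if nobody does,
// the task completes with the default value (false)
var approved = await context.WaitForExternalEvent&amp;lt;bool&amp;gt;(
    "ApprovalReceived", TimeSpan.FromHours(72), false);

if (!approved)
{
    await context.CallActivityAsync("EscalateOrCancel", order);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;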

&lt;h2&gt;
  
  
  My Migration Journey
&lt;/h2&gt;

&lt;p&gt;I won’t pretend the migration was instant. Here’s what the process actually looked like:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 1-2: Learning Curve&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I spent time understanding the orchestrator function model. The key insight is that orchestrator functions replay from the beginning each time they resume, using the stored history to skip steps that have already run; that’s why your orchestrator code has to be deterministic. Once that clicked, everything made sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 3-4: Pilot Migration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I picked my simplest workflow and rewrote it in Durable Functions. Deployed it alongside the AWS version, compared results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 5-6: Complex Workflows&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Migrated the order processing system. This was the real test. The Durable Functions version was not only shorter but also easier to debug. I could see the orchestration history, replay failed instances, and understand exactly what happened at each step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Week 7-8: Full Cutover&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Decommissioned the AWS infrastructure. Goodbye Step Functions. Goodbye state management DynamoDB tables. Goodbye complexity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Results
&lt;/h2&gt;

&lt;p&gt;Let me be honest about what actually improved:&lt;/p&gt;

&lt;h3&gt;
  
  
  60% Less Code
&lt;/h3&gt;

&lt;p&gt;The order processing workflow went from roughly 1,200 lines across multiple services to about 450 lines in Durable Functions. Less code means fewer bugs and easier maintenance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Simplified Architecture
&lt;/h3&gt;

&lt;p&gt;I went from 5 AWS services to 2 Azure services (Functions and Storage). The mental model is simpler. Debugging is faster.&lt;/p&gt;

&lt;h3&gt;
  
  
  Better Observability
&lt;/h3&gt;

&lt;p&gt;The Durable Functions framework gives you a built-in history of every orchestration. I can see exactly when each step executed, what the inputs and outputs were, and where failures occurred.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Reduction
&lt;/h3&gt;

&lt;p&gt;This surprised me. By eliminating Step Functions (which charges per state transition) and reducing DynamoDB usage, my monthly bill dropped by about 40%. Your mileage may vary, but for stateful workflows, the pricing model worked out better.&lt;/p&gt;

&lt;h3&gt;
  
  
  Faster Development
&lt;/h3&gt;

&lt;p&gt;New features that would have taken a week now take a day. The abstractions are at the right level—not too high that you lose control, not too low that you’re building infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is It All Perfect?
&lt;/h2&gt;

&lt;p&gt;No. Let me be real about the tradeoffs.&lt;/p&gt;

&lt;p&gt;The replay model takes getting used to. You have to be careful about non-deterministic code in orchestrators (like calling DateTime.Now directly). There’s a learning curve.&lt;/p&gt;
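
&lt;p&gt;Concretely, the &lt;code&gt;DateTime.Now&lt;/code&gt; trap and its fix look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Don't do this in an orchestrator - the value changes on every replay:
// var deadline = DateTime.Now.AddHours(1);

// Do this - the value is recorded in the orchestration history
// and replayed identically:
var deadline = context.CurrentUtcDateTime.AddHours(1);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;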

&lt;p&gt;If you’re already invested in the AWS ecosystem with expertise in Step Functions, the migration cost might not be worth it for simple use cases.&lt;/p&gt;

&lt;p&gt;And if you’re building genuinely simple, stateless functions, regular Lambda or Azure Functions (without Durable) is perfectly fine.&lt;/p&gt;

&lt;p&gt;But if you’re building anything that involves complex workflows, human interaction, long-running processes, or coordination between multiple steps—Durable Functions is a game-changer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;I’m talking to myself here, and to anyone who’s been in that 2 AM situation, wrestling with complexity that shouldn’t exist.&lt;/p&gt;

&lt;p&gt;Sometimes the right tool makes all the difference. For me, Azure Durable Functions was that tool. It let me focus on business logic instead of infrastructure plumbing.&lt;/p&gt;

&lt;p&gt;If you’re hitting the same walls I was hitting with Lambda, give it a look. The worst case is you learn something new. The best case is you find yourself wondering why you didn’t switch sooner.&lt;/p&gt;

&lt;p&gt;That’s where I am now. And honestly? I’m not looking back.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>azure</category>
      <category>awslambda</category>
      <category>awsstepfunctions</category>
    </item>
    <item>
      <title>How I Finally Got g++ Working on Windows (And You Can Too!)</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Thu, 11 Sep 2025 05:30:42 +0000</pubDate>
      <link>https://forem.com/pratikpathak/how-i-finally-got-g-working-on-windows-and-you-can-too-4fkf</link>
      <guid>https://forem.com/pratikpathak/how-i-finally-got-g-working-on-windows-and-you-can-too-4fkf</guid>
      <description>&lt;p&gt;&lt;a href="https://i.giphy.com/media/yYSSBtDgbbRzq/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/yYSSBtDgbbRzq/giphy.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 1" alt="Windows Developer Frustration" width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;So here I am again, staring at my Windows machine, wondering why something as simple as compiling C++ code has to be such a headache. You’d think after all these years, Microsoft would just include a decent C++ compiler by default. But no, here we are in 2025, and I’m still googling “how to install g++ on Windows” like it’s 2010.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/l2Je66zG6mAAZxgqI/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/l2Je66zG6mAAZxgqI/giphy.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 2" alt="Googling Compiler Issues" width="480" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem That Started It All
&lt;/h2&gt;

&lt;p&gt;I was working on this Python script that generates test cases for competitive programming problems. Simple enough, right? The script compiles a C++ solution and runs it with different inputs to create expected outputs. Everything was going smoothly until I tried to run it on a fresh Windows machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Compilation failed: [WinError 2] The system cannot find the file specified
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ah, there it is. The classic “file not found” error. Of course, Windows doesn’t know what &lt;code&gt;g++&lt;/code&gt; is. Why would it? That would be too convenient.&lt;/p&gt;
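
&lt;p&gt;For context, the failing part of my script was little more than a subprocess call. A simplified sketch (function and file names are illustrative, not my actual script):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import subprocess

def gpp_command(source, output, compiler="g++"):
    """Build the compiler invocation used by the generator script."""
    return [compiler, "-O2", "-o", output, source]

def compile_solution(source="solution.cpp", output="solution.exe", compiler="g++"):
    """Run the compiler; True on success, False otherwise."""
    try:
        result = subprocess.run(gpp_command(source, output, compiler),
                                capture_output=True, text=True)
        return result.returncode == 0
    except FileNotFoundError:
        # This is the [WinError 2] above: the OS can't find
        # the compiler executable anywhere on PATH
        print("Compilation failed: compiler not found on PATH")
        return False
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;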

&lt;p&gt;&lt;a href="https://i.giphy.com/media/3og0IMJcSI8p6hYQXS/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/3og0IMJcSI8p6hYQXS/giphy.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 3" alt="Error Message Frustration" width="478" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Quick Solution (Skip to This If You’re Impatient)
&lt;/h2&gt;

&lt;p&gt;Okay, before I tell you about my entire journey through compiler installation hell, let me just give you the solution upfront. Because honestly, if you’re here just to get g++ working, you probably don’t want to read my entire life story.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want g++ installed in 10 seconds? Run this:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irm https://raw.githubusercontent.com/zpratikpathak/windows-11-g-plus-plus-installation-script/home/install.ps1 | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsism46d6b0cuisb742rw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsism46d6b0cuisb742rw.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 4" alt="Magic Wand" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it. Seriously. This one command will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install MSYS2 via winget&lt;/li&gt;
&lt;li&gt;Install the MinGW-w64 GCC toolchain&lt;/li&gt;
&lt;li&gt;Configure your PATH automatically&lt;/li&gt;
&lt;li&gt;Handle any existing installations&lt;/li&gt;
&lt;li&gt;Verify everything works&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Need to remove it later? Easy:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irm https://raw.githubusercontent.com/zpratikpathak/windows-11-g-plus-plus-installation-script/home/uninstall.ps1 | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hhtf3z8r4grzh9q2kpm.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hhtf3z8r4grzh9q2kpm.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 5" alt="Clean Uninstall" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This removes everything cleanly – no leftover files, no broken PATH variables, no digital debris.&lt;/p&gt;

&lt;p&gt;But wait, you’re probably thinking: “Should I really run some random script from the internet?” Fair question! That’s exactly what I thought before I built this. So let me tell you the whole story of how I got here…&lt;/p&gt;

&lt;h2&gt;
  
  
  My Journey Through Compiler Hell
&lt;/h2&gt;

&lt;p&gt;Let me think about this logically. What are my options here?&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 1: Visual Studio (The Heavyweight)
&lt;/h3&gt;

&lt;p&gt;My first instinct was to just install Visual Studio. I mean, it’s Microsoft’s own compiler, it should work seamlessly on Windows, right? But then I remembered – Visual Studio is like an 8GB download, takes forever to install, and includes a bunch of stuff I don’t need. I just want to compile some C++ code, not build the next Windows operating system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqkxff3rxx9tsl2utxqe.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqkxff3rxx9tsl2utxqe.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 6" alt="Visual Studio Install Size" width="360" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Plus, my script specifically expects &lt;code&gt;g++&lt;/code&gt;, not &lt;code&gt;cl.exe&lt;/code&gt;. I could modify it, sure, but that feels like giving up. No, I want the real g++ experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Option 2: MinGW-w64 (The Traditional Route)
&lt;/h3&gt;

&lt;p&gt;Okay, so MinGW-w64 it is. This is the classic solution – a Windows port of the GNU compiler collection. But wait, how do I actually install this thing?&lt;/p&gt;

&lt;p&gt;I remember the old days when you had to download some sketchy installer from SourceForge, pray it worked, manually add things to your PATH, and then sacrifice a goat to the compiler gods. Surely there’s a better way now, right?&lt;/p&gt;

&lt;h2&gt;
  
  
  Enter winget: Windows Finally Gets a Package Manager
&lt;/h2&gt;

&lt;p&gt;Hold on, I just remembered – Windows has a package manager now! It’s called winget, and it’s actually built into Windows 10 and 11. Let me see if I can install MinGW-w64 through that.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;winget search mingw
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hmm, I see MSYS2 in the results. Right, MSYS2 is the modern way to get MinGW-w64 on Windows. It’s like a mini Linux environment that includes pacman (the same package manager that Arch Linux uses). This is getting interesting.&lt;/p&gt;

&lt;p&gt;So the plan is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install MSYS2 using winget&lt;/li&gt;
&lt;li&gt;Use MSYS2’s pacman to install the MinGW-w64 toolchain&lt;/li&gt;
&lt;li&gt;Add the compiler to my PATH&lt;/li&gt;
&lt;li&gt;Profit!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let me try this step by step.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Manual Installation Adventure
&lt;/h2&gt;

&lt;p&gt;First, let me install MSYS2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;winget install --id=MSYS2.MSYS2 -e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Okay, that worked. Now I need to open the MSYS2 terminal and install the actual compiler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pacman -S mingw-w64-ucrt-x86_64-gcc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Great! Now I have g++ installed in &lt;code&gt;C:\msys64\ucrt64\bin\g++.exe&lt;/code&gt;. But wait, when I open a new PowerShell window and type &lt;code&gt;g++&lt;/code&gt;, it still says “command not found”.&lt;/p&gt;

&lt;p&gt;Oh right, the PATH. I need to add &lt;code&gt;C:\msys64\ucrt64\bin&lt;/code&gt; to my system PATH. Let me open the Environment Variables dialog… where is that again? System Properties… Advanced… Environment Variables… User variables… PATH… Edit… New… paste the path… OK… OK… OK.&lt;/p&gt;
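
&lt;p&gt;Side note for future me: that entire GUI dance is one user-level PowerShell command (no admin needed). It appends blindly, so check for duplicates before running it twice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Append the compiler directory to the *user* PATH permanently
$userPath = [Environment]::GetEnvironmentVariable("Path", "User")
[Environment]::SetEnvironmentVariable("Path", $userPath + ";C:\msys64\ucrt64\bin", "User")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;New terminals pick this up automatically; the one you already have open still needs a restart.&lt;/p&gt;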

&lt;p&gt;Now let me restart my terminal and try again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;g++ --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Success! It works! But man, that was a lot of steps. And I had to restart my terminal for the PATH changes to take effect. There has to be a more elegant way to do this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/3o6ZtrbzjGAAXyx2WQ/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/3o6ZtrbzjGAAXyx2WQ/giphy.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 7" alt="Success Finally" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Automation Epiphany
&lt;/h2&gt;

&lt;p&gt;You know what? I bet I could automate this entire process. A PowerShell script that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Installs MSYS2 via winget&lt;/li&gt;
&lt;li&gt;Installs the GCC toolchain via pacman&lt;/li&gt;
&lt;li&gt;Adds the compiler to PATH automatically&lt;/li&gt;
&lt;li&gt;Verifies everything is working&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That would save me (and anyone else facing this problem) a ton of time in the future.&lt;/p&gt;

&lt;p&gt;Let me think about the edge cases too:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What if MSYS2 is already installed?&lt;/li&gt;
&lt;li&gt;What if the user doesn’t have admin privileges?&lt;/li&gt;
&lt;li&gt;What if winget isn’t available?&lt;/li&gt;
&lt;li&gt;What if the PATH is already configured?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This script needs to be smart enough to handle all these scenarios gracefully.&lt;/p&gt;

&lt;h2&gt;
  
  
  The One-Liner Dream
&lt;/h2&gt;

&lt;p&gt;But wait, I can take this even further. What if someone could install g++ with just a single command? Something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irm https://raw.githubusercontent.com/username/repo/main/install.ps1 | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That &lt;code&gt;irm&lt;/code&gt; command (short for &lt;code&gt;Invoke-RestMethod&lt;/code&gt;) downloads the script from GitHub, and &lt;code&gt;iex&lt;/code&gt; (short for &lt;code&gt;Invoke-Expression&lt;/code&gt;) executes it immediately. It’s like curl-bash for PowerShell, but cleaner.&lt;/p&gt;

&lt;p&gt;This would be perfect for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Setting up new development machines quickly&lt;/li&gt;
&lt;li&gt;Sharing with colleagues who need g++ installed&lt;/li&gt;
&lt;li&gt;Including in documentation or tutorials&lt;/li&gt;
&lt;li&gt;Automating CI/CD environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The beauty of this approach is that the script is always up-to-date. If I fix a bug or add a new feature, everyone automatically gets the latest version.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Considerations (Because I’m Not Reckless)
&lt;/h2&gt;

&lt;p&gt;Now, running random scripts from the internet isn’t something I’d usually recommend. But there are ways to make this safer:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Host on GitHub&lt;/strong&gt; : The script is publicly visible and version-controlled&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read before running&lt;/strong&gt; : Anyone can inspect the code at the GitHub URL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execution policy&lt;/strong&gt; : PowerShell’s execution policy provides some protection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No admin required&lt;/strong&gt; : The script can work with user-level permissions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Plus, all the script does is install software that’s already freely available. It’s not doing anything sketchy – just automating the manual steps I’d do anyway.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing the Waters
&lt;/h2&gt;

&lt;p&gt;Let me actually create this automation script and test it. I’ll start with the core functionality:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Check if winget is available
if (-not (Get-Command winget -ErrorAction SilentlyContinue)) {
    Write-Error "winget not found. Please install App Installer."
    exit 1
}

# Install MSYS2
winget install --id=MSYS2.MSYS2 -e

# Install GCC toolchain
C:\msys64\usr\bin\pacman.exe -S --noconfirm mingw-w64-ucrt-x86_64-gcc

# Add to PATH
$env:PATH += ";C:\msys64\ucrt64\bin"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s the basic idea. Of course, the real script needs much more error handling, user feedback, and edge case management. But this proves the concept works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapx6f29x7ibau69ccxs4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapx6f29x7ibau69ccxs4.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 8" alt="Script Testing" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uninstaller Dilemma
&lt;/h2&gt;

&lt;p&gt;If I’m creating an installer, I should probably create an uninstaller too. It’s only fair – if my script messes up someone’s system, they should have an easy way to undo it.&lt;/p&gt;

&lt;p&gt;The uninstaller would need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remove MinGW-w64 from PATH&lt;/li&gt;
&lt;li&gt;Uninstall MSYS2 via winget&lt;/li&gt;
&lt;li&gt;Clean up any leftover files&lt;/li&gt;
&lt;li&gt;Verify that g++ is no longer accessible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is almost as complex as the installer itself. But it’s the right thing to do.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation: Because Future Me Will Thank Me
&lt;/h2&gt;

&lt;p&gt;I also need to write proper documentation. Not just for others, but for myself when I inevitably forget how this works in six months.&lt;/p&gt;

&lt;p&gt;The README should include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One-liner installation command (the star of the show)&lt;/li&gt;
&lt;li&gt;Traditional installation methods (for the cautious folks)&lt;/li&gt;
&lt;li&gt;Troubleshooting guide (because something always goes wrong)&lt;/li&gt;
&lt;li&gt;Uninstallation instructions (for when people want out)&lt;/li&gt;
&lt;li&gt;Examples and use cases (to show why this matters)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Recommended Approach: My Automated Script
&lt;/h2&gt;

&lt;p&gt;After all this analysis, here’s what I built and why it’s the best approach:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Option 1: One-Liner Installation (Recommended)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This is my automated script that does everything for you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irm https://raw.githubusercontent.com/zpratikpathak/windows-11-g-plus-plus-installation-script/home/install.ps1 | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What this does:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Installs MSYS2 via winget automatically&lt;/li&gt;
&lt;li&gt;✅ Installs MinGW-w64 GCC toolchain via pacman&lt;/li&gt;
&lt;li&gt;✅ Configures PATH environment variables&lt;/li&gt;
&lt;li&gt;✅ Handles existing installations gracefully&lt;/li&gt;
&lt;li&gt;✅ Verifies everything works correctly&lt;/li&gt;
&lt;li&gt;✅ Provides detailed feedback throughout the process&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tuvy2k6fiw0n0owsosq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tuvy2k6fiw0n0owsosq.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 9" alt="Terminal Magic" width="500" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Need to uninstall later?&lt;/strong&gt; No problem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irm https://raw.githubusercontent.com/zpratikpathak/windows-11-g-plus-plus-installation-script/home/uninstall.ps1 | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This removes everything cleanly – MSYS2, PATH entries, and all related files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why choose my script over manual installation?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;⏱️ &lt;strong&gt;10 seconds vs 30 minutes&lt;/strong&gt; – No more clicking through installers&lt;/li&gt;
&lt;li&gt;🛡️ &lt;strong&gt;Error handling&lt;/strong&gt; – Gracefully handles edge cases and existing installations&lt;/li&gt;
&lt;li&gt;🔄 &lt;strong&gt;Repeatable&lt;/strong&gt; – Works consistently across different machines&lt;/li&gt;
&lt;li&gt;🧹 &lt;strong&gt;Clean uninstall&lt;/strong&gt; – Complete removal when you don’t need it anymore&lt;/li&gt;
&lt;li&gt;📋 &lt;strong&gt;No docs required&lt;/strong&gt; – No complex steps to remember&lt;/li&gt;
&lt;li&gt;🆕 &lt;strong&gt;Always updated&lt;/strong&gt; – Script stays current with latest best practices&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Alternative Options:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;For the security-conscious:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Download the script first, inspect it, then run it locally. Same result, but you get to see exactly what it’s doing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For restricted environments:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Use the batch script version for environments where PowerShell execution might be limited.&lt;/p&gt;
&lt;h2&gt;
  
  
  Real-World Testing
&lt;/h2&gt;

&lt;p&gt;I’ve tested this approach on several machines now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fresh Windows 11 installation ✓&lt;/li&gt;
&lt;li&gt;Windows 10 with existing development tools ✓&lt;/li&gt;
&lt;li&gt;Corporate machine with restricted permissions ✓&lt;/li&gt;
&lt;li&gt;Machine that already had MSYS2 installed ✓&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In each case, the script either installed everything correctly or gracefully handled the existing installation. No broken systems, no conflicts, no drama.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;This whole exercise got me thinking about how development environments should work in 2025. Why are we still manually installing compilers and managing PATH variables? Why isn’t this just… automatic?&lt;/p&gt;

&lt;p&gt;I mean, Node.js figured this out years ago with npm. Python has pip. Even Rust has cargo, which manages everything seamlessly. But C++ on Windows? We’re still living in the stone age.&lt;/p&gt;

&lt;p&gt;Maybe that’s why tools like this matter. Until the ecosystem catches up, we need bridges that make the experience less painful.&lt;/p&gt;
&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Here’s what I’ve learned from this whole adventure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Automation beats documentation&lt;/strong&gt; : A script that works is better than instructions that might work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One-liners are powerful&lt;/strong&gt; : Reducing friction to near-zero encourages adoption&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error handling is everything&lt;/strong&gt; : The difference between a toy script and a production tool&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test on fresh systems&lt;/strong&gt; : What works on your machine might not work on others&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provide escape hatches&lt;/strong&gt; : Always include uninstall and troubleshooting options&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdvoovussjewdz1i5ib3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgdvoovussjewdz1i5ib3.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 10" alt="Lessons Learned" width="480" height="264"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;So there you have it – my journey from “g++ command not found” to a fully automated installation solution. What started as a simple need to compile some C++ code turned into a deep dive into Windows package management, PowerShell scripting, and developer experience design.&lt;/p&gt;

&lt;p&gt;The irony isn’t lost on me that I spent more time automating the g++ installation than I would have spent just manually installing it. But that’s the thing about automation – the first time is for you, every subsequent use is for everyone else.&lt;/p&gt;

&lt;p&gt;And honestly? I’m pretty proud of how this turned out. My automated script makes setting up a C++ development environment on Windows as simple as it should be. No more hunting for installers, no more PATH configuration headaches, no more “it works on my machine” problems.&lt;/p&gt;
&lt;h2&gt;
  
  
  Ready to Get Started?
&lt;/h2&gt;

&lt;p&gt;If you’re reading this because you’re trying to get g++ working on Windows, here’s your solution:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install g++ (recommended):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irm https://raw.githubusercontent.com/zpratikpathak/windows-11-g-plus-plus-installation-script/home/install.ps1 | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Uninstall if needed:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;irm https://raw.githubusercontent.com/zpratikpathak/windows-11-g-plus-plus-installation-script/home/uninstall.ps1 | iex
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Trust me, future you will thank present you for not doing this the hard way. This script handles all the complexity, edge cases, and potential issues I discovered during my journey.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/26u4cqiYI30juCOGY/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/26u4cqiYI30juCOGY/giphy.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 11" alt="Mission Accomplished" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if you’ll excuse me, I need to get back to actually writing that test case generator. You know, the thing I was trying to do before I fell down this rabbit hole of Windows compiler installation automation.&lt;/p&gt;

&lt;p&gt;But hey, at least now I have a blog post out of it!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/l3q2XhfQ8oCkm1Ts4/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/l3q2XhfQ8oCkm1Ts4/giphy.gif" title="How I Finally Got g++ Working on Windows (And You Can Too!) 12" alt="Back to Work" width="480" height="343"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;P.S. – If this helped you, or if you run into any issues, feel free to open an issue on the GitHub repository. I’m always looking to improve the script and help fellow developers avoid the pain I went through.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;P.P.S. – Yes, I realize I could have just used Docker. But where’s the fun in that?&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>automation</category>
      <category>compiling</category>
      <category>microsoft</category>
    </item>
    <item>
      <title>I Didn’t Have Money to Pay for Grammarly, So I Self-Hosted My Own Solution Using LanguageTool 📝</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Fri, 05 Sep 2025 11:48:42 +0000</pubDate>
      <link>https://forem.com/pratikpathak/i-didnt-have-money-to-pay-for-grammarly-so-i-self-hosted-my-own-solution-using-languagetool-10gh</link>
      <guid>https://forem.com/pratikpathak/i-didnt-have-money-to-pay-for-grammarly-so-i-self-hosted-my-own-solution-using-languagetool-10gh</guid>
      <description>&lt;p&gt;A familiar story for many writers, students, and professionals: you want to polish your writing to perfection, but the premium tools come with a hefty price tag. That was me. I was constantly battling with typos and grammatical errors, and while the free version of Grammarly was helpful, I knew the premium features could take my writing to the next level. The problem? My wallet wasn’t on the same page. 😟&lt;/p&gt;

&lt;p&gt;So, there I was, staring at the subscription page for another writing assistant, feeling a bit defeated. But then, a thought struck me – what if I could build my own solution? A quick search led me to the world of open-source software, and that’s when I discovered LanguageTool. 💡&lt;/p&gt;

&lt;h3&gt;
  
  
  The Problem: A Writer’s Dilemma 🤔
&lt;/h3&gt;

&lt;p&gt;Let’s be real, everyone wants their writing to be flawless. Whether it’s a blog post, an academic paper, or an important email, errors can undermine your credibility. I was painfully aware of this, and the constant need to double-check everything was slowing me down. I even considered paying for a premium subscription, but as a student, it just wasn’t feasible. I needed a powerful tool, but I also needed it to be budget-friendly. This is where the appeal of self-hosting came in. The idea of having my own private, powerful writing assistant without the recurring costs was incredibly appealing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExNGRnaWJ3eDE5eGJ0b3J4dHZ2c3lrdXNucm13YWhuNXd0Z2M4NHE1eSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/ISOckXUybVfQ4/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExNGRnaWJ3eDE5eGJ0b3J4dHZ2c3lrdXNucm13YWhuNXd0Z2M4NHE1eSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/ISOckXUybVfQ4/giphy.gif" title="I Didn't Have Money to Pay for Grammarly, So I Self-Hosted My Own Solution Using LanguageTool 📝 1" alt="giphy" width="480" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Discovering LanguageTool: A Beacon of Hope 🌟
&lt;/h3&gt;

&lt;p&gt;LanguageTool is an open-source grammar, style, and spell checker. It supports over 20 languages, making it a versatile tool for writers worldwide. What really caught my attention was the ability to self-host it. This meant I could run it on my own server, giving me complete control over my data and saving me from subscription fees. The prospect of having a private, powerful, and free writing assistant was too good to pass up.&lt;/p&gt;

&lt;p&gt;While Grammarly is a fantastic tool, its premium features come at a cost. LanguageTool, on the other hand, offered a free, open-source alternative that I could host myself. This was the perfect solution for my budget-conscious self. I was excited to dive in and see if it could live up to my expectations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExaGpjcXFnaXgyc2M4eG02d3ZoaXh6cDY4ajA2NmI3bW1zM2l2ajQ4dCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/3oKIPm3BynUpUysTHW/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExaGpjcXFnaXgyc2M4eG02d3ZoaXh6cDY4ajA2NmI3bW1zM2l2ajQ4dCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/3oKIPm3BynUpUysTHW/giphy.gif" title="I Didn't Have Money to Pay for Grammarly, So I Self-Hosted My Own Solution Using LanguageTool 📝 2" alt="giphy" width="450" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Journey of Self-Hosting: A Tech Adventure 🛠️
&lt;/h3&gt;

&lt;p&gt;Now, I’m not a DevOps expert, but I’m not afraid of a little technical challenge. The process of self-hosting LanguageTool was an adventure in itself. I decided to use Docker, a platform that makes it easy to deploy applications in containers. This approach simplified the setup process and saved me from a lot of potential headaches.&lt;/p&gt;

&lt;p&gt;Here’s a simplified breakdown of the steps I took:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Setting up a Server:&lt;/strong&gt; I already had a small server running for personal projects, but you can easily get one from various cloud providers at a low cost.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Installing Docker:&lt;/strong&gt; I followed the official Docker documentation to get it up and running on my server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pulling the LanguageTool Image:&lt;/strong&gt; I used a pre-built Docker image for LanguageTool, which saved me the trouble of configuring everything from scratch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Running the Container:&lt;/strong&gt; With a single command, I had LanguageTool up and running on my server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuring the Browser Extension:&lt;/strong&gt; The final step was to configure the LanguageTool browser extension to use my self-hosted server.&lt;/li&gt;
&lt;/ol&gt;
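
&lt;p&gt;The five steps above need surprisingly little configuration. Here’s a minimal &lt;em&gt;docker-compose.yml&lt;/em&gt; sketch of that kind of setup – note that the image name (&lt;em&gt;erikvl87/languagetool&lt;/em&gt;, a popular community-maintained image), the default port 8010, and the memory variables are assumptions based on one common image; check the docs of whichever image you pull:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;services:
  languagetool:
    image: erikvl87/languagetool   # community image (assumption; substitute your own)
    ports:
      - "8010:8010"                # the HTTP API the browser extension talks to
    environment:
      - Java_Xms=512m              # memory knobs documented by that image (assumption)
      - Java_Xmx=1g
    restart: unless-stopped
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With something like this in place, &lt;em&gt;docker compose up -d&lt;/em&gt; brings the server up, and the browser extension only needs the server’s URL (e.g. &lt;em&gt;http://your-server:8010&lt;/em&gt;) in its advanced settings.&lt;/p&gt;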

&lt;p&gt;The whole process was surprisingly straightforward. There were a few bumps along the way, but the amazing community and detailed documentation helped me navigate through them.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Sweet Taste of Success: My Own Private Grammarly 🏆
&lt;/h3&gt;

&lt;p&gt;The moment of truth arrived. I opened a new document and started typing. The familiar red and yellow underlines appeared, but this time, they were powered by my own server. It was a liberating feeling. I had my own private, powerful writing assistant, and it didn’t cost me a dime in subscription fees.&lt;/p&gt;

&lt;p&gt;Here are some of the benefits I’ve enjoyed since switching to my self-hosted LanguageTool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Privacy:&lt;/strong&gt; All my writing is processed on my own server, so I don’t have to worry about my data being shared with third parties.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-Effective:&lt;/strong&gt; I’ve saved a significant amount of money by not paying for a premium subscription.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Customization:&lt;/strong&gt; I have the flexibility to customize the tool to my specific needs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Restrictions:&lt;/strong&gt; The self-hosted version has no restrictions on the number of requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExbDNzZDNzYmFjeTE3NmdwNTlxdWdkcGI0ajZ0NmVqM2ZkYWQycjNmayZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/kyLYXonQYYfwYDIeZl/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExbDNzZDNzYmFjeTE3NmdwNTlxdWdkcGI0ajZ0NmVqM2ZkYWQycjNmayZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/kyLYXonQYYfwYDIeZl/giphy.gif" title="I Didn't Have Money to Pay for Grammarly, So I Self-Hosted My Own Solution Using LanguageTool 📝 3" alt="giphy" width="600" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The Verdict: Was It Worth It? 🎉
&lt;/h3&gt;

&lt;p&gt;Absolutely! The journey of self-hosting LanguageTool was not just about saving money; it was about taking control of my tools and my data. It was a learning experience that empowered me as a writer and a tech enthusiast.&lt;/p&gt;

&lt;p&gt;If you’re in a similar situation, I highly encourage you to explore the world of self-hosting. It might seem daunting at first, but the rewards are well worth the effort. With a little bit of research and a willingness to learn, you can have your own private, powerful, and free writing assistant. So, why not give it a try? You might be surprised at what you can achieve. 😉&lt;/p&gt;

</description>
      <category>ai</category>
      <category>costeffectivewriting</category>
      <category>dataprivacy</category>
      <category>diygrammarchecker</category>
    </item>
    <item>
      <title>ChatGPT Agents Are Useless? A Deep Dive into the World of ChatGPT Agents 🤖</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Fri, 25 Jul 2025 08:40:56 +0000</pubDate>
      <link>https://forem.com/pratikpathak/chatgpt-agents-are-useless-deep-dive-into-the-world-of-chatgpt-agents-4gea</link>
      <guid>https://forem.com/pratikpathak/chatgpt-agents-are-useless-deep-dive-into-the-world-of-chatgpt-agents-4gea</guid>
      <description>&lt;p&gt;I’ve been hearing the term “ChatGPT agents” buzzing around a lot lately. It sounds like something straight out of a sci-fi movie, and I’ve decided it’s time to figure out what all the fuss is about. So, I’m documenting my journey as I explore this new frontier of AI. Let’s see if I can wrap my head around this and maybe even build something cool.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, What’s the Big Deal with ChatGPT Agents Anyway?
&lt;/h2&gt;

&lt;p&gt;Okay, so from what I’ve gathered, a ChatGPT agent isn’t just your standard chatbot. We’re moving beyond just asking questions and getting answers. Think of it more like a personal AI assistant that you can give a goal to, and it will figure out the steps to achieve it. 🚀&lt;/p&gt;

&lt;p&gt;It’s like telling your assistant, “Hey, research the best noise-canceling headphones for under $300 and give me a summary of the top three,” and then it actually goes off and does it. It’s about giving the AI autonomy to complete complex tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do These AI Agents Actually ‘Think’?
&lt;/h2&gt;

&lt;p&gt;This is the part that really fascinates me. It’s not magic, even though it sometimes feels like it. From what I can tell, there are a few core components that make these agents tick:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🧠 &lt;strong&gt;The Brain (A Powerful LLM):&lt;/strong&gt; At the heart of it all is a large language model (LLM) like GPT-4. This is the reasoning engine that understands your goal and makes decisions.&lt;/li&gt;
&lt;li&gt;🎯 &lt;strong&gt;The Goal:&lt;/strong&gt; You have to give the agent a clear objective. The more specific, the better.&lt;/li&gt;
&lt;li&gt;🛠️ &lt;strong&gt;The Tools:&lt;/strong&gt; This is where it gets really interesting. To achieve its goal, the agent needs tools. These can be things like a web browser to search for information, a code interpreter to run calculations, or even access to your calendar or email (with your permission, of course!).&lt;/li&gt;
&lt;li&gt;🏗️ &lt;strong&gt;The Framework:&lt;/strong&gt; To bring all of this together, you need a framework. Two names that keep popping up are &lt;strong&gt;LangChain&lt;/strong&gt; and &lt;strong&gt;Auto-GPT&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2025%2F07%2FChatGptGoGIF.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fpratikpathak.com%2Fwp-content%2Fuploads%2F2025%2F07%2FChatGptGoGIF.gif" title="My Deep Dive into the World of ChatGPT Agents 🤖 1" alt="ChatGptGoGIF" width="498" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My First Steps: Choosing a Framework
&lt;/h2&gt;

&lt;p&gt;After a bit of reading, I’ve decided to start my journey with &lt;strong&gt;LangChain&lt;/strong&gt;. It seems to be a very popular and flexible open-source framework for building applications with LLMs. The name itself gives a clue as to what it does – it lets you “chain” together different LLM calls and tools to create more complex applications.&lt;/p&gt;

&lt;p&gt;Auto-GPT also sounds powerful, offering a more autonomous, “set it and forget it” approach. But for now, I want to get my hands dirty and understand the building blocks, so LangChain feels like the right choice.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting My Hands Dirty with LangChain
&lt;/h3&gt;

&lt;p&gt;First things first, I need to install it. A simple “&lt;em&gt;pip install langchain langchain-openai&lt;/em&gt;” should do the trick (the OpenAI integration now lives in its own package, which the imports below rely on).&lt;/p&gt;

&lt;p&gt;Now, let’s try a super simple “Hello, World!” equivalent. This is just to see if I can get the basic components working together.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain_openai import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# I need to set my OpenAI API key first
# import os
# os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate.from_template("What is a fun fact about {subject}?")

chain = LLMChain(llm=llm, prompt=prompt)

print(chain.invoke({"subject": "the moon"}))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple code snippet creates a prompt template and uses an LLM to generate a fun fact. It’s a small first step, but it’s a start! 🥳&lt;/p&gt;

&lt;h3&gt;
  
  
  Let’s Build a Simple Research Agent!
&lt;/h3&gt;

&lt;p&gt;Now for the real fun. I want to build a basic agent that can use a search tool to answer a question. This feels like the first real “agent-like” thing to do.&lt;/p&gt;

&lt;p&gt;Based on some tutorials I’ve found, here’s how I can approach this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from langchain_openai import OpenAI
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType

# Again, make sure that OpenAI API key is set
# import os
# os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
# You'll also need a SerpAPI key for this to work
# os.environ["SERPAPI_API_KEY"] = "YOUR_SERPAPI_KEY"

# First, I'll initialize the LLM and the tools I want to use.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Now, I'll create the agent. I'm using the ZERO_SHOT_REACT_DESCRIPTION agent type.
# From what I understand, this means the agent will decide which tool to use based on the tool's description.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)

# Let's give it a try!
agent.run("Who is the current CEO of OpenAI, and what is the company's latest major announcement as of late 2024?")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When I run this, I can see the agent’s “thought process” in the output (&lt;em&gt;verbose=True&lt;/em&gt; is super helpful for this!). It identifies that it needs to search the web, uses the &lt;em&gt;serpapi&lt;/em&gt; tool, and then formulates an answer based on the search results. How cool is that?!&lt;/p&gt;

&lt;h2&gt;
  
  
  What About Auto-GPT? Is It Worth a Look?
&lt;/h2&gt;

&lt;p&gt;I haven’t dived into Auto-GPT yet, but from what I’ve seen, it takes the concept of autonomous agents a step further. You give it a high-level goal, and it will generate its own sub-tasks and execute them in a loop until the goal is achieved.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExM2Q3NmE4M2YxMTYwMGIyZjM4MmZkYzQ4OTllYjJlNzMzZGJkM2I3NiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/3oKIPnAiaMCws8nOsE/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExM2Q3NmE4M2YxMTYwMGIyZjM4MmZkYzQ4OTllYjJlNzMzZGJkM2I3NiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/3oKIPnAiaMCws8nOsE/giphy.gif" title="My Deep Dive into the World of ChatGPT Agents 🤖 2" alt="GIF of a robot working on an assembly line" width="360" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It seems incredibly powerful, but also a bit more complex to set up. For now, I’m happy learning the ropes with LangChain, but Auto-GPT is definitely on my “to-explore” list.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fun Part: What Can I Actually DO with These Agents?
&lt;/h2&gt;

&lt;p&gt;The possibilities seem almost endless, but here are a few use cases that have got me really excited:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;📝 Automated Research and Reporting:&lt;/strong&gt; Imagine an agent that can research a topic, gather data, and compile it into a detailed report or even a PowerPoint presentation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;✈️ Personalized Trip Planning:&lt;/strong&gt; An agent that can find flights, book hotels, and create an itinerary based on your preferences and budget.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📊 Data Analysis:&lt;/strong&gt; You could have an agent analyze a dataset, identify trends, and create visualizations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;📧 Email Management:&lt;/strong&gt; An agent that can sort through your inbox, prioritize important messages, and even draft replies.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  My Final Thoughts and What’s Next on My AI Journey
&lt;/h2&gt;

&lt;p&gt;This initial dive into ChatGPT agents has been mind-blowing. It’s clear that we’re at the beginning of a major shift in how we interact with AI. The move from simple instruction-following to autonomous problem-solving is a huge leap.&lt;/p&gt;

&lt;p&gt;I’m still very much a beginner on this journey, but I’m excited to keep learning and experimenting. Next up, I want to try building an agent that can interact with my own documents. The idea of having a personal AI assistant that knows my stuff is just too cool to pass up. Wish me luck! ✨&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>agents</category>
      <category>programming</category>
      <category>ai</category>
    </item>
    <item>
      <title>I’ve force-upgraded to macOS 26 because of two features</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Sat, 28 Jun 2025 13:30:46 +0000</pubDate>
      <link>https://forem.com/pratikpathak/ive-force-upgraded-to-macos-26-because-of-two-features-302b</link>
      <guid>https://forem.com/pratikpathak/ive-force-upgraded-to-macos-26-because-of-two-features-302b</guid>
      <description>&lt;p&gt;So, I did a thing. I went ahead and force-upgraded my machine to the developer preview of macOS 26. 🤪 My friends think I'm crazy for risking my stable setup, and honestly, a part of me agrees. But two specific features were just too tantalizing to ignore: the brand-new &lt;strong&gt;Foundation Models framework&lt;/strong&gt; and the long-awaited &lt;strong&gt;Containerization frameworks&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The promise? Direct, programmable access to on-device AI and native Linux containers right here on my Mac. I'm essentially turning my laptop into a next-generation development powerhouse... or an expensive, buggy paperweight. Let's dive into my initial thoughts and see if this reckless decision pays off.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unleashing the On-Device AI Butler 🤖
&lt;/h3&gt;

&lt;p&gt;This is the big one for me. For years, Apple's powerful Neural Engine (ANE) has felt like a locked room. We knew there was incredible hardware in there, but we couldn't really access it directly. The new Foundation Models framework finally gives us the keys.&lt;/p&gt;

&lt;p&gt;It feels like I've been handed a tiny, incredibly polite AI butler that lives permanently inside my M4 Air's Neural Engine. The most mind-blowing claim from Apple is the &lt;strong&gt;20x lower power consumption&lt;/strong&gt; for AI tasks. If this holds true, it's an absolute game-changer. I'm talking about running complex large language models all day long without my MacBook turning into a stovetop capable of frying an egg. 🔥 No more being tethered to a power outlet!&lt;/p&gt;

&lt;p&gt;What's more, Apple is introducing native tool calling for agent-based workflows. This means AI models can interact with apps and system services directly. It’s like Siri is finally getting a team of competent coworkers who can actually get things done. The potential for creating truly smart, integrated applications here is immense.&lt;/p&gt;

&lt;p&gt;Of course, my tinkerer's brain immediately jumps to the forbidden question: can I jailbreak this? 🤔 I'm already picturing a fun weekend rebellion, trying to get an open-source model like Qwen3 running on the ANE. Benchmarking Apple's walled-garden AI against the open-source world sounds like the perfect kind of trouble.&lt;/p&gt;

&lt;h3&gt;
  
  
  Native Linux Containers on Mac? Pinch Me! 🐧
&lt;/h3&gt;

&lt;p&gt;I had to read this one twice. Native Linux containers on macOS. With actual, honest-to-goodness GPU passthrough. This isn't a workaround or a heavy virtualization layer; it's the real deal. This feature alone might be enough to make me permanently ditch Docker for my local development.&lt;/p&gt;

&lt;p&gt;For years, running containers on a Mac has felt like a necessary evil, often accompanied by sluggish performance and a fan that sounds like a jet engine. Apple's new framework promises to be lightweight and secure, running each container in its own minimal, isolated environment.&lt;/p&gt;

&lt;p&gt;There's a bold promise floating around that the performance is comparable to the Windows Subsystem for Linux (WSL), which has been a massive success on the PC side. If Apple can deliver that level of speed and integration, it will fundamentally change the development experience on the Mac. My command line is ready. 💻&lt;/p&gt;

&lt;h3&gt;
  
  
  The Big Catch: A Swift-Exclusive Party 🤨
&lt;/h3&gt;

&lt;p&gt;And now, for the part that makes me scratch my head. Everything—especially the shiny new on-device AI tools—is &lt;strong&gt;Swift-only&lt;/strong&gt;. Look, I have nothing against Swift. It's a fine, modern language. But forcing developers to use it for these groundbreaking features feels like being served a Michelin-star meal on a flimsy paper plate.&lt;/p&gt;

&lt;p&gt;The AI and machine learning world runs on Python. It's the lingua franca of notebooks, libraries, and research. By making this ecosystem Swift-exclusive, Apple is essentially telling a massive community of Python developers that they're not invited to the party. I can almost hear the collective sigh of data scientists sobbing into their Jupyter notebooks.&lt;/p&gt;

&lt;p&gt;So here I am, in a slightly ironic situation. I have to use my existing local LLMs (running in Python, of course) to help me vibe and write code in Swift, just so I can play with Apple's new on-device AI. It's a strange, meta-problem to have, but I guess learning is part of the fun.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Final Thought: Why Owning Your AI Matters
&lt;/h3&gt;

&lt;p&gt;Playing with these new tools has reinforced a core belief I've held for a while: &lt;strong&gt;you need to own your AI&lt;/strong&gt;. When you use an AI model in the cloud, it's not truly aligned with your interests. It's aligned with the commercial and legal interests of the massive corporation that owns it.&lt;/p&gt;

&lt;p&gt;With on-device AI, the power dynamic shifts. I have more privacy, more control, and the freedom to experiment without worrying about API fees, data tracking, or sudden changes in service. It puts the power back in my hands. Despite the hurdles (looking at you, Swift), this is a future I'm excited to build in. This is about making my machine truly &lt;em&gt;mine&lt;/em&gt;. ✨&lt;/p&gt;

</description>
      <category>mac</category>
      <category>apple</category>
      <category>developer</category>
      <category>programming</category>
    </item>
    <item>
      <title>It’s 2025 and You Still Use Useless VS Code Extensions: Top 20 VS Code Extensions for Developers</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Sat, 31 May 2025 11:33:23 +0000</pubDate>
      <link>https://forem.com/pratikpathak/its-2025-and-you-still-use-useless-vs-code-extensions-developers-20-vscode-extensions-4pmj</link>
      <guid>https://forem.com/pratikpathak/its-2025-and-you-still-use-useless-vs-code-extensions-developers-20-vscode-extensions-4pmj</guid>
      <description>&lt;p&gt;Okay, okay, I get it. It’s 2025, and I’m &lt;em&gt;still&lt;/em&gt; clinging to some of these VS Code extensions like they’re the last roll of toilet paper in a pandemic. 🤦‍♂️ But hey, if it ain’t broke, don’t fix it, right? Well, maybe “ain’t broke” is a bit of an exaggeration for some of them. It’s probably time for a serious extension audit.&lt;/p&gt;

&lt;p&gt;Let’s be honest with ourselves, some of these extensions are probably doing more harm than good – slowing things down, causing weird conflicts, or just generally being… useless. So, I’m doing this for myself as much as for anyone else who might stumble upon this.&lt;/p&gt;

&lt;h3&gt;
  
  
  It’s 2025 and You Still Use Useless VS Code Extensions. Here are the Top 20 VS Code Extensions for Developers.
&lt;/h3&gt;

&lt;p&gt;Alright, self, let’s dive into this. Time to Marie Kondo my extension list and spark some joy (and productivity!). Here are the 20 extensions that I, a discerning (or so I like to think) developer, believe are &lt;em&gt;actually&lt;/em&gt; essential in 2025. No more excuses for a cluttered, inefficient VS Code setup!&lt;/p&gt;

&lt;h3&gt;
  
  
  1. GitHub Copilot 🚀
&lt;/h3&gt;

&lt;p&gt;Let’s just get this one out of the way. Yes, it’s AI. Yes, it’s sometimes uncannily smart, and other times it suggests the digital equivalent of putting pineapple on pizza (controversial, I know). But for boilerplate code, quick suggestions, and generally acting as a coding companion, it’s pretty darn useful. It helps write code faster and smarter by providing inline suggestions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=GitHub.copilot" rel="noopener noreferrer"&gt;GitHub Copilot on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Tabnine 🤖
&lt;/h3&gt;

&lt;p&gt;Another AI code completion tool. I like to have options, okay? Tabnine uses machine learning to provide intelligent code completions based on your project’s context and your coding habits. It supports a ton of languages and can really speed things up when you’re in the zone.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=TabNine.tabnine-vscode" rel="noopener noreferrer"&gt;Tabnine AI Autocomplete on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. GoCodeo ✨
&lt;/h3&gt;

&lt;p&gt;This one’s a newer kid on the block but gaining traction fast. It’s pitched as an AI-powered full-stack development assistant right inside VS Code. From code generation to deployment, it aims to streamline the whole process. If you’re looking to consolidate your toolchain and get a bit of AI help across the board, this is definitely worth a look.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=GoCodeo.gocodeo" rel="noopener noreferrer"&gt;GoCodeo on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. ESLint 🧐
&lt;/h3&gt;

&lt;p&gt;If you’re writing JavaScript or TypeScript and &lt;em&gt;not&lt;/em&gt; using ESLint, we need to have a serious talk. Seriously. It analyzes your code to find problems and enforce coding standards. It’s like having a very pedantic but ultimately helpful friend looking over your shoulder, preventing you from making silly mistakes before they become big headaches.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint" rel="noopener noreferrer"&gt;ESLint on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Prettier – Code formatter 💅
&lt;/h3&gt;

&lt;p&gt;My eyes! They bleed without Prettier. This opinionated code formatter keeps your code looking consistent and clean across your projects. No more debates about tabs vs. spaces or where that curly brace should go. Just hit save, and boom – pretty, readable code. It’s a sanity-saver, especially on team projects.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode" rel="noopener noreferrer"&gt;Prettier – Code formatter on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. SonarLint 🛡️
&lt;/h3&gt;

&lt;p&gt;This extension helps you find and fix bugs and security issues as you code. It runs in the background, highlighting problems and even offering in-context guidance on how to fix them. Think of it as an early warning system for code smells and potential vulnerabilities, helping you write more robust and secure applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=SonarSource.sonarlint-vscode" rel="noopener noreferrer"&gt;SonarLint on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  7. Code Spell Checker 📝
&lt;/h3&gt;

&lt;p&gt;Typos in code are embarrassing. Typos in comments are… still embarrassing, and can even be misleading. This extension is a lifesaver for catching those sneaky spelling mistakes in your code (string literals, variable names) and comments. It supports multiple languages too!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=streetsidesoftware.code-spell-checker" rel="noopener noreferrer"&gt;Code Spell Checker on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  8. GitLens — Git supercharged
&lt;/h3&gt;

&lt;p&gt;If you use Git (and you &lt;em&gt;should&lt;/em&gt; be using Git), GitLens is an absolute must-have. It supercharges the built-in Git capabilities, allowing you to see code authorship (blame), seamlessly explore revision history, compare branches, and gain valuable insights into your codebase’s evolution, all without leaving the editor. It’s incredibly powerful.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens" rel="noopener noreferrer"&gt;GitLens — Git supercharged on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  9. Live Share 🧑‍💻👩‍💻
&lt;/h3&gt;

&lt;p&gt;This one feels like magic every time I use it. It enables real-time collaborative editing and debugging. Perfect for pair programming, remote situations, or just getting a quick second pair of eyes on some tricky code. You can share your workspace, terminal, and even debugging sessions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=MS-vsliveshare.vsliveshare" rel="noopener noreferrer"&gt;Live Share on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also Read: &lt;a href="https://pratikpathak.com/azure-devops-vs-jira/" rel="noopener noreferrer"&gt;Why Azure DevOps is better than Jira&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  10. Python (by Microsoft) 🐍
&lt;/h3&gt;

&lt;p&gt;If Python is your jam, this is the foundational extension. It provides rich support including IntelliSense (Pylance), linting (Flake8, Pylint), debugging, code navigation, code formatting, refactoring, and environment switching. It’s the official, comprehensive Python toolkit for VS Code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-python.python" rel="noopener noreferrer"&gt;Python on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  11. Pylance (by Microsoft) ⚡️
&lt;/h3&gt;

&lt;p&gt;Also for Python developers, Pylance offers fast static type checking and comprehensive language support. It includes features like intelligent type completion, signature help, auto-imports, and dead code detection. It works alongside the main Python extension to give you a supercharged Python experience.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance" rel="noopener noreferrer"&gt;Pylance on VS Code Marketplace&lt;/a&gt; (often bundled with the Python extension)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Also Read: &lt;a href="https://pratikpathak.com/kali-linux-live-usb-persistence/" rel="noopener noreferrer"&gt;[Updated] Create Kali Linux Live USB with Persistence storage 2025&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  12. Tailwind CSS IntelliSense 🌬️
&lt;/h3&gt;

&lt;p&gt;If you’re working with Tailwind CSS, this extension is a game-changer. It provides autocompletion for utility classes, syntax highlighting, and linting. It even shows you the CSS that a Tailwind class generates on hover. Makes working with Tailwind so much smoother.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=bradlc.vscode-tailwindcss" rel="noopener noreferrer"&gt;Tailwind CSS IntelliSense on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  13. Live Server 🌐
&lt;/h3&gt;

&lt;p&gt;A simple but incredibly useful extension for web developers. It launches a local development server with a live reload feature, so you can see your changes in the browser instantly as you code HTML, CSS, and JavaScript. No more manual refreshing!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer" rel="noopener noreferrer"&gt;Live Server on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  14. Auto Rename Tag 🔄
&lt;/h3&gt;

&lt;p&gt;Such a small thing, but oh-so-helpful, especially when wrangling complex HTML or XML structures. When you rename an opening HTML/XML tag, it automatically renames the corresponding closing tag, and vice-versa. Saves a surprising amount of time and prevents silly mistakes that can break your layout.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=formulahendry.auto-rename-tag" rel="noopener noreferrer"&gt;Auto Rename Tag on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  15. Path Intellisense 📁
&lt;/h3&gt;

&lt;p&gt;No more fumbling around trying to remember file paths or making typos in them. This extension autocompletes filenames and paths, making it quicker and easier to import modules, link to assets, or reference other files in your project.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install: &lt;a href="https://marketplace.visualstudio.com/items?itemName=christian-kohler.path-intellisense" rel="noopener noreferrer"&gt;Path Intellisense on VS Code Marketplace&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Wanna know the 5 most important remaining extensions?&lt;/strong&gt;&lt;br&gt;
Click here: &lt;a href="https://pratikpathak.com/vscode-extensions-must-have/" rel="noopener noreferrer"&gt;The 5 Most Important VS Code Extensions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Whew! Okay, that's 20 (15 here, plus the 5 linked above). My VS Code already feels lighter, faster, and more purposeful. It’s a bit like cleaning out your digital garage – you find some treasures you forgot you had, toss out a lot of accumulated junk, and end up with a much more efficient and pleasant workspace.&lt;/p&gt;

&lt;p&gt;Of course, “essential” is subjective, and the best extensions for you will always depend on your specific workflow, the programming languages you use daily, and your personal preferences. But I’m pretty confident that this list provides a solid foundation for most developers looking to stay productive, write better code, and maybe even enjoy their coding environment a bit more in 2025.&lt;/p&gt;

&lt;p&gt;Now, if you’ll excuse me, I have some actual coding to do in my newly revamped, “non-useless” VS Code environment. 😉 It’s time to put these tools to work!&lt;/p&gt;

</description>
      <category>vscode</category>
      <category>programming</category>
      <category>developer</category>
      <category>python</category>
    </item>
    <item>
      <title>My personal favorite MCP server which has become part of my life</title>
      <dc:creator>Pratik Pathak</dc:creator>
      <pubDate>Tue, 27 May 2025 14:16:57 +0000</pubDate>
      <link>https://forem.com/pratikpathak/my-personal-favorite-mcp-server-which-has-became-part-of-my-life-18pe</link>
      <guid>https://forem.com/pratikpathak/my-personal-favorite-mcp-server-which-has-became-part-of-my-life-18pe</guid>
      <description>&lt;p&gt;Alright, team (me, myself, and I!), let's rethink this. "MCP Server" can mean a lot. It's about a system that intelligently uses a &lt;strong&gt;model&lt;/strong&gt; (a blueprint, a simulation, an AI, a process definition) along with its surrounding &lt;strong&gt;context&lt;/strong&gt; (live data, user history, current state) via some &lt;strong&gt;protocol&lt;/strong&gt; or API. So, not just AI! Here are 10 cool examples from various fields:&lt;/p&gt;

&lt;h3&gt;
  
  
  From the AI/ML World (Still Super Relevant!):
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. NVIDIA Triton Inference Server 🚀 (AI/ML)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/triton-inference-server/server" rel="noopener noreferrer"&gt;github.com/triton-inference-server/server&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;What's the deal?&lt;/strong&gt; High-performance server for AI models from almost any framework (TensorFlow, PyTorch, ONNX, etc.).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages:&lt;/strong&gt; Handles multiple models &amp;amp; dynamic batching (great for varied contextual inputs), supports complex model pipelines. 🏎️&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why pick this one?&lt;/strong&gt; For demanding AI applications needing raw speed, GPU acceleration, and the flexibility to serve diverse models where input context is key to inference.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  2. KServe ☁️ (AI/ML on Kubernetes)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/kserve/kserve" rel="noopener noreferrer"&gt;github.com/kserve/kserve&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;What's the deal?&lt;/strong&gt; Kubernetes-native platform for serving AI models, emphasizing a standardized inference protocol.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages:&lt;/strong&gt; Serverless scaling, pluggable runtimes (can use Triton, TF Serving), built-in pre/post-processing "Transformers" ideal for context manipulation. 🤝&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why pick this one?&lt;/strong&gt; If you're deploying AI models on Kubernetes and want a standardized, scalable way to manage them, especially with custom context processing steps.&lt;/li&gt;
&lt;/ul&gt;
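&lt;p&gt;To make "standardized inference protocol" concrete: KServe (and Triton) speak the Open Inference Protocol (V2) over HTTP. A sketch of the JSON request body a client would POST is below; the model name, input name, and tensor shape are made-up examples, not anything from a real deployment:&lt;/p&gt;

```javascript
// Sketch of an Open Inference Protocol (V2) request body. The input name,
// shape, and data below are hypothetical placeholders -- substitute the
// values your own deployed model actually expects.
const inferRequest = {
  inputs: [
    {
      name: "input-0",   // hypothetical tensor name
      shape: [1, 4],     // batch of 1, four features
      datatype: "FP32",
      data: [5.1, 3.5, 1.4, 0.2],
    },
  ],
};

// You would POST this as JSON to an endpoint shaped like:
//   http://HOST/v2/models/MODEL_NAME/infer
console.log(JSON.stringify(inferRequest));
```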

&lt;h4&gt;
  
  
  3. BentoML 🍱 (AI/ML Developer Experience)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/bentoml/BentoML" rel="noopener noreferrer"&gt;github.com/bentoml/BentoML&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;What's the deal?&lt;/strong&gt; Python-first framework to easily build, ship, and scale AI services.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages:&lt;/strong&gt; Simplifies packaging AI models and defining serving logic (including context handling) in Python. Offers adaptive batching and flexible API definitions. 🐍&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why pick this one?&lt;/strong&gt; For a developer-friendly, fast path from AI model to production API, with easy customization of how context is ingested and used.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  4. Ray Serve 📈 (Distributed AI/ML &amp;amp; Python)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/ray-project/ray" rel="noopener noreferrer"&gt;github.com/ray-project/ray&lt;/a&gt; (Ray Serve is part of Ray)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;What's the deal?&lt;/strong&gt; Scalable model serving library built on Ray for distributed Python applications.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages:&lt;/strong&gt; Highly scalable, compose complex inference services with multiple models and Python logic. Excellent for scenarios where context is shared or transformed across distributed components.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why pick this one?&lt;/strong&gt; When building AI applications that require significant scaling and the flexibility to weave together multiple models and Python-based business logic for sophisticated context handling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Learn about: &lt;a href="https://pratikpathak.com/building-large-language-model-applications/" rel="noopener noreferrer"&gt;Build your own LLM&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Nakama 🎮 (Game Development)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/heroiclabs/nakama" rel="noopener noreferrer"&gt;github.com/heroiclabs/nakama&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;What's the deal?&lt;/strong&gt; An open-source, scalable game server that manages user accounts, storage, real-time multiplayer, chat, and much more. The "model" here is the game state and logic.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages:&lt;/strong&gt; Handles real-time player data (context), session management, and complex game logic. Provides APIs (HTTP, gRPC, WebSockets) for client-server communication. 🎲&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why pick this one?&lt;/strong&gt; For building online games where the server needs to maintain a consistent model of the game world and manage rich player context (inventory, stats, location) in real-time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  6. Camunda Platform 8 / Camunda Platform 7 ⚙️ (Business Process Management)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GitHub (Community Edition - Platform 7):&lt;/strong&gt; &lt;a href="https://github.com/camunda/camunda-bpm-platform" rel="noopener noreferrer"&gt;github.com/camunda/camunda-bpm-platform&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;  (Camunda Platform 8 is SaaS/hybrid, with Zeebe as its core workflow engine: &lt;a href="https://github.com/camunda/zeebe" rel="noopener noreferrer"&gt;github.com/camunda/zeebe&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;What's the deal?&lt;/strong&gt; A platform for workflow and decision automation. It executes BPMN (Business Process Model and Notation) models.&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Advantages:&lt;/strong&gt; Manages process instances (the "model" in execution) with their associated data (context). Provides REST APIs to start processes, complete tasks, and query process state.&lt;/li&gt;

&lt;li&gt;  &lt;strong&gt;Why pick this one?&lt;/strong&gt; When you need to orchestrate complex business processes or workflows defined as explicit models, where the context of each process instance (variables, current step) is critical.&lt;/li&gt;

&lt;/ul&gt;

&lt;h4&gt;
  
  
  7. Azure Digital Twins 🏢 (IoT &amp;amp; Digital Representations)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Developer Resources/SDKs GitHub:&lt;/strong&gt; &lt;a href="https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins/azure-digitaltwins-core" rel="noopener noreferrer"&gt;github.com/Azure/azure-sdk-for-java/tree/main/sdk/digitaltwins/azure-digitaltwins-core&lt;/a&gt; (Example for Java SDK, others exist for .NET, JS, Python)&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;What's the deal?&lt;/strong&gt; An Azure platform service that allows you to create comprehensive digital models of entire environments, things, or systems (e.g., buildings, factories, farms).&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages:&lt;/strong&gt; Models are defined using DTDL (Digital Twin Definition Language). Manages relationships and real-time data flow (context) from IoT devices and business systems. Queryable graph.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why pick this one?&lt;/strong&gt; For creating dynamic digital replicas of physical environments or complex assets, where the "model" is the structure and relationships, and "context" is live operational data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  8. Apache Flink 🌊 (Data Stream Processing)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/apache/flink" rel="noopener noreferrer"&gt;github.com/apache/flink&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;What's the deal?&lt;/strong&gt; A powerful open-source framework for stateful computations over unbounded and bounded data streams. Your Flink job is a dataflow "model."&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages:&lt;/strong&gt; Manages complex application state (context) over time, enabling sophisticated event processing, analytics, and transformations on streaming data. High throughput and low latency.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why pick this one?&lt;/strong&gt; When your "model" is a data processing pipeline that needs to react to real-time events and maintain context (like windowed aggregations or session data) over continuous streams of data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  9. Node-RED 🧱 (IoT &amp;amp; Event-Driven Flows)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/node-red/node-red" rel="noopener noreferrer"&gt;github.com/node-red/node-red&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;What's the deal?&lt;/strong&gt; A flow-based programming tool, originally for wiring together hardware devices, APIs, and online services (IoT focus but versatile). Each flow is a "model" of logic.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages:&lt;/strong&gt; Visual editor for creating event-driven applications. Nodes maintain their own state (context). Easily extensible with a huge library of nodes. Runs on low-cost hardware (like Raspberry Pi) or in the cloud.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why pick this one?&lt;/strong&gt; For rapid development of event-driven applications, especially in IoT, where you're modeling logic flows and need to manage simple state/context between events and operations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  10. Hasura GraphQL Engine 📊 (Data APIs &amp;amp; Authorization)
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/hasura/graphql-engine" rel="noopener noreferrer"&gt;github.com/hasura/graphql-engine&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;What's the deal?&lt;/strong&gt; Blazing-fast GraphQL server that gives you instant GraphQL APIs on new or existing SQL databases. Your database schema, augmented with relationships and permissions, acts as the "model."&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Advantages:&lt;/strong&gt; Auto-generates GraphQL APIs. Its permission system is highly contextual (role-based, session-variable-based), determining what data a user can access. Handles data federation.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Why pick this one?&lt;/strong&gt; When you need a flexible, powerful API layer over your data that deeply understands user context (roles, permissions) to serve the right slice of your data model securely and efficiently.&lt;/li&gt;
&lt;/ul&gt;
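&lt;p&gt;Here's a sketch of how that context actually reaches Hasura: the GraphQL query itself is ordinary, while the role and session variables travel as headers that the permission system evaluates. The endpoint uses Hasura's default local port, and the role, user id, and table name are made-up examples:&lt;/p&gt;

```javascript
// Sketch of a context-aware Hasura request. The query is plain GraphQL; the
// x-hasura-* headers carry the session context the permission rules check.
// The role ("author"), user id, and "articles" table are hypothetical.
const endpoint = "http://localhost:8080/v1/graphql"; // Hasura's default local port
const headers = {
  "Content-Type": "application/json",
  "x-hasura-role": "author",  // hypothetical role defined in Hasura permissions
  "x-hasura-user-id": "42",   // session variable that row-level rules can filter on
};
const body = JSON.stringify({
  query: "query { articles { id title } }", // hypothetical table
});

// fetch(endpoint, { method: "POST", headers, body })  -- actual call elided
console.log(body);
```

With permissions configured, two users sending this identical query get different rows back, because the headers (the context) decide what slice of the model each role may see.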




&lt;p&gt;There! A much more diverse set. It really highlights how the concept of a "model" and its operational "context" applies across so many areas of software and systems engineering. The "protocol" is just how you talk to it. Hope this is more what you were looking for! 👍&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>programming</category>
      <category>ai</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
