<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Abinand P</title>
    <description>The latest articles on Forem by Abinand P (@abiji-2020).</description>
    <link>https://forem.com/abiji-2020</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2218994%2F9a3b0316-660d-4dbf-be0b-51d525582283.png</url>
      <title>Forem: Abinand P</title>
      <link>https://forem.com/abiji-2020</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/abiji-2020"/>
    <language>en</language>
    <item>
      <title>Embedding AI Inside PostgreSQL: Building a Native C++ Extension</title>
      <dc:creator>Abinand P</dc:creator>
      <pubDate>Wed, 19 Nov 2025 13:52:21 +0000</pubDate>
      <link>https://forem.com/abiji-2020/embedding-ai-inside-postgresql-building-a-native-c-extension-5b8b</link>
      <guid>https://forem.com/abiji-2020/embedding-ai-inside-postgresql-building-a-native-c-extension-5b8b</guid>
      <description>&lt;h3&gt;
  
  
  The Search for a Native Engine
&lt;/h3&gt;

&lt;p&gt;As a developer who values performance and systems integration, I felt limited by the common "Chat with DB" approach. That method often relies on slow, external wrappers that pull data out of Postgres just to convert natural language into SQL. I wanted to know: why can't the database &lt;em&gt;itself&lt;/em&gt; understand me?&lt;/p&gt;

&lt;p&gt;My goal was bold: to integrate AI directly into the Postgres kernel, making the database self-aware. This led me into a new domain, inspired by the &lt;a href="https://clickhouse.com/" rel="noopener noreferrer"&gt;ClickHouse&lt;/a&gt; open take-home challenge.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Crucible: C/C++ Interoperability
&lt;/h2&gt;

&lt;p&gt;The first major shock was realizing the true language of PostgreSQL extensions: C. That immediately created a conflict, since the ClickHouse AI SDK I was required to use is written in C++.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Conflict&lt;/strong&gt;: C++ compilers mangle symbol names, so C++ code cannot link directly against C libraries out of the box, and Postgres only provides C headers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Magic&lt;/strong&gt;: After some searching, I found the solution: the C/C++ linkage specification, &lt;code&gt;extern "C"&lt;/code&gt;. It let my C++ code safely include and link against PostgreSQL's C headers. This foundational lesson clarified the true purpose of header files in modular software development.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Package Manager Problem
&lt;/h2&gt;

&lt;p&gt;With modern languages like TypeScript, Python, and Go, we take package managers for granted. Suddenly I was back in the C++ world, where third-party libraries must be integrated manually.&lt;/p&gt;

&lt;p&gt;To solve this, I created a &lt;code&gt;CMakeLists.txt&lt;/code&gt; file. This build configuration became the linker and builder for my third-party libraries, the equivalent of the &lt;code&gt;package.json&lt;/code&gt; I was used to in the Node.js ecosystem. It streamlined the build process and ensured I could properly package the extension.&lt;/p&gt;
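&lt;p&gt;A minimal &lt;code&gt;CMakeLists.txt&lt;/code&gt; along these lines captures the idea (target names and paths here are illustrative, not the project's actual file):&lt;/p&gt;

```cmake
cmake_minimum_required(VERSION 3.18)
project(pg_ask CXX)

# Locate the Postgres build settings via pg_config
find_program(PG_CONFIG pg_config REQUIRED)
execute_process(COMMAND ${PG_CONFIG} --includedir-server
                OUTPUT_VARIABLE PG_INCLUDE_DIR
                OUTPUT_STRIP_TRAILING_WHITESPACE)

# Build the extension as a loadable shared library
add_library(pg_ask MODULE src/pg_ask.cpp)
target_include_directories(pg_ask PRIVATE ${PG_INCLUDE_DIR})
set_target_properties(pg_ask PROPERTIES PREFIX "" SUFFIX ".so")
```

The `pg_config --includedir-server` query is the standard way to find the server headers a compiled extension needs.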

&lt;h2&gt;
  
  
  Connecting to System Tables
&lt;/h2&gt;

&lt;p&gt;The next major hurdle was performance. Initial thoughts (and even AI suggestions) pointed toward using the Server Programming Interface (&lt;code&gt;SPI_connect()&lt;/code&gt;) to run SQL against the database catalog for schema information.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Problem&lt;/strong&gt;: Running SQL queries internally is slow and adds overhead.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Solution&lt;/strong&gt;: Diving into the PostgreSQL documentation, I found it was possible to read schema names, table names, and columns directly by opening the internal system catalogs through headers Postgres provides. This was a massive performance win: by reading the catalogs directly, the AI gets the "ground truth" it needs for prompting without any extra query latency.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
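&lt;p&gt;A sketch of the direct-catalog approach looks roughly like this. This is a hypothetical reconstruction, not the extension's exact code: the function name is invented, but the calls follow PostgreSQL's standard catalog-scan pattern (&lt;code&gt;table_open&lt;/code&gt; plus &lt;code&gt;heap_getnext&lt;/code&gt;) instead of issuing SQL through SPI. It only compiles inside a Postgres extension build, so treat it as illustrative:&lt;/p&gt;

```cpp
extern "C" {
#include "postgres.h"
#include "access/htup_details.h"
#include "access/table.h"
#include "access/tableam.h"
#include "access/heapam.h"
#include "catalog/pg_class.h"
}

/* Hypothetical sketch: read table names straight out of pg_class,
 * skipping the SQL layer (and SPI) entirely. */
static void collect_table_names(void)
{
    Relation rel = table_open(RelationRelationId, AccessShareLock);
    TableScanDesc scan = table_beginscan_catalog(rel, 0, NULL);
    HeapTuple tup;

    while ((tup = heap_getnext(scan, ForwardScanDirection)) != NULL)
    {
        Form_pg_class cls = (Form_pg_class) GETSTRUCT(tup);
        if (cls-&gt;relkind == RELKIND_RELATION)   /* ordinary tables only */
            elog(DEBUG1, "table: %s", NameStr(cls-&gt;relname));
    }

    table_endscan(scan);
    table_close(rel, AccessShareLock);
}
```

Because this walks the heap directly, it avoids parsing, planning, and executing a query just to learn what tables exist.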

&lt;h2&gt;
  
  
  AI Engine: Controlling the Output
&lt;/h2&gt;

&lt;p&gt;To ensure the system was reliable, I needed the AI output to be stable and predictable.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Structured Output&lt;/strong&gt;: I moved the core logic to structured JSON output, ensuring the SQL is returned in a single, predictable field. This strips away the AI's stray quotes and explanations and makes the application robust.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance and Stability&lt;/strong&gt;: I set the temperature to 0 in the request so the output stays grounded solely in the provided system context, reducing the chance of irrelevant output.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
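&lt;p&gt;Conceptually, the request the extension sends looks something like this. The field names below are illustrative only; the exact shape depends on the SDK and model API in use:&lt;/p&gt;

```json
{
  "temperature": 0,
  "response_format": { "type": "json_object" },
  "messages": [
    { "role": "system",
      "content": "Return only a JSON object of the form {\"sql\": \"...\"}" },
    { "role": "user",
      "content": "count all users created in the last 7 days" }
  ]
}
```

With the SQL confined to one field, the extension can parse the response deterministically instead of scraping free-form text.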

&lt;h2&gt;
  
  
  Distribution to users: A fight with Dockerfile
&lt;/h2&gt;

&lt;p&gt;I created a Docker image to test the extension safely, but the image size ballooned to 1.8GB.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Problem&lt;/strong&gt; : The initial build contained the entire development toolchain and libraries needed to compile the C++ extension.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Solution&lt;/strong&gt;: I switched to a &lt;strong&gt;multi-stage build&lt;/strong&gt;. The first stage compiles the extension; the final stage copies only the necessary runtime files (&lt;code&gt;pg_ask.so&lt;/code&gt;, &lt;code&gt;.control&lt;/code&gt;, &lt;code&gt;.sql&lt;/code&gt;) onto a fresh Postgres base image. This brought the final image down to a lean 500MB.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
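&lt;p&gt;A multi-stage &lt;code&gt;Dockerfile&lt;/code&gt; for this follows roughly the shape below (versions and paths are illustrative, not the project's exact file):&lt;/p&gt;

```dockerfile
# Stage 1: build with the full toolchain (illustrative versions/paths)
FROM postgres:16 AS builder
RUN apt-get update
RUN apt-get install -y build-essential cmake postgresql-server-dev-16
COPY . /src
RUN cmake -S /src -B /build
RUN cmake --build /build

# Stage 2: lean runtime image containing only the built artifacts
FROM postgres:16
COPY --from=builder /build/pg_ask.so /usr/lib/postgresql/16/lib/
COPY pg_ask.control pg_ask--*.sql /usr/share/postgresql/16/extension/
```

Only the second stage ships, so the compiler, headers, and build caches never reach the published image.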

&lt;h2&gt;
  
  
  Trying the Extension
&lt;/h2&gt;

&lt;p&gt;The final product, &lt;strong&gt;pg_ask&lt;/strong&gt;, is ready to use immediately after a simple Docker command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Pull the image&lt;/span&gt;
docker pull ghcr.io/abiji-2020/pg_ask:latest

&lt;span class="c"&gt;# Run with your API key&lt;/span&gt;
docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; pg_ask &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;POSTGRES_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mysecretpassword &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;PG_ASK_AI_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your_api_key_here &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 5432:5432 &lt;span class="se"&gt;\&lt;/span&gt;
  ghcr.io/abiji-2020/pg_ask:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the container is running, you can connect and query the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Connect to your database&lt;/span&gt;
&lt;span class="n"&gt;psql&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="n"&gt;localhost&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;U&lt;/span&gt; &lt;span class="n"&gt;postgres&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;d&lt;/span&gt; &lt;span class="n"&gt;postgres&lt;/span&gt;

&lt;span class="c1"&gt;-- Create the extension&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;EXTENSION&lt;/span&gt; &lt;span class="n"&gt;IF&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;EXISTS&lt;/span&gt; &lt;span class="n"&gt;pg_ask&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;-- Set up sample database to query &lt;/span&gt;

&lt;span class="c1"&gt;-- Example query (shows the power of the extension)&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;pg_gen_query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'count all users created in the last 7 days'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project taught me that "Full Stack" means understanding the system at every layer. The core lessons were going low-level with C++, managing performance with internal Postgres APIs, and solving deployment issues with multi-stage Docker builds. If you want to build truly performant, integrated AI tools, sometimes you have to stop building wrappers and start building extensions.&lt;/p&gt;

&lt;p&gt;You can also check out the source of my extension at &lt;a href="https://github.com/Abiji-2020/pg_ask" rel="noopener noreferrer"&gt;https://github.com/Abiji-2020/pg_ask&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>postgres</category>
      <category>database</category>
      <category>coding</category>
      <category>ai</category>
    </item>
    <item>
      <title>Mystic Writer: An Experiment in Agentic Development 🤖⚡</title>
      <dc:creator>Abinand P</dc:creator>
      <pubDate>Wed, 22 Oct 2025 10:20:21 +0000</pubDate>
      <link>https://forem.com/abiji-2020/mystic-writer-an-experiment-on-agentic-development-194</link>
      <guid>https://forem.com/abiji-2020/mystic-writer-an-experiment-on-agentic-development-194</guid>
      <description>&lt;h2&gt;
  
  
  The Why? 🤔
&lt;/h2&gt;

&lt;p&gt;As a software developer exploring backend systems and automation, I wanted to test a bold idea:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Could I build a &lt;strong&gt;full-stack AI product&lt;/strong&gt; - backend, authentication, database, AI generation - entirely through &lt;strong&gt;agents&lt;/strong&gt;, without writing a single line of code?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That question became &lt;strong&gt;MysticWriter 🪶&lt;/strong&gt;, an AI-powered story collaboration web application built in just &lt;strong&gt;two days&lt;/strong&gt; using &lt;strong&gt;&lt;a href="https://insforge.dev/" rel="noopener noreferrer"&gt;InsForge&lt;/a&gt;&lt;/strong&gt;, &lt;strong&gt;Claude Haiku 4.5&lt;/strong&gt;, and &lt;strong&gt;Figma Make&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;My goal was to see what happens when developers stop coding line by line and start &lt;strong&gt;prompting their backends into existence&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I built 🏗️🚧
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;MysticWriter&lt;/strong&gt; lets users write stories collaboratively with AI: a user starts a story, AI generates the next line, and through back-and-forth exchanges between human and AI a complete story takes shape. It can also generate visual avatars for the story's characters, giving the story a visual dimension.&lt;/p&gt;

&lt;p&gt;All backend functionality such as authentication, database, story creation, bucket storage for generated images, and image generation was handled inside &lt;strong&gt;InsForge&lt;/strong&gt;, an &lt;strong&gt;&lt;em&gt;agentic-native BaaS&lt;/em&gt;&lt;/strong&gt; that integrates seamlessly with AI through &lt;strong&gt;MCP (Model Context Protocol)&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The frontend was designed in &lt;strong&gt;Figma Make&lt;/strong&gt;: prompts produced the complete design, which was exported as code directly to GitHub. I simply cloned the generated repository, installed the dependencies, and started the frontend, and just like that the Figma design was running on my local machine.&lt;/p&gt;

&lt;p&gt;The entire build from idea to working prototype happened over &lt;strong&gt;48 hours&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here are some screenshots of the application 👇&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foanby1tka6g68htn4u5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foanby1tka6g68htn4u5r.png" alt="Dark theme of MysticWriter " width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ihcyqk2uuz3s5gwbmj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ihcyqk2uuz3s5gwbmj9.png" alt="Light Theme of MysticWriter" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Agentic Workflow 🪢🦾
&lt;/h2&gt;

&lt;p&gt;Here's how the workflow actually unfolded:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;I created an &lt;strong&gt;InsForge account&lt;/strong&gt; and connected the &lt;strong&gt;MCP server&lt;/strong&gt; from InsForge to Windsurf with a single command:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; npx @insforge/install &lt;span class="nt"&gt;--client&lt;/span&gt; windsurf &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--env&lt;/span&gt; &lt;span class="nv"&gt;API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-api-key &lt;span class="se"&gt;\&lt;/span&gt;
 &lt;span class="nt"&gt;--env&lt;/span&gt; &lt;span class="nv"&gt;API_BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://xxxxxxxx.insforge.app
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Inside Windsurf, I used &lt;strong&gt;Claude Haiku 4.5&lt;/strong&gt; to issue structured prompts describing what I wanted InsForge to build, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Create tables for users, stories and images. "&lt;/li&gt;
&lt;li&gt;"Add authentication and bucket storage for story images".&lt;/li&gt;
&lt;li&gt;"Setup AI collaboration for story and image generation".&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Through the MCP connection, Claude communicated directly with InsForge tools to generate these backend resources, everything created automatically by AI agents.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Whenever something went wrong (like a field mismatch or an auth misconfiguration), I'd prompt Claude again, and it would iterate on or regenerate the setup.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01bgs8zhtmdzsb727yf8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01bgs8zhtmdzsb727yf8.png" alt="Diagramatic workflow" width="800" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because InsForge is &lt;em&gt;agent-native&lt;/em&gt;, everything from database creation to function deployment was prompt-driven: no SDKs, no CLI, no manual config.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;em&gt;Reflections After 48 Hours&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;This project was an eye-opener on how far &lt;strong&gt;agentic development&lt;/strong&gt; has come.&lt;br&gt;&lt;br&gt;
Here’s what stood out:&lt;/p&gt;

&lt;h3&gt;
  
  
  ✅ What Worked Really Well
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agent-native design:&lt;/strong&gt; InsForge’s backend is natively accessible to AI agents through MCP: no hacks, no middle layers.
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Instant visibility:&lt;/strong&gt; The Insforge dashboard shows all created tables, functions, and AI credits, so I could track everything in real time.  &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Speed:&lt;/strong&gt; From zero to a functioning app — with authentication, storage, and AI endpoints — in under two days.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What I Missed
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VS Code as a client&lt;/strong&gt;: An InsForge client for VS Code, alongside Windsurf, Trae, Cursor, Claude Code, and Cline, would be a nice addition.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;BYOK&lt;/strong&gt;: Bringing my own API key for AI model usage would be a welcome feature, since I already have keys for some models.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Tech Stack Overview
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool / Platform&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Notes&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Insforge&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agent-native BaaS&lt;/td&gt;
&lt;td&gt;Handled auth, DB, AI calls, bucket storage, and function execution — all through prompts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MCP (Model Context Protocol)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Communication bridge&lt;/td&gt;
&lt;td&gt;Enabled Claude to interface directly with Insforge from Windsurf&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Windsurf&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agent workspace&lt;/td&gt;
&lt;td&gt;Hosted the connected MCP server where I interacted with Claude&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Claude Haiku 4.5&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;AI development assistant&lt;/td&gt;
&lt;td&gt;Generated backend logic, fixed errors, and handled prompting&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Figma Make&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Frontend builder&lt;/td&gt;
&lt;td&gt;Created the UI and connected it to Insforge APIs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;MysticWriter&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;The final product&lt;/td&gt;
&lt;td&gt;An AI storytelling app built entirely via agentic workflows&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;By the end of Day 2, &lt;strong&gt;MysticWriter&lt;/strong&gt; was live and working:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users could sign up, write stories, and generate illustrations.
&lt;/li&gt;
&lt;li&gt;AI endpoints handled the creative generation.
&lt;/li&gt;
&lt;li&gt;All data and media were securely stored through Insforge’s bucket and DB setup.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🌐 &lt;a href="https://mystic-writer.vercel.app/" rel="noopener noreferrer"&gt;Try the Live Demo&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;💾 &lt;a href="https://github.com/Abiji-2020/MysticWriter" rel="noopener noreferrer"&gt;View the GitHub Repo&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s a small MVP, but a complete one, built entirely by prompting, not programming.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;em&gt;The Future of Agentic Development&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;Building &lt;strong&gt;MysticWriter&lt;/strong&gt; in just two days fundamentally changed how I view development.  &lt;/p&gt;

&lt;p&gt;This wasn’t just “no-code”; it was &lt;strong&gt;agent-native&lt;/strong&gt;.&lt;br&gt;&lt;br&gt;
With &lt;strong&gt;Insforge&lt;/strong&gt;, &lt;strong&gt;MCP&lt;/strong&gt;, and &lt;strong&gt;AI copilots&lt;/strong&gt; like Claude, developers can focus purely on &lt;em&gt;ideas and intent&lt;/em&gt;, while the agents handle the infrastructure and logic.  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We’re moving from coding systems to &lt;em&gt;collaborating&lt;/em&gt; with them and MysticWriter was my first real glimpse of that future.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;Built between October 21–22, 2025&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;em&gt;Proof that sometimes, two days and a few good agents are all you need.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>fullstack</category>
      <category>automation</category>
    </item>
    <item>
      <title>PesudoCLI - your AI man pages</title>
      <dc:creator>Abinand P</dc:creator>
      <pubDate>Sat, 02 Aug 2025 12:50:12 +0000</pubDate>
      <link>https://forem.com/abiji-2020/pesudocli-your-ai-man-pages-104a</link>
      <guid>https://forem.com/abiji-2020/pesudocli-your-ai-man-pages-104a</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/redis-2025-07-23"&gt;Redis AI Challenge&lt;/a&gt;: Beyond the Cache&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;This project is a &lt;strong&gt;CLI tool built with Go (Cobra)&lt;/strong&gt; that leverages &lt;strong&gt;Redis Stack&lt;/strong&gt; for vector storage and the &lt;strong&gt;Gemini API&lt;/strong&gt; (Google's LLM) for both &lt;em&gt;embedding&lt;/em&gt; and &lt;em&gt;text generation&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This CLI tool demonstrates how Redis Stack can serve as a &lt;em&gt;vector database&lt;/em&gt;, powering an intelligent question-answering system - all from the terminal.&lt;/p&gt;

&lt;p&gt;Instead of Redis being just a cache or pub-sub tool, I used it for storing high-dimensional embeddings and retrieving semantically relevant results using vector similarity search &lt;em&gt;(KNN+cosine)&lt;/em&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tech Stack:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Go (Cobra) - CLI interface&lt;/li&gt;
&lt;li&gt;Gemini API - for both generating embeddings and LLM responses&lt;/li&gt;
&lt;li&gt;Redis Stack 8.x - as a vector store using &lt;code&gt;FT.CREATE&lt;/code&gt;, &lt;code&gt;HNSW&lt;/code&gt;, and &lt;code&gt;KNN&lt;/code&gt; retrieval&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Redis as a Vector Store
&lt;/h2&gt;

&lt;p&gt;Traditionally, Redis is known as a caching layer; that is how I started my own Go journey with it. In this project, however, Redis Stack is used to:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4ywbzyvcvl4wfy3o1w2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr4ywbzyvcvl4wfy3o1w2.png" alt="An image showing about the data flow" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Store vector embeddings with metadata
&lt;/h3&gt;

&lt;p&gt;Each chunk of input text is embedded using Gemini and stored as a document with the following schema:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Command&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;TEXT&lt;/td&gt;
&lt;td&gt;Original CLI Command name&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;os&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;TEXT&lt;/td&gt;
&lt;td&gt;OS type or system&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;text_chunk&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;TEXT&lt;/td&gt;
&lt;td&gt;The actual context (explanation of the commands)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;embedding&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;VECTOR&lt;/td&gt;
&lt;td&gt;Embedded vector (FLOAT32, 3072 dimension)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is the basic schema, created with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;FT&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;pesudo_index&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;HASH&lt;/span&gt; &lt;span class="k"&gt;PREFIX&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;SCHEMA&lt;/span&gt; &lt;span class="err"&gt;\&lt;/span&gt;
          &lt;span class="n"&gt;command&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt; &lt;span class="err"&gt;\&lt;/span&gt;
          &lt;span class="n"&gt;os&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt; &lt;span class="err"&gt;\&lt;/span&gt;
          &lt;span class="n"&gt;text_chunk&lt;/span&gt; &lt;span class="nb"&gt;TEXT&lt;/span&gt;&lt;span class="err"&gt;\&lt;/span&gt;
          &lt;span class="n"&gt;embedding&lt;/span&gt; &lt;span class="n"&gt;VECTOR&lt;/span&gt; &lt;span class="n"&gt;HNSW&lt;/span&gt; &lt;span class="mi"&gt;6&lt;/span&gt; &lt;span class="k"&gt;TYPE&lt;/span&gt; &lt;span class="n"&gt;FLOAT32&lt;/span&gt; &lt;span class="n"&gt;DIM&lt;/span&gt; &lt;span class="mi"&gt;3072&lt;/span&gt; &lt;span class="err"&gt;\&lt;/span&gt;
          &lt;span class="n"&gt;DISTANCE_METRIC&lt;/span&gt; &lt;span class="n"&gt;COSINE&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;This enables semantic search directly in Redis - no separate vector DB required.  &lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Retrieve relevant data via vector search
&lt;/h3&gt;

&lt;p&gt;From the command line, when the user runs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;pesudocli ask &lt;span class="s2"&gt;"How to install packages on Arch?"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CLI flow is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Embeds the question using Gemini&lt;/li&gt;
&lt;li&gt;Performs a &lt;strong&gt;KNN vector search&lt;/strong&gt; using &lt;code&gt;FT.SEARCH&lt;/code&gt; with &lt;code&gt;KNN 3&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Uses &lt;code&gt;cosine similarity&lt;/code&gt; to rank results&lt;/li&gt;
&lt;li&gt;Sends the top 3 context chunks to the Gemini chat model as context&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9djttx6laocmvp3lyvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk9djttx6laocmvp3lyvj.png" alt="Complete flow of the CLI" width="800" height="973"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Redis handles the entire retrieval step with no external systems.&lt;br&gt;
During testing, vector search queries consistently returned results in around 150ms, demonstrating Redis’s real-time capabilities even for high-dimensional vector data.&lt;/p&gt;


&lt;h2&gt;
  
  
  ⚙️ Behind the scenes: Vector search
&lt;/h2&gt;

&lt;p&gt;Here's an example of the vector search query I used in Redis:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;FT&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;SEARCH&lt;/span&gt; &lt;span class="n"&gt;pesudo_index&lt;/span&gt; &lt;span class="nv"&gt;"*=&amp;gt; [KNN 3 @embedding $vec]"&lt;/span&gt; &lt;span class="err"&gt;\&lt;/span&gt;
     &lt;span class="n"&gt;PARAMS&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="n"&gt;vec&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;binary_value_of_query&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
     &lt;span class="n"&gt;DIALECT&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;KNN 3&lt;/code&gt;: returns the top 3 closest vectors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;COSINE&lt;/code&gt;: the distance metric used to compare the query vector with the stored embeddings&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;DIALECT 2&lt;/code&gt;: required by Redis for vector search queries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Under the hood, Redis uses the &lt;strong&gt;HNSW (Hierarchical Navigable Small World)&lt;/strong&gt; algorithm, which avoids brute-force comparisons to cut latency and enables efficient approximate nearest-neighbor search in high-dimensional space. &lt;/p&gt;




&lt;h2&gt;
  
  
  How a user query is processed 🧠
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7nabom5cr76ci48jkn9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7nabom5cr76ci48jkn9y.png" alt="Processing of the user query" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Since this is a CLI, I have attached a screenshot of a sample &lt;code&gt;ask&lt;/code&gt; command here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeamctea867ch284cuc8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeamctea867ch284cuc8.png" alt="Image of CLI command ask" width="800" height="375"&gt;&lt;/a&gt;&lt;br&gt;
You can download and try the CLI from the GitHub repo: &lt;a href="https://github.com/Abiji-2020/PesudoCLI/releases/tag/v0.3.0" rel="noopener noreferrer"&gt;PesudoCLI Release&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  How I Used Redis Stack
&lt;/h2&gt;

&lt;p&gt;Redis Stack was crucial to enable Vector Search in this CLI. Here's how it fits into the pipeline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Used Redis 8 (via Redis Stack) to store vector embeddings created from chunks of input data.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;ingest&lt;/code&gt; command:

&lt;ul&gt;
&lt;li&gt;Reads data from the embedded CSV file,&lt;/li&gt;
&lt;li&gt;Uses Gemini to generate embeddings,&lt;/li&gt;
&lt;li&gt;Stores each chunk and its corresponding embedding vector in Redis using &lt;em&gt;RediSearch&lt;/em&gt;, along with &lt;code&gt;metadata&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;ask&lt;/code&gt; command:

&lt;ul&gt;
&lt;li&gt;Converts user input into a vector using Gemini, &lt;/li&gt;
&lt;li&gt;Performs a &lt;strong&gt;KNN vector similarity search&lt;/strong&gt; on Redis, &lt;/li&gt;
&lt;li&gt;Sends the top 3 matching contexts to the Gemini chat model for response generation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This combination of semantic vector search + LLM made it possible to build an intelligent CLI assistant. &lt;/p&gt;


&lt;h2&gt;
  
  
  🔧 CLI flow to run
&lt;/h2&gt;

&lt;p&gt;This is a complete run-through of the CLI, showing how the entire program is meant to be run.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;
&lt;span class="c"&gt;# Step 1: set config &lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;pesudocli config &lt;span class="nt"&gt;--gemini-api-key&lt;/span&gt; &amp;lt;your-key&amp;gt;

&lt;span class="c"&gt;# Step 2: Init the index &lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;pesudocli init 

&lt;span class="c"&gt;# Step 3: Ingest data &lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;pesudocli ingest 

&lt;span class="c"&gt;# Step 4 : Ask a question &lt;/span&gt;
&lt;span class="nv"&gt;$pesudocli&lt;/span&gt; ask &lt;span class="s2"&gt;"Explain about podman?"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After the initial three steps, you can ask any number of questions using the &lt;code&gt;ask&lt;/code&gt; command.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion 💻
&lt;/h2&gt;

&lt;p&gt;This project started with a simple idea: build a smart CLI assistant. But along the way, Redis Stack proved to be far more than a cache - it became the core of a semantic search engine. &lt;/p&gt;

&lt;p&gt;By combining vector embeddings, KNN search with cosine similarity, and the Gemini API, I was able to build a fast, local-first, and entirely terminal-based AI assistant - without needing a separate vector database or LLM backend.&lt;/p&gt;

&lt;p&gt;Redis Stack handled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time vector ingestion and search&lt;/li&gt;
&lt;li&gt;Metadata filtering and schema management&lt;/li&gt;
&lt;li&gt;Seamless performance even with high-dimensional (3072-dim) vectors&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All in all, this project shows how Redis Stack can unlock a new class of AI-powered applications - right from your terminal.&lt;/p&gt;




&lt;p&gt;Thanks for reading - and thanks to the Redis team and Dev.to for hosting this challenge! 🚀&lt;/p&gt;

&lt;p&gt;Check out the &lt;a href="https://github.com/Abiji-2020/PesudoCLI" rel="noopener noreferrer"&gt;PesudoCLI project on GitHub&lt;/a&gt; to try it out or contribute.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The cover image was generated with Gemini, the other images were created with the help of napkin.ai, and the text was run through AI for grammar corrections.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>redischallenge</category>
      <category>devchallenge</category>
      <category>database</category>
      <category>ai</category>
    </item>
    <item>
      <title>Stop abusing .env files 🔒</title>
      <dc:creator>Abinand P</dc:creator>
      <pubDate>Wed, 30 Jul 2025 10:44:24 +0000</pubDate>
      <link>https://forem.com/abiji-2020/stop-abusing-env-files-3amj</link>
      <guid>https://forem.com/abiji-2020/stop-abusing-env-files-3amj</guid>
      <description>&lt;p&gt;Let's be honest - we've all done it. Thrown a few secrets into &lt;code&gt;.env&lt;/code&gt; file, pushed to github (&lt;em&gt;oops&lt;/em&gt;), or spent 20 minutes debugging a typo like &lt;code&gt;DB_PASSWROD&lt;/code&gt; 😵‍💫. When I started coding, &lt;code&gt;.env&lt;/code&gt; files felt like magic. But when diving deeper into security it turns out, they're more like duct tapes - they work.. until they don't. &lt;/p&gt;

&lt;p&gt;In this blog, I'll walk you through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why &lt;code&gt;.env&lt;/code&gt; files are kind of overrated&lt;/li&gt;
&lt;li&gt;The pain points of managing secrets traditionally&lt;/li&gt;
&lt;li&gt;A better, modern approach with tools like Infisical and HashiCorp Vault&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔐 The &lt;code&gt;.env&lt;/code&gt; Era
&lt;/h2&gt;

&lt;p&gt;Let's rewind a bit.&lt;br&gt;
Managing environment-specific configuration wasn't mainstream - until Heroku introduced &lt;em&gt;Config Vars&lt;/em&gt;. This was back in the days when deploying with Heroku felt like magic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;heroku config:set &lt;span class="nv"&gt;STRIPE_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;super-secret
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Boom 💥 secret added, scoped to your app, environment-specific, and cloud-ready. &lt;/p&gt;

&lt;p&gt;This was the first time developers really &lt;em&gt;felt&lt;/em&gt; how clean and safe it could be to separate code from config. Inspired by this, &lt;code&gt;.env&lt;/code&gt; files started showing up in local development tools to replicate that behavior - but as a plain-text, unencrypted format. &lt;/p&gt;

&lt;p&gt;Thus was born the &lt;em&gt;env culture&lt;/em&gt; - a local hack around a powerful concept. &lt;/p&gt;




&lt;h2&gt;
  
  
  Why do we use &lt;code&gt;.env&lt;/code&gt; files 🤔?
&lt;/h2&gt;

&lt;p&gt;They're simple. You toss some key-value pairs in a file, and boom! Your app has access to secrets without hard-coding them. Better than putting your database password right inside your code, right?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DB_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;dbpassword
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then in your application (Node.js in this example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DB_PASSWORD&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
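&lt;p&gt;Under the hood there is no magic: a loader like &lt;code&gt;dotenv&lt;/code&gt; just parses &lt;code&gt;KEY=value&lt;/code&gt; lines into &lt;code&gt;process.env&lt;/code&gt;. A minimal sketch of that parsing (illustrative only - real loaders also handle quoting, escapes, and multiline values) looks like this:&lt;/p&gt;

```javascript
// Parse simple KEY=value lines from .env-style text into an object.
// Skips comments and blank lines; real loaders like dotenv also handle
// quoting, escapes, and multiline values that this sketch does not.
function parseEnv(text) {
  const result = {};
  for (const line of text.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;
    result[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return result;
}
```

&lt;p&gt;The simplicity is exactly the appeal - and, as we'll see next, exactly the problem.&lt;/p&gt;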



&lt;p&gt;Looks neat and easy right? But here comes the &lt;em&gt;but.....&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  😬 What's wrong with &lt;code&gt;.env&lt;/code&gt;?
&lt;/h2&gt;

&lt;p&gt;Let's talk about the downsides. And trust me, they show up the moment you work in a team or try deploying to anything beyond your laptop. &lt;/p&gt;

&lt;h4&gt;
  
  
  1. Accidental Git Commit 🚨
&lt;/h4&gt;

&lt;p&gt;Unless you &lt;em&gt;religiously&lt;/em&gt; &lt;code&gt;.gitignore&lt;/code&gt; that file, one bad &lt;code&gt;git add . &amp;amp;&amp;amp; git commit&lt;/code&gt; can leak secrets to the world. Try a GitHub search for &lt;code&gt;.env&lt;/code&gt; files - boom, countless "secure" environment variables sitting in plain text, accessible to everyone 😟. &lt;/p&gt;
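&lt;p&gt;The minimal safeguard costs one line:&lt;/p&gt;

```shell
# Ignore .env before it ever reaches the index.
echo ".env" >> .gitignore

# If .env was committed earlier, untrack it while keeping the local copy:
#   git rm --cached .env
# Remember: the old value still lives in git history, so rotate the secret too.
```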

&lt;h4&gt;
  
  
  2. Sharing With your Team = Chaos
&lt;/h4&gt;

&lt;p&gt;When you've got 3 developers, 1 QA, and a CI pipeline - how do you make sure everyone has the same &lt;code&gt;.env&lt;/code&gt;, and the right one at that? You don't. You just &lt;em&gt;hope&lt;/em&gt; it works. &lt;/p&gt;

&lt;h4&gt;
  
  
  3. No Audit Trail 🕵️
&lt;/h4&gt;

&lt;p&gt;Did someone change the AWS key last Thursday? No idea. &lt;code&gt;.env&lt;/code&gt; files have no versioning, no logs, and no way to tell if someone has tampered with them - nada 🙂‍↔️.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Copy-Paste Debugging Hell 🐛
&lt;/h4&gt;

&lt;p&gt;Typos like &lt;code&gt;STRIPE_SECRT&lt;/code&gt; will make you question life. No logs, nothing - just an empty response on the API call, or a bare &lt;code&gt;internal server error&lt;/code&gt;. Plus, some platforms want strings quoted, others don't. Python, JS, Linux - they all play by different rules. &lt;/p&gt;
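&lt;p&gt;One cheap guard against this class of bug is validating required variables at startup. A sketch (the variable names here are just examples):&lt;/p&gt;

```javascript
// Verify every required variable is present and non-empty, reporting all
// missing names at once - so a typo like STRIPE_SECRT fails loudly at boot
// instead of surfacing as a mystery "internal server error" later.
function assertEnv(env, required) {
  const missing = required.filter(name => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(", ")}`);
  }
}

// Typically called once at startup:
// assertEnv(process.env, ["DB_PASSWORD", "STRIPE_KEY"]);
```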

&lt;h4&gt;
  
  
  5. Manual Updates Waste Time ⏳
&lt;/h4&gt;

&lt;p&gt;The average developer loses &lt;strong&gt;23 minutes&lt;/strong&gt; recovering from an interruption while in deep focus. Manually syncing secrets across local, staging, and production while trying to fix a bug that has been there for a week? That's a big productivity leak. &lt;/p&gt;




&lt;h2&gt;
  
  
  From Local Shell to Cloud-Native Secrets 🚀
&lt;/h2&gt;

&lt;p&gt;This isn't the 2010s anymore. We've moved from FTP deployments to Docker, CI/CD pipelines, serverless, and remote teams. But many of us are still managing secrets like it's 2008. Come on, let's jump to the present: 2025.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6yz3hxzjb9k1ngaoqxd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo6yz3hxzjb9k1ngaoqxd.png" alt="Two different images showing how the old and new secret management should be done" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now imagine this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All your secrets in a secure, centralized vault&lt;/li&gt;
&lt;li&gt;Environment-specific configs: dev, staging, prod&lt;/li&gt;
&lt;li&gt;One-click rotation and versioning&lt;/li&gt;
&lt;li&gt;Automatic syncing to your apps via tokens &lt;/li&gt;
&lt;li&gt;Logs of who did what and when &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's where &lt;strong&gt;&lt;a href="https://infisical.com/" rel="noopener noreferrer"&gt;Infisical&lt;/a&gt;&lt;/strong&gt; comes in. &lt;/p&gt;

&lt;h2&gt;
  
  
  Why Infisical Feels Like a Superpower 💡
&lt;/h2&gt;

&lt;p&gt;I've been using and playing around with Infisical recently. It lives up to its tagline - &lt;em&gt;Secrets Management on autopilot&lt;/em&gt;. It solves many &lt;code&gt;.env&lt;/code&gt; pain points without &lt;strong&gt;forcing you to change how you build things&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here's what I love:&lt;/p&gt;

&lt;h4&gt;
  
  
  Centralized Secrets Management ✅
&lt;/h4&gt;

&lt;p&gt;No more emailing &lt;code&gt;.env&lt;/code&gt; files or sharing over Slack. Just invite your team, and boom - everyone sees the right secrets for their environment. &lt;/p&gt;

&lt;h4&gt;
  
  
  Environment-based separation 🔁
&lt;/h4&gt;

&lt;p&gt;Dev, Staging, Production - all organized separately. No more "wait which &lt;code&gt;.env&lt;/code&gt; file is this?"&lt;/p&gt;

&lt;h4&gt;
  
  
  Tokens, Not Raw Secrets 🔐
&lt;/h4&gt;

&lt;p&gt;Access secrets via &lt;strong&gt;revocable tokens&lt;/strong&gt; - easy to manage, track and audit. More secure than giving full &lt;code&gt;.env&lt;/code&gt; files to every intern. &lt;/p&gt;

&lt;h4&gt;
  
  
  Team Collaboration Built-in 👥
&lt;/h4&gt;

&lt;p&gt;One teammate updates a secret, and the whole team gets the latest values (unless you want local overrides - Infisical supports that too).&lt;/p&gt;

&lt;h4&gt;
  
  
  Audit logs &amp;amp; Version History 📜
&lt;/h4&gt;

&lt;p&gt;Accidentally deleted a key? Need to know who made a change, and when? Infisical has you covered. &lt;/p&gt;




&lt;h2&gt;
  
  
  Ending Note 🎯
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;.env&lt;/code&gt; files aren't evil - they were just never meant to scale with modern workflows. Remote teams, automated deployments, and containerized environments need something more robust, traceable, and collaborative. &lt;/p&gt;

&lt;p&gt;Infisical fills that gap beautifully. &lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR 📚
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.env&lt;/code&gt; files are fine for solo hacks but break at scale.&lt;/li&gt;
&lt;li&gt;They're hard to share, prone to typos, and lack security and audit features&lt;/li&gt;
&lt;li&gt;Modern apps need a better secret management strategy.&lt;/li&gt;
&lt;li&gt;Infisical offers centralized, secure, team-friendly secret management.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thanks for reading!!!&lt;br&gt;
If this blog helped you think twice about &lt;code&gt;.env&lt;/code&gt; files, or if you've faced similar struggles, drop a comment. I'd love to hear how you manage secrets in your projects. &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Images were generated using Gemini 2.5 Pro, and I used ChatGPT to check for errors and grammar in this blog.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>beginners</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
