<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jean</title>
    <description>The latest articles on Forem by Jean (@jmoncayopursuit).</description>
    <link>https://forem.com/jmoncayopursuit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1372204%2F047058b9-58af-41ac-ad46-6e756f33e8fe.png</url>
      <title>Forem: Jean</title>
      <link>https://forem.com/jmoncayopursuit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/jmoncayopursuit"/>
    <language>en</language>
    <item>
      <title>🔬 Building Skin Lab Rx: Bringing Clinical AI to the Browser</title>
      <dc:creator>Jean</dc:creator>
      <pubDate>Fri, 08 May 2026 00:48:29 +0000</pubDate>
      <link>https://forem.com/jmoncayopursuit/building-skin-lab-rx-bringing-clinical-ai-to-the-browser-kdb</link>
      <guid>https://forem.com/jmoncayopursuit/building-skin-lab-rx-bringing-clinical-ai-to-the-browser-kdb</guid>
      <description>&lt;p&gt;Skincare is deeply personal, but the advice we get is often generic. For the &lt;strong&gt;Perfect Corp × Startup World Cup Hackathon&lt;/strong&gt;, I wanted to build something that felt like a bridge between a luxury spa and a clinical dermatology office.&lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;Skin Lab Rx&lt;/strong&gt;: An AI-powered diagnostic platform that turns a simple selfie into a 14-metric clinical report and a personalized skincare routine.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Vision
&lt;/h2&gt;

&lt;p&gt;The goal was simple: &lt;strong&gt;Zero friction, maximum accuracy.&lt;/strong&gt; I wanted a user to be able to scan their skin and instantly see which products would actually move the needle for their specific concerns—whether that's acne, hydration, or fine lines.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Next.js 16 (App Router)&lt;/strong&gt;: For a lightning-fast, SEO-optimized SPA foundation.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Perfect Corp AI APIs&lt;/strong&gt;: The powerhouse behind the skin analysis and virtual makeup transfer.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Vanilla CSS&lt;/strong&gt;: I skipped the frameworks to build a custom, ultra-premium glassmorphism dark theme from scratch.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;FaceDetector API&lt;/strong&gt;: Leveraging native browser features for real-time user alignment.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Solving the "Real World" Selfie Problem 🤳
&lt;/h2&gt;

&lt;p&gt;One of the biggest challenges in clinical AI is data quality. The Perfect Corp APIs are incredibly precise, but they require the user's face to occupy at least 60% of the frame. &lt;/p&gt;

&lt;p&gt;Most people don't take "clinical" selfies. They take casual ones. &lt;/p&gt;

&lt;p&gt;To solve this, I built a &lt;strong&gt;Smart Pre-Processing Engine&lt;/strong&gt; in the browser. Using the experimental &lt;code&gt;FaceDetector&lt;/code&gt; API, the app detects the user's face in real-time and &lt;strong&gt;aggressively auto-crops&lt;/strong&gt; the image to meet the API's strict bounds &lt;em&gt;before&lt;/em&gt; the upload even happens. &lt;/p&gt;

&lt;p&gt;If the face is too small? The app zooms in. If the angle is off? We guide the user with a &lt;strong&gt;draggable alignment oval&lt;/strong&gt;.&lt;/p&gt;
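&lt;p&gt;Sketched as a pure function, the crop math looks roughly like this (a simplified sketch, not the actual Skin Lab Rx source; the face box comes from the &lt;code&gt;FaceDetector&lt;/code&gt; API, and the 60% target mirrors the requirement above):&lt;/p&gt;

```javascript
// Hypothetical sketch of the auto-crop math: given a face bounding box
// (as returned by the experimental FaceDetector API) and the frame size,
// compute a square crop in which the face fills at least `minRatio` of
// the crop. Names and the 0.6 ratio mirror the post, not the real code.
function computeCrop(face, frame, minRatio = 0.6) {
  const faceSide = Math.max(face.width, face.height);
  // Floor so faceSide / side stays at or above minRatio.
  let side = Math.floor(faceSide / minRatio);
  side = Math.min(side, frame.width, frame.height); // cannot exceed the frame

  // Center the crop on the face, clamped to the frame bounds.
  const cx = face.x + face.width / 2;
  const cy = face.y + face.height / 2;
  const x = Math.min(Math.max(cx - side / 2, 0), frame.width - side);
  const y = Math.min(Math.max(cy - side / 2, 0), frame.height - side);
  return { x, y, width: side, height: side };
}
```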

&lt;h2&gt;
  
  
  Production-Grade Resilience
&lt;/h2&gt;

&lt;p&gt;Hackathons are famous for "it works on my machine" demos. To ensure Skin Lab Rx was bulletproof for the judges, I implemented a &lt;strong&gt;Resilient Backend Fallback&lt;/strong&gt;. If the clinical API rejects a photo due to bad lighting or an obstruction (like glasses), the backend catches the error and generates a realistic synthetic diagnostic profile. &lt;/p&gt;

&lt;p&gt;This ensures the demo &lt;strong&gt;never crashes&lt;/strong&gt; and the user flow is never interrupted. &lt;/p&gt;
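&lt;p&gt;The fallback pattern is essentially a guarded call (a minimal sketch; &lt;code&gt;analyzeWithApi&lt;/code&gt; and &lt;code&gt;syntheticProfile&lt;/code&gt; are hypothetical stand-ins for the real client and generator):&lt;/p&gt;

```javascript
// Minimal sketch of the resilient fallback described above: attempt the
// clinical API, and on rejection degrade gracefully to a synthetic profile.
async function getDiagnostics(photo, analyzeWithApi, syntheticProfile) {
  try {
    return { source: 'clinical', report: await analyzeWithApi(photo) };
  } catch (err) {
    // Bad lighting, obstructions, etc. -- never interrupt the user flow.
    return { source: 'synthetic', report: syntheticProfile() };
  }
}
```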

&lt;h2&gt;
  
  
  What's Next?
&lt;/h2&gt;

&lt;p&gt;Skin Lab Rx is just the beginning. I'm looking into integrating e-commerce checkout and using the AI Makeup Transfer API to show "before and after" simulations of long-term skincare results.&lt;/p&gt;

&lt;p&gt;Check out the live demo here: &lt;a href="https://skin-lab-rx-6o7v4cleza-uc.a.run.app" rel="noopener noreferrer"&gt;https://skin-lab-rx-6o7v4cleza-uc.a.run.app&lt;/a&gt;&lt;br&gt;
Check out the video demo here: &lt;a href="https://youtube.com/shorts/ostMOF_5yJc?si=PfH9TpMMY670MmvN" rel="noopener noreferrer"&gt;https://youtube.com/shorts/ostMOF_5yJc?si=PfH9TpMMY670MmvN&lt;/a&gt;&lt;br&gt;
Check out the GitHub repository here: &lt;a href="https://github.com/jmoncayo-pursuit/Skin-Lab-Rx" rel="noopener noreferrer"&gt;https://github.com/jmoncayo-pursuit/Skin-Lab-Rx&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with ❤️ for the Perfect Corp × Startup World Cup Hackathon.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.toEntry%20for%20hackathon"&gt;https://devpost.com/software/skin-lab-rx&lt;/a&gt;&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>ai</category>
      <category>javascript</category>
      <category>skincare</category>
    </item>
    <item>
      <title>Chasing 16MB: My Parameter Golf Journey and What I Learned the Hard Way</title>
      <dc:creator>Jean</dc:creator>
      <pubDate>Fri, 08 May 2026 00:40:21 +0000</pubDate>
      <link>https://forem.com/jmoncayopursuit/chasing-16mb-my-parameter-golf-journey-and-what-i-learned-the-hard-way-30ck</link>
      <guid>https://forem.com/jmoncayopursuit/chasing-16mb-my-parameter-golf-journey-and-what-i-learned-the-hard-way-30ck</guid>
      <description>&lt;p&gt;I saw what big companies and research labs were doing at massive scale and tried to adapt those ideas to extreme compression in tiny models. Here’s what happened.&lt;/p&gt;




&lt;p&gt;When OpenAI launched the Parameter Golf challenge, the rules were brutal: train a small language model that must fit inside a 16 megabyte compressed file and finish training in just 10 minutes on powerful hardware.&lt;/p&gt;

&lt;p&gt;Most participants focused on proven techniques that were already working on the leaderboard. I took a different approach. I read papers and articles about what large companies and research labs were doing at massive scale and tried to adapt those concepts to the extreme constraints of this challenge.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Experiments I Tried
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Aggressive Int4 Quantization&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Inspired by frontier quantization research from big labs showing that very low-bit weights could work in larger models, I pushed hard on Int4. I believed that if I could make aggressive 4-bit quantization stable in a tiny model, it would give me a massive space advantage. I spent weeks building custom mixed-precision code (Int6 for attention, Int4 for MLP layers), dynamic scaling, special training ramps, and heavy pruning. It was a bold, theoretically viable direction, but in practice the precision loss was too damaging for such a small model trained on very few steps.&lt;/p&gt;
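&lt;p&gt;For intuition, symmetric Int4 quantization can be sketched in a few lines (a toy illustration, not the competition code: one shared scale per group, integers kept in the signed range -7 to 7):&lt;/p&gt;

```javascript
// Toy illustration of symmetric 4-bit quantization: one shared scale per
// weight group; rounding is where the precision loss discussed above enters.
function quantizeInt4(weights) {
  const maxAbs = Math.max(...weights.map(Math.abs));
  const scale = maxAbs / 7 || 1; // 4-bit signed range, zero-safe
  const q = weights.map(w => Math.round(w / scale));
  return { q, scale };
}

function dequantizeInt4({ q, scale }) {
  return q.map(v => v * scale);
}
```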

&lt;p&gt;&lt;strong&gt;Gimlet-Hetero (Layer-wise Heterogeneous Design)&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This came directly from the Gimlet Labs paper “Efficient and Scalable Agentic AI with Heterogeneous Systems” (arXiv:2507.19635v1). The paper discusses how mixing different hardware tiers can optimize cost and performance for AI agents. I adapted that systems-level idea of heterogeneous resource allocation to transformer layers: giving wider MLP blocks and different precision levels to middle layers versus early and late layers. The idea was to allocate capacity where it mattered most.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TurboQuant&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
This was inspired by Google Research’s TurboQuant work on extreme compression, particularly for KV cache and vector search. I tried to adapt similar aggressive compression principles to weight quantization during training, hoping to push even more compression while maintaining stability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bayesian Backoff + TT Adapters&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
These came from research on dynamic correction mechanisms and low-rank decompositions (Tensor-Train). The goal was to add “smart recovery” during or after training to fix quality lost during quantization.&lt;/p&gt;

&lt;p&gt;Some of these ideas were quite wild. A few came from unusual inspirations and might still be viable if explored further with more experience and compute. Int4 ultimately became my strongest contender, but none of them delivered the breakthrough I was hoping for.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Evolutionary Agent
&lt;/h3&gt;

&lt;p&gt;At one point I got tired of manual tweaking and built an autonomous evolutionary agent. The system could mark sections of code, generate mutations, run fast tests on Colab, rank them by real performance, and iterate.&lt;/p&gt;
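&lt;p&gt;The core loop can be sketched in a few lines (illustrative only; &lt;code&gt;mutate&lt;/code&gt; and &lt;code&gt;score&lt;/code&gt; stand in for the real code-mutation and Colab-benchmark steps):&lt;/p&gt;

```javascript
// Sketch of the evolutionary loop described above: generate mutations,
// rank them by measured fitness, keep the best, iterate.
function evolve(seed, mutate, score, opts = {}) {
  const generations = opts.generations ?? 5;
  const children = opts.children ?? 4;
  let best = { candidate: seed, fitness: score(seed) };
  let g = generations;
  while (g > 0) {
    g -= 1;
    let c = children;
    while (c > 0) {
      c -= 1;
      const candidate = mutate(best.candidate);
      const fitness = score(candidate);
      if (fitness > best.fitness) best = { candidate, fitness };
    }
  }
  return best;
}
```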

&lt;p&gt;It was technically interesting and worked mechanically, but after several generations I realized I was mostly automating the exploration of a weak search space. The gains were too small to justify the time I was spending on it, especially with very limited Colab quota. I shelved the agent. That was an important lesson: just because something can be automated does not mean it is the best use of limited time and compute.&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Learned
&lt;/h3&gt;

&lt;p&gt;My biggest mistake was choosing hard, experimental paths instead of first deeply understanding and building upon what was already working well on the leaderboard. As an amateur, I thought innovation meant doing something completely different. I now understand that you earn the right to innovate by first mastering proven approaches and then improving upon them.&lt;/p&gt;

&lt;p&gt;I got close. My best runs projected to around 1.21 to 1.25 BPB on full hardware. That would have been a respectable non-record submission, but I never quite broke into true leaderboard territory. I also did not receive RunPod credits until the very end, which limited how much I could validate on real hardware.&lt;/p&gt;

&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;Parameter Golf was a humbling but valuable experience. I explored a lot, built some interesting systems along the way, and gained a much clearer sense of where to focus effort when resources are limited.&lt;/p&gt;

&lt;p&gt;The repository is public if you want to see the full journey:&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/jmoncayo-pursuit/parameter-golf-uniform-int4" rel="noopener noreferrer"&gt;https://github.com/jmoncayo-pursuit/parameter-golf-uniform-int4&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am still experimenting and still learning. Next time, I will be wiser about balancing bold exploration with proven foundations.&lt;/p&gt;

</description>
      <category>parametergolf</category>
      <category>tinyllm</category>
      <category>aiexperimentation</category>
      <category>quantization</category>
    </item>
    <item>
      <title>Breaking the Tether: How I Built a Neural Bridge for Antigravity with Gemini Multimodal Live</title>
      <dc:creator>Jean</dc:creator>
      <pubDate>Sun, 15 Mar 2026 18:46:33 +0000</pubDate>
      <link>https://forem.com/jmoncayopursuit/breaking-the-tether-how-i-built-a-neural-bridge-for-antigravity-with-gemini-multimodal-live-4nia</link>
      <guid>https://forem.com/jmoncayopursuit/breaking-the-tether-how-i-built-a-neural-bridge-for-antigravity-with-gemini-multimodal-live-4nia</guid>
      <description>&lt;p&gt;&lt;em&gt;Disclaimer: I created this piece of content specifically for the purposes of entering the Gemini Multimodal Live API Developer Challenge. #GeminiLiveAgentChallenge&lt;/em&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  &lt;strong&gt;The Problem: The Tethered Developer&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fricxim4iozscnvluvi5j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fricxim4iozscnvluvi5j.png" alt="Technical setup showing Antigravity IDE on a laptop mirrored to a smartphone via a glowing blue digital neural bridge" width="640" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nexus Comm-Link&lt;/strong&gt; is the result: a real-time, bidirectional bridge between the &lt;strong&gt;Antigravity IDE&lt;/strong&gt; and any mobile device, powered by the &lt;strong&gt;Gemini Multimodal Live API&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How it Works: The Neural Bridge&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;At its core, Nexus Comm-Link isn't just a remote desktop; it’s a context-aware partner. I built a tiered architecture to ensure that the mobile device doesn't just see pixels, but understands the &lt;strong&gt;intent&lt;/strong&gt; of the workspace.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;1. The Multimodal Engine (Gemini 2.0)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Using the &lt;code&gt;BidiGenerateContent&lt;/code&gt; endpoint, the system maintains a high-speed vision and audio stream. I configured it to ingest 1 FPS vision snapshots while processing bidirectional PCM audio. This allows you to walk away from your desk, show your phone a bug on another screen, and have Gemini analyze it through your mobile camera while knowing exactly what is happening in your IDE.&lt;/p&gt;
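&lt;p&gt;The 1 FPS cadence can be sketched as a simple throttle (a hedged sketch, not the actual bridge code; &lt;code&gt;send&lt;/code&gt; stands in for whatever pushes a snapshot onto the live session):&lt;/p&gt;

```javascript
// Sketch of the 1 FPS snapshot gate implied above: frames arrive at camera
// rate, but only one per interval is forwarded to the live session.
function makeFrameGate(send, intervalMs = 1000) {
  let last = -Infinity;
  return function onFrame(frame, now = Date.now()) {
    if (now - last >= intervalMs) {
      last = now;
      send(frame);
      return true;
    }
    return false; // dropped: keeps the vision stream at roughly 1 FPS
  };
}
```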

&lt;h4&gt;
  
  
  &lt;strong&gt;2. Context Coupling via CDP&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The secret sauce is the &lt;strong&gt;Chrome DevTools Protocol (CDP)&lt;/strong&gt;. Instead of just sending a video feed, the bridge traverses the IDE's execution context. It extracts "Thought Blocks"—hidden internal reasoning states where the IDE assistant documents its plan. By feeding these directly into Gemini's grounding context via CDP, the voice on your phone stays perfectly synced with the machine on your desk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example of a "Thought Block" extracted via CDP:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"thought_block"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"active"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"content"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Analyzing user request for refactor... identifying target function 'calculateTotal' in utils.js..."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
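&lt;p&gt;One plausible way to fold such blocks into grounding context (field names follow the example above; the real bridge's shape may differ):&lt;/p&gt;

```javascript
// Illustrative helper: keep only active thought blocks and join their
// contents into one grounding string sent alongside the audio/vision stream.
function buildGroundingContext(blocks) {
  return blocks
    .filter(b => b.type === 'thought_block' ? b.status === 'active' : false)
    .map(b => b.content)
    .join('\n');
}
```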



&lt;h4&gt;
  
  
  &lt;strong&gt;3. The Action Relay (Tool Calling)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;One of the most satisfying parts of building this was implementing &lt;strong&gt;Action Relay&lt;/strong&gt;. By defining custom tools in the Gemini SDK, I enabled "Voice-to-Action." You can say, &lt;em&gt;"Apply that fix"&lt;/em&gt; or &lt;em&gt;"Trigger an undo"&lt;/em&gt; while you're in the other room, and the bridge translates that voice intent into a physical browser event in the IDE instance.&lt;/p&gt;
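&lt;p&gt;A minimal sketch of such a relay (the tool names and dispatch target here are hypothetical, not the actual Nexus Comm-Link registry):&lt;/p&gt;

```javascript
// Sketch of "Voice-to-Action": Gemini resolves speech into a tool call,
// and the bridge maps the tool name onto a browser-level event in the IDE.
const actionRelay = {
  apply_fix: dispatch => dispatch({ type: 'click', target: 'accept-button' }),
  trigger_undo: dispatch => dispatch({ type: 'keydown', key: 'ctrl+z' }),
};

function handleToolCall(name, dispatch) {
  const action = actionRelay[name];
  if (!action) return { ok: false, error: `unknown tool: ${name}` };
  action(dispatch);
  return { ok: true };
}
```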

&lt;h3&gt;
  
  
  &lt;strong&gt;The Stack&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Backend&lt;/strong&gt;: &lt;strong&gt;Node.js&lt;/strong&gt; and &lt;strong&gt;WebSockets&lt;/strong&gt; on &lt;strong&gt;Google Cloud Run&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cloud Infrastructure&lt;/strong&gt;: &lt;strong&gt;Google Cloud Build&lt;/strong&gt; and &lt;strong&gt;Vertex AI&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Terminal Hub&lt;/strong&gt;: A custom &lt;strong&gt;Python&lt;/strong&gt; tactical hub that manages automated linking for &lt;strong&gt;macOS&lt;/strong&gt;, &lt;strong&gt;Windows&lt;/strong&gt;, and &lt;strong&gt;Linux&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What I Learned&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Building this project taught me that the future of dev tools isn't in better UIs, but in better &lt;strong&gt;mobility of context&lt;/strong&gt;. When the AI has eyes (Vision) and ears (Audio) that are physically detached from the screen but logically attached to the code, the "workspace" becomes something you inhabit, not just something you look at.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Watch it in Action&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Check out the full technical demo here: &lt;a href="https://youtu.be/6xicXxh3-kY" rel="noopener noreferrer"&gt;https://youtu.be/6xicXxh3-kY&lt;/a&gt;&lt;br&gt;
See the submission to the hackathon here: &lt;a href="https://devpost.com/software/nexus-comm-link" rel="noopener noreferrer"&gt;https://devpost.com/software/nexus-comm-link&lt;/a&gt;&lt;br&gt;
GitHub repo here: &lt;a href="https://github.com/jmoncayo-pursuit/Nexus-Comm-Link" rel="noopener noreferrer"&gt;https://github.com/jmoncayo-pursuit/Nexus-Comm-Link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'd love to hear how you'd use a detached multimodal bridge in your own workflow!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Special thanks to the Google DeepMind team for providing such a low-latency multimodal playground!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>geminiliveagentchallenge</category>
      <category>googlecloud</category>
      <category>ai</category>
      <category>showdev</category>
    </item>
    <item>
      <title>Building Keep-It-Moving: My First VS Code Extension</title>
      <dc:creator>Jean</dc:creator>
      <pubDate>Sat, 26 Jul 2025 04:25:21 +0000</pubDate>
      <link>https://forem.com/jmoncayopursuit/building-keep-it-moving-my-first-vs-code-extension-k65</link>
      <guid>https://forem.com/jmoncayopursuit/building-keep-it-moving-my-first-vs-code-extension-k65</guid>
      <description>&lt;p&gt;VS Code extensions aren't supposed to run servers. I tried it anyway.&lt;/p&gt;

&lt;p&gt;Check out the full repo at &lt;a href="https://github.com/jmoncayo-pursuit/keep-it-moving" rel="noopener noreferrer"&gt;https://github.com/jmoncayo-pursuit/keep-it-moving&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdiq0j4pin169rg78go9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdiq0j4pin169rg78go9.gif" alt="KIM Demo - Complete workflow from VS Code extension to mobile prompting" width="720" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Demo showing the complete KIM workflow: VS Code extension → QR code pairing → mobile prompting → Copilot integration&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I built Keep-It-Moving (KIM) to solve a simple problem: sending GitHub Copilot prompts from my phone. What started as "wouldn't it be nice if..." became an exploration of what's possible when you embed a WebSocket server inside a VS Code extension.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Development Journey
&lt;/h2&gt;

&lt;p&gt;This was my first VS Code extension, built with intentional AI collaboration. The initial idea was straightforward - remote Copilot prompting. The implementation revealed layers I hadn't expected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Actually Built:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Embedded WebSocket server running inside VS Code extension&lt;/li&gt;
&lt;li&gt;Self-hosted Progressive Web App served directly from the extension
&lt;/li&gt;
&lt;li&gt;QR code pairing system with UUID authentication&lt;/li&gt;
&lt;li&gt;Real-time prompt relay to GitHub Copilot chat&lt;/li&gt;
&lt;li&gt;Dynamic port discovery with intelligent fallback&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What I Discovered I Couldn't Build (Yet):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;File context reading from VS Code workspace&lt;/li&gt;
&lt;li&gt;A full GitHub Copilot alternative (shelved after realizing the scope)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical Innovation and Happy Accidents
&lt;/h2&gt;

&lt;p&gt;The core breakthrough wasn't planned - it emerged from constraints. VS Code extensions typically can't run servers, but Node.js modules work fine in the extension context. So I embedded a full WebSocket server using the &lt;code&gt;ws&lt;/code&gt; library directly in the extension.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// This shouldn't work, but it does&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;WebSocket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ws&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;WebSocket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Server&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;availablePort&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3iajofl9x2ggjageosl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3iajofl9x2ggjageosl.png" alt="KIM Control Panel showing server status and pairing management" width="580" height="1572"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The VS Code control panel showing server status, pairing code, and management controls&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The PWA self-hosting was born from necessity. External hosting would break the local-first promise, so the extension serves its own web interface. QR codes point to &lt;code&gt;http://localhost:8080&lt;/code&gt; - your extension becomes your server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Multi-Device Surprise:&lt;/strong&gt;&lt;br&gt;
One of the most impressive features happened by accident. The architecture naturally supported multiple devices because WebSocket connections are stateless. What looked like intentional design was actually emergent behavior from good architectural decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Collaboration Lessons
&lt;/h2&gt;

&lt;p&gt;Working with AI on this project taught me to be vigilant about feature creep and exaggerated claims. Early iterations included grandiose descriptions of capabilities I hadn't actually built. I learned to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate every AI-suggested feature against actual implementation&lt;/li&gt;
&lt;li&gt;Remove marketing language that overstated capabilities&lt;/li&gt;
&lt;li&gt;Focus documentation on what actually works, not what sounds impressive
&lt;/li&gt;
&lt;li&gt;Test claims before committing them to the repository&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI wanted to describe KIM as "revolutionary" and "first-of-its-kind." I kept pulling it back to factual descriptions of what I actually built.&lt;/p&gt;

&lt;h2&gt;
  
  
  Joyful Design Constraints
&lt;/h2&gt;

&lt;p&gt;The project emphasized joyful user experience throughout:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Emoji-driven feedback (🚀📱🎉) &lt;/li&gt;
&lt;li&gt;Playful error messages like "Your coding session took a coffee break! ☕"&lt;/li&gt;
&lt;li&gt;Seamless QR code pairing that "just works"&lt;/li&gt;
&lt;li&gt;Delightful micro-interactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These constraints made development more enjoyable and the codebase something to look forward to reading.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture That Emerged
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   Mobile PWA    │    │   VS Code Ext    │    │ GitHub Copilot  │
│                 │    │                  │    │                 │
│ ┌─────────────┐ │    │ ┌──────────────┐ │    │ ┌─────────────┐ │
│ │ Prompt Input│ │───▶│ │ WebSocket    │ │───▶│ │ Chat API    │ │
│ └─────────────┘ │    │ │ Server       │ │    │ └─────────────┘ │
│                 │    │ └──────────────┘ │    │                 │
│ ┌─────────────┐ │    │ ┌──────────────┐ │    │                 │
│ │ QR Scanner  │ │◀───│ │ PWA Server   │ │    │                 │
│ └─────────────┘ │    │ └──────────────┘ │    │                 │
└─────────────────┘    └──────────────────┘    └─────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three components, local network only, zero cloud dependencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Learning Outcomes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;VS Code Extension APIs&lt;/strong&gt;: First extension taught me the extension lifecycle and webview patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebSocket Architecture&lt;/strong&gt;: Learned to handle connection state, authentication, and message routing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Progressive Web Apps&lt;/strong&gt;: Built responsive mobile interface with offline capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local-First Development&lt;/strong&gt;: Solved networking without external dependencies&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;KIM works, but it's not marketplace-ready. The next phase involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User testing with real developers&lt;/li&gt;
&lt;li&gt;File context integration (reading current VS Code workspace)
&lt;/li&gt;
&lt;li&gt;Performance optimization for larger teams&lt;/li&gt;
&lt;li&gt;Marketplace submission process&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I submitted this to GitHub's "For the Love of Code" hackathon - wish me luck! &lt;/p&gt;

&lt;p&gt;This was my first VS Code extension, built in collaboration with AI. It does something that shouldn't be possible and works reliably. Sometimes the best engineering happens when you ignore conventional wisdom and just try anyway.&lt;/p&gt;

</description>
      <category>vscode</category>
      <category>developertools</category>
      <category>buildinpublic</category>
      <category>kiro</category>
    </item>
    <item>
      <title>Automating Persistent AI On the Fly</title>
      <dc:creator>Jean</dc:creator>
      <pubDate>Sat, 28 Jun 2025 02:40:31 +0000</pubDate>
      <link>https://forem.com/jmoncayopursuit/automating-persistent-ai-on-the-fly-1i0d</link>
      <guid>https://forem.com/jmoncayopursuit/automating-persistent-ai-on-the-fly-1i0d</guid>
      <description>&lt;p&gt;Most developers stop when their AI assistant says "I cant do that." I didn't.&lt;/p&gt;

&lt;p&gt;Check out the full repo at&lt;br&gt;&lt;br&gt;
&lt;a href="https://github.com/jmoncayo-pursuit/market-data-api" rel="noopener noreferrer"&gt;https://github.com/jmoncayo-pursuit/market-data-api&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;For a recent take-home assignment I built a financial market data microservice with FastAPI, Docker Compose, PostgreSQL, Redis, Kafka, and pytest. The real engineering happened in agent mode with Cursor AI, where I taught the tool to recover from CI failures and document each fix.&lt;/p&gt;
&lt;h2&gt;
  
  
  Purpose-Driven AI Orchestration
&lt;/h2&gt;

&lt;p&gt;This was not casual "vibe coding." It was intentional:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Drafted a PRD from the assignment requirements and referred to it in every session
&lt;/li&gt;
&lt;li&gt;Defined project rules before writing code
&lt;/li&gt;
&lt;li&gt;Fed Cursor AI official GitHub Actions docs like
&lt;a href="https://docs.github.com/en/actions/reference/accessing-contextual-information-about-workflow-runs" rel="noopener noreferrer"&gt;Accessing workflow context&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Vigilantly tracked file creation to avoid duplicate filenames
&lt;/li&gt;
&lt;li&gt;Reviewed and validated every AI generated fix
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each prompt had a clear goal. Each response was reviewed. No autopilot.&lt;/p&gt;
&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Stack&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FastAPI with dependency injection
&lt;/li&gt;
&lt;li&gt;PostgreSQL via SQLAlchemy ORM
&lt;/li&gt;
&lt;li&gt;Redis caching and job status store
&lt;/li&gt;
&lt;li&gt;Apache Kafka with confluent-kafka-python
&lt;/li&gt;
&lt;li&gt;Docker Compose orchestrating all services
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Testing&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;278+ comprehensive tests with full integration coverage
&lt;/li&gt;
&lt;li&gt;Endpoint tests covering authentication flows
&lt;/li&gt;
&lt;li&gt;Service layer tests for Kafka producer and consumer patterns
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CI/CD&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Actions workflow with dynamic retry logic
&lt;/li&gt;
&lt;li&gt;Solved rate limiter initialization failures in CI
&lt;/li&gt;
&lt;li&gt;Handled API authentication mismatches (401 and 403 errors)
&lt;/li&gt;
&lt;li&gt;Added Redis connection timeout handling
&lt;/li&gt;
&lt;li&gt;Mocked Kafka services to avoid external dependencies
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Observability&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prometheus metrics integration for request rates and error counts
&lt;/li&gt;
&lt;li&gt;Grafana dashboard showcasing service health and throughput
&lt;/li&gt;
&lt;li&gt;Structured logging formatted for ELK consumption
&lt;/li&gt;
&lt;li&gt;Health check endpoints for service readiness
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Docs &amp;amp; Deliverables&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Swagger UI and OpenAPI spec
&lt;/li&gt;
&lt;li&gt;Complete Postman collection with environment configs
&lt;/li&gt;
&lt;li&gt;Alembic migrations for schema management
&lt;/li&gt;
&lt;li&gt;GitHub Actions workflows with health check steps
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://mermaid.live/edit#pako:eNpNkU9z2yAQxb8Kw9n2yMi2gg6dUeR_aeyOm-ZUlAMj1hJjCTQIPG09_u7BWElzg_d7u_sWLrjUAnCKK8O7Gr0-FgqhjK15b7PDE8q67g2Nx9_QIzvo3lYGfv3cvQVPkHP2AkL2KOdlDV_0JXvmxxNHB6OFK8EEtAxoNaBX3ckyRZ2RJYzhDMr2wbUKrvXgyrXqXTs0WAe0YXt9lqpC2RkMr8APb0rXcKvvrs098P8wW-ZjtGBrcPcR2yA_sY3hR674l9zfWeaEtGinq2oYegfP7IVbQDvZSjuAPIAdC7ujvW_0UbJnW-CNrVFeQ3nyYnBmN_SDrYzRBm25Es0nwSP__lLg1BoHI-z3bfntii-3mgL76C0UOPVHwc2pwIW6-pqOq99atx9lRruqxumRN72_uU74xEvJ_c-2n6oBJcDk2imLUxqFHji94D84JfPZhCxIMptOo4eIEpqM8F8vk8k0ojSOY0riaEEpuY7wvzA2mlAync-TaPEwm8fJLCHXd7mPtAU" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmermaid.ink%2Fimg%2Fpako%3AeNpNkU9z2yAQxb8Kw9n2yMi2gg6dUeR_aeyOm-ZUlAMj1hJjCTQIPG09_u7BWElzg_d7u_sWLrjUAnCKK8O7Gr0-FgqhjK15b7PDE8q67g2Nx9_QIzvo3lYGfv3cvQVPkHP2AkL2KOdlDV_0JXvmxxNHB6OFK8EEtAxoNaBX3ckyRZ2RJYzhDMr2wbUKrvXgyrXqXTs0WAe0YXt9lqpC2RkMr8APb0rXcKvvrs098P8wW-ZjtGBrcPcR2yA_sY3hR674l9zfWeaEtGinq2oYegfP7IVbQDvZSjuAPIAdC7ujvW_0UbJnW-CNrVFeQ3nyYnBmN_SDrYzRBm25Es0nwSP__lLg1BoHI-z3bfntii-3mgL76C0UOPVHwc2pwIW6-pqOq99atx9lRruqxumRN72_uU74xEvJ_c-2n6oBJcDk2imLUxqFHji94D84JfPZhCxIMptOo4eIEpqM8F8vk8k0ojSOY0riaEEpuY7wvzA2mlAync-TaPEwm8fJLCHXd7mPtAU%3Ftype%3Dpng" alt="Architecture" width="1056" height="694"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Data Flow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://mermaid.live/edit#pako:eNptUl1zmzAQ_Cs3eiYO4E8003Qcu9OvpHHi9qXDiwIX0BgkKglPXY__e4WA0CF9g7vd2709nUkiUySUaPxVo0hwy1mmWBkLgIopwxNeMWFgU3AUZlxd7z6_AbIkx3FxezuufGUvB_aGK4WuS1Tj-r0U3Ehbbhqtk6ubGytO4eOH73BdKZ6gvi6YQW3e61P5LIt36_XursFbmAU7WxQ2OSYHSHqPrOgMwyfuloP296of_4SmVqIlpOB0GhgWGjviPde6ZbZC21sKjzWqE7R2BhLYGMaDXRNSZti_Mzqze7szAheDXxTpsFKXCoU7mUGJxo7SfbcZ4nKi8GX_8A0U6sqGi_-LcPewHzKsZFEMEu5KFlE_F1znnVs8di_BdRul7my0P-AA6VtdMBtWJHWTCzCRgnb7lfLIRQbsiIplOCK97vijShtatybxSKZ4SqhRNXrEYkvW_JJzw4-JybHEmFD7mTJ1iEksLpZjH9NPKcuepmSd5YS-MHtNj9ROoXv-r1VlI0e1kbUwhEYzN4PQM_lNaDifTcJFuJwFgb_yozBaeuRky-Ek8KNoOp1G4dRfRFF48cgfJ-tPojCYz5f-YrUKZ8vFKrj8BWXAJl0" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmermaid.ink%2Fimg%2Fpako%3AeNptUl1zmzAQ_Cs3eiYO4E8003Qcu9OvpHHi9qXDiwIX0BgkKglPXY__e4WA0CF9g7vd2709nUkiUySUaPxVo0hwy1mmWBkLgIopwxNeMWFgU3AUZlxd7z6_AbIkx3FxezuufGUvB_aGK4WuS1Tj-r0U3Ehbbhqtk6ubGytO4eOH73BdKZ6gvi6YQW3e61P5LIt36_XursFbmAU7WxQ2OSYHSHqPrOgMwyfuloP296of_4SmVqIlpOB0GhgWGjviPde6ZbZC21sKjzWqE7R2BhLYGMaDXRNSZti_Mzqze7szAheDXxTpsFKXCoU7mUGJxo7SfbcZ4nKi8GX_8A0U6sqGi_-LcPewHzKsZFEMEu5KFlE_F1znnVs8di_BdRul7my0P-AA6VtdMBtWJHWTCzCRgnb7lfLIRQbsiIplOCK97vijShtatybxSKZ4SqhRNXrEYkvW_JJzw4-JybHEmFD7mTJ1iEksLpZjH9NPKcuepmSd5YS-MHtNj9ROoXv-r1VlI0e1kbUwhEYzN4PQM_lNaDifTcJFuJwFgb_yozBaeuRky-Ek8KNoOp1G4dRfRFF48cgfJ-tPojCYz5f-YrUKZ8vFKrj8BWXAJl0%3Ftype%3Dpng" alt="Data Flow" width="1541" height="863"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Troubleshooting CI Failures
&lt;/h2&gt;

&lt;p&gt;To give Cursor AI the context it needed, I added this snippet to my internal docs so it could follow our custom CI monitoring flow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1. List the most recent failed run&lt;/span&gt;
gh run list &lt;span class="nt"&gt;--status&lt;/span&gt; failure &lt;span class="nt"&gt;--limit&lt;/span&gt; 1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--json&lt;/span&gt; databaseId,status,conclusion,createdAt,url &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; last_run.json &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cat &lt;/span&gt;last_run.json

&lt;span class="c"&gt;# 2. View logs of the failed run&lt;/span&gt;
gh run view &amp;lt;RUN_ID&amp;gt; &lt;span class="nt"&gt;--log-failed&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; last_run.log &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-100&lt;/span&gt; last_run.log

&lt;span class="c"&gt;# 3. Compute sleep duration based on last job timing&lt;/span&gt;
&lt;span class="nv"&gt;last_duration&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;gh run view &amp;lt;RUN_ID&amp;gt; &lt;span class="nt"&gt;--json&lt;/span&gt; timing &lt;span class="se"&gt;\&lt;/span&gt;
  | jq .timing.totalDuration&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;sleep&lt;/span&gt; &lt;span class="k"&gt;$((&lt;/span&gt; last_duration &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt; &lt;span class="k"&gt;))&lt;/span&gt;

&lt;span class="c"&gt;# 4. Rerun only the failed jobs&lt;/span&gt;
gh run rerun &amp;lt;RUN_ID&amp;gt; &lt;span class="nt"&gt;--failed&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cursor AI learned to watch logs, wait intelligently, retry failures, and avoid wasted prompts. It even adapted when the Redis connection timed out or the rate limiter threw event loop errors.&lt;/p&gt;
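&lt;p&gt;That retry behavior can be sketched as a small backoff helper (a simplified stand-in for illustration; the real service wires this into the Redis client, and redis-py also ships its own retry configuration):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import random
import time


def with_backoff(fn, retries=5, base_delay=0.5, transient=(ConnectionError, TimeoutError)):
    """Call fn, retrying transient failures with exponential backoff plus jitter."""
    for attempt in range(retries):
        try:
            return fn()
        except transient:
            if attempt == retries - 1:
                raise  # out of retries: surface the error
            # Double the wait each attempt; jitter avoids synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In a setup like this, wrapping the initial connection ping in &lt;code&gt;with_backoff&lt;/code&gt; lets the service ride out brief outages instead of failing fast.&lt;/p&gt;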

&lt;h2&gt;
  
  
  Timeline and Complexity
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Built a production-grade microservice with a streaming data pipeline&lt;/li&gt;
&lt;li&gt;Implemented full CI/CD with health checks and self-healing retries&lt;/li&gt;
&lt;li&gt;Created a suite of 278+ tests covering all service layers&lt;/li&gt;
&lt;li&gt;Integrated observability, docs, and migrations in under two weeks&lt;/li&gt;
&lt;/ul&gt;
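&lt;p&gt;To give a flavor of how a test suite like that covers the streaming layer (illustrative only; the consumer's &lt;code&gt;poll&lt;/code&gt; signature below is hypothetical), a Kafka consumer can be exercised without a live broker by mocking the client:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;from unittest.mock import MagicMock


def consume_batch(consumer, max_messages=10):
    """Drain up to max_messages from a consumer and decode the payloads."""
    records = []
    for _ in range(max_messages):
        msg = consumer.poll(timeout_ms=100)  # hypothetical signature
        if msg is None:
            break
        records.append(msg.value.decode("utf-8"))
    return records


# In a test, a MagicMock stands in for the real consumer:
consumer = MagicMock()
message = MagicMock()
message.value = b'{"symbol": "AAPL", "price": 100.0}'
consumer.poll.side_effect = [message, None]

batch = consume_batch(consumer)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;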

&lt;h2&gt;
  
  
  Real Learning Outcomes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Learned to handle Redis connection failures with retries and exponential backoff&lt;/li&gt;
&lt;li&gt;Mastered Kafka consumer group management and mocking techniques&lt;/li&gt;
&lt;li&gt;Implemented async/await patterns for high throughput&lt;/li&gt;
&lt;li&gt;Built robust error handling for external API dependencies&lt;/li&gt;
&lt;/ul&gt;
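&lt;p&gt;The async/await pattern above, roughly sketched (the fetch function is a stand-in for the real external API call; the symbol names are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import asyncio


async def fetch_quote(symbol):
    # Stand-in for an awaited HTTP call to the external market data API.
    await asyncio.sleep(0.01)
    return {"symbol": symbol, "price": 100.0}


async def fetch_all(symbols, max_concurrency=10):
    # A semaphore bounds concurrency so bursts of requests don't exhaust
    # connections or trip the rate limiter.
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(symbol):
        async with sem:
            return await fetch_quote(symbol)

    return await asyncio.gather(*(bounded(s) for s in symbols))


quotes = asyncio.run(fetch_all(["AAPL", "MSFT", "GOOG"]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Requests overlap instead of running sequentially, so total latency is close to that of the slowest call rather than the sum of all calls.&lt;/p&gt;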

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It is time to rethink what an AI-native builder can deliver. By defining clear goals, keeping a living PRD, feeding in the right documentation, and guiding each AI step, you can ship production-level code with AI as your teammate. Demand more of your tools, hold them to your standards, and you will build resilient systems that keep moving forward, no matter what.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/jmoncayo-pursuit/market-data-api/actions/workflows/ci.yml" rel="noopener noreferrer"&gt;&lt;img src="https://github.com/jmoncayo-pursuit/market-data-api/actions/workflows/ci.yml/badge.svg" alt="CI/CD Pipeline" width="157" height="20"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>fastapi</category>
      <category>python</category>
    </item>
  </channel>
</rss>
