<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: MOHIT BHAT</title>
    <description>The latest articles on Forem by MOHIT BHAT (@mbcse).</description>
    <link>https://forem.com/mbcse</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F464333%2Fee16418e-d4e3-4943-b965-db464dff00d9.jpeg</url>
      <title>Forem: MOHIT BHAT</title>
      <link>https://forem.com/mbcse</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/mbcse"/>
    <language>en</language>
    <item>
      <title>Building a Web3 Camera That Proves Photos Are Real, Powered by Gemini CLI</title>
      <dc:creator>MOHIT BHAT</dc:creator>
      <pubDate>Thu, 05 Mar 2026 08:53:04 +0000</pubDate>
      <link>https://forem.com/mbcse/building-a-web3-camera-that-proves-photos-are-real-powered-by-gemini-cli-465j</link>
      <guid>https://forem.com/mbcse/building-a-web3-camera-that-proves-photos-are-real-powered-by-gemini-cli-465j</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/mlh-built-with-google-gemini-02-25-26"&gt;Built with Google Gemini: Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;What I Built with Google Gemini&lt;/h2&gt;

&lt;p&gt;So I built this thing called &lt;strong&gt;LensMint&lt;/strong&gt;. It's a physical Web3 camera that solves something I've been thinking about for a while: in a world full of AI-generated images, how do you actually &lt;em&gt;prove&lt;/em&gt; a photo is real?&lt;/p&gt;

&lt;p&gt;The concept is pretty simple, but building it was honestly a wild ride. Picture a Raspberry Pi 4 with a camera module strapped to it, a little touchscreen display, and a whole lot of cryptography running underneath. You snap a photo, and the camera cryptographically signs it with a key that's literally derived from the hardware itself. Then it uploads to Filecoin for permanent storage, mints an ERC-1155 NFT on Ethereum, kicks off a zero-knowledge proof to verify authenticity on-chain, and throws a QR code on screen so anyone nearby can claim their own edition. All of that happens in seconds, right there on the device.&lt;/p&gt;

&lt;p&gt;Now here's the thing. This project ended up at more than 7,000 lines of code across Python, JavaScript, Solidity, and bash scripts. Hardware stuff, blockchain contracts, ZK proof pipelines, decentralized storage, a full-stack web app. That's a LOT of ground to cover for one person.&lt;/p&gt;

&lt;p&gt;The reason I was able to actually ship it? &lt;strong&gt;Google Gemini CLI.&lt;/strong&gt; It became my constant companion throughout this build. Not just for generating code, but for thinking through architecture, debugging weird errors at 2am, and navigating SDKs I'd never touched before. Let me walk you through the whole thing.&lt;/p&gt;

&lt;h2&gt;Demo&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GitHub Repository:&lt;/strong&gt; &lt;a href="https://github.com/mbcse/lensmint-camera-gemini" rel="noopener noreferrer"&gt;lensmint-camera-gemini&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Video Demo:&lt;/strong&gt; &lt;a href="https://youtu.be/P_JeDRHE-kQ" rel="noopener noreferrer"&gt;Watch on YouTube&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;🔧 Hardware Components&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3a3y86ojt0wv15rkz32.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3a3y86ojt0wv15rkz32.JPG" alt="Raspberry Pi 4 Board" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3guu48sfsgikbuafc1rf.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3guu48sfsgikbuafc1rf.JPG" alt="Camera Module" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxt4oxihddsxeel06nco.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxt4oxihddsxeel06nco.JPG" alt="3.5 inch Touchscreen Display" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyaxcnbvrmf5y5ihiic3.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzyaxcnbvrmf5y5ihiic3.JPG" alt="UPS HAT Battery Module" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4u8e8bo3mc1hoyhdp3g8.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4u8e8bo3mc1hoyhdp3g8.JPG" alt="Power Supply and Cables" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bkywy3ug0i9zqx7uzei.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bkywy3ug0i9zqx7uzei.JPG" alt="SD Card and Storage" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq65d14culhnsy6ozx6w.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjq65d14culhnsy6ozx6w.JPG" alt="All Components Together" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;💻 Backend Screenshots&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l9gbwfoztiltnxmcb9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3l9gbwfoztiltnxmcb9u.png" alt="Backend Screenshot 1" width="800" height="695"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8b1s5h8cf73pgr99aw5k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8b1s5h8cf73pgr99aw5k.png" alt="Backend Screenshot 2" width="800" height="681"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsdtlemm46aytbmrpi4r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsdtlemm46aytbmrpi4r.png" alt="Backend Screenshot 3" width="800" height="681"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro6s6sxnpv70xxyqgx71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro6s6sxnpv70xxyqgx71.png" alt="Backend Screenshot 4" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqa4konifhph0vi7f8p60.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqa4konifhph0vi7f8p60.png" alt="Backend Screenshot 5" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feahy5aiinqz8b87cmqbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feahy5aiinqz8b87cmqbl.png" alt="Backend Screenshot 6" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw2xtu3ro0a817smb6mp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw2xtu3ro0a817smb6mp.png" alt="Backend Screenshot 7" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frczef2p3u6s9xsg834ot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frczef2p3u6s9xsg834ot.png" alt="Backend Screenshot 8" width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fif2rj53zs611ewmuatl5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fif2rj53zs611ewmuatl5.png" alt="Backend Screenshot 9" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;📷 Physical Camera Final Images&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3li5f5clbvjdr6i7uyk6.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3li5f5clbvjdr6i7uyk6.JPG" alt="Camera Front View" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno36z5p0297zq9khsslz.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fno36z5p0297zq9khsslz.JPG" alt="Camera Side View" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ysfpfri4v87cret8vdm.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ysfpfri4v87cret8vdm.JPG" alt="Camera with Display" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frl2nwb8iej7x30oqit88.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frl2nwb8iej7x30oqit88.JPG" alt="Camera UI Preview" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xwrnwqdlihx1966y5an.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xwrnwqdlihx1966y5an.JPG" alt="Camera Capture Mode" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuisldu2vez3wxhhcl1fr.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuisldu2vez3wxhhcl1fr.JPG" alt="Camera QR Code Display" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9htd0he6csvz6g4aujvy.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9htd0he6csvz6g4aujvy.JPG" alt="Camera Back View" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvlea7aigi6v3ob9o815.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftvlea7aigi6v3ob9o815.JPG" alt="Camera in Action 1" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzle7rj2evmy0zl11qqtz.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzle7rj2evmy0zl11qqtz.JPG" alt="Camera in Action 2" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmwfhun0b9vhp6pjzpur.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmwfhun0b9vhp6pjzpur.JPG" alt="Camera in Action 3" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnusyyzm8dy7tzmlheadl.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnusyyzm8dy7tzmlheadl.JPG" alt="Camera in Action 4" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferfgnes0spk39p5ccbcx.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ferfgnes0spk39p5ccbcx.JPG" alt="Camera Full Setup" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtht64mtspzgsj9hyzq2.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtht64mtspzgsj9hyzq2.JPG" alt="Camera Final Build" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;The Problem LensMint Solves&lt;/h2&gt;

&lt;p&gt;Think about this for a second. Right now, anyone can fire up Midjourney or DALL-E and generate a photorealistic image of basically anything. That's cool for art, but it's a nightmare for trust. A photojournalist captures something incredible in the field, and people ask "is that even real?" A photographer tries to license their work, and clients wonder if AI made it. Event organizers have no reliable way to prove who actually showed up.&lt;/p&gt;

&lt;p&gt;This bugged me enough to build something about it. LensMint attacks it from four angles:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proof of Authenticity.&lt;/strong&gt; Every photo is signed by a cryptographic key that's derived from the physical hardware itself (the CPU serial number, MAC address, camera sensor properties, and a persisted salt). The signature proves which exact device captured the image. This isn't something software alone can spoof: the key is derived deterministically from the hardware.&lt;/p&gt;
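&lt;p&gt;To make the idea concrete, here's a minimal sketch of hardware-derived key material. All identifier values and the exact derivation below are hypothetical placeholders, not LensMint's actual code:&lt;/p&gt;

```python
# Illustrative sketch (not the LensMint source): derive a deterministic
# 32-byte private-key seed from hardware identifiers plus a persisted salt.
# All identifier values here are made-up placeholders.
import hashlib

def derive_key_seed(cpu_serial, mac_address, sensor_id, salt):
    """Same hardware plus same salt always yields the same seed."""
    material = "|".join([cpu_serial, mac_address, sensor_id, salt])
    return hashlib.sha256(material.encode("utf-8")).digest()

seed = derive_key_seed("10000000abcdef01", "b8:27:eb:00:11:22", "imx219", "persisted-salt")
print(seed.hex())  # 64 hex characters of deterministic key material
```

&lt;p&gt;A real SECP256k1 keypair would then be built from this seed with a library such as &lt;code&gt;ecdsa&lt;/code&gt; or &lt;code&gt;coincurve&lt;/code&gt;, reducing the seed modulo the curve order first.&lt;/p&gt;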

&lt;p&gt;&lt;strong&gt;Permanent Storage.&lt;/strong&gt; Photos go straight to Filecoin via the Synapse SDK. Not a cloud server. Not someone's S3 bucket. Decentralized, permanent storage that nobody can take down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On-Chain Verification.&lt;/strong&gt; Every photo becomes an ERC-1155 NFT. The device must be registered in our DeviceRegistry smart contract before it can mint. Then vlayer generates a TLS-notarized web proof of the metadata, compresses it into a RISC Zero zero-knowledge proof, and submits it on-chain for verification. The blockchain confirms the photo is real without revealing any private data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Easy Distribution.&lt;/strong&gt; After capturing a photo, the camera shows a QR code. Anyone nearby can scan it, enter their wallet address on a clean web page, and receive an edition NFT. No wallet setup on the camera. No complicated flows. Scan, paste, done.&lt;/p&gt;
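&lt;p&gt;The QR payload itself can be as simple as a claim-page URL. A quick sketch of that idea (the URL and parameter names are assumptions, not LensMint's actual scheme):&lt;/p&gt;

```python
# Hypothetical sketch of the QR payload: the claim-page URL the camera
# could encode after a capture. Base URL and parameter names are
# assumptions for illustration only.
from urllib.parse import urlencode

def build_claim_url(base, token_id, device_id):
    query = urlencode({"token": token_id, "device": device_id})
    return f"{base}/claim?{query}"

url = build_claim_url("https://example-claim-server.app", 42, "cam-01")
print(url)
```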
&lt;h2&gt;The Architecture, Briefly&lt;/h2&gt;

&lt;p&gt;LensMint has five main components that all talk to each other:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Camera App&lt;/strong&gt; is a Python/Kivy application running on the Raspberry Pi. It handles the live camera preview, photo capture, image signing, and QR code display. The identity system generates a SECP256k1 ECDSA keypair from hardware identifiers. When you capture a photo, it computes a SHA-256 hash and signs it with the device's private key.&lt;/p&gt;
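&lt;p&gt;A stripped-down sketch of that capture step, stdlib only. The real app signs the hash with the device's SECP256k1 key; since ECDSA isn't in the standard library, the signature is left as a placeholder here:&lt;/p&gt;

```python
# Sketch of the capture step: hash the image bytes and assemble a
# provenance record. The field names are illustrative, and the
# signature placeholder stands in for an ECDSA signature over the hash.
import hashlib, json, time

def make_capture_record(image_bytes, device_id):
    photo_hash = hashlib.sha256(image_bytes).hexdigest()
    record = {
        "device_id": device_id,
        "sha256": photo_hash,
        "captured_at": int(time.time()),
        "signature": None,  # placeholder: sign photo_hash with the hardware key
    }
    return json.dumps(record)

print(make_capture_record(b"\xff\xd8\xff-fake-jpeg-bytes", "cam-01"))
```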

&lt;p&gt;&lt;strong&gt;The Hardware Web3 Service&lt;/strong&gt; is a Node.js/Express backend also running on the Pi. It receives captured images from the camera app, uploads them to Filecoin, creates claims on the public server, mints NFTs by calling the smart contracts, and orchestrates the entire ZK proof pipeline. It also polls for pending edition requests and mints them automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Smart Contracts&lt;/strong&gt; on Ethereum Sepolia include the DeviceRegistry (which tracks authorized cameras), the LensMintERC1155 (which handles NFT minting with full provenance metadata), and the LensMintVerifier (which validates RISC Zero ZK proofs on-chain).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Public Claim Server&lt;/strong&gt; runs on Render and serves the claim pages. When someone scans the QR code, they land on a page with an animated NFT card preview where they can enter their wallet address.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Owner Portal&lt;/strong&gt; is a React/Vite web app with Privy wallet integration for device owners to manage their cameras and gallery.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Raspberry Pi 4 (Camera + Touchscreen)
    │
    ├── Camera App (Python/Kivy)
    │     └── Hardware Identity (SECP256k1 keys from hardware)
    │
    └── Web3 Service (Node.js)
          ├── Filecoin (Synapse SDK for permanent storage)
          ├── Ethereum Sepolia (Smart contracts)
          │     ├── DeviceRegistry.sol
          │     ├── LensMintERC1155.sol
          │     └── LensMintVerifier.sol (RISC Zero)
          ├── vlayer (ZK proof generation)
          └── Public Claim Server (Render)
                └── Owner Portal (React/Vite + Privy)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;How Gemini CLI Powered the Entire Build&lt;/h2&gt;

&lt;p&gt;Okay, this is the part I'm most excited to talk about because honestly, without Gemini CLI, I don't think this project would exist.&lt;/p&gt;

&lt;p&gt;Building LensMint meant juggling at least five completely different technology domains at the same time. I2C battery monitoring and camera modules on a Raspberry Pi. Python GUI development with Kivy. Solidity smart contracts. Zero-knowledge proof pipelines (which I'd never touched before). Full-stack web development with React and Express. Any one of these could be its own project. I was trying to do all of them at once.&lt;/p&gt;

&lt;p&gt;Gemini CLI is what made that possible. Let me break down what it is, how it actually works, and then show you what it looked like in practice.&lt;/p&gt;

&lt;h3&gt;What Is Google Gemini?&lt;/h3&gt;

&lt;p&gt;Before I get into the CLI specifically, let me talk about Gemini itself because understanding the model helps you use the tool better.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://deepmind.google/technologies/gemini/" rel="noopener noreferrer"&gt;Google Gemini&lt;/a&gt; is Google's family of multimodal AI models. When people say "Gemini," they're talking about large language models built by Google DeepMind that can understand and generate text, code, images, audio, and video. The model family includes different sizes: Gemini Ultra for the most complex tasks, Gemini Pro for balanced performance, and Gemini Flash for speed-optimized workloads.&lt;/p&gt;

&lt;p&gt;What makes Gemini interesting for developers specifically is a few things:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Massive context window.&lt;/strong&gt; Gemini models support context windows up to 1 million tokens (and Gemini 2.5 Pro pushes even further). For coding, this is huge. It means the model can hold your entire codebase in its working memory and reason about relationships between files that are thousands of lines apart. When I was working on the connection between my Python camera app and my Node.js backend, Gemini could see both codebases simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Native code understanding.&lt;/strong&gt; Gemini was trained on enormous amounts of code across many languages. It doesn't just pattern-match syntax. It understands data flow, error handling patterns, security implications, and architectural trade-offs. When I asked it about my Solidity access control patterns, it caught a reentrancy issue that I had totally missed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multimodal capabilities.&lt;/strong&gt; While I primarily used the text/code capabilities, Gemini can also reason about images, which is relevant for a camera project. The model's ability to understand visual context alongside code context opens up interesting possibilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Function calling and tool use.&lt;/strong&gt; Gemini models can interact with external tools, APIs, and your local environment. This is what makes the CLI integration so powerful. It's not just answering questions in isolation. It can read files, understand project structure, and ground its responses in your actual code.&lt;/p&gt;

&lt;h3&gt;What Is Gemini CLI?&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/google-gemini/gemini-cli" rel="noopener noreferrer"&gt;Gemini CLI&lt;/a&gt; is Google's open-source command-line tool that brings all of Gemini's capabilities directly into your terminal. Think of it as having a senior developer sitting next to you who can instantly read any file in your project, understand how everything connects, and help you write, debug, or refactor code.&lt;/p&gt;

&lt;p&gt;Here's what makes it different from just chatting with Gemini in a browser:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It lives in your project.&lt;/strong&gt; When you run &lt;code&gt;gemini&lt;/code&gt; in your project directory, it automatically understands your file structure, your dependencies, your existing patterns. You don't have to copy-paste code into a chat window and explain what everything does. It already knows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It remembers context across turns.&lt;/strong&gt; You can have a multi-turn conversation where you start by discussing architecture, move into implementation, hit a bug, debug it, and then continue building. Gemini CLI holds the thread of the entire conversation. This was essential for LensMint because I'd often start with "I need a module that does X," get a first draft, test it, hit an error, paste the error back in, and iterate until it worked.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It can read and reference your files.&lt;/strong&gt; You can say "look at my web3Service.js and create a similar service for vlayer" and it will actually read that file, understand the patterns, and produce code that follows the same conventions. This consistency across a codebase matters way more than people think.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting started is dead simple.&lt;/strong&gt; You install it, authenticate with your Google account, and you're ready. No complex configuration. No API key management headaches. Just &lt;code&gt;npm install -g @google/gemini-cli&lt;/code&gt;, authenticate, and go.&lt;/p&gt;

&lt;p&gt;The capabilities I leaned on the most:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code generation that fits your project.&lt;/strong&gt; When I asked Gemini CLI to create my &lt;code&gt;vlayerService.js&lt;/code&gt; module, it had already seen my &lt;code&gt;web3Service.js&lt;/code&gt; and &lt;code&gt;filecoinService.js&lt;/code&gt;. The generated code followed the same error handling patterns, the same logging style, the same module structure. I didn't have to explain any of that. It just picked it up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-language reasoning.&lt;/strong&gt; LensMint has Python talking to Node.js talking to Solidity. I could say something like "the Python hardware identity module exports the private key to a file, and the Node.js service needs to read it, what's the cleanest way to bridge this?" and Gemini CLI gave me a solution that understood both ecosystems properly. The &lt;code&gt;getHardwareKey.js&lt;/code&gt; bridge module came directly from that kind of conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging with real stack traces.&lt;/strong&gt; When stuff broke (and oh man, stuff broke a lot), I'd paste the error right into Gemini CLI and get back debugging advice that referenced my actual file names and function signatures. Not generic "check your imports" advice. Real, contextual diagnosis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture thinking.&lt;/strong&gt; Some of my most valuable Gemini CLI sessions were before I wrote any code at all. I'd describe what I wanted to build and explore different approaches. Should the camera app talk directly to the blockchain? Should there be a backend service in between? How should the claim system work? These conversations saved me from going down dead ends.&lt;/p&gt;

&lt;h3&gt;Concrete Examples from the Build&lt;/h3&gt;

&lt;p&gt;Alright, theory is nice, but let me show you specific moments where Gemini CLI saved my skin.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Hardware Identity System.&lt;/strong&gt; Generating a deterministic ECDSA keypair from hardware identifiers sounds simple enough on paper. In practice? A dozen edge cases I hadn't thought about. What if someone swaps the camera module? What if wlan0 isn't available on that particular Pi? How do you persist the salt safely so the key survives reboots? I brought all these questions to Gemini CLI and we worked through the fallback chain together: try &lt;code&gt;/proc/cpuinfo&lt;/code&gt; for CPU serial, fall back to a default. Try &lt;code&gt;wlan0&lt;/code&gt; MAC address, then &lt;code&gt;eth0&lt;/code&gt;, then a placeholder. Store the salt at &lt;code&gt;/boot/.device_salt&lt;/code&gt; with restricted permissions, keep a backup at &lt;code&gt;~/.lensmint/.device_salt_backup&lt;/code&gt;. That kind of careful defensive programming came directly from iterating with Gemini. I wouldn't have thought of half those fallback scenarios on my own.&lt;/p&gt;
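&lt;p&gt;The fallback readers can be sketched like this. The paths and defaults follow the description above; everything else is an assumption, not the repo's actual code:&lt;/p&gt;

```python
# Sketch of the fallback chain: each reader degrades gracefully instead
# of crashing on missing hardware. Default values are illustrative.
def read_cpu_serial(path="/proc/cpuinfo"):
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("Serial"):
                    return line.split(":")[1].strip()
    except OSError:
        pass
    return "0000000000000000"  # default when no serial is exposed

def read_mac():
    # try wlan0 first, then eth0, then a placeholder
    for iface in ("wlan0", "eth0"):
        try:
            with open(f"/sys/class/net/{iface}/address") as f:
                return f.read().strip()
        except OSError:
            continue
    return "00:00:00:00:00:00"

print(read_cpu_serial(), read_mac())
```

&lt;p&gt;Salt persistence follows the same defensive pattern: write once to the primary location with restricted permissions (e.g. &lt;code&gt;os.chmod(path, 0o600)&lt;/code&gt;) and mirror it to the backup path.&lt;/p&gt;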

&lt;p&gt;&lt;strong&gt;Solidity Contract Design.&lt;/strong&gt; The interaction between &lt;code&gt;DeviceRegistry&lt;/code&gt;, &lt;code&gt;LensMintERC1155&lt;/code&gt;, and &lt;code&gt;LensMintVerifier&lt;/code&gt; required careful design. The ERC1155 contract needs to verify that the caller is a registered and active device before allowing minting. The Verifier contract needs to validate RISC Zero proofs against expected parameters (notary fingerprint, URL pattern, queries hash). Gemini CLI helped me design the access control patterns and the data structures for storing provenance metadata on-chain. It also caught a reentrancy issue I'd missed in an early draft of the &lt;code&gt;mintEdition&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The ZK Proof Pipeline.&lt;/strong&gt; This was the most complex part of the build. The flow goes: call the vlayer Web Prover API to create a TLS-notarized web proof of the metadata endpoint, then send that to the ZK Prover API for RISC Zero compression, extract specific fields using JMESPath queries, and finally submit the proof on-chain. Gemini CLI helped me understand vlayer's API structure (which was new to me), design the extraction queries, handle the async proof generation flow, and troubleshoot proof validation failures on-chain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kivy UI on a Small Screen.&lt;/strong&gt; Building a camera interface for a 3.5-inch touchscreen is surprisingly tricky. Everything needs to be touch-friendly with large buttons. The live preview needs to run at 30 FPS without blocking the UI thread. Gemini CLI helped me structure the Kivy layout, implement the texture-based camera preview (using &lt;code&gt;Clock.schedule_interval&lt;/code&gt; for frame updates), and design the gallery view with thumbnail grids. It also helped me figure out the correct approach for overlaying QR codes on the live preview without disrupting the camera feed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bridging Python and Node.js.&lt;/strong&gt; The camera app exports the device private key so the Node.js backend can use it for blockchain transactions. This required a careful bridge: the Python side writes the key to a &lt;code&gt;.device_key_export&lt;/code&gt; file, and the Node.js side reads it with multiple fallback strategies (read file, run Python export script, or execute inline Python). Gemini CLI helped me design this bridge robustly with proper error handling and caching (the key is cached for one hour to avoid repeated file reads or Python process spawning).&lt;/p&gt;
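&lt;p&gt;A rough sketch of that bridge, written entirely in Python for brevity (on the real device the reader side is Node.js). The export file name and the one-hour cache follow the description above; the helper names and the subprocess fallback command are hypothetical.&lt;/p&gt;

```python
# Illustrative sketch of the key-export bridge. The file name and the
# cache-for-an-hour behavior follow the article; helper names and the
# fallback script name are assumptions.
import subprocess
import time
from pathlib import Path

EXPORT_FILE = Path(".device_key_export")
CACHE_TTL_SECONDS = 3600  # cache the key for one hour

_cache: dict = {"key": None, "loaded_at": 0.0}

def export_key(key_hex: str) -> None:
    """Writer side: put the device key where the backend can find it."""
    EXPORT_FILE.write_text(key_hex)

def load_key() -> str:
    """Reader side: cached read with a fallback strategy."""
    now = time.time()
    if _cache["key"] and now - _cache["loaded_at"] < CACHE_TTL_SECONDS:
        return _cache["key"]  # avoid repeated file reads
    try:
        # Strategy 1: read the export file directly.
        key = EXPORT_FILE.read_text().strip()
    except OSError:
        # Strategy 2: re-run the export script (hypothetical name).
        result = subprocess.run(
            ["python3", "export_device_key.py"],
            capture_output=True, text=True, check=True,
        )
        key = result.stdout.strip()
    _cache.update(key=key, loaded_at=now)
    return key
```

&lt;p&gt;The caching matters more than it looks: without it, every blockchain transaction would pay the cost of a file read or, worse, a Python process spawn.&lt;/p&gt;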

&lt;h3&gt;
  
  
  The Development Workflow
&lt;/h3&gt;

&lt;p&gt;My daily rhythm with Gemini CLI settled into something like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Think out loud.&lt;/strong&gt; I'd describe what I needed in plain English ("I need a service that uploads images to Filecoin using the Synapse SDK, handles payment setup, and returns PieceCIDs") and Gemini would help me think through the interface, the error cases, and how it should connect to everything else.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Get a first draft.&lt;/strong&gt; Gemini CLI would produce a working scaffold. Because it could see my other files, the code already felt like it belonged in the project.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Break things, fix things.&lt;/strong&gt; I'd test, hit errors, paste the stack trace back into Gemini CLI, get a fix, test again. This loop was incredibly fast because Gemini already understood the full context.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wire things together.&lt;/strong&gt; When connecting components across languages, Gemini CLI helped me keep data formats, API contracts, and error handling consistent. It would flag things like "hey, your Python side sends &lt;code&gt;image_hash&lt;/code&gt; but your Express endpoint expects &lt;code&gt;imageHash&lt;/code&gt;" before those bugs ever hit runtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Stress test mentally.&lt;/strong&gt; Gemini CLI was great at playing devil's advocate. "What happens if the Filecoin upload succeeds but the NFT mint fails? What if the user scans the QR code after the claim expires? What if the device wallet doesn't have enough ETH for gas?" These questions led to retry logic, error recovery, and UX improvements I wouldn't have thought to add.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
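&lt;p&gt;One concrete way to stop the casing mismatches from step 4 (&lt;code&gt;image_hash&lt;/code&gt; vs &lt;code&gt;imageHash&lt;/code&gt;) before they ever hit runtime is a tiny normalization helper at the boundary. This sketch is illustrative, not from the LensMint codebase.&lt;/p&gt;

```python
# Illustrative helper: convert a Python-side snake_case payload to the
# camelCase keys a JavaScript API typically expects.
def to_camel(name: str) -> str:
    """image_hash -> imageHash"""
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)

def normalize_payload(payload: dict) -> dict:
    """Rewrite all top-level keys to camelCase before sending."""
    return {to_camel(key): value for key, value in payload.items()}
```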

&lt;h2&gt;
  
  
  Tips and Tricks for Getting the Most Out of Gemini CLI
&lt;/h2&gt;

&lt;p&gt;After spending weeks with Gemini CLI on this project, I picked up some patterns that made a real difference. If you're about to start using it, these might save you some time.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Always Start From Your Project Root
&lt;/h3&gt;

&lt;p&gt;This sounds obvious but it matters. When you launch Gemini CLI from your project's root directory, it can traverse your file tree and understand how things connect. If you launch it from some random directory, you lose all that context. I always &lt;code&gt;cd&lt;/code&gt; into &lt;code&gt;lensmint-camera/&lt;/code&gt; before starting a session.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Give It the Big Picture First
&lt;/h3&gt;

&lt;p&gt;Before diving into code, spend your first prompt explaining what the project does. Something like "This is a Web3 camera built on Raspberry Pi. It captures photos, signs them cryptographically, uploads to Filecoin, and mints NFTs on Ethereum. I'm about to work on the ZK proof pipeline." This framing helps Gemini give you much better answers for the rest of the conversation because it understands where each piece fits.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Reference Your Existing Files Explicitly
&lt;/h3&gt;

&lt;p&gt;Instead of describing your code patterns from scratch, just point Gemini at an existing file. "Look at how I structured &lt;code&gt;filecoinService.js&lt;/code&gt; and create a similar service for vlayer integration" produces way better results than "create a vlayer service with good error handling." It mirrors your actual style.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Use It for Architecture Decisions Early
&lt;/h3&gt;

&lt;p&gt;Some of my highest-value Gemini CLI sessions produced zero code. I'd explain what I wanted to build and ask it to help me think through trade-offs. "Should the camera app call the blockchain directly via Python, or should I put a Node.js service in between?" The answer (use Node.js because ethers.js is way more mature than Python's web3 libraries for what I needed) saved me from a painful refactor later.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Keep Conversations Focused
&lt;/h3&gt;

&lt;p&gt;I learned this the hard way. Don't try to solve everything in one massive conversation. If you're debugging a Filecoin upload issue, don't suddenly switch to asking about your Solidity contracts. Start a new session. Gemini CLI holds context within a conversation, but keeping topics focused gets you better answers.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Paste Full Error Messages, Not Summaries
&lt;/h3&gt;

&lt;p&gt;When debugging, paste the complete stack trace. Don't paraphrase it. Don't say "I'm getting a timeout error." Paste the actual error with line numbers, file paths, all of it. Gemini CLI can trace through your code to find root causes, but only if it has the full picture.&lt;/p&gt;

&lt;h3&gt;
  
  
  7. Ask It to Review, Not Just Generate
&lt;/h3&gt;

&lt;p&gt;One underused workflow: write your code yourself, then ask Gemini CLI to review it. "Here's my &lt;code&gt;mintEdition&lt;/code&gt; function in Solidity. Can you spot any issues?" It caught a reentrancy vulnerability in my contract that I'd completely missed. Code review is where it really shines because it brings a fresh perspective to code you've been staring at for hours.&lt;/p&gt;

&lt;h3&gt;
  
  
  8. Use It to Learn New SDKs
&lt;/h3&gt;

&lt;p&gt;Whenever I hit a new SDK (Filecoin Synapse, vlayer, Privy), my first move was to describe what I wanted to accomplish and ask Gemini CLI for an implementation. Even if the generated code wasn't perfectly up to date with the latest API, it gave me the right mental model and structure. I'd then adjust the specifics by checking the official docs. It's way faster than reading docs cold.&lt;/p&gt;

&lt;h3&gt;
  
  
  9. Ask "What Could Go Wrong?"
&lt;/h3&gt;

&lt;p&gt;After implementing a feature, I'd ask Gemini CLI something like "what are the failure modes for this upload flow?" The answers were consistently useful. It would remind me about network timeouts, partial failures, race conditions, and edge cases that only show up in production. A lot of LensMint's retry logic and error recovery exists because of these conversations.&lt;/p&gt;
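&lt;p&gt;Much of that retry logic boils down to a pattern like the following. This is a generic retry-with-backoff sketch in the spirit of those conversations, not LensMint's actual implementation; the attempt counts and delays are illustrative.&lt;/p&gt;

```python
# Illustrative retry-with-exponential-backoff helper for flaky network
# operations (uploads, mints). Parameters are assumptions.
import time

def with_retries(operation, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Run operation(); on failure, back off exponentially and retry."""
    last_error = None
    for attempt in range(attempts):
        try:
            return operation()
        except Exception as error:  # narrow this in real code
            last_error = error
            if attempt < attempts - 1:
                sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise last_error
```

&lt;p&gt;Injecting &lt;code&gt;sleep&lt;/code&gt; as a parameter keeps the helper testable without real delays.&lt;/p&gt;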

&lt;h2&gt;
  
  
  How It Looks in Action
&lt;/h2&gt;

&lt;p&gt;The physical camera is a Raspberry Pi 4 in a custom enclosure with a touchscreen showing the live camera preview. When you press the capture button:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The screen briefly flashes to confirm capture&lt;/li&gt;
&lt;li&gt;The image is signed, uploaded, and minted (status indicators show each step)&lt;/li&gt;
&lt;li&gt;A QR code appears on screen&lt;/li&gt;
&lt;li&gt;Anyone nearby scans the QR code on their phone&lt;/li&gt;
&lt;li&gt;They see a claim page with an animated preview of the NFT&lt;/li&gt;
&lt;li&gt;They enter their wallet address and receive an edition NFT&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The entire flow from capture to QR code display takes about 10 to 15 seconds. Edition minting after someone submits their wallet address takes another 15 to 30 seconds depending on network conditions.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;ZK proofs aren't as scary as they look.&lt;/strong&gt; I'll be honest, I went into this project pretty intimidated by zero-knowledge proofs. The math papers are intense. But vlayer's Web Prover and ZK Prover APIs abstract most of that away. You're basically telling it "prove that this server actually returned this data" and "compress that proof for on-chain verification." The hard part was getting the JMESPath extraction queries right and understanding the journal data format the verifier contract expects. Once that clicked, the ZK pipeline became just another API call. If you've been avoiding ZK proofs because they seem too complex, give them a shot. The tooling has gotten really good.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware identity is an underappreciated idea.&lt;/strong&gt; Having a device generate its own deterministic crypto identity from its physical components is elegant in a way that surprised me. The Raspberry Pi literally IS its own Ethereum wallet. No seed phrases. No key files to back up (well, just the salt). Same hardware, same keys, every time. I keep thinking about where else this pattern could work. IoT devices, supply chain tracking, sensor networks. Anytime you need to prove which physical device did something.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bridging languages is harder than it sounds.&lt;/strong&gt; Getting Python and Node.js to play nice on the same Pi took way more thought than I expected. File-based IPC, spawning subprocesses, caching keys, handling errors that cross language boundaries. These problems don't show up in tutorials but they dominate real-world systems. If your project spans multiple languages, budget extra time for the glue code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Small screens force good design.&lt;/strong&gt; Building for a 3.5-inch touchscreen means you literally cannot have a complex UI. Every screen gets one job. The camera preview is full-screen with tiny overlays. The QR code is big and centered. The gallery is a simple grid. Having that constraint actually made the UX way better than if I'd had a big screen to fill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gemini CLI compresses the learning curve.&lt;/strong&gt; Before Gemini CLI, running into an unfamiliar SDK meant hours of reading docs, searching for examples, and trial-and-error. With it, I could describe what I needed, get a working implementation, and iterate from there. It doesn't eliminate the learning. You still need to understand what you're doing. But it compresses the time dramatically. Filecoin integration that would have taken a full day took a couple of hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  Google Gemini Feedback
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Worked Really Well
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Cross-language understanding is best in class.&lt;/strong&gt; I could discuss Python, JavaScript, and Solidity in the same conversation, and Gemini CLI tracked the data flow across all three. When I said "the Python module signs the image hash and sends it to the Express endpoint, which then passes it to the Solidity contract's mintOriginal function," it understood the entire chain and could spot inconsistencies I'd missed. That kind of cross-language reasoning saved me hours of debugging integration issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It learns your style.&lt;/strong&gt; Gemini CLI's ability to read my project files and understand my existing patterns meant the code it generated actually felt like mine. It picked up my logging conventions, my error handling style, how I organize modules. This matters more than people realize. When you come back to maintain the code six months later, you don't want half the codebase in one style and half in another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debugging is a superpower.&lt;/strong&gt; Pasting a stack trace and getting back a diagnosis that references my specific file names, variables, and function signatures is night and day compared to generic debugging advice. Multiple times it identified root causes that I'm pretty sure would have cost me hours to find on my own.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture brainstorming is maybe the highest ROI use case.&lt;/strong&gt; The conversations I had before writing code might have been the most valuable of all. Gemini CLI talked me out of doing blockchain calls directly from Python and into building a separate Node.js service instead. That turned out to be exactly the right call because ethers.js is significantly more mature than Python's web3 options for what I needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where It Struggled
&lt;/h3&gt;

&lt;p&gt;I want to be honest here because I think balanced feedback is more useful than hype.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Brand new SDKs are hit or miss.&lt;/strong&gt; The Filecoin Synapse SDK was pretty new when I was building, and Gemini CLI sometimes generated code with outdated API signatures. Totally understandable since training data has a cutoff. I just had to cross-reference with the actual SDK source code and docs in those cases. Not a dealbreaker, but worth knowing about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Raspberry Pi weirdness.&lt;/strong&gt; Some things are just... Pi-specific. Picamera2's quirks with different camera module versions, I2C communication edge cases with the UPS HAT, display driver issues. Gemini CLI gave reasonable approximations but couldn't always nail the Pi-specific details. I ended up writing a &lt;code&gt;fix_picamera2.sh&lt;/code&gt; script for installation edge cases that no AI could fully predict.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Very long sessions can drift.&lt;/strong&gt; In marathon debugging sessions (20+ back and forth exchanges), I occasionally noticed earlier context fading. My workaround was simple: start a new conversation with a fresh summary of where I was. Splitting conversations by component (one for the camera app, one for the blockchain stuff, one for the ZK pipeline) worked better than one giant thread.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Bottom Line
&lt;/h3&gt;

&lt;p&gt;Gemini CLI genuinely changed the speed at which I could build this project. LensMint touches hardware, cryptography, blockchain, decentralized storage, zero-knowledge proofs, and full-stack web dev. That's an absurd amount of ground for one developer. Without Gemini CLI holding context across all those domains simultaneously, this project would have taken months. It took weeks.&lt;/p&gt;

&lt;p&gt;It's not magic. I want to be clear about that. You still need to understand what you're building. You still need to review every line of generated code. You still hit edge cases that you have to solve yourself. But it compresses the distance between having an idea and having working code to a degree that genuinely surprised me.&lt;/p&gt;

&lt;p&gt;If you're working on something ambitious, especially if it crosses multiple tech domains like LensMint does, give Gemini CLI a real shot. Open your terminal, point it at your project, and start talking to it like you'd talk to a teammate. You might be surprised how far you get in an afternoon.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>geminireflections</category>
      <category>gemini</category>
      <category>web3</category>
    </item>
    <item>
      <title>Cloudgen Cloud Platform: Deploy Your App by Talking to It, Built with GitHub Copilot CLI</title>
      <dc:creator>MOHIT BHAT</dc:creator>
      <pubDate>Sun, 15 Feb 2026 17:50:41 +0000</pubDate>
      <link>https://forem.com/mbcse/cloudgen-deploy-your-app-by-talking-to-it-built-with-github-copilot-cli-47mo</link>
      <guid>https://forem.com/mbcse/cloudgen-deploy-your-app-by-talking-to-it-built-with-github-copilot-cli-47mo</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cloudgen&lt;/strong&gt; is a product that turns cloud deployment into a conversation. You describe what you need in plain English: your app, how many users you expect, whether you need a database or cache. An AI figures out the rest. No config files, no dashboards, no wrestling with infrastructure. You get a clear plan, a cost estimate, and you’re one approval away from your app being live.&lt;/p&gt;

&lt;p&gt;Behind that simple experience is a lot of complexity: understanding your intent, analyzing your repo, sizing resources, generating a safe execution plan, and then actually provisioning everything and giving you live URLs and connection strings. We built Cloudgen so you never have to touch that complexity yourself.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live project:&lt;/strong&gt; &lt;a href="https://cloudgenapp.vercel.app/" rel="noopener noreferrer"&gt;https://cloudgenapp.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Video walkthrough:&lt;/strong&gt; &lt;a href="https://www.loom.com/share/0b5f73a623bb438ebd1ec41053427c21" rel="noopener noreferrer"&gt;https://www.loom.com/share/0b5f73a623bb438ebd1ec41053427c21&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Repository:&lt;/strong&gt; &lt;a href="https://github.com/mbcse/cloudgen" rel="noopener noreferrer"&gt;https://github.com/mbcse/cloudgen&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📌 The Problem
&lt;/h2&gt;

&lt;p&gt;Deploying even a simple application to the cloud today requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing &lt;strong&gt;Dockerfiles&lt;/strong&gt;, &lt;strong&gt;YAML manifests&lt;/strong&gt;, and &lt;strong&gt;IaC templates&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Navigating complex dashboards across &lt;strong&gt;AWS / GCP / Azure&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Manually provisioning &lt;strong&gt;databases, caches, compute, and networking&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Understanding container orchestration, port mappings, reverse proxies, and health checks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;For a developer who just wants to ship their app, this is too much friction.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Small teams and indie developers often spend more time wrangling infrastructure than building product. The gap between &lt;em&gt;"I have a repo"&lt;/em&gt; and &lt;em&gt;"it's live on the internet"&lt;/em&gt; shouldn't require DevOps expertise.&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Our Solution
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cloudgen&lt;/strong&gt; is a &lt;strong&gt;chat-first cloud control plane&lt;/strong&gt; where users describe their infrastructure needs in plain English, and an &lt;strong&gt;agentic AI pipeline&lt;/strong&gt; analyzes their repository, generates a deployment plan, and provisions real infrastructure — all with human-in-the-loop approval.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Deploy my Next.js app from GitHub with a Postgres database for 500 users"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;→ Cloudgen analyzes the repo, generates a resource plan with cost estimates, and after approval, provisions containers, databases, and networking automatically.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Why it matters
&lt;/h3&gt;

&lt;p&gt;Getting from “I have a repo” to “it’s live on the internet” usually means learning orchestration, networking, databases, and billing. Small teams and indie devs end up spending more time on infra than on product. Cloudgen flips that. You say what you want, review a plan the AI proposes, approve it, and we handle the rest. Deployment becomes something you do in a chat, not a checklist of manual steps.&lt;/p&gt;

&lt;h3&gt;
  
  
  ✨ Product Features
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;🗣️ &lt;strong&gt;Chat-to-Deploy&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Natural language interface, describe what you need, get a deployment plan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🔍 &lt;strong&gt;Smart Repo Analysis&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Auto-detects runtime (Node.js/Python), framework, build commands, and Dockerfile presence&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📋 &lt;strong&gt;Plan Review &amp;amp; Approval&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Every deployment requires explicit human approval, see resources, rationale, cost estimates, and YAML steps before anything runs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🐳 &lt;strong&gt;Multi-Resource Provisioning&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Deploy &lt;strong&gt;App Services&lt;/strong&gt;, &lt;strong&gt;Compute Instances&lt;/strong&gt;, &lt;strong&gt;PostgreSQL&lt;/strong&gt;, and &lt;strong&gt;Redis&lt;/strong&gt; from a single conversation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;💰 &lt;strong&gt;Cost Estimation&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;AI-powered resource sizing with tiered pricing (&lt;code&gt;ac.starter&lt;/code&gt;, &lt;code&gt;ac.pro&lt;/code&gt;, &lt;code&gt;ac.business&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;📡 &lt;strong&gt;Live Deployment Logs&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Real-time streaming logs as containers build and start&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🔗 &lt;strong&gt;Endpoint Discovery&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Automatically returns live URLs and connection strings after deployment&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;🧠 &lt;strong&gt;RAG-Powered Context&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Internal docs and repo READMEs are indexed for smarter, context-aware responses&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;⚡ &lt;strong&gt;Graceful Degradation&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Works without an LLM API key using deterministic heuristic fallbacks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🏗️ Architecture Overview
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────────────┐
│                        Next.js Frontend                         │
│   Dashboard  ·  Chat UI  ·  Plan Review  ·  Deployment Logs    │
└──────────────────────────────┬──────────────────────────────────┘
                               │ REST API
┌──────────────────────────────▼──────────────────────────────────┐
│                     Fastify API Server                          │
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │               LangGraph Chat Orchestrator                │   │
│  │                                                          │   │
│  │   Parse Intent ──► RAG Retrieval ──► Plan Generation     │   │
│  │        │                                    │            │   │
│  │        ▼                                    ▼            │   │
│  │   ┌─────────┐   ┌──────────────┐   ┌────────────┐       │   │
│  │   │ Intent  │   │    Repo      │   │  Capacity  │       │   │
│  │   │ Agent   │   │  Inspector   │   │   Agent    │       │   │
│  │   │         │   │    Agent     │   │            │       │   │
│  │   └─────────┘   └──────────────┘   └────────────┘       │   │
│  │        Gemini 2.0 Flash  /  Deterministic Fallback       │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                 │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────────────┐   │
│  │   Deployer   │  │  RAG Engine  │  │   Prisma + Postgres  │   │
│  │(SSH + Docker)│  │  (pgvector)  │  │   (Control Plane DB) │   │
│  └──────┬───────┘  └──────────────┘  └──────────────────────┘   │
└─────────┼───────────────────────────────────────────────────────┘
          │ SSH
┌─────────▼───────────────────────────────────────────────────────┐
│                    EC2 Runtime Host                              │
│                                                                 │
│   ┌─────────┐  ┌──────────┐  ┌─────────┐  ┌────────────────┐   │
│   │ App     │  │ Postgres │  │  Redis  │  │   Compute      │   │
│   │Container│  │Container │  │Container│  │   Instance     │   │
│   └─────────┘  └──────────┘  └─────────┘  └────────────────┘   │
│                        Nginx (reverse proxy)                    │
└─────────────────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧠 How It Works — Technical Deep Dive
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Chat Orchestration (LangGraph)
&lt;/h3&gt;

&lt;p&gt;The core of Cloudgen is a &lt;strong&gt;stateful LangGraph pipeline&lt;/strong&gt; (&lt;a href="//apps/api/src/agent.ts"&gt;agent.ts&lt;/a&gt;) that processes every user message through a directed graph of nodes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;START ──► parse_intent ──► retrieve_context ──► route_intent
                                                    │
                                    ┌───────────────┼────────────┐
                                    ▼               ▼            ▼
                              generate_plan    answer_question   ...
                                    │
                                    ▼
                                save_plan ──► respond ──► END
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;parse_intent&lt;/code&gt;&lt;/strong&gt; — Regex + heuristic parser extracts repo URLs, expected user counts, database requirements, and whether the user wants to plan, deploy, or ask a question&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;retrieve_context&lt;/code&gt;&lt;/strong&gt; — Queries the RAG index (pgvector cosine similarity) for relevant internal docs and repo READMEs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;generate_plan&lt;/code&gt;&lt;/strong&gt; — Invokes the &lt;strong&gt;multi-agent planning pipeline&lt;/strong&gt; when a provisioning intent is detected&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;respond&lt;/code&gt;&lt;/strong&gt; — Streams a contextual reply back using Gemini, or falls back to a templated response&lt;/li&gt;
&lt;/ul&gt;
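&lt;p&gt;The &lt;code&gt;parse_intent&lt;/code&gt; node can be approximated with plain regex heuristics. The sketch below is illustrative only (and in Python rather than the project's TypeScript); the exact patterns in &lt;code&gt;agent.ts&lt;/code&gt; are assumptions here.&lt;/p&gt;

```python
# Illustrative regex + heuristic intent parser: pull a repo URL, an
# expected user count, and resource hints from a chat message.
import re

def parse_intent(message: str) -> dict:
    repo = re.search(r"https://github\.com/[\w.-]+/[\w.-]+", message)
    users = re.search(r"(\d+)\s*users?", message, re.IGNORECASE)
    lowered = message.lower()
    return {
        "repo_url": repo.group(0) if repo else None,
        "expected_users": int(users.group(1)) if users else None,
        "wants_postgres": "postgres" in lowered or "database" in lowered,
        "wants_redis": "redis" in lowered or "cache" in lowered,
        "action": "deploy" if "deploy" in lowered else "question",
    }
```

&lt;p&gt;Feeding the earlier example prompt through a parser like this would yield the repo URL, a user count of 500, a Postgres requirement, and a deploy intent.&lt;/p&gt;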

&lt;h3&gt;
  
  
  2. Multi-Agent Planning Pipeline
&lt;/h3&gt;

&lt;p&gt;When a deployment intent is detected, three specialized agents collaborate (&lt;a href="//apps/api/src/planning.ts"&gt;planning.ts&lt;/a&gt;):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Agent&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;th&gt;Output&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Intent Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Extracts high-level goals — expected users, latency sensitivity, which resources are needed (app/postgres/redis/instance)&lt;/td&gt;
&lt;td&gt;&lt;code&gt;IntentAgentOutput&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Repo Inspector Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fetches the GitHub repo structure, &lt;code&gt;package.json&lt;/code&gt;, &lt;code&gt;Dockerfile&lt;/code&gt;, and &lt;code&gt;requirements.txt&lt;/code&gt; to infer runtime, framework, build commands, and app port&lt;/td&gt;
&lt;td&gt;&lt;code&gt;RepoInspectorOutput&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Capacity Agent&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Takes the outputs of the previous two agents and determines resource sizing (CPU, memory, replicas), tier selection, and cost estimation&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CapacityAgentOutput&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each agent calls &lt;strong&gt;Gemini 2.0 Flash&lt;/strong&gt; via a structured JSON prompt (&lt;a href="//apps/api/src/llm.ts"&gt;llm.ts&lt;/a&gt;). If no API key is available, each agent has a &lt;strong&gt;deterministic fallback&lt;/strong&gt; so the system remains fully functional without any LLM.&lt;/p&gt;

&lt;p&gt;The pipeline produces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;code&gt;DeploymentPlan&lt;/code&gt; stored in Postgres&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;YAML step file&lt;/strong&gt; (&lt;code&gt;deployment-plans/&amp;lt;plan-id&amp;gt;.yaml&lt;/code&gt;) describing the exact Docker commands to execute&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. RAG Engine (pgvector)
&lt;/h3&gt;

&lt;p&gt;Cloudgen uses &lt;strong&gt;Retrieval-Augmented Generation&lt;/strong&gt; to ground agent responses in real documentation (&lt;a href="//apps/api/src/rag.ts"&gt;rag.ts&lt;/a&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Internal docs&lt;/strong&gt; (architecture, runbooks) are chunked and embedded into pgvector&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repository READMEs&lt;/strong&gt; from connected GitHub repos are fetched and indexed&lt;/li&gt;
&lt;li&gt;At query time, a deterministic 64-dim embedding is computed and a &lt;strong&gt;cosine similarity search&lt;/strong&gt; retrieves the most relevant chunks&lt;/li&gt;
&lt;li&gt;Retrieved chunks are injected as context into the LLM prompt and returned as &lt;strong&gt;citations&lt;/strong&gt; in the chat response&lt;/li&gt;
&lt;/ul&gt;
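&lt;p&gt;A deterministic embedding plus cosine similarity can be sketched in a few lines. This is an illustration of the idea only; the actual hashing scheme in &lt;code&gt;rag.ts&lt;/code&gt; may differ.&lt;/p&gt;

```python
# Illustrative deterministic 64-dim embedding with cosine similarity.
# Each token is hashed into one of 64 buckets; the vector is normalized
# so a dot product gives cosine similarity directly.
import hashlib
import math

DIM = 64

def embed(text: str) -> list[float]:
    """Deterministic by construction: same text, same vector."""
    vector = [0.0] * DIM
    for token in text.lower().split():
        digest = hashlib.sha256(token.encode()).digest()
        vector[digest[0] % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vector)) or 1.0
    return [v / norm for v in vector]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Dot product of unit vectors equals cosine similarity."""
    return sum(x * y for x, y in zip(a, b))
```

&lt;p&gt;Determinism is the point: because the embedding involves no model weights, the same query always retrieves the same chunks, and the system keeps working even without an LLM key.&lt;/p&gt;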

&lt;h3&gt;
  
  
  4. Deployment Executor (SSH + Docker)
&lt;/h3&gt;

&lt;p&gt;After plan approval, the &lt;strong&gt;Deployer&lt;/strong&gt; (&lt;a href="//apps/api/src/deployer.ts"&gt;deployer.ts&lt;/a&gt;) triggers the &lt;strong&gt;Executor&lt;/strong&gt; (&lt;a href="//apps/api/src/executor.ts"&gt;executor.ts&lt;/a&gt;):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;SSH tunnel&lt;/strong&gt; to the runtime EC2 host using key-based authentication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clone&lt;/strong&gt; the GitHub repo on the remote host&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auto-generate Dockerfile&lt;/strong&gt; if missing (for Node.js repos)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build&lt;/strong&gt; the Docker image&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provision&lt;/strong&gt; each resource as a Docker container with CPU/memory limits:

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker run&lt;/code&gt; with &lt;code&gt;--cpus&lt;/code&gt;, &lt;code&gt;--memory&lt;/code&gt;, port mappings&lt;/li&gt;
&lt;li&gt;Postgres containers use &lt;code&gt;postgres:16-alpine&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Redis containers use &lt;code&gt;redis:7-alpine&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Compute instances use &lt;code&gt;ubuntu:22.04&lt;/code&gt; with long-running entry points&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Health check&lt;/strong&gt; — polls the container endpoint until it responds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configure Nginx&lt;/strong&gt; reverse proxy route (optional)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Return endpoints&lt;/strong&gt; — live app URL + database/cache connection strings&lt;/li&gt;
&lt;/ol&gt;
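&lt;p&gt;As a sketch of step 5, here is roughly how an executor can assemble a resource-limited &lt;code&gt;docker run&lt;/code&gt; command before shipping it over SSH. The helper and its option names are illustrative, not the actual &lt;code&gt;executor.ts&lt;/code&gt; code.&lt;/p&gt;

```typescript
// Illustrative helper: build a `docker run` command with CPU/memory limits
// and a port mapping, as described in step 5. Not the real executor code.
interface RunOpts {
  name: string;
  image: string;        // e.g. "postgres:16-alpine"
  cpus: number;         // e.g. 1
  memoryMb: number;     // e.g. 1024
  hostPort: number;
  containerPort: number;
}

function dockerRunCmd(o: RunOpts): string {
  return [
    "docker run -d",
    `--name ${o.name}`,
    `--cpus ${o.cpus}`,          // hard CPU cap
    `--memory ${o.memoryMb}m`,   // hard memory cap
    `-p ${o.hostPort}:${o.containerPort}`,
    "--restart unless-stopped",
    o.image,
  ].join(" ");
}
```

Building the command as data first also makes it easy to log exactly what ran on the remote host.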

&lt;h3&gt;
  
  
  5. Data Model (Prisma + PostgreSQL)
&lt;/h3&gt;

&lt;p&gt;The control plane persists all state via &lt;strong&gt;Prisma ORM&lt;/strong&gt; (&lt;a href="//apps/api/prisma/schema.prisma"&gt;schema.prisma&lt;/a&gt;):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Project&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;A linked GitHub repo with name, slug, branch&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ChatSession&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Conversation thread tied to a project&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ChatMessage&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Individual messages with role and citations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;DeploymentPlan&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;AI-generated plan with inputs, decision, cost, rationale&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;Deployment&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Execution record with status, logs, and live URLs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;RagDocument&lt;/code&gt; / &lt;code&gt;RagChunk&lt;/code&gt; / &lt;code&gt;RagEmbedding&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;RAG corpus with pgvector embeddings&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🛠️ Tech Stack
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Frontend&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Next.js 15, React, Tailwind CSS, NextAuth.js&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Backend&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Fastify 5, TypeScript, Zod validation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Agent Framework&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;LangGraph (stateful graph orchestration)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LLM&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Gemini 2.0 Flash (streaming + structured JSON output)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Database&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;PostgreSQL + pgvector (control plane + RAG)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;ORM&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Prisma with raw SQL for vector operations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Deployment Runtime&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Docker containers on EC2 via SSH&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Monorepo&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;npm workspaces&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🚀 Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Node.js 20+&lt;/li&gt;
&lt;li&gt;PostgreSQL 15+ with &lt;a href="https://github.com/pgvector/pgvector" rel="noopener noreferrer"&gt;pgvector&lt;/a&gt; extension&lt;/li&gt;
&lt;li&gt;(Optional) &lt;a href="https://ai.google.dev/" rel="noopener noreferrer"&gt;Gemini API key&lt;/a&gt; for AI-powered planning&lt;/li&gt;
&lt;li&gt;(Optional) EC2 host with Docker for live deployments&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install dependencies&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Generate Prisma client&lt;/span&gt;
npm run db:generate

&lt;span class="c"&gt;# Run database migrations&lt;/span&gt;
npm run db:migrate &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; init

&lt;span class="c"&gt;# Seed initial data&lt;/span&gt;
npm run db:seed

&lt;span class="c"&gt;# Start development servers&lt;/span&gt;
npm run dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Environment Variables
&lt;/h3&gt;

&lt;p&gt;Create &lt;code&gt;apps/api/.env&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;DATABASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"postgresql://postgres:postgres@localhost:5432/cloudgen"&lt;/span&gt;
&lt;span class="nv"&gt;PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;4000

&lt;span class="c"&gt;# LLM (optional — system works without it via fallbacks)&lt;/span&gt;
&lt;span class="nv"&gt;GEMINI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nv"&gt;GEMINI_MODEL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gemini-2.0-flash"&lt;/span&gt;

&lt;span class="c"&gt;# Runtime deployment host (leave empty for local preview mode)&lt;/span&gt;
&lt;span class="nv"&gt;RUNTIME_SSH_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nv"&gt;RUNTIME_SSH_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"ubuntu"&lt;/span&gt;
&lt;span class="nv"&gt;RUNTIME_SSH_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"22"&lt;/span&gt;
&lt;span class="nv"&gt;RUNTIME_SSH_KEY_PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/path/to/key.pem"&lt;/span&gt;
&lt;span class="nv"&gt;RUNTIME_BASE_DIR&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/tmp/cloudgen-apps"&lt;/span&gt;
&lt;span class="nv"&gt;RUNTIME_PUBLIC_BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"http://your-ec2-ip"&lt;/span&gt;

&lt;span class="c"&gt;# Resource limits&lt;/span&gt;
&lt;span class="nv"&gt;ACTIVE_APP_CAP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"8"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Services
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Service&lt;/th&gt;
&lt;th&gt;URL&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Web Dashboard&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API Server&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:4000" rel="noopener noreferrer"&gt;http://localhost:4000&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Health Check&lt;/td&gt;
&lt;td&gt;&lt;a href="http://localhost:4000/health" rel="noopener noreferrer"&gt;http://localhost:4000/health&lt;/a&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  📡 API Reference
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Endpoint&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;POST&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/api/projects&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Create a new project (link a GitHub repo)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;GET&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/api/projects/:id&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Get project details&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;GET&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/api/projects/:id/resources&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Get provisioned resources for a project&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;POST&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/api/chat&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Send a chat message (triggers planning if needed)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;GET&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/api/sessions/:id&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Get full chat session history&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;POST&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/api/plans/:id/approve&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Approve a deployment plan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;GET&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/api/plans/:id/steps&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Get YAML deployment steps for a plan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;POST&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/api/deploy&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Trigger deployment of an approved plan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;GET&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;/api/deployments/:id&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Get deployment status and logs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🔄 End-to-End Flow
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User: "Deploy https://github.com/user/app with Postgres for 500 users"
  │
  ├──► Intent Parser: repo URL, 500 users, postgres needed
  ├──► RAG Retrieval: fetch relevant architecture docs
  ├──► Intent Agent: high-level goals + resource list
  ├──► Repo Inspector: Node.js, has Dockerfile, port 3000
  ├──► Capacity Agent: ac.pro tier, 1 CPU, 1GB RAM, ~$34/mo
  │
  ▼
Plan Generated:
  • App container (Node.js, port 3000)
  • Postgres container (postgres:16-alpine)
  • Estimated cost: $34/mo
  • YAML steps file written
  │
  ├──► User reviews plan + rationale
  ├──► User approves
  │
  ▼
Deployment:
  • SSH into EC2 host
  • Clone repo, build Docker image
  • Start app + postgres containers
  • Health check passes
  • Nginx route configured
  │
  ▼
Result: "Your app is live at http://host:21042 🎉"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🧪 Smoke Test
&lt;/h2&gt;

&lt;p&gt;Run the full end-to-end flow against a running instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm run smoke
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Screenshots:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F963raiol5kepoknqvyxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F963raiol5kepoknqvyxw.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flblqa182hx9ytrsi7kbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flblqa182hx9ytrsi7kbo.png" alt=" " width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;p&gt;Cloudgen is a complex product: orchestrated AI agents, a full API, real provisioning, and a dashboard. We used &lt;strong&gt;GitHub Copilot CLI&lt;/strong&gt; from the terminal throughout the build. It didn’t just speed up typing; it helped us design and implement systems we’d have hesitated to tackle alone. Below is how we used it, the prompts that worked, and how you can use Copilot CLI more effectively on your own projects.&lt;/p&gt;




&lt;h3&gt;
  
  
  How we used Copilot CLI: interactive vs one-off
&lt;/h3&gt;

&lt;p&gt;Copilot CLI has two modes we relied on every day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interactive mode&lt;/strong&gt; (default). We’d run &lt;code&gt;copilot&lt;/code&gt; in the project root and stay in a session. That’s where we did most of the work: multi-file changes, refactors, and “add a new node to the graph” style tasks. The back-and-forth let us refine in small steps (“now add error handling for when the repo URL is invalid”) without rewriting long prompts. We’d confirm we trusted the folder when asked, then work in that directory and its subdirectories.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Programmatic mode.&lt;/strong&gt; For quick, single-shot tasks we used &lt;code&gt;-p&lt;/code&gt; or &lt;code&gt;--prompt&lt;/code&gt;. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;copilot &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"Add a Zod schema for POST /api/chat: sessionId optional UUID, projectId optional UUID, message required string min 1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That gave us a schema and a route stub we could paste into the Fastify app. We used this for validation schemas, small utilities, and one-off scripts (e.g. “generate a bash script to run the API and web app with one command”). For anything that would modify or run files, Copilot asked for approval first, so we stayed in control.&lt;/p&gt;




&lt;h3&gt;
  
  
  Giving Copilot context: &lt;code&gt;@file&lt;/code&gt; and scope
&lt;/h3&gt;

&lt;p&gt;Copilot works better when it sees the exact code you care about. We used &lt;strong&gt;&lt;code&gt;@file&lt;/code&gt;&lt;/strong&gt; a lot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Examples we actually used:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Explain @apps/api/src/agent.ts and list all graph nodes and what they write to state&lt;/code&gt;&lt;br&gt;&lt;br&gt;
So we could onboard quickly and later ask for new nodes without breaking the graph.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;In @apps/api/src/intent.ts add detection for "no database" and set databaseRequired to false. Keep the same ParsedIntent shape.&lt;/code&gt;&lt;br&gt;&lt;br&gt;
One file, one behavior change, clear outcome.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;@apps/api/src/planning.ts the Capacity agent output needs a new field 'reasoning: string[]'. Add it to the interface and to the fallback object.&lt;/code&gt;&lt;br&gt;&lt;br&gt;
Copilot had the interfaces and the fallback logic in context, so the change was consistent.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Fix the bug in @apps/api/src/rag.ts: chunkContent is sometimes undefined when we build the citation. Add a filter.&lt;/code&gt;&lt;br&gt;&lt;br&gt;
We pointed at the file and described the symptom; Copilot proposed a safe fix.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also kept &lt;strong&gt;sessions focused&lt;/strong&gt;. When we switched from “agent graph” work to “frontend” work, we ran &lt;code&gt;/clear&lt;/code&gt; and started a new mental context. That cut down on Copilot suggesting changes to the wrong layer. When we needed to touch both API and web app, we stayed in one session and mentioned both: “Update the API to return estimatedCostUsd in the plan payload and add a cost row in the plan review card in the project page.”&lt;/p&gt;




&lt;h3&gt;
  
  
  Plan before you code: /plan for big features
&lt;/h3&gt;

&lt;p&gt;For larger chunks of work we used &lt;strong&gt;plan mode&lt;/strong&gt;. Instead of “implement this whole thing,” we asked Copilot to design the steps first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example prompts:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;/plan Add a deployment plan approval flow: API endpoint POST /api/plans/:id/approve, update plan status in DB, return updated plan. Frontend: Approve button that calls the endpoint and then shows a “Deployment started” state.&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;/plan Implement streaming chat: the /api/chat response should stream chunks. Frontend should consume the stream and append to the message content. Keep existing non-streaming fallback.&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What we got: a structured plan (often saved to something like &lt;code&gt;plan.md&lt;/code&gt; in the session), with checkboxes and ordered steps. We could review it, ask for edits (“add a step to handle approval rejection”), and only then say “implement this plan.” That reduced wrong turns and kept the codebase consistent. For quick bug fixes or single-file edits we didn’t use &lt;code&gt;/plan&lt;/code&gt;, only for multi-file or multi-layer features.&lt;/p&gt;




&lt;h3&gt;
  
  
  Prompt examples that worked for Cloudgen
&lt;/h3&gt;

&lt;p&gt;Here are real prompt patterns we used, so you can adapt them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Orchestration and state (LangGraph)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“In agent.ts we have a state graph. Add a new node &lt;code&gt;retrieve_context&lt;/code&gt; that runs after &lt;code&gt;parse_intent&lt;/code&gt;. It should call retrieveContext from rag.js with the message, put the result in state.chunks, and pass through to the next node. Use the same Annotation pattern as the other nodes.”&lt;/li&gt;
&lt;li&gt;“The respond node should include citations from state.chunks in the streamed reply. Each citation needs source title and chunk text. Update the streamChatReply call to pass citations.”&lt;/li&gt;
&lt;li&gt;“Our graph has a branch: if intent.wantsPlan we go to generate_plan, else if intent.asksQuestion we go to answer_question. Add a default branch that sets reply to a short ‘I didn’t understand’ message and goes to END.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Planning pipeline and types&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“In planning.ts add a fallback for IntentAgentOutput when the LLM is unavailable: expectedUsers 100, databaseRequired true, requestedResources ['app'], confidence 0.5. Match the interface exactly.”&lt;/li&gt;
&lt;li&gt;“We’re adding estimatedCostUsd to the plan. Update: 1) CapacityAgentOutput and the fallback in planning.ts, 2) the place we save the plan to the DB in agent.ts, 3) the DeploymentPlan type in the Prisma schema if needed.”&lt;/li&gt;
&lt;li&gt;“Generate the TypeScript interface for DeployStep: id string, type enum (prepare_workspace, clone_repo, build_image, run_app, run_postgres, health_check, configure_route), target string, description string.”&lt;/li&gt;
&lt;/ul&gt;
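&lt;p&gt;The last prompt in that list is concrete enough to show its likely output. This is our reconstruction of a plausible answer, not the actual &lt;code&gt;planning.ts&lt;/code&gt; types:&lt;/p&gt;

```typescript
// One plausible answer to the DeployStep prompt above (our reconstruction).
type DeployStepType =
  | "prepare_workspace"
  | "clone_repo"
  | "build_image"
  | "run_app"
  | "run_postgres"
  | "health_check"
  | "configure_route";

interface DeployStep {
  id: string;
  type: DeployStepType;
  target: string;       // e.g. a repo URL or container name
  description: string;  // human-readable summary shown in the plan review
}

// Example step, as it might appear in the generated YAML steps file:
const step: DeployStep = {
  id: "step-2",
  type: "clone_repo",
  target: "https://github.com/user/app",
  description: "Clone the repository on the runtime host",
};
```

Spelling out the enum in the prompt is what keeps the generated type exhaustive instead of a loose `string`.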

&lt;p&gt;&lt;strong&gt;API and validation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Add a Fastify route POST /api/plans/:id/approve. Body: { approved: true, approvedBy?: string }. Validate with Zod. On success call approvePlan(planId) and return { plan }.”&lt;/li&gt;
&lt;li&gt;“We need GET /api/deployments/:id that returns status, logsJson, appUrl, containerName. Use getDeployment from deployer.js. Return 404 if not found.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Database and RAG&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“Our Prisma schema has Project, ChatSession, ChatMessage, DeploymentPlan, Deployment. Add a RagDocument model: id, sourceType enum (INTERNAL, RUNBOOK, REPO_README), sourceRef unique, title, content, createdAt, updatedAt. Add RagChunk with documentId and content.”&lt;/li&gt;
&lt;li&gt;“In rag.ts the retrieveContext function should take a query string, compute an embedding (use the existing deterministic embedding if no API key), query RagChunk by cosine similarity on the embedding column, and return the top 5 chunks with title and sourceRef.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Frontend&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“In the project page, add a section that shows the latest deployment plan: rationale, resources list, estimated cost. If there’s no plan, show ‘No plan yet.’ Use the existing API types.”&lt;/li&gt;
&lt;li&gt;“Wire the Approve button to POST /api/plans/:id/approve and then POST /api/deploy with projectId and planId. Disable the button while loading and show a toast or inline message on error.”&lt;/li&gt;
&lt;li&gt;“The deployment logs should update in real time. Add a polling loop that calls GET /api/deployments/:id every 2 seconds while status is QUEUED or BUILDING, and append new log entries to the list.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Provisioning and reliability&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“The executor runs a sequence of steps. If any step fails, we should push the error message to logs, set status to FAILED, and stop. Don’t run the next step. Add a try/catch around each step and update the deployment record.”&lt;/li&gt;
&lt;li&gt;“Add a health check step after the app is running: GET the app URL with a 30s timeout, retry up to 5 times with 5s delay. If it never responds, mark deployment as FAILED and add ‘Health check failed’ to logs.”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We were &lt;strong&gt;specific about inputs and outputs&lt;/strong&gt; (e.g. “return the top 5 chunks with title and sourceRef”) and &lt;strong&gt;broke big features into small prompts&lt;/strong&gt; (one node, one endpoint, one UI block). That gave us code that matched our architecture instead of generic snippets.&lt;/p&gt;




&lt;h3&gt;
  
  
  Slash commands we used every day
&lt;/h3&gt;

&lt;p&gt;We leaned on a few slash commands to work faster and keep context clean.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;How we used it&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/clear&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Between unrelated tasks (e.g. after finishing the agent graph and before starting the dashboard). Clears conversation history so Copilot doesn’t drag in old files or decisions.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/plan&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Before implementing a new feature or flow. Gets a step-by-step plan we can edit, then “implement this plan.”&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/cwd&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;When we needed to scope Copilot to &lt;code&gt;apps/api&lt;/code&gt; or &lt;code&gt;apps/web&lt;/code&gt; only. We’d &lt;code&gt;/cwd apps/api&lt;/code&gt; then ask for API-only changes.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/model&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;We switched to a more capable model for the orchestration and planning code (complex state and types), and kept a faster model for simple CRUD and UI.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/help&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;To discover other commands (e.g. &lt;code&gt;/context&lt;/code&gt;, &lt;code&gt;/session&lt;/code&gt;, &lt;code&gt;/delegate&lt;/code&gt;) when we needed them.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/review&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Before committing. “Review the changes in my current branch against main for potential bugs and security issues.”&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Pro tip:&lt;/strong&gt; If you only remember three, use &lt;code&gt;/clear&lt;/code&gt;, &lt;code&gt;/cwd&lt;/code&gt;, and &lt;code&gt;/plan&lt;/code&gt;. They give you control over context, scope, and how much “thinking” Copilot does before writing code.&lt;/p&gt;




&lt;h3&gt;
  
  
  Custom instructions so Copilot matched our stack
&lt;/h3&gt;

&lt;p&gt;We added a &lt;strong&gt;&lt;code&gt;.github/copilot-instructions.md&lt;/code&gt;&lt;/strong&gt; (or repo-level instructions) so Copilot didn’t guess our conventions.&lt;/p&gt;

&lt;p&gt;We wrote things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Build commands:&lt;/strong&gt; &lt;code&gt;npm run dev&lt;/code&gt; (root, runs api + web), &lt;code&gt;npm run db:migrate&lt;/code&gt; (from root, runs API migrations), &lt;code&gt;npm run typecheck&lt;/code&gt; (root).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code style:&lt;/strong&gt; TypeScript strict, ESM (import/export), prefer &lt;code&gt;async/await&lt;/code&gt;, use Zod for request validation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structure:&lt;/strong&gt; Backend in &lt;code&gt;apps/api/src&lt;/code&gt;, frontend in &lt;code&gt;apps/web/app&lt;/code&gt; and &lt;code&gt;components&lt;/code&gt;, shared types in &lt;code&gt;apps/web/lib/types.ts&lt;/code&gt; and API &lt;code&gt;types.ts&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow:&lt;/strong&gt; After adding an API endpoint, export it from the right module and add the route in &lt;code&gt;index.ts&lt;/code&gt;; after changing Prisma schema, run &lt;code&gt;db:generate&lt;/code&gt; and &lt;code&gt;db:migrate&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That way, when we said “add an endpoint to list deployments for a project,” Copilot put it in the right place and used Zod and our existing patterns. Short, actionable instructions worked better than long essays.&lt;/p&gt;




&lt;h3&gt;
  
  
  How to use Copilot CLI more effectively (what we learned)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Break down complex tasks.&lt;/strong&gt; “Implement the full chat flow” is too big. We got better results with: “Add the parse_intent node,” then “Add the retrieve_context node and wire it after parse_intent,” then “Add the branch that routes to generate_plan or answer_question.” Same for the frontend: one component or one API integration per prompt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Be specific about inputs and outputs.&lt;/strong&gt; “Add a function that fetches the plan” is vague. “Add a function getPlan(planId: string) that calls GET /api/plans/:id and returns the plan JSON or null if 404” is something Copilot can implement accurately.&lt;/p&gt;
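&lt;p&gt;That second prompt is specific enough that the result is nearly determined. A sketch of what it should produce, assuming Node 18+ &lt;code&gt;fetch&lt;/code&gt; and a hypothetical base URL constant:&lt;/p&gt;

```typescript
// Illustrative implementation of the getPlan spec above: fetch the plan,
// return its JSON, or null on 404. API_BASE is an assumption for this
// sketch (the local dev API server), not a real project constant.
const API_BASE = "http://localhost:4000";

async function getPlan(planId: string) {
  const res = await fetch(`${API_BASE}/api/plans/${planId}`);
  if (res.status === 404) return null; // plan not found
  if (!res.ok) throw new Error(`GET /api/plans/${planId} failed: ${res.status}`);
  return res.json();
}
```

Every behavior in the code maps to a phrase in the prompt, which is exactly why the vague version ("fetches the plan") produces worse results.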

&lt;p&gt;&lt;strong&gt;Use &lt;code&gt;@file&lt;/code&gt; for precision.&lt;/strong&gt; When the change is in one file or you need to avoid touching others, put the path in the prompt. It reduces wrong-file edits and keeps context small.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Plan mode for multi-step work.&lt;/strong&gt; For anything that touches API + DB + frontend, or several modules, we used &lt;code&gt;/plan&lt;/code&gt; first. We reviewed the plan, adjusted it, then said “implement this plan.” Fewer rollbacks and cleaner diffs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Validate Copilot’s output.&lt;/strong&gt; We always ran &lt;code&gt;npm run typecheck&lt;/code&gt; and the relevant tests after accepting changes. We especially reviewed anything that touched deployment or user data. Copilot is a powerful draft, not a substitute for review.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clear context when switching tasks.&lt;/strong&gt; &lt;code&gt;/clear&lt;/code&gt; between “backend” and “frontend” or between features kept suggestions relevant. We also closed files we weren’t working on so Copilot didn’t pull them in unnecessarily.&lt;/p&gt;




&lt;h3&gt;
  
  
  What actually changed for us
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;We could think in product terms.&lt;/strong&gt; Describe what should happen next, get code that matched. Less jumping between docs and editor.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex systems felt buildable.&lt;/strong&gt; The orchestration and provisioning layers are the kind of thing that usually take weeks to get right. Copilot CLI gave us a strong first pass so we could refine behavior and edge cases instead of starting from zero.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency across the stack.&lt;/strong&gt; By describing our conventions in natural language (and in copilot-instructions), we kept naming, error handling, and structure consistent across backend, agents, and frontend.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We still review and test everything, especially where real provisioning and user data are involved, but Copilot CLI let us build a product that &lt;em&gt;sells&lt;/em&gt; the idea of “deploy by talking,” instead of getting stuck in the plumbing.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Cloudgen&lt;/strong&gt; lets you deploy your app by describing what you need in chat. We handle the complexity of planning, sizing, and provisioning, and hand you live URLs and connection strings, so you don’t have to. We built it with &lt;strong&gt;GitHub Copilot CLI&lt;/strong&gt;, which made it realistic to ship something this involved: multi-agent orchestration, a full API, real provisioning, and a polished dashboard, without drowning in boilerplate. We’re excited to keep improving Cloudgen, and we will soon launch it publicly, along with more tools to help you deploy better. Thanks to Copilot, we can ship fast!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thanks to the GitHub Copilot team for running this challenge.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>Appwrite Submission Post MerkleWrite- A decentralised blogging website built using ethereum, defi and appwrite</title>
      <dc:creator>MOHIT BHAT</dc:creator>
      <pubDate>Thu, 12 May 2022 19:13:55 +0000</pubDate>
      <link>https://forem.com/mbcse/appwrite-submission-post-merklewrite-a-decentralised-blogging-website-built-using-ethereum-defi-and-appwrite-2</link>
      <guid>https://forem.com/mbcse/appwrite-submission-post-merklewrite-a-decentralised-blogging-website-built-using-ethereum-defi-and-appwrite-2</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;h2&gt;
  
  
  About
&lt;/h2&gt;

&lt;p&gt;This is a decentralized content-sharing platform built on top of IPFS, DeFi, and Ethereum. Nowadays content platforms are centralized and subject to changes or partiality by the site owners. To support content creators and provide them with equal rights and profits, &lt;strong&gt;MerkleWrite was made!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In MerkleWrite you can write a blog with images and Markdown content. Users pay 1 DAI as a subscription fee for 30 days to see the blogs. The DAI collected from users is sent to Compound to generate interest (in the form of cTokens). The DAI and the interest generated are then divided equally between content creators who have more likes than a specified limit (currently set at 10). You can transfer your subscription to another Ethereum address, get paid once a month, and update your profile to let the world know you!&lt;br&gt;
The best part: it is all decentralized, with no-code support from Appwrite.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Blockchain Technologies used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Ethereum&lt;/li&gt;
&lt;li&gt;Matic Network&lt;/li&gt;
&lt;li&gt;Portis&lt;/li&gt;
&lt;li&gt;DEFI&lt;/li&gt;
&lt;li&gt;Dai Stable coin&lt;/li&gt;
&lt;li&gt;Compound&lt;/li&gt;
&lt;li&gt;Ipfs&lt;/li&gt;
&lt;li&gt;ERC20&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  TechStack Used
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Blockchain&lt;/li&gt;
&lt;li&gt;Nodejs&lt;/li&gt;
&lt;li&gt;HTML/CSS&lt;/li&gt;
&lt;li&gt;Bootstrap&lt;/li&gt;
&lt;li&gt;Javascript&lt;/li&gt;
&lt;li&gt;Appwrite&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Users can write blogs (Markdown supported)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;To write a blog, go to the Write section&lt;/li&gt;
&lt;li&gt;Log in using a Portis wallet, or create one&lt;/li&gt;
&lt;li&gt;Enter a title, a featured image, and content (in Markdown), then submit&lt;/li&gt;
&lt;li&gt;Pay the gas fee, and voilà! Blog uploaded!&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Profile/Account Section
&lt;/h3&gt;

&lt;p&gt;It includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subscribe to see blogs&lt;/li&gt;
&lt;li&gt;Transfer your subscription&lt;/li&gt;
&lt;li&gt;Update your name, profile image, and cover image&lt;/li&gt;
&lt;li&gt;See subscription details and the total number of likes on your posts, get paid for your content, and see all blogs you uploaded&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Other Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You can like blogs (thanks to the Appwrite database API)&lt;/li&gt;
&lt;li&gt;You can see an author's profile and all of their blogs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  To Run the Code
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Clone the repo&lt;/li&gt;
&lt;li&gt;Go to the project directory and open a terminal&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;npm install&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Run &lt;strong&gt;npm start&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Head to &lt;a href="http://127.0.0.1:5000/"&gt;http://127.0.0.1:5000/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How this is built
&lt;/h2&gt;

&lt;p&gt;Blockchain: The blockchains used here are Ethereum and Polygon (on testnets). The post metadata is stored in a smart contract built using Solidity. The post content, images, and other assets are first uploaded to IPFS (the InterPlanetary File System), and the returned hash is then stored in the smart contract. This saves gas costs and improves security and feasibility.&lt;/p&gt;

&lt;p&gt;Payments are taken through the Portis wallet and sent to the Compound protocol (which gives us back cTokens), which generates interest on them. Once the month ends, payouts are calculated based on how many creators there are and their likes and views; the amount is then withdrawn from Compound and given to the creators. The interest generated is sent to the admin (us).&lt;/p&gt;
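&lt;p&gt;The Compound accounting works roughly like this (illustrative numbers, not real rates): supplying DAI mints cTokens at the current exchange rate, and because the rate grows over time, redeeming the same cTokens later returns the principal plus interest.&lt;/p&gt;

```javascript
// Rough sketch of cToken accounting with made-up numbers. Supplying DAI
// mints cTokens at the current exchange rate; the rate only grows, so
// redeeming later yields principal + interest.
function supply(dai, exchangeRate) {
  return dai / exchangeRate;      // cTokens minted
}

function redeem(cTokens, exchangeRate) {
  return cTokens * exchangeRate;  // DAI returned
}

const principal = 100;                    // one month of subscriptions, in DAI
const cTokens = supply(principal, 0.02);  // minted at rate 0.02
const redeemed = redeem(cTokens, 0.0204); // rate rose over the month
const interest = redeemed - principal;

console.log(redeemed.toFixed(2)); // 102.00 DAI back
console.log(interest.toFixed(2)); // 2.00 DAI of interest
```

In this sketch the 100 DAI principal would go back into the creator payouts while the 2 DAI of interest goes to the admin, matching the split described above.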

&lt;p&gt;Frontend: It's built using HTML, CSS, JavaScript, and Bootstrap. Web3.js is used to communicate with Web3 wallets and the Ethereum/Polygon blockchains.&lt;/p&gt;

&lt;p&gt;Backend: There is a small backend for auth and for storing likes on posts, which helps in distributing the creators' funds. The Appwrite database is connected to a Node.js backend, and routes are created to handle application data. It acts as an intermediary node between Appwrite, the blockchain, IPFS, and the frontend.&lt;/p&gt;
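&lt;p&gt;The likes store the backend maintains can be sketched like this. It is a minimal in-memory illustration (a Map in place of the Appwrite database, with hypothetical function names); the per-post totals are what the payout step later reads.&lt;/p&gt;

```javascript
// Minimal in-memory sketch of the likes store; a Map stands in for the
// Appwrite database. Function names are hypothetical.
const likes = new Map(); // postId -> Set of user addresses

function likePost(postId, userAddress) {
  if (!likes.has(postId)) likes.set(postId, new Set());
  likes.get(postId).add(userAddress); // a Set makes likes idempotent per user
  return likes.get(postId).size;
}

function likeCount(postId) {
  return likes.has(postId) ? likes.get(postId).size : 0;
}

likePost(42, '0xA1');
likePost(42, '0xB2');
likePost(42, '0xA1'); // a duplicate like from the same user is ignored
console.log(likeCount(42)); // 2
```

Keying by user address is what lets the eligibility check ("more than 10 likes") count distinct supporters rather than raw click events.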

&lt;p&gt;Appwrite: Appwrite is really awesome. I was able to set up and connect all these things in a very short span of time because of Appwrite. Appwrite is deployed using the DigitalOcean image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qXSqoyOL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a65mn6538w48obsuvhoc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qXSqoyOL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a65mn6538w48obsuvhoc.png" alt="Image description" width="880" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then a project is created, a web app is added to it, and a database is created to handle the like data. Some Appwrite functions are also used for data interaction between the blockchain and the backend.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Tz0xbo3e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujxbney95qtx0krj477i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tz0xbo3e--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ujxbney95qtx0krj477i.png" alt="Image description" width="880" height="449"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--n3jWy4gL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bqdikmrkupz08brz3rl5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--n3jWy4gL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bqdikmrkupz08brz3rl5.png" alt="Image description" width="880" height="449"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---vKEXePY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7oxwqzul0rj75y5wa6dt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---vKEXePY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7oxwqzul0rj75y5wa6dt.png" alt="Image description" width="880" height="451"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0ZnJqDtV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9wtf4ugw833e0k3xcjqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0ZnJqDtV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9wtf4ugw833e0k3xcjqm.png" alt="Image description" width="880" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;Web3 Wunderkinds&lt;/p&gt;

&lt;h3&gt;
  
  
  Links
&lt;/h3&gt;

&lt;p&gt;Website 👉 &lt;a href="https://merklewrite.herokuapp.com/"&gt;https://merklewrite.herokuapp.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Github 👉 &lt;a href="https://github.com/mbcse/app-write-dev"&gt;https://github.com/mbcse/app-write-dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Appwrite Link 👉  &lt;a href="http://68.183.246.221/"&gt;http://68.183.246.221/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;YouTube 👉 &lt;a href="https://youtu.be/qbnQlZge98Q"&gt;https://youtu.be/qbnQlZge98Q&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hola! Decentralized blogging has started&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Deployed On Matic Mumbai Testnet&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Wallet Support by Portis&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Backend Support by Appwrite&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;LIVE WEBSITE LINK: &lt;a href="https://merklewrite.herokuapp.com/"&gt;https://merklewrite.herokuapp.com/&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;IPFS Accessed through Infura&lt;/strong&gt;&lt;br&gt;&lt;/p&gt;

</description>
      <category>appwritehack</category>
      <category>javascript</category>
      <category>web3</category>
      <category>node</category>
    </item>
  </channel>
</rss>
