<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: yousu.be</title>
    <description>The latest articles on Forem by yousu.be (@splmdny).</description>
    <link>https://forem.com/splmdny</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F575606%2F80e1198a-7436-4b43-96f4-2757d6f6bb82.jpeg</url>
      <title>Forem: yousu.be</title>
      <link>https://forem.com/splmdny</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/splmdny"/>
    <language>en</language>
    <item>
      <title>ClickMapper — A Keyboard-Driven Chrome Extension to Eliminate Repetitive Clicks for Content Moderators</title>
      <dc:creator>yousu.be</dc:creator>
      <pubDate>Mon, 16 Feb 2026 07:36:06 +0000</pubDate>
      <link>https://forem.com/splmdny/clickmapper-a-keyboard-driven-chrome-extension-to-eliminate-repetitive-clicks-for-content-9c5</link>
      <guid>https://forem.com/splmdny/clickmapper-a-keyboard-driven-chrome-extension-to-eliminate-repetitive-clicks-for-content-9c5</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;ClickMapper&lt;/strong&gt;, a &lt;em&gt;Chrome extension&lt;/em&gt; designed to help content moderators and reviewers work faster by mapping clickable areas on a webpage and triggering actions using keyboard shortcuts.&lt;/p&gt;

&lt;p&gt;I have experience working as a &lt;strong&gt;content moderator/reviewer&lt;/strong&gt;, and one of the biggest pain points in that role is performing repetitive clicks in the same spots over and over again to review content. This becomes mentally exhausting and inefficient over long sessions. I realized it would be much more productive if these actions could be simplified with keyboard controls instead of constant mouse movement.&lt;/p&gt;

&lt;p&gt;There are existing automation tools like AutoHotkey, but in many companies those tools are banned due to security concerns (malware risks, false positives, or policy restrictions). Additionally, those tools are often not beginner-friendly and require scripting knowledge. Because of that gap, I decided to build ClickMapper as a safe, browser-based moderator companion that works within company restrictions while remaining easy to use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This project focuses on improving productivity, reducing repetitive strain, and making moderation workflows more efficient without requiring advanced technical setup.&lt;/strong&gt;&lt;/p&gt;
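&lt;p&gt;As a rough sketch of the core idea (the names and structure here are my own illustration, not ClickMapper's actual code): the extension boils down to a registry that maps single keys to saved page coordinates, so a keypress can be resolved to the element at that spot and clicked.&lt;/p&gt;

```javascript
// Hypothetical sketch of a key-to-click registry. All names here are
// illustrative assumptions, not ClickMapper's real API.

// Registry of key -> recorded click position (saved while mapping mode is on).
const keyMap = new Map();

function bindKey(key, x, y) {
  keyMap.set(key.toLowerCase(), { x, y });
}

// Given a pressed key, return the stored coordinates, or null if unbound.
function resolveKey(key) {
  return keyMap.get(key.toLowerCase()) || null;
}

// In a content script this would be wired up roughly like:
//   document.addEventListener('keydown', (e) => {
//     const pos = resolveKey(e.key);
//     if (pos) document.elementFromPoint(pos.x, pos.y).click();
//   });

bindKey('a', 120, 340); // e.g. an "Approve" button
bindKey('r', 120, 400); // e.g. a "Reject" button
```

&lt;p&gt;Because everything runs inside the browser's normal extension sandbox, no system-level automation tool is needed.&lt;/p&gt;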

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Project Link: &lt;a href="https://github.com/splmdny/chrome_ClickBinding" rel="noopener noreferrer"&gt;github.com/splmdny/chrome_ClickBinding&lt;/a&gt;&lt;br&gt;
[Update 2/22/26] Official Chrome Web Store Link: &lt;a href="https://chromewebstore.google.com/detail/dnajkjbjlcbbibijanlpakkfkjgehkak?utm_source=item-share-cb" rel="noopener noreferrer"&gt;https://chromewebstore.google.com/detail/dnajkjbjlcbbibijanlpakkfkjgehkak&lt;/a&gt;&lt;br&gt;
Video Demo: &lt;a href="https://youtu.be/s2wMHisA7O8" rel="noopener noreferrer"&gt;https://youtu.be/s2wMHisA7O8&lt;/a&gt;&lt;br&gt;
Screenshots:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz21e8dg25t1seja5mgvw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz21e8dg25t1seja5mgvw.jpg" alt="Step-by-step to use it - 1" width="800" height="500"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnvkvk8jyfafypdp7klt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnvkvk8jyfafypdp7klt.jpg" alt="Step-by-step to use it - 2" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;p&gt;GitHub Copilot CLI played a major role in accelerating development. I used it to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generate Chrome Extension Manifest v3 structure&lt;/li&gt;
&lt;li&gt;Scaffold content scripts and background logic&lt;/li&gt;
&lt;li&gt;Quickly prototype DOM-mapping and event-handling functions&lt;/li&gt;
&lt;li&gt;Debug permission issues and refine code patterns&lt;/li&gt;
&lt;li&gt;Explore alternative implementation approaches through prompt-driven iteration&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The biggest impact was speed. Instead of spending time searching documentation or writing boilerplate manually, I could describe what I wanted in natural language and iterate rapidly. It felt like having a pair-programming assistant that understood both the architecture and the intent behind the feature.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;However, from my experience, Copilot still requires manual adjustment—especially when creating UI that is truly usable and favorable for the builder’s specific needs. While it can generate good starting points, refining layout, interaction details, and overall user experience still depends on human decisions and iteration.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Copilot CLI also helped reduce context switching, allowing me to stay focused on solving the actual workflow problem for moderators rather than getting stuck on setup details.&lt;/p&gt;

&lt;p&gt;Overall, it significantly improved productivity and made experimentation much easier throughout the build process.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>Who’s That Pokémon? – The Twist!</title>
      <dc:creator>yousu.be</dc:creator>
      <pubDate>Mon, 15 Sep 2025 07:06:01 +0000</pubDate>
      <link>https://forem.com/splmdny/whos-that-pokemon-the-twist-26nf</link>
      <guid>https://forem.com/splmdny/whos-that-pokemon-the-twist-26nf</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-ai-studio-2025-09-03"&gt;Google AI Studio Multimodal Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Who’s That Pokémon? – The Twist!&lt;/em&gt; is a fun and challenging Pokémon quiz game built with React, TypeScript, and the Google Gemini API.&lt;/p&gt;

&lt;p&gt;Can you guess the Pokémon from its silhouette? Careful—the silhouette might not be the Pokémon you think it is!&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;GitHub repo: &lt;a href="https://github.com/splmdny/whos-that-pokemon-with-google-ai" rel="noopener noreferrer"&gt;WhosThatPokemon repo&lt;/a&gt;&lt;br&gt;
Google AI Studio: &lt;a href="https://aistudio.google.com/apps/drive/1zd5jIM-GQmYnCRM6BdLYrwcn4oDHSXb0?showPreview=true&amp;amp;showAssistant=true&amp;amp;resourceKey=" rel="noopener noreferrer"&gt;WhosThatPokemon&lt;/a&gt;&lt;br&gt;
Image: &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrvgi44mnpg869j9jsdk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrvgi44mnpg869j9jsdk.png" alt="pokemon beginning" width="710" height="818"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5mrbb96btru6kmcdnrg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5mrbb96btru6kmcdnrg.png" alt="pokemon start" width="457" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fke7e1urbsii56o4kp0l2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fke7e1urbsii56o4kp0l2.png" alt="pokemon clue" width="454" height="626"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzigybhat8o6aq13lgkfa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzigybhat8o6aq13lgkfa.png" alt="pokemon failed" width="453" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Used Google AI Studio
&lt;/h2&gt;

&lt;p&gt;I leveraged Google Gemini API with multimodal inputs to create the core twist mechanic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;gemini-2.5-flash-image-preview → for morphing Pokémon, reshaping one into another’s silhouette.&lt;/li&gt;
&lt;li&gt;gemini-2.5-flash → for progressive AI clue generation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The app sends multiple modalities (text instructions + Pokémon images) to Gemini:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Text Prompt → Explains the morphing rules (e.g., “Take source Pokémon’s colors/texture and apply to silhouette shape”).&lt;/li&gt;
&lt;li&gt;Image 1 → Source Pokémon (for colors and texture).&lt;/li&gt;
&lt;li&gt;Image 2 → Target Pokémon silhouette (for shape).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Output: Gemini processes them together to generate a unique AI morphed Pokémon—a perfect example of multimodality in action.&lt;/p&gt;
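&lt;p&gt;A minimal sketch of how such a morphing request could be assembled (the payload shape follows the Google GenAI JS SDK's text/inlineData "parts" convention; treat the exact prompt and structure as an assumption, not the app's real code):&lt;/p&gt;

```javascript
// Illustrative builder for a multimodal Gemini request: one text
// instruction plus two base64-encoded PNG images (source + silhouette).

function buildMorphRequest(sourcePngBase64, silhouettePngBase64) {
  return {
    model: 'gemini-2.5-flash-image-preview',
    contents: [
      { text: 'Apply the source Pokemon colors and texture to the silhouette shape.' },
      { inlineData: { mimeType: 'image/png', data: sourcePngBase64 } },
      { inlineData: { mimeType: 'image/png', data: silhouettePngBase64 } },
    ],
  };
}
```

&lt;p&gt;The request object would then be passed to the SDK's generate call, which returns the morphed image.&lt;/p&gt;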

&lt;h2&gt;
  
  
  Multimodal Features
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Image + Text Fusion → Combines text prompts with Pokémon images for morphing.&lt;/li&gt;
&lt;li&gt;AI-Generated Silhouette Morphing → Creates unpredictable twists in gameplay.&lt;/li&gt;
&lt;li&gt;Progressive AI Clue System → AI adapts hints based on player performance.&lt;/li&gt;
&lt;li&gt;Downloadable Artwork → Save and share AI-generated Pokémon creations.&lt;/li&gt;
&lt;/ol&gt;
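&lt;p&gt;The progressive clue system (feature 3) can be sketched as a prompt builder that gets more revealing as the player's wrong guesses accumulate. This is a hedged illustration with made-up hint tiers, not the game's actual prompts:&lt;/p&gt;

```javascript
// Hypothetical progressive-clue prompt builder: more wrong guesses means
// Gemini is asked for a more revealing hint.

function buildCluePrompt(pokemonName, wrongGuesses) {
  const levels = [
    'Give a vague, one-sentence hint about its type.',
    'Hint at its color or habitat without naming it.',
    'Describe a signature move or evolution, still without the name.',
  ];
  const i = Math.min(wrongGuesses, levels.length - 1);
  return 'The answer is ' + pokemonName + '. ' + levels[i];
}
```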

&lt;h2&gt;
  
  
  Team
&lt;/h2&gt;

&lt;p&gt;Yusup Almadani&lt;br&gt;
GitHub: &lt;a href="https://github.com/splmdny" rel="noopener noreferrer"&gt;https://github.com/splmdny&lt;/a&gt;&lt;br&gt;
Website: &lt;a href="https://splmdny.vercel.app/" rel="noopener noreferrer"&gt;https://splmdny.vercel.app/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>ai</category>
      <category>gemini</category>
    </item>
    <item>
      <title>FitThat.Me – AI-Powered Virtual Try-On</title>
      <dc:creator>yousu.be</dc:creator>
      <pubDate>Mon, 15 Sep 2025 06:45:12 +0000</pubDate>
      <link>https://forem.com/splmdny/fitthatme-ai-powered-virtual-try-on-id</link>
      <guid>https://forem.com/splmdny/fitthatme-ai-powered-virtual-try-on-id</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-ai-studio-2025-09-03"&gt;Google AI Studio Multimodal Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;FitThat.Me is an AI-powered virtual dressing room that helps you try on clothes anytime, anywhere — no changing room required.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upload a full-body photo of yourself.&lt;/li&gt;
&lt;li&gt;Add images of clothing items.&lt;/li&gt;
&lt;li&gt;Instantly see how the outfit fits your style.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This app solves the hassle of online shopping uncertainty and makes it easier for users to visualize outfits before purchase. Whether you’re at home, commuting, or traveling, you can try on clothes on-the-go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Web deploy: &lt;a href="https://fitthatme.netlify.app/" rel="noopener noreferrer"&gt;https://fitthatme.netlify.app/&lt;/a&gt; (still being fixed; the free tier is out of quota)&lt;br&gt;
GitHub repo: &lt;a href="https://github.com/splmdny/fitThat.me" rel="noopener noreferrer"&gt;FitThat.Me repo&lt;/a&gt;&lt;br&gt;
Google AI Studio: &lt;a href="https://aistudio.google.com/apps/drive/1It8AbyotR5va0SrcK76uvI0AfueFFWNN?showAssistant=true&amp;amp;resourceKey=&amp;amp;showCode=true" rel="noopener noreferrer"&gt;FitThat.Me&lt;/a&gt;&lt;br&gt;
Image: &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz5bol74vk68ujut0gyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz5bol74vk68ujut0gyc.png" alt="fitting processed" width="800" height="528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Used Google AI Studio
&lt;/h2&gt;

&lt;p&gt;I leveraged Google AI Studio with the Gemini 2.5 multimodal APIs to build the try-on experience:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Gemini 2.5 Flash Image Preview → for image editing &amp;amp; composition, merging clothing onto the user’s uploaded photo.&lt;/li&gt;
&lt;li&gt;Imagen 4.0 Generate 001 → for generating placeholder product images (in case users don’t have high-quality clothing photos).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These APIs made it possible to handle realistic image overlays while keeping the system lightweight and responsive.&lt;/p&gt;
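&lt;p&gt;As a rough sketch (assumed names and prompt text, not FitThat.Me's actual code), a try-on request can be assembled from one user photo plus any number of clothing images, all sent as inline image parts alongside the instruction:&lt;/p&gt;

```javascript
// Illustrative builder for the try-on composition parts: one instruction,
// one user photo, then each clothing image as an additional inline part.

function buildTryOnParts(userPhotoBase64, clothingBase64List) {
  const parts = [
    { text: 'Fit the following clothing items naturally onto the person in the first image.' },
    { inlineData: { mimeType: 'image/jpeg', data: userPhotoBase64 } },
  ];
  for (const item of clothingBase64List) {
    parts.push({ inlineData: { mimeType: 'image/jpeg', data: item } });
  }
  return parts;
}
```

&lt;p&gt;Keeping the request to a single multimodal call is what lets the overlay stay responsive without a separate image pipeline.&lt;/p&gt;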

&lt;h2&gt;
  
  
  Multimodal Features
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Image Understanding&lt;/strong&gt; → Detects user body shape and clothing alignment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image Editing &amp;amp; Composition&lt;/strong&gt; → Fits clothes naturally onto uploaded photos.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-Generated Clothing Previews&lt;/strong&gt; → Fills in missing product visuals with AI-generated placeholders.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Together, these multimodal features create a personalized, interactive fitting experience that enhances confidence in styling choices and online shopping.&lt;/p&gt;

&lt;h2&gt;
  
  
  Team
&lt;/h2&gt;

&lt;p&gt;Yusup Almadani&lt;br&gt;
GitHub: &lt;a href="https://github.com/splmdny" rel="noopener noreferrer"&gt;https://github.com/splmdny&lt;/a&gt;&lt;br&gt;
Website: &lt;a href="https://splmdny.vercel.app/" rel="noopener noreferrer"&gt;https://splmdny.vercel.app/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>ai</category>
      <category>gemini</category>
    </item>
  </channel>
</rss>
