<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Uncle B Laced It</title>
    <description>The latest articles on Forem by Uncle B Laced It (@uncle_blacedit_4828f0b2).</description>
    <link>https://forem.com/uncle_blacedit_4828f0b2</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3691774%2F1288d6a1-3b92-4419-ab07-431a85477cef.jpg</url>
      <title>Forem: Uncle B Laced It</title>
      <link>https://forem.com/uncle_blacedit_4828f0b2</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/uncle_blacedit_4828f0b2"/>
    <language>en</language>
    <item>
      <title>The Neural Core Portfolio</title>
      <dc:creator>Uncle B Laced It</dc:creator>
      <pubDate>Sun, 18 Jan 2026 16:15:32 +0000</pubDate>
      <link>https://forem.com/uncle_blacedit_4828f0b2/the-neural-core-portfolio-an1</link>
      <guid>https://forem.com/uncle_blacedit_4828f0b2/the-neural-core-portfolio-an1</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/new-year-new-you-google-ai-2025-12-31"&gt;New Year, New You Portfolio Challenge Presented by Google AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;About Me&lt;br&gt;
I’m Bryan Chense Simwayi, a Creative AI Technologist based in Zambia.&lt;br&gt;
I don’t see AI as just a productivity tool or a code generator. I see it as a new interface for thought—a way to externalize cognition, perception, agency, and exploration. My work lives at the intersection of experimental AI systems, psychoacoustics, and human–AI collaboration.&lt;br&gt;
For this challenge, I didn’t want to build a static brochure. I wanted to build a digital lab—a portfolio that behaves like a living system, not a document.&lt;/p&gt;

&lt;p&gt;Portfolio&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://bryan-chense-simwayi-portfolio-937160735221.us-west1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;


&lt;p&gt;How I Built It&lt;br&gt;
This portfolio is a fully functional, client-side React application with real AI integration and interactive systems.&lt;br&gt;
The Stack&lt;br&gt;
Google AI Studio: Used as the primary environment for prompting, iterating, and refining the "Neural Core" persona.&lt;br&gt;
Gemini 2.0 Flash (via @google/genai SDK): Powers the live AI assistant embedded in the site.&lt;br&gt;
Google Cloud Run: The app is containerized via Docker and deployed to Cloud Run to ensure scalability for the real-time visualizers.&lt;br&gt;
Frontend: React 19, TypeScript, and Tailwind CSS (CDN).&lt;br&gt;
The AI Assistant (“Neural Core”)&lt;br&gt;
The assistant embedded in the bottom-right is not a mock UI. It is a fully functional agent that serves as an intelligent archive of my work.&lt;br&gt;
Real Connection: It initializes a Gemini 2.0 Flash chat session directly in the browser.&lt;br&gt;
Context Aware: It operates on a strict JSON knowledge graph of my bio, projects, and philosophy.&lt;br&gt;
Function Calling: I exposed a navigate() tool to Gemini. If you ask it to "Take me to the Lab," it doesn't just tell you where to go—it actually executes the navigation function to route the app.&lt;br&gt;
Voice Input: It utilizes the browser's native webkitSpeechRecognition API to allow for voice-to-text interaction.&lt;/p&gt;
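&lt;p&gt;Roughly, that wiring looks like the sketch below. It assumes the @google/genai chat API; the goTo() router helper, model id, and prompt text are illustrative, not the exact code from the site.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { GoogleGenAI, Type } from "@google/genai";

// Hypothetical router helper exposed by the app shell.
declare function goTo(page: string): void;

const ai = new GoogleGenAI({ apiKey: process.env.API_KEY as string });

// Describe the navigate() tool so Gemini can request page changes.
const navigateTool = {
  name: "navigate",
  description: "Route the portfolio to a named page, e.g. 'lab' or 'projects'.",
  parameters: {
    type: Type.OBJECT,
    properties: { page: { type: Type.STRING, description: "Target page id" } },
    required: ["page"],
  },
};

const chat = ai.chats.create({
  model: "gemini-2.0-flash",
  config: {
    systemInstruction:
      "You are Neural Core, the archive of Bryan's work. Answer only from the provided knowledge graph.",
    tools: [{ functionDeclarations: [navigateTool] }],
  },
});

export async function ask(userText: string) {
  const res = await chat.sendMessage({ message: userText });
  // If the model decided to call the tool, execute the real navigation.
  for (const call of res.functionCalls ?? []) {
    if (call.name === "navigate") goTo(String(call.args?.page));
  }
  return res.text ?? "";
}
&lt;/code&gt;&lt;/pre&gt;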
&lt;p&gt;The Lab (Interactive Audio Visualizer)&lt;br&gt;
The Lab page is a real-time system powered by the Web Audio API and HTML5 Canvas.&lt;br&gt;
It captures live microphone input and visualizes the frequency data (FFT) as a 3D particle sphere. The bass frequencies pulse at the bottom, mids ripple through the center, and treble sparks at the top. It serves as a metaphor for "Neural Weight Connection," making the portfolio a genuine interactive experiment.&lt;/p&gt;
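&lt;p&gt;In simplified form, the analyser loop looks like this; it is a flat 2D sketch, while the real version maps the frequency bins onto the 3D particle sphere.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Capture the mic and expose FFT data to a canvas render loop.
async function startLab() {
  const audioCtx = new AudioContext();
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 1024; // 512 frequency bins

  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  audioCtx.createMediaStreamSource(stream).connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  const canvas = document.getElementById("lab") as HTMLCanvasElement;
  const ctx = canvas.getContext("2d")!;

  function draw() {
    analyser.getByteFrequencyData(bins);
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // Plot each bin: low frequencies on the left, treble on the right.
    bins.forEach((v, i) => {
      const x = (i / bins.length) * canvas.width;
      const y = canvas.height - (v / 255) * canvas.height;
      ctx.fillRect(x, y, 2, 2);
    });
    requestAnimationFrame(draw);
  }
  draw();
}

startLab();
&lt;/code&gt;&lt;/pre&gt;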
&lt;p&gt;Generative Interface Elements&lt;br&gt;
I avoided generic templates. All visuals are code-driven:&lt;br&gt;
The background runs a particle simulation that changes behavior (Flow vs. Grid vs. Orbit) based on which page you are viewing.&lt;br&gt;
Project cards generate procedural SVG visuals based on the project's ID and category—no static stock images are used.&lt;/p&gt;
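&lt;p&gt;The card visuals boil down to seeding a few shape parameters from the project data. The hash and palette choices below are illustrative, not the exact code:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Deterministic hash so the same project id always yields the same visual.
function hash(s: string) {
  let h = 0;
  for (const ch of s) h = (h * 31 + ch.charCodeAt(0)) % 100000;
  return h;
}

// Build an SVG element from the project's id and category; no stock images.
export function projectArt(id: string, category: string) {
  const seed = hash(id + category);
  const hue = seed % 360;       // palette derived from the id
  const rings = 3 + (seed % 5); // shape count derived from the id
  const NS = "http://www.w3.org/2000/svg";
  const svg = document.createElementNS(NS, "svg");
  svg.setAttribute("viewBox", "0 0 120 120");
  for (let i = rings; i >= 1; i--) {
    const circle = document.createElementNS(NS, "circle");
    circle.setAttribute("cx", "60");
    circle.setAttribute("cy", "60");
    circle.setAttribute("r", String(i * 10));
    circle.setAttribute("fill", "none");
    circle.setAttribute("stroke", `hsl(${hue}, 70%, ${25 + i * 8}%)`);
    svg.appendChild(circle);
  }
  return svg;
}
&lt;/code&gt;&lt;/pre&gt;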

&lt;p&gt;What I’m Most Proud Of&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Treating the portfolio as an engineering project
This isn't just layout and content. It includes stateful navigation logic, real-time audio analysis, and agentic AI integration. The portfolio itself is a case study.&lt;/li&gt;
&lt;li&gt;Research-style presentation
Each project page is structured around "Intent," "Exploration," and "Field Notes." This reflects how I actually work: through experimentation and observation rather than just feature checklists.&lt;/li&gt;
&lt;li&gt;Using Google AI Studio as a design partner
I didn't just use AI Studio to generate code. I treated it as a collaborator—using it to shape the tone of the assistant, refine the visual metaphors, and architect the system prompts that give the site its unique "personality."&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Somnium Audio Dream Journal</title>
      <dc:creator>Uncle B Laced It</dc:creator>
      <pubDate>Wed, 07 Jan 2026 02:55:48 +0000</pubDate>
      <link>https://forem.com/uncle_blacedit_4828f0b2/somnium-audio-dream-journal-aog</link>
      <guid>https://forem.com/uncle_blacedit_4828f0b2/somnium-audio-dream-journal-aog</guid>
      <description>&lt;p&gt;This post is my submission for the DEV Education Track: Build Apps with Google AI Studio.&lt;br&gt;
&lt;strong&gt;What I Built&lt;/strong&gt;&lt;br&gt;
I built Somnium, a mystical, voice-first dream journal that acts as a bridge to your subconscious. Instead of typing out dreams in the middle of the night, users simply record their voice. The app uses Google's Gemini API to transcribe the audio, analyze the dream using Jungian psychology, detect emotional themes, and even generate a surrealist image representing the dreamscape.&lt;br&gt;
Key Prompts &amp;amp; Features:&lt;br&gt;
I utilized the multimodal capabilities of the gemini-3-flash-preview model to process raw audio blobs directly.&lt;br&gt;
Analysis Prompt: "You are an expert Jungian dream analyst... Transcribe the audio... Analyze for hidden meanings... Identify archetypes... Rate the intensity of primary emotions."&lt;br&gt;
Visual Generation: I used the analysis output to construct a dynamic prompt for gemini-2.5-flash-image, requesting "Abstract Expressionism mixed with Dreamcore" based on the specific emotions and themes found in the dream.&lt;br&gt;
The app features a real-time audio visualizer, an emotion radar chart (Recharts), and an auto-tagging system where the AI suggests relevant keywords for the journal; the transcribe-and-analyze call itself is sketched after the demo link.&lt;br&gt;
&lt;strong&gt;Demo&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://somnium-audio-dream-journal-937160735221.us-west1.run.app" rel="noopener noreferrer"&gt;https://somnium-audio-dream-journal-937160735221.us-west1.run.app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Experience&lt;/strong&gt;&lt;br&gt;
Building with the Google GenAI SDK was surprisingly intuitive, specifically regarding Structured Output.&lt;br&gt;
Multimodal Ease: I was amazed that I didn't need a separate library for speech-to-text. Passing the audio blob directly to Gemini with a prompt to "transcribe and analyze" handled both tasks in a single request, significantly reducing latency and code complexity.&lt;br&gt;
&lt;strong&gt;JSON Schema&lt;/strong&gt;: Using the responseSchema configuration was a game-changer. It ensured that Gemini always returned data (like emotion scores and archetype lists) in a perfect JSON format that my React components could render immediately without parsing errors.&lt;br&gt;
&lt;strong&gt;Chaining&lt;/strong&gt;: The ability to chain the text analysis output into an image generation prompt allowed for a really cohesive user experience where the visuals truly matched the "vibe" of the dream interpretation. That step is sketched below.&lt;/p&gt;
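&lt;p&gt;Roughly, the chaining looks like this, reusing the analysis shape from the sketch above; the prompt wording and image-part handling are illustrative.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.API_KEY as string });

// Turn the structured analysis into an image prompt, then render the dreamscape.
export async function generateDreamscape(analysis: {
  interpretation: string;
  archetypes: string[];
  emotions: { name: string; intensity: number }[];
}) {
  const dominant = [...analysis.emotions]
    .sort((a, b) => b.intensity - a.intensity)
    .slice(0, 3)
    .map((e) => e.name)
    .join(", ");

  const prompt =
    "Abstract Expressionism mixed with Dreamcore. " +
    `Dominant emotions: ${dominant}. ` +
    `Archetypes: ${analysis.archetypes.join(", ")}. ` +
    `Scene: ${analysis.interpretation}`;

  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash-image", // image model named in the post
    contents: prompt,
  });

  // Pull the first inline image part out of the response, if any.
  const parts = response.candidates?.[0]?.content?.parts ?? [];
  const image = parts.find((p) => p.inlineData)?.inlineData;
  return image ? `data:${image.mimeType};base64,${image.data}` : null;
}
&lt;/code&gt;&lt;/pre&gt;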

</description>
      <category>deved</category>
      <category>learngoogleaistudio</category>
      <category>ai</category>
      <category>gemini</category>
    </item>
    <item>
      <title>Hertzlab</title>
      <dc:creator>Uncle B Laced It</dc:creator>
      <pubDate>Sun, 04 Jan 2026 23:12:24 +0000</pubDate>
      <link>https://forem.com/uncle_blacedit_4828f0b2/hertzlab-3i75</link>
      <guid>https://forem.com/uncle_blacedit_4828f0b2/hertzlab-3i75</guid>
      <description>&lt;p&gt;This is a submission for DEV's Worldwide Show and Tell Challenge Presented by Mux&lt;br&gt;
What I Built&lt;br&gt;
HertzLab is a professional-grade Progressive Web App (PWA) that combines precise audio engineering tools with wellness features. It serves as a frequency generator, a physics-based oscilloscope, a brainwave entrainment tool (Binaural Beats), and an AI-powered acoustic consultant—all running entirely in the browser with zero latency.&lt;br&gt;
My Pitch Video&lt;br&gt;
&lt;a href="https://stream.mux.com/8O8V4ycz6uOXDm1bWQWJsOkIG02x6s007Xb00tsZxIVEn8.m3u8" rel="noopener noreferrer"&gt;https://stream.mux.com/8O8V4ycz6uOXDm1bWQWJsOkIG02x6s007Xb00tsZxIVEn8.m3u8&lt;/a&gt;&lt;br&gt;
Demo&lt;br&gt;
Live App: &lt;a href="https://hertzlab-287108981488.us-west1.run.app/" rel="noopener noreferrer"&gt;https://hertzlab-287108981488.us-west1.run.app/&lt;/a&gt;&lt;br&gt;
Source Code: &lt;a href="https://github.com/UncleBUbl/HertzLab.git" rel="noopener noreferrer"&gt;https://github.com/UncleBUbl/HertzLab.git&lt;/a&gt;&lt;br&gt;
Testing Instructions:&lt;br&gt;
No login is required. The app works offline and on mobile.&lt;br&gt;
Generator Mode: Click "Start" and use the stepper or slider to change frequencies. Try the "Sweep" function to hear an automated frequency ramp.&lt;br&gt;
Visualizer: Switch views between "Oscilloscope" (Waveform) and "Spectrum Analyzer" (Frequency bars) using the overlay buttons on the canvas.&lt;br&gt;
AI Chat: Click the Sparkles icon to open the AI Sound Lab. Ask "What is this frequency used for?" to see Google Gemini analyze the current sound.&lt;br&gt;
Binaural Mode: Switch tabs to "Binaural" to generate independent carrier and beat frequencies for meditation focus (best experienced with headphones).&lt;br&gt;
The Story Behind It&lt;br&gt;
I built HertzLab because I was frustrated with the current state of online frequency generators. If you are an audio engineer trying to test a subwoofer, a physics student studying wavelengths, or someone trying to meditate with binaural beats, your options are usually:&lt;br&gt;
Clunky, ugly websites from the 90s covered in ads.&lt;br&gt;
YouTube videos (which have audio compression artifacts).&lt;br&gt;
Expensive, heavy software.&lt;br&gt;
I wanted to build a "Swiss Army Knife" for sound that was beautiful enough to keep open on a second monitor and accurate enough for scientific testing. I wanted to bridge the gap between Audio Physics (Signal processing, wave interference) and Mindfulness (Breathwork, Entrainment).&lt;br&gt;
Technical Highlights&lt;br&gt;
HertzLab is built with React, TypeScript, and Tailwind CSS, but the real magic happens under the hood:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Custom Web Audio DSP Engine
Instead of playing pre-recorded MP3 files, HertzLab generates audio in real-time using the Web Audio API. I wrote a custom AudioEngine class that manages:
Dual Oscillators &amp;amp; Stereo Panning: For generating Binaural Beats, the engine creates two separate oscillators with slightly detuned frequencies (e.g., 200Hz Left, 210Hz Right) and hard-pans them to create a psychoacoustic 10Hz "beat" inside the brain.
Algorithmic Noise: Pink and Brown noise are generated mathematically by filling an AudioBuffer with random values and applying specific filter algorithms (like the Paul Kellet instrumentation algorithm) to shape the spectral density. A minimal sketch of the binaural oscillator setup follows this list.&lt;/li&gt;
&lt;li&gt;CRT-Style Physics Visualization
The visualizer does not use a charting library. It renders directly to an HTML5 canvas element roughly 60 times per second.
Phosphor Persistence: To mimic old analog oscilloscopes, the canvas isn't cleared every frame. Instead, a semi-transparent black rectangle is drawn over the previous frame. This creates a "trail" or ghosting effect.
Chromatic Aberration: At higher amplitudes, the render loop splits the waveform drawing into three separate passes (Red, Green, Blue) with slight coordinate offsets. This simulates a "prism" effect, visualizing the physical intensity of the sound. Both tricks are sketched in code after this list.&lt;/li&gt;
&lt;li&gt;AI-Powered Acoustics
I integrated the Google Gemini API (@google/genai) to act as an on-demand physics tutor. The ChatInterface component injects the current application state (Frequency: 432Hz, Waveform: Sine) into the system prompt. This allows the AI to provide context-aware answers, turning the app from a simple tool into an educational platform. A sketch of this context injection also follows the list.
&lt;/li&gt;
&lt;/ol&gt;
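&lt;p&gt;Here is a minimal sketch of the binaural setup from point 1, using the Web Audio API; the noise generation and the full AudioEngine class are omitted.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// Two slightly detuned oscillators, hard-panned left and right.
// The listener perceives the difference (e.g. 10 Hz) as a "beat".
const audioCtx = new AudioContext();

function startBinaural(carrierHz = 200, beatHz = 10) {
  const makeSide = (freq: number, pan: number) => {
    const osc = audioCtx.createOscillator();
    osc.type = "sine";
    osc.frequency.value = freq;
    const gain = audioCtx.createGain();
    gain.gain.value = 0.2; // keep the level comfortable
    const panner = new StereoPannerNode(audioCtx, { pan });
    osc.connect(gain).connect(panner).connect(audioCtx.destination);
    osc.start();
    return osc;
  };

  const left = makeSide(carrierHz, -1);          // e.g. 200 Hz, hard left
  const right = makeSide(carrierHz + beatHz, 1); // e.g. 210 Hz, hard right
  return () => { left.stop(); right.stop(); };   // call to stop playback
}
&lt;/code&gt;&lt;/pre&gt;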
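&lt;p&gt;The persistence and aberration tricks from point 2 come down to a few lines of canvas code. This is a simplified sketch; the real loop reads live analyser data every frame.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;const canvas = document.getElementById("scope") as HTMLCanvasElement;
const g = canvas.getContext("2d")!;

// samples: one normalized value (-1..1) per x pixel for the current frame.
function drawFrame(samples: Float32Array, amplitude: number) {
  // Phosphor persistence: don't clear, just dim the previous frame slightly.
  g.fillStyle = "rgba(0, 0, 0, 0.15)";
  g.fillRect(0, 0, canvas.width, canvas.height);

  // Chromatic aberration: three offset RGB passes at high amplitude.
  const passes = amplitude > 0.6
    ? [
        { color: "rgba(255, 0, 0, 0.8)", dx: -2 },
        { color: "rgba(0, 255, 0, 0.8)", dx: 0 },
        { color: "rgba(0, 0, 255, 0.8)", dx: 2 },
      ]
    : [{ color: "rgba(0, 255, 140, 0.9)", dx: 0 }];

  for (const pass of passes) {
    g.strokeStyle = pass.color;
    g.beginPath();
    samples.forEach((v, x) => {
      const y = canvas.height / 2 + v * canvas.height * 0.4;
      if (x === 0) g.moveTo(x + pass.dx, y);
      else g.lineTo(x + pass.dx, y);
    });
    g.stroke();
  }
}
&lt;/code&gt;&lt;/pre&gt;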
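&lt;p&gt;And the context injection from point 3 is essentially interpolating the generator state into the system prompt. This sketch assumes the @google/genai chat API; the model id and prompt wording are illustrative.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.API_KEY as string });

// Rebuild the chat whenever the generator state changes, so the model
// always answers about the tone the user is actually hearing.
export function createSoundLabChat(state: { frequency: number; waveform: string }) {
  return ai.chats.create({
    model: "gemini-2.0-flash", // model choice is illustrative
    config: {
      systemInstruction:
        "You are an acoustics tutor inside HertzLab. " +
        `The user is currently playing a ${state.waveform} wave at ${state.frequency} Hz. ` +
        "Explain frequencies, harmonics, and their common uses in plain language.",
    },
  });
}
&lt;/code&gt;&lt;/pre&gt;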

</description>
      <category>devchallenge</category>
      <category>muxchallenge</category>
      <category>showandtell</category>
      <category>video</category>
    </item>
  </channel>
</rss>
