<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Chella Kamina</title>
    <description>The latest articles on Forem by Chella Kamina (@rkchellah).</description>
    <link>https://forem.com/rkchellah</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F335517%2F63fc26cf-526a-4ca0-b2ef-7fa52c793414.jpg</url>
      <title>Forem: Chella Kamina</title>
      <link>https://forem.com/rkchellah</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rkchellah"/>
    <language>en</language>
    <item>
      <title>Building Anzen: What I Learned About Token Vault the Hard Way</title>
      <dc:creator>Chella Kamina</dc:creator>
      <pubDate>Tue, 07 Apr 2026 01:38:29 +0000</pubDate>
      <link>https://forem.com/rkchellah/building-anzen-what-i-learned-about-token-vault-the-hard-way-5477</link>
      <guid>https://forem.com/rkchellah/building-anzen-what-i-learned-about-token-vault-the-hard-way-5477</guid>
      <description>&lt;p&gt;When I started building Anzen for the Authorised to Act hackathon, I thought Token Vault would be the easy part. I was wrong.&lt;/p&gt;

&lt;p&gt;The concept is simple and powerful: instead of your AI agent holding OAuth tokens for GitHub, Gmail, and Slack, Auth0 holds them in a secure vault. The agent requests a scoped token when it needs one, uses it, and the token is gone. No credentials stored in your app. No breach risk. No all-or-nothing access.&lt;/p&gt;

&lt;p&gt;The implementation is where it gets interesting. The first thing I discovered is that nextjs-auth0 v4 is a completely different SDK from v3. The familiar &lt;code&gt;handleAuth&lt;/code&gt; function is gone. The middleware file convention changed. Environment variable names changed. Even the callback URL path changed from &lt;code&gt;/api/auth/callback&lt;/code&gt; to &lt;code&gt;/auth/callback&lt;/code&gt;. None of this was obvious from the documentation.&lt;/p&gt;

&lt;p&gt;The second discovery was about the token exchange flow itself. Calling &lt;code&gt;getAccessToken()&lt;/code&gt; returns an Auth0 JWT — not the GitHub or Slack token you actually need. To get the provider token, you have to make a separate POST to the token endpoint using &lt;code&gt;grant_type: urn:ietf:params:oauth:grant-type:token-exchange&lt;/code&gt; with specific subject and requested token type parameters. This is the Token Vault exchange, and getting those parameters exactly right took significant debugging.&lt;/p&gt;
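
&lt;p&gt;Here is a rough sketch of that exchange request in Python. The tenant domain and parameter values are placeholders built from the RFC 8693 parameter names, not Anzen's actual configuration; check the Auth0 Token Vault docs for the exact values your tenant expects.&lt;/p&gt;

```python
# Sketch of an RFC 8693 token-exchange request body, as described above.
# AUTH0_DOMAIN and the audience value are placeholders, not Anzen's real
# configuration.
from urllib import parse, request

AUTH0_DOMAIN = "your-tenant.us.auth0.com"  # placeholder

def build_token_exchange_payload(auth0_access_token: str, connection: str) -> dict:
    """Build the form body for the Token Vault exchange."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": auth0_access_token,  # the Auth0 JWT you already have
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "audience": connection,  # which provider token you want, e.g. "github"
    }

# POSTing it (network call, shown for shape only):
# payload = build_token_exchange_payload(jwt, "github")
# data = parse.urlencode(payload).encode()
# req = request.Request("https://" + AUTH0_DOMAIN + "/oauth/token", data=data)
```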

&lt;p&gt;The third discovery was Groq compatibility. Zod schemas include a &lt;code&gt;$schema&lt;/code&gt; meta-field in their JSON output that Groq's API rejects, causing silent "Failed to call a function" errors. The fix was switching from Zod to &lt;code&gt;jsonSchema()&lt;/code&gt; from the AI SDK.&lt;/p&gt;
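
&lt;p&gt;The actual fix lives in TypeScript (the AI SDK's &lt;code&gt;jsonSchema()&lt;/code&gt; helper), but the idea is language-agnostic: the schema you send to the API must not carry a &lt;code&gt;$schema&lt;/code&gt; meta-field. A Python illustration of that cleanup:&lt;/p&gt;

```python
# Illustrative only: the post's real fix was using jsonSchema() in the AI SDK.
# This sketch shows the same idea in Python, dropping the "$schema" meta-field
# before a tool definition is sent to an API that rejects it.
def strip_schema_meta(schema):
    """Recursively drop '$schema' keys from a JSON-schema-like structure."""
    if isinstance(schema, dict):
        return {k: strip_schema_meta(v) for k, v in schema.items() if k != "$schema"}
    if isinstance(schema, list):
        return [strip_schema_meta(v) for v in schema]
    return schema
```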

&lt;p&gt;Despite these challenges, the architecture we ended up with is exactly what AI agents should look like: zero credentials in the app, scoped access per action, and a user who stays in control at every step. That's the promise of Token Vault, and it's worth the difficulty of getting there.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>hackathon</category>
      <category>auth0challengagent</category>
      <category>programming</category>
    </item>
    <item>
      <title>I Stopped Fixing Bugs Myself and Built an AI to Do It Instead</title>
      <dc:creator>Chella Kamina</dc:creator>
      <pubDate>Wed, 25 Mar 2026 21:41:18 +0000</pubDate>
      <link>https://forem.com/rkchellah/i-stopped-fixing-bugs-myself-and-built-an-ai-to-do-it-instead-2ee1</link>
      <guid>https://forem.com/rkchellah/i-stopped-fixing-bugs-myself-and-built-an-ai-to-do-it-instead-2ee1</guid>
      <description>&lt;p&gt;Every developer knows this pain. A bug gets filed and suddenly someone has to drop everything, read the issue, search through the codebase, figure out what broke, write a fix, add tests, open a PR and report back. It’s all manual work that takes time away from things that actually need a human brain. I kept asking myself one simple question, how much of this work can an agent handle on its own? That’s how BugFixer came about.&lt;/p&gt;

&lt;p&gt;BugFixer is an AI agent that works inside GitLab using Claude. You file a bug report, assign BugFixer to the issue and walk away. The agent reads the issue, looks through the codebase, finds the problem, writes a fix with tests, opens a merge request and comments back explaining what it did. Nothing gets merged automatically; you still review and approve. BugFixer does the work, you make the final call.&lt;/p&gt;
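
&lt;p&gt;To make that loop concrete, here is a toy sketch of it in Python. This is not BugFixer's real code; every helper below is a hypothetical stub standing in for a Claude call or a GitLab API call.&lt;/p&gt;

```python
# Hypothetical sketch of the read-fix-test-report loop described above.
# None of these helpers exist in BugFixer; they are stubs for illustration.
def handle_issue(issue: dict) -> dict:
    report = {"issue": issue["id"], "files_changed": [], "tests_added": [],
              "merge_request": None}
    suspect_files = search_codebase(issue["description"])    # hypothetical
    for path in suspect_files:
        patch = propose_fix(path, issue["description"])      # hypothetical (LLM call)
        report["files_changed"].append(path)
        report["tests_added"].append(write_tests(patch))     # hypothetical
    # Open an MR but never merge it; a human still reviews and approves.
    report["merge_request"] = open_merge_request(report)     # hypothetical
    return report

# Stubs so the sketch runs end to end:
def search_codebase(text): return ["auth.py"]
def propose_fix(path, text): return "patch for " + path
def write_tests(patch): return "test_auth.py"
def open_merge_request(report): return "MR!1"
```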

&lt;p&gt;In my demo, I filed a bug about passwords being stored as plain text in an authentication file. After assigning BugFixer, it found three vulnerable functions, replaced the plaintext storage with bcrypt hashing, wrote security tests and opened a merge request. It also spotted a session token vulnerability that I never mentioned. It found that on its own just by reading the code.&lt;/p&gt;
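
&lt;p&gt;For readers unfamiliar with the pattern, this is roughly the shape of the fix. The agent's actual patch used the &lt;code&gt;bcrypt&lt;/code&gt; package; the sketch below uses the standard library's PBKDF2 as a stand-in to show the same salted, slow-hash idea.&lt;/p&gt;

```python
# Stdlib stand-in for the bcrypt fix described above: salted, iterated
# hashing on write, constant-time comparison on verify.
import hashlib
import hmac
import secrets

def hash_password(password: str) -> str:
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), 100_000)
    return salt + ":" + digest.hex()

def verify_password(password: str, stored: str) -> bool:
    salt, expected = stored.split(":")
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), 100_000)
    return hmac.compare_digest(digest.hex(), expected)
```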

&lt;p&gt;Building this wasn’t easy. The first problem was the YAML tool configuration. The documentation wasn’t clear, and small syntax changes caused confusing errors like “tool_name is missing” or “tool_name not expected.” I had to ask in the GitLab Discord community to figure it out. The second issue was permissions. Even with a Developer role, the agent could read files but couldn’t create commits or merge requests. Early runs were triggering but producing nothing: no commits, no MRs, no comments. Fixing that took days and was the biggest challenge.&lt;/p&gt;

&lt;p&gt;What I’m most proud of is that BugFixer fixed real bugs on its own. It worked on a Python calculator, a geospatial script I was working on at work and an authentication module with multiple security issues. Every run produced real fixes, real tests and real merge requests. But the best part was the session token issue it found on its own. I never asked for that. It just read the code and noticed the problem.&lt;/p&gt;

&lt;p&gt;This project taught me how to build an AI agent from scratch on a platform I had never used before, how to write prompts that give an LLM enough context to make decisions, and how to debug using only session logs. It also showed me that building on a beta platform means hitting walls with little or no documentation. You just have to keep testing until something works.&lt;/p&gt;

&lt;p&gt;Right now, BugFixer handles one bug at a time, but real teams deal with many issues at once. The next step is to run multiple agents in parallel and prioritize bugs by severity. I also want it to leave inline comments directly in the merge request, not just a summary on the issue. Long term, the goal is for BugFixer to run the pipeline after fixing a bug, check if tests pass and only then open a merge request. The tools for that already exist, so it’s close.&lt;/p&gt;

&lt;p&gt;BugFixer isn’t about replacing developers. It’s about removing repetitive work so people can focus on what really matters. If an agent can read bugs, fix code, and write tests, then maybe our time is better spent reviewing, designing, and solving harder problems.&lt;/p&gt;

&lt;p&gt;Try it out: &lt;a href="https://gitlab.com/gitlab-ai-hackathon/participants/34601039" rel="noopener noreferrer"&gt;GitLab&lt;/a&gt;&lt;br&gt;
YouTube: &lt;a href="https://youtu.be/w-suFgVKQeE?si=LcuO2v4aGUYbjpvR" rel="noopener noreferrer"&gt;BugFixer - AI Agent that fixes bugs automatically in GitLab&lt;/a&gt;&lt;/p&gt;

</description>
      <category>claude</category>
      <category>gitlab</category>
      <category>hackathon</category>
      <category>ai</category>
    </item>
    <item>
      <title>I built a real-time AI visual companion in one week from Zambia, here's what actually happened</title>
      <dc:creator>Chella Kamina</dc:creator>
      <pubDate>Thu, 12 Mar 2026 23:49:01 +0000</pubDate>
      <link>https://forem.com/rkchellah/i-built-a-real-time-ai-visual-companion-in-one-week-from-zambia-heres-what-actually-happened-dd4</link>
      <guid>https://forem.com/rkchellah/i-built-a-real-time-ai-visual-companion-in-one-week-from-zambia-heres-what-actually-happened-dd4</guid>
      <description>&lt;p&gt;This afternoon I was standing outside my house in Lusaka, Zambia, testing an app on my brother's phone because mine is too slow for real-time AI.&lt;/p&gt;

&lt;p&gt;I pointed the camera at my gate and asked: "Is my gate locked? Do I look safe?"&lt;/p&gt;

&lt;p&gt;Gemini described the gate, the car parked nearby, the surroundings and then gave me safety advice I didn't even ask for, based on what it saw.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Hackathon Challenge&lt;/strong&gt;&lt;br&gt;
This post is my entry for the Gemini Live Agent Challenge 2026.&lt;/p&gt;

&lt;p&gt;Most tools built for visual impairment are either expensive, complicated, or require a specialist device. I wanted to build something that works on any phone, in any browser, right now.&lt;/p&gt;

&lt;p&gt;Gemini Live made that possible. It watches through a camera, hears your voice, and speaks back, all in real time. That's SightLine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The stack&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next.js 14 for the frontend&lt;/li&gt;
&lt;li&gt;FastAPI for the backend&lt;/li&gt;
&lt;li&gt;Gemini 2.0 Flash Live on Vertex AI for the AI&lt;/li&gt;
&lt;li&gt;WebSocket for the real-time connection&lt;/li&gt;
&lt;li&gt;Google Cloud Run for deployment&lt;/li&gt;
&lt;li&gt;Google Cloud Build for the container pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Straightforward on paper. The reality was messier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What actually broke and how I fixed it&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IAM permissions killed half a day.&lt;/strong&gt;&lt;br&gt;
Cloud Run and Cloud Build each require specific roles to access the Artifact Registry. The documentation doesn't give you the exact combination up front. I got there through trial and error: &lt;code&gt;artifactregistry.reader&lt;/code&gt; on the compute service account, &lt;code&gt;artifactregistry.admin&lt;/code&gt; on the Cloud Build service account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next.js bakes environment variables at build time.&lt;/strong&gt;&lt;br&gt;
This means if your backend URL is in a &lt;code&gt;.env&lt;/code&gt; file that's excluded from Git, which it should be, your frontend will always try to connect to &lt;code&gt;localhost&lt;/code&gt; in production. I wasted hours debugging WebSocket failures before I understood what was happening. The fix was hardcoding the URL directly in the hook. Not elegant, but it works.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The audio pipeline was three problems pretending to be one.&lt;/strong&gt;&lt;br&gt;
Getting clean real-time PCM16 audio from a mobile browser was problem one. Stopping the microphone from picking up Gemini's voice and feeding it back as input was problem two. Recovering smoothly from the end of each exchange without the session dropping was problem three. I solved them with a &lt;code&gt;isSpeakingRef&lt;/code&gt; that mutes the mic while Gemini is talking, a 400ms cooldown before reopening, and a WebSocket ping/pong keepalive every 20 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The latency reality&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'm in Lusaka, Zambia. My Cloud Run service is in us-east4, Virginia. Every Gemini response has to travel across the Atlantic twice.&lt;/p&gt;

&lt;p&gt;That latency is noticeable. It doesn't break the app but it slows it down compared to what someone in the US would experience. When Gemini Live becomes available in African GCP regions this gets dramatically better. Right now SightLine works in spite of the distance, and working is what matters for a week-one build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who SightLine is actually for&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I want to be straight about this. SightLine is not currently built for people with complete blindness. Pressing START, navigating the UI and switching cameras require enough vision to use a phone screen.&lt;/p&gt;

&lt;p&gt;The real users today are people with low vision or partial sight. People who can use a phone but struggle with fine detail. People with deteriorating vision from age or a medical condition. People in situations where even a sighted person would struggle: bad lighting, tiny print, unfamiliar text.&lt;/p&gt;

&lt;p&gt;Making SightLine work for users with complete blindness is the next step: voice-activated start, audio-guided onboarding, full screen reader support. That's the roadmap.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I actually learned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I came into this as a data analyst with some Python experience. I left with a working knowledge of Vertex AI, Cloud Run, Docker, real-time audio streaming, WebSocket session management, and IAM configuration, all learned under deadline pressure.&lt;/p&gt;

&lt;p&gt;The thing nobody tells you about building real AI applications is that the AI part is often the easiest bit. It's the infrastructure, the deployment pipeline, the browser APIs and the edge cases that take the time.&lt;/p&gt;

&lt;p&gt;But the tools are genuinely good right now. As a data analyst from Lusaka, I can build and deploy a real-time multimodal AI app in one week. That still surprises me a little.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it:&lt;/strong&gt; &lt;a&gt;SightLine&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitHub: &lt;a href="https://github.com/rkchellah/Sightline" rel="noopener noreferrer"&gt;rkchellah/Sightline&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Point it at small text. Ask what it sees. It works.&lt;/p&gt;

&lt;p&gt;If you're building accessibility tools or have thoughts on where SightLine should go, I'd like to hear from you.&lt;/p&gt;

</description>
      <category>geminiliveagentchallenge</category>
      <category>ai</category>
      <category>hackathon</category>
      <category>agents</category>
    </item>
  </channel>
</rss>
