<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Salim Ọlánrewájú Oyinlọlá</title>
    <description>The latest articles on Forem by Salim Ọlánrewájú Oyinlọlá (@salimcodes).</description>
    <link>https://forem.com/salimcodes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F743668%2F3d86d536-4633-457c-a8a4-47fc1ec602ad.jpeg</url>
      <title>Forem: Salim Ọlánrewájú Oyinlọlá</title>
      <link>https://forem.com/salimcodes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/salimcodes"/>
    <language>en</language>
    <item>
      <title>I Wasn't Going to Write for the OpenClaw Challenge. Then 2026.4.24 Dropped.</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Mon, 27 Apr 2026 07:01:41 +0000</pubDate>
      <link>https://forem.com/salimcodes/i-wasnt-going-to-write-for-the-openclaw-challenge-then-2026424-dropped-36oa</link>
      <guid>https://forem.com/salimcodes/i-wasnt-going-to-write-for-the-openclaw-challenge-then-2026424-dropped-36oa</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/openclaw-2026-04-16"&gt;OpenClaw Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  I had no plans to write anything
&lt;/h2&gt;

&lt;p&gt;I'm an AI engineering senior analyst, and I spend most of my week buried in agent infrastructure. When the OpenClaw Challenge went live, my honest plan was to ship something for the &lt;em&gt;OpenClaw in Action&lt;/em&gt; track and ignore the writing prompt. I had two builds queued up. I didn't think I had a &lt;em&gt;take&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Then I opened my laptop on Saturday morning, saw the &lt;a href="https://github.com/openclaw/openclaw/releases/tag/v2026.4.24" rel="noopener noreferrer"&gt;v2026.4.24 release notes&lt;/a&gt;, and changed my mind in roughly eight seconds.&lt;/p&gt;

&lt;p&gt;Because here's the thing nobody is saying loudly enough. The speed at which OpenClaw ships is starting to feel unreasonable, and the surface area it now covers is getting ridiculous. I've been tracking this project for months, and the slope of the curve from "neat agent runner" to "thing that joins your meetings, picks up the phone, and clicks coordinates in a browser" has been almost vertical. 2026.4.24 is the release where I stopped being able to dismiss it as hype.&lt;/p&gt;

&lt;p&gt;So I'm writing the post I didn't plan to write. Here's what 2026.4.24 actually means, from the seat of someone who runs agents in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  What landed
&lt;/h2&gt;

&lt;p&gt;The shipping list, if you skipped the notes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Voice calls can now reach the full agent.&lt;/strong&gt; Talk Mode, Voice Call, and the new Google Meet plugin all share a capability called &lt;code&gt;openclaw_agent_consult&lt;/code&gt;. Realtime voice stays fast, but when a question needs tools or memory or a lookup, the voice session hands it off to the full agent and comes back with a real answer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DeepSeek V4 Flash and V4 Pro joined the catalog.&lt;/strong&gt; V4 Flash is now the onboarding default. Replay and thinking-mode fixes for follow-up tool-call turns came with it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Browser automation got serious.&lt;/strong&gt; Coordinate clicks, profile-level headless overrides, stable tab reuse, stale-lock recovery, longer default action budgets. This is the part that quietly matters most.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google Meet plugin.&lt;/strong&gt; Personal Google auth, realtime voice in the meeting, recordings, transcripts, smart summaries, participant logs, and tab recovery if your browser times out mid-call.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster startup.&lt;/strong&gt; Lighter model catalogs, lazy-loaded providers, better dependency repair.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fixes across the board.&lt;/strong&gt; Telegram, Slack, MCP, sessions, TTS, a new Gradium TTS engine, better tool access UI, improved memory search visibility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One breaking change.&lt;/strong&gt; The plugin SDK drops &lt;code&gt;api.registerEmbeddedExtensionFactory()&lt;/code&gt;. If you rewrite tool results, you migrate to &lt;code&gt;api.registerAgentToolResultMiddleware()&lt;/code&gt;. Don't skip this if you maintain plugins. Behavior diverges across Pi and Codex runtimes if you do.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is one release. One.&lt;/p&gt;

&lt;h2&gt;
  
  
  The part that actually changes the shape of things
&lt;/h2&gt;

&lt;p&gt;I want to talk about voice and Meet together, because I think people are going to underrate them separately and miss what just happened.&lt;/p&gt;

&lt;p&gt;For most of the past two years, "AI in meetings" has meant transcription. Otter, Fireflies, Granola, the rest. Useful, but passive. The AI watched. You did the work afterward. The real cognitive load, which is keeping context, remembering what was decided, chasing down the half-answered question, turning vague intent into actual work, stayed entirely on the humans. The transcription was a souvenir.&lt;/p&gt;

&lt;p&gt;What landed in 2026.4.24 is not that. Your OpenClaw agent can now &lt;em&gt;join the meeting&lt;/em&gt; with your Google account, &lt;em&gt;participate in realtime voice&lt;/em&gt;, and &lt;em&gt;consult the full agent stack mid-conversation&lt;/em&gt; when someone asks a question that needs a tool call. That last clause is the one. It means the thing on the call has memory, has access to your other systems, and can think with tools while the meeting is still happening.&lt;/p&gt;

&lt;p&gt;Let me put that in terms that reflect my actual week.&lt;/p&gt;

&lt;p&gt;I sit in a recurring Tuesday review with our model-eval team where someone always asks a question like "wait, what was the regression rate on that prompt variant we ran two sprints ago?" The honest answer is that we burn ten minutes while somebody opens the eval dashboard, finds the right run, and reads numbers off a screen. Sometimes the person who knows where it lives isn't on the call. Sometimes we just move on and lose the thread. Multiply that by every standup, every sync, every architecture review. The amount of meeting time my org spends &lt;em&gt;retrieving&lt;/em&gt; rather than &lt;em&gt;thinking&lt;/em&gt; is genuinely embarrassing.&lt;/p&gt;

&lt;p&gt;An agent that can sit on the call, hear the question, hit the eval store, come back with the number, and do it in the same breath as the rest of the conversation, that doesn't save ten minutes. It changes what the meeting is &lt;em&gt;for&lt;/em&gt;. The meeting becomes the place where humans decide things, not the place where humans wait on each other to surface facts.&lt;/p&gt;

&lt;p&gt;Here is the analogy that keeps landing for me. For the past few years, working with AI has felt like having a brilliant intern who only takes appointments. You schedule time, you go to their office (the chat window), you describe your problem in detail, you wait for a response, and then you carry whatever they said back into your real work. The intern is sharp. The intern is fast. But the intern does not come to you.&lt;/p&gt;

&lt;p&gt;What 2026.4.24 ships is the intern walking out of the office and pulling up a chair next to you. Not a new model. Not a smarter chatbot. A change of &lt;em&gt;posture&lt;/em&gt;. The AI is now showing up in the rooms where work actually happens, the meeting, the phone call, the browser tab you already have open, instead of waiting for you to come visit it. That is a different category of product than what we've had.&lt;/p&gt;

&lt;p&gt;The Voice Call piece, separately, is the same idea applied to the phone. You can ring your agent. Full memory, full tool access. You pick up and talk. I tested it on Saturday morning while I was making coffee, and the strangest thing about it isn't that it works. It's that it feels mundane within about ninety seconds. You stop being impressed and you start &lt;em&gt;delegating&lt;/em&gt;. "Pull the latest numbers on the eval run, draft a Slack to the team, schedule a follow-up for Monday." The interface is gone. There is no app. There is just a phone call to someone who happens to know everything about my work.&lt;/p&gt;

&lt;p&gt;Try the second analogy on for size. Most of the AI products I've used over the past three years feel like a vending machine. You walk up, you put in a request, you get something back. The vending machine is in the lobby. You go to the lobby. The vending machine never comes to your desk. What changed in 2026.4.24 is that the vending machine grew legs. It is now the thing wandering around the office asking who needs what. That sounds silly when I write it out. It is also exactly correct.&lt;/p&gt;

&lt;p&gt;I keep coming back to the word &lt;em&gt;posture&lt;/em&gt; because I think it's the right one. A chatbot has the posture of a tool. A meeting participant has the posture of a colleague. The technical delta between those two things is smaller than people think. The experiential delta is enormous, and once you feel it, you can't really go back to typing into a box.&lt;/p&gt;

&lt;h2&gt;
  
  
  The part nobody talks about, which is browser automation
&lt;/h2&gt;

&lt;p&gt;The headlines on this release have been Voice and Meet. The thing that will quietly determine whether agents are useful for real work, in real jobs, on real Tuesdays, is the browser stack. And 2026.4.24 did the unglamorous work there.&lt;/p&gt;

&lt;p&gt;Let me explain why this is the part I care about most, and why it took me about an hour of testing to realize the browser changes were the actual story of the release.&lt;/p&gt;

&lt;p&gt;In my role I run a lot of evaluation pipelines. The work is repetitive in shape but not in detail. Pull a list of model outputs, compare them against a gold set, file the discrepancies into a tracker, tag the failures by category, ping the model owner if a regression crosses a threshold. The shape of the work doesn't change. The specifics, which model, which dataset, which thresholds, which Jira board, which Slack channel, change every week.&lt;/p&gt;

&lt;p&gt;That kind of work has been &lt;em&gt;almost&lt;/em&gt; automatable for about a year. I say almost because the part that always broke was the browser. Our internal eval dashboard renders results in a custom table that's basically a canvas. Our project tracker has a custom dropdown that doesn't expose its options to the DOM until you click. Our Slack workspace requires a session that times out at unpredictable intervals. These are not exotic problems. These are what every internal tool at every company looks like.&lt;/p&gt;

&lt;p&gt;Coordinate clicks fix the canvas problem. If your agent can only click DOM nodes, half the modern web is invisible to it. Anything rendered to a canvas, half of dashboards, most data viz, every internal tool that someone built in React with a custom widget library, was a wall. Coordinate clicks turn that wall into a door. The agent sees pixels. The agent clicks pixels. The thing under the pixels happens.&lt;/p&gt;

&lt;p&gt;Stable tab reuse and stale-lock recovery fix the session problem. A long-running browser agent is going to encounter a tab that's hung, a session that's expired, a popup that wasn't there yesterday, a network blip that left a lock file in a weird state. Without recovery, every one of those is a dead workflow. With recovery, the agent shrugs and keeps going. The difference between an agent you can leave running overnight and an agent you have to babysit is exactly this kind of plumbing.&lt;/p&gt;

&lt;p&gt;Longer default action budgets fix the &lt;em&gt;real workflows are long&lt;/em&gt; problem. The previous defaults assumed a few dozen actions per task. Real internal workflows are not a few dozen actions. Filing a single regression in our system is something like fifteen browser steps end to end if you count the dropdowns and the comment fields. A batch of twenty regressions blew through the old budget every single time.&lt;/p&gt;

&lt;p&gt;Profile-level headless overrides matter because some of our internal tools refuse to render in headless mode. They check for it. They throw a banner. The override lets the agent run in a real browser context for the tools that need it, and headless for the ones that don't, on a per-profile basis. That sounds like a footnote. In practice it's the difference between "this works on my machine" and "this works in production."&lt;/p&gt;

&lt;p&gt;Put all of that together, and here's what I can do this week that I could not do last week.&lt;/p&gt;

&lt;p&gt;I can hand my agent a Slack thread of regression reports. It opens the eval dashboard, finds the runs in question, clicks through the canvas-rendered comparison view, screenshots the diffs, files them into the tracker with the right labels, posts a summary back to the thread, and pings the relevant model owners. End to end. No babysitting. The work that used to take me an afternoon takes the agent about twelve minutes, and I find out it's done because a Slack message shows up.&lt;/p&gt;

&lt;p&gt;That is one workflow. There are at least four others I can already see lining up behind it. The compliance review where I have to walk through a checklist in a SharePoint form for every model release. The vendor evaluation where I open a procurement portal that nobody loves and fill in the same fields I filled in last quarter. The weekly stakeholder report where I pull screenshots from three different dashboards and paste them into a doc with captions. The monthly cost reconciliation where I cross-reference the API console against the finance team's spreadsheet. Every single one of these tasks is "click some pixels in a tool that wasn't designed for me." Every single one is now in scope.&lt;/p&gt;

&lt;p&gt;That is not a demo. That is not a hackathon thing. That is Tuesday, and it changes what my Tuesday is for.&lt;/p&gt;

&lt;p&gt;The reason browser automation is the unglamorous answer to "is this real" is because every interesting workplace agent eventually needs to operate the same crusty internal tools that humans operate. The web is not designed for agents. It's designed for humans clicking on pixels. Until your agent can also click on pixels, robustly, with recovery, in a session that lasts longer than five minutes, you don't have an agent. You have a very expensive script that requires a human supervisor.&lt;/p&gt;

&lt;p&gt;2026.4.24 is the release where I stopped needing to supervise.&lt;/p&gt;

&lt;h2&gt;
  
  
  A real use case, which is how I run OpenClaw updates
&lt;/h2&gt;

&lt;p&gt;A couple of weeks ago I had a thought that turned into the most useful skill I've built on top of OpenClaw, and it's the loop that made me trust this project enough to put it in front of work data.&lt;/p&gt;

&lt;p&gt;The thought was simple. OpenClaw ships so fast that &lt;em&gt;manually&lt;/em&gt; tracking releases had become a real tax on my week. I'd missed a breaking change once already and shipped a regression to a teammate's dev environment because of it. I didn't want to miss another. So I asked the obvious question. If I have an agent framework that can do anything, why am I tracking its own updates by hand?&lt;/p&gt;

&lt;p&gt;I built a tiny skill, sketched below. It runs on a cron at 10pm and does five things.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It checks the installed version with &lt;code&gt;openclaw --version&lt;/code&gt; and &lt;code&gt;openclaw status&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;If we're current, it stays silent. No noise.&lt;/li&gt;
&lt;li&gt;If there's an update, it pulls the GitHub release notes, then searches X and the web for community reception and breakage reports.&lt;/li&gt;
&lt;li&gt;It composes a structured briefing with new features, community sentiment, known regressions, and a recommendation.&lt;/li&gt;
&lt;li&gt;It posts the briefing to &lt;code&gt;#openclaw-configuration&lt;/code&gt; in Slack.&lt;/li&gt;
&lt;/ol&gt;
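
&lt;p&gt;For the curious, the skeleton looks something like this. A minimal sketch, not the production skill: the &lt;code&gt;openclaw --version&lt;/code&gt; call mirrors the real CLI, but the webhook URL is a placeholder and the community-reception search the agent does is elided.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# nightly_release_check.py -- a sketch of the 10pm cron skill.
# Assumptions: `requests` is installed, SLACK_WEBHOOK_URL is a stand-in
# for however you post to #openclaw-configuration.
import subprocess
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/PLACEHOLDER"
RELEASES_API = "https://api.github.com/repos/openclaw/openclaw/releases/latest"

def installed_version():
    # `openclaw --version` prints the semver of the running install
    out = subprocess.run(["openclaw", "--version"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip().lstrip("v")

def main():
    resp = requests.get(RELEASES_API, timeout=30)
    resp.raise_for_status()
    release = resp.json()
    latest = release["tag_name"].lstrip("v")
    if installed_version() == latest:
        return  # we're current: stay silent, no noise
    # The real skill searches X and the web here and lets the agent write
    # the sentiment and recommendation sections; that part is elided.
    briefing = (f"*OpenClaw {latest} is out.*\n"
                f"Release notes: {release['html_url']}\n"
                "Recommendation: review before touching production.")
    requests.post(SLACK_WEBHOOK_URL, json={"text": briefing}, timeout=30)

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;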

&lt;p&gt;That last step is the whole point. I don't want a chatbot. I want a daily standup from an analyst who has read the release notes for me, vetted them against what people are actually saying online, and has an opinion.&lt;/p&gt;

&lt;p&gt;This is the second reason I'm writing this post.&lt;/p&gt;

&lt;p&gt;Look, I just spent a thousand words telling you that 2026.4.24 is a category-shifting release. &lt;strong&gt;All of that is true.&lt;/strong&gt; Voice reaching the full agent is real. Meet is real. The browser work is real. DeepSeek V4 in the catalog is real.&lt;/p&gt;

&lt;p&gt;And &lt;em&gt;also&lt;/em&gt;, the day it shipped, the bundled dependencies were broken for a chunk of users, the Bonjour mDNS gateway was crash-looping on VPS deployments without multicast (which is most of them), Telegram was silently failing in production while the Control UI looked fine, Node 24 users were getting ESM loader errors, and a meaningful number of people rolled back to 2026.4.22.&lt;/p&gt;

&lt;p&gt;Both things are true at the same time. &lt;strong&gt;That's the actual story of OpenClaw right now.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The shipping velocity is the thing that makes this project exciting and the thing that makes it a little scary to operate. You can't keep up by reading release notes once a week. You can't keep up by waiting for a friend to tell you what broke. You either build the loop that keeps up for you, or you eat a regression in production.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the briefing recommended
&lt;/h2&gt;

&lt;p&gt;For 2026.4.24, my agent recommended &lt;em&gt;holding&lt;/em&gt; and waiting for a patch. Specifically, stay on 2026.4.22 if I'm running the Telegram or WhatsApp bridges in production, but spin up an isolated 2026.4.24 instance to evaluate the Meet plugin and the voice-to-full-agent handoff, because both of those are the kind of capability shift you want hands-on with as soon as possible.&lt;/p&gt;

&lt;p&gt;I followed that recommendation. The eval instance is running. The production agents are still on 2026.4.22. When the patch lands, the cron will tell me, and I'll cut over.&lt;/p&gt;

&lt;p&gt;That's the workflow. That's why I trust it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The actual takeaway
&lt;/h2&gt;

&lt;p&gt;If you take one thing from this post, take this. &lt;strong&gt;The right way to run OpenClaw in 2026 is to let OpenClaw help you run OpenClaw.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The release cadence is faster than any human can responsibly track. The surface area now covers voice, Meet, browsers, model providers, plugin SDKs, gateways, and a dozen integrations. No single person reads all of that carefully every Friday. But an agent can. And once you have an agent that does, you stop being scared of the velocity and start being grateful for it.&lt;/p&gt;

&lt;p&gt;This, I think, is the thing OpenClaw quietly gets right that other personal-AI projects don't. It is hackable enough that a forty-line skill can become your operations analyst. It ships fast enough that you genuinely need one. And the loop closes on itself in a way that feels right.&lt;/p&gt;

&lt;p&gt;I wasn't going to write anything for this challenge. I was going to build. But I think the building and the writing are the same point. The agents are good enough now to manage their own upgrade path, to sit in your meetings, to pick up the phone, to click the pixels you've been clicking for years. And you should let them.&lt;/p&gt;

&lt;p&gt;See you when the next update drops. My cron will tell me first.&lt;/p&gt;

&lt;h2&gt;
  
  
  ClawCon Michigan
&lt;/h2&gt;

&lt;p&gt;I didn't make it out to ClawCon Michigan this year, but the recaps coming out of it are what put OpenClaw on my radar in the first place. If anyone reading this attended, I'd love to hear what the hallway-track conversations were like, especially around the Meet plugin roadmap.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>openclawchallenge</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Olumide: A self-hosted AI nurse for chronic disease, living in WhatsApp</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Mon, 27 Apr 2026 06:59:44 +0000</pubDate>
      <link>https://forem.com/salimcodes/olumide-a-self-hosted-ai-nurse-for-chronic-disease-living-in-whatsapp-o13</link>
      <guid>https://forem.com/salimcodes/olumide-a-self-hosted-ai-nurse-for-chronic-disease-living-in-whatsapp-o13</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/openclaw-2026-04-16"&gt;OpenClaw Challenge&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;It's a Tuesday morning in Abeokuta, Nigeria. Chief Bamidele Adeyemi is 62, retired, hypertensive, and type-2 diabetic. He's on amlodipine, metformin, and empagliflozin. He sees Dr. Chuks every three months for about four minutes. For the other three months minus four minutes, he is on his own.&lt;/p&gt;

&lt;p&gt;By 3pm he has had only tea and a biscuit since breakfast. He WhatsApps his AI: &lt;em&gt;"I'm feeling small dizzy and shaky."&lt;/em&gt; Twenty seconds later, four things happen at once:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;His chat replies with the mild-hypoglycemia protocol, &lt;em&gt;"3 to 4 glucose tablets, half a glass of orange juice, or 3 biscuits, now. I'll wait. We'll re-test in 15."&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;The bedside Arduino on his nightstand goes solid red. The buzzer plays the alert pattern. The LCD reads &lt;code&gt;LOW SUGAR / TAKE SUGAR NOW&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;His daughter Funmi, a nurse in Manchester, gets a calm Telegram message: &lt;em&gt;"FYI, your dad had a mild hypo (glucose 68, recovered). Second this month. I've drafted a note for Dr. Chuks. No action needed from you tonight."&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Dr. Chuks's clinician dashboard gains a same-day note draft, ready to read at his next break.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is &lt;strong&gt;Olumide&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Olumide is a self-hosted, multi-agent chronic-disease companion that lives in WhatsApp. It is built around real African use cases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use case 1 — Chief Bamidele in Abeokuta, southwest Nigeria.&lt;/strong&gt; Hypertension affects ~46% of African adults and fewer than 7% are controlled. Most of the failure is not clinical; it is the silence between visits. Olumide fills that silence. It runs morning and evening check-ins, logs every BP reading and dose, recognises hypoglycemia in conversation, drives a bedside Arduino for adherence, escalates to family on the right channel with the right tone, and quietly builds the doctor's pre-visit pack so the next four minutes actually move things forward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use case 2 — Mama Aisha in Kano, northern Nigeria, and her daughter Halima in Toronto.&lt;/strong&gt; Mama Aisha is 68, diabetic, and has lived alone since her husband passed. Halima sends money home and worries every day. With Olumide, Halima sets up the gateway on a small home server in the family's Kano house. Mama Aisha keeps using WhatsApp like she always has, voice notes in Hausa, photos of her glucometer. When her sugar spikes after a wedding meal, Olumide handles the coaching in Hausa. Halima gets a weekly summary on Sunday morning, in English, and a calm WhatsApp ping if something genuinely needs her. The diaspora has been sending money home for decades; Olumide is the first time they have been able to send actual care.&lt;/p&gt;

&lt;p&gt;Both use cases run on the &lt;strong&gt;same code, the same gateway, the same Arduino&lt;/strong&gt;. The only difference is the patient profile YAML.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Olumide is an open-source personal-health tool, &lt;strong&gt;not&lt;/strong&gt; a medical device. It does not diagnose, does not prescribe, does not titrate. It tracks, reminds, surfaces patterns, and routes the patient back to their real doctor with three months of ground truth in their hand. The scope is in the system prompt, the refusal rules are in the skills, and the capability to "change a dose" simply does not exist in the toolset.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How I Used OpenClaw
&lt;/h2&gt;

&lt;p&gt;OpenClaw is the entire runtime. I did not build a parallel platform; I composed Olumide out of OpenClaw primitives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Software: &lt;a href="https://github.com/salimcodes/olumide" rel="noopener noreferrer"&gt;&lt;code&gt;github.com/salimcodes/olumide&lt;/code&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Hardware: &lt;a href="https://github.com/salimcodes/olumide-hardware" rel="noopener noreferrer"&gt;&lt;code&gt;github.com/salimcodes/olumide-hardware&lt;/code&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The multi-agent architecture
&lt;/h3&gt;

&lt;p&gt;The core of the system is &lt;code&gt;olumide/orchestrator/openclaw.py&lt;/code&gt;, which classifies the severity of every incoming signal with an LLM call, picks the right agent, and dispatches their JSON-emitted actions in parallel via &lt;code&gt;asyncio.gather&lt;/code&gt;. Six agents are registered at startup:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Agent&lt;/th&gt;
&lt;th&gt;Role&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PrimaryCareAgent&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Warm, patient-facing orchestrator. Greets, checks in, hands off.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ClinicalReasoningAgent&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Strict-JSON tier classifier (1–4) that emits &lt;code&gt;actions[]&lt;/code&gt; for parallel dispatch. Triages symptoms, identifies red flags, calls protocols.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;MedicationSafetyAgent&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Drug-interaction checks, refill timing, NAFDAC authenticity verification.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;FamilyCircleAgent&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Composes consent-scoped updates per circle member and channel.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;ClinicLiaisonAgent&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Builds pre-visit packs and drafts clinician notes for Dr. Chuks.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;CrisisResponseAgent&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tier-4 escalation: alerts, transport, family, on-call.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each agent extends a thin OpenClaw base wrapper (&lt;code&gt;olumide/agents/base.py&lt;/code&gt;) that handles the LLM call. The patient profile (&lt;code&gt;config/patient_bamidele.yaml&lt;/code&gt;) is injected into every system prompt as ground truth (meds, conditions, doctor, circle, language, fasting status), so no agent ever has to "remember" who Bamidele is.&lt;/p&gt;
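
&lt;p&gt;The wrapper is genuinely thin. Here is the shape of it, a sketch with the LLM call abstracted to a plain callable and the field names assumed rather than copied from the repo:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# agents/base.py -- the idea, not the verbatim file.
import yaml

class BaseAgent:
    """Thin wrapper: load the profile once, inject it into every prompt."""
    role_prompt = ""  # each agent subclass supplies its own skill prompt

    def __init__(self, profile_path, llm):
        with open(profile_path) as f:
            self.profile = yaml.safe_load(f)  # meds, conditions, doctor, circle...
        self.llm = llm  # any callable taking a message list, returning a string

    def system_prompt(self):
        # Ground truth goes in the system prompt, so the agent never has
        # to "remember" who the patient is across turns.
        return (self.role_prompt + "\n\nPATIENT PROFILE (ground truth):\n"
                + yaml.safe_dump(self.profile, sort_keys=False))

    def respond(self, message):
        return self.llm([
            {"role": "system", "content": self.system_prompt()},
            {"role": "user", "content": message},
        ])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;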

&lt;h3&gt;
  
  
  Tools the agents call
&lt;/h3&gt;

&lt;p&gt;When the Clinical Reasoning Agent emits an action like &lt;code&gt;{"type": "device_alert", "reason": "HYPO"}&lt;/code&gt;, the orchestrator dispatches it to the right tool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;olumide/tools/device.py&lt;/code&gt;&lt;/strong&gt; → &lt;code&gt;reminder&lt;/code&gt; / &lt;code&gt;escalate&lt;/code&gt; / &lt;code&gt;lcd&lt;/code&gt; / &lt;code&gt;log_dose&lt;/code&gt; — talks to the Arduino bridge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;olumide/tools/clinician.py&lt;/code&gt;&lt;/strong&gt; → &lt;code&gt;draft_clinician_note&lt;/code&gt; writes to &lt;code&gt;dashboard/clinician_notes.json&lt;/code&gt;, which Dr. Chuks's dashboard polls.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;olumide/tools/circle.py&lt;/code&gt;&lt;/strong&gt; → &lt;code&gt;notify_circle_member&lt;/code&gt; formats the message per the recipient's consent scope and channel.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;olumide/tools/communication.py&lt;/code&gt;&lt;/strong&gt; → &lt;code&gt;send_whatsapp&lt;/code&gt; for the live Meta Cloud API path.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All four fire &lt;strong&gt;in parallel&lt;/strong&gt;. By the time Bamidele has finished reading his reply, the LED is already red, Funmi already has her message, and the dashboard already has the note.&lt;/p&gt;
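
&lt;p&gt;The fan-out itself is a handful of lines. A sketch of the mechanism, with the registry shape and handler signatures assumed:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import asyncio

# Maps an action "type" emitted by the Clinical Reasoning Agent to an async
# handler in the tool modules above; the exact registry shape is assumed.
TOOLS = {}  # e.g. {"device_alert": device_alert, "notify_circle": notify_circle}

async def dispatch(actions):
    """Run every action the classifier emitted at the same time."""
    coros = [TOOLS[a["type"]](a) for a in actions if a["type"] in TOOLS]
    # gather() is why the LED, the family ping, and the clinician note all
    # land in the same breath instead of one after another.
    return await asyncio.gather(*coros, return_exceptions=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;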

&lt;h3&gt;
  
  
  Severity classification + override
&lt;/h3&gt;

&lt;p&gt;The orchestrator's severity classifier is an LLM call, but I learned not to trust it for clinical numbers. So &lt;code&gt;app.py&lt;/code&gt; has a small &lt;code&gt;_vision_severity_override()&lt;/code&gt; that bypasses the LLM for unambiguous cases: glucose &amp;lt;70 or &amp;gt;250 forces URGENT, BP ≥180/120 forces CRISIS, and so on. This was one of the biggest reliability wins in the build: keeping the LLM for nuance and using deterministic rules for the things you cannot afford to be creative about.&lt;/p&gt;
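
&lt;p&gt;The override is nothing clever, which is the point. A sketch using only the two thresholds named above (the real function covers more cases, and the field names are assumed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def vision_severity_override(vitals):
    """Deterministic tiers for numbers the LLM must never get creative with."""
    sys_bp, dia_bp = vitals.get("systolic"), vitals.get("diastolic")
    glucose = vitals.get("glucose_mgdl")
    if sys_bp and dia_bp and sys_bp &gt;= 180 and dia_bp &gt;= 120:
        return "CRISIS"   # BP &gt;=180/120 always forces CRISIS
    if glucose and (glucose &lt; 70 or glucose &gt; 250):
        return "URGENT"   # hypo/hyper always forces URGENT
    return None           # ambiguous: fall through to the LLM classifier
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;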

&lt;h3&gt;
  
  
  The hardware as a tool node
&lt;/h3&gt;

&lt;p&gt;The Arduino is a peripheral nerve, not the brain. The firmware (&lt;a href="https://github.com/salimcodes/olumide-hardware/blob/main/firmware/olumide_firmware.ino" rel="noopener noreferrer"&gt;&lt;code&gt;olumide-hardware/firmware/olumide_firmware.ino&lt;/code&gt;&lt;/a&gt;) speaks a small line-delimited JSON protocol over USB serial at 115200 baud:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;device&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;→&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;host&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"t"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"EVT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"e"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"BTN_SHORT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"ts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1745000000&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"t"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"EVT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"e"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"RFID"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"tag"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"AMLO_5MG"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"ts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1745000100&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"t"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"TELE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"temp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mf"&gt;28.4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"hum"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;62&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"lux"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;412&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"ts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;1745000060&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;host&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;→&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;device&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"t"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"CMD"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"c1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"REMIND"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"Amlodipine 5mg"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"color"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"GREEN"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"beep"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"SHORT"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"t"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"CMD"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"c2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"ALERT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"reason"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"HYPO"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"t"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"CMD"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"c3"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"c"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="s2"&gt;"AWAIT_ACK"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="nl"&gt;"timeout"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The kit I had (Uno, 1602 LCD over I2C, RC522 RFID, RGB LED, active and passive buzzers, DHT11, DS1302 RTC, photoresistor, button) composes into a bedside device that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Logs medication doses when an RFID-tagged pill bottle is tapped.&lt;/li&gt;
&lt;li&gt;Surfaces reminders on the LCD with ambient colour-coded status on the RGB LED (green = on track, yellow = something due, red = attention).&lt;/li&gt;
&lt;li&gt;Triggers a check-in (button short-press) or a panic event (long-press) that wakes the agent through &lt;code&gt;POST /webhook/device&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Dims itself at night via the photoresistor; a small touch, but a big quality-of-life difference for an elder.&lt;/li&gt;
&lt;li&gt;Holds 24 hours of pre-loaded reminders in local cache so brief gateway/USB outages don't break adherence.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyx65lq2ey6c5qd1eql8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyx65lq2ey6c5qd1eql8.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The host-side bridge for development is &lt;code&gt;tests/fake_bridge.py&lt;/code&gt; — a Python simulator that renders the same JSON commands in colour in a terminal, so the demo can run with or without the physical board.&lt;/p&gt;

&lt;p&gt;The wiring diagram, full pin map, protocol spec, and troubleshooting guide are all in the &lt;a href="https://github.com/salimcodes/olumide-hardware" rel="noopener noreferrer"&gt;hardware repo&lt;/a&gt;.&lt;/p&gt;
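
&lt;p&gt;And when the physical board &lt;em&gt;is&lt;/em&gt; plugged in, the host side of that protocol is barely more code. A minimal reader, assuming &lt;code&gt;pyserial&lt;/code&gt;, a Linux-style port name, and the gateway listening on localhost (the port number is assumed):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import json
import serial    # pip install pyserial
import requests

GATEWAY = "http://localhost:8000/webhook/device"  # gateway port assumed

with serial.Serial("/dev/ttyUSB0", 115200, timeout=5) as port:
    while True:
        line = port.readline().strip()
        if not line:
            continue                      # read timeout, keep listening
        event = json.loads(line)          # e.g. {"t":"EVT","e":"BTN_SHORT",...}
        if event.get("t") in ("EVT", "TELE"):
            # forward button presses, RFID taps and telemetry to the agent
            requests.post(GATEWAY, json=event, timeout=10)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;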

&lt;h3&gt;
  
  
  The endpoints the demo touches
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET  /sim                     →  WhatsApp-style 3-panel demo console
POST /sim/message             →  send a message as Bamidele, get reply + agent trace
POST /sim/upload              →  send a photo (BP monitor, glucometer) for vision analysis
POST /sim/reset               →  wipe clinician notes + circle log between takes
GET  /dashboard/              →  Dr. Chuks's live clinician dashboard
GET  /webhook                 →  WhatsApp Cloud API verification
POST /webhook                 →  WhatsApp Cloud API messages (production path)
POST /webhook/device          →  Arduino RFID taps and button presses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;/sim&lt;/code&gt; console is the heart of the demo. It shows three panels: Bamidele's chat (left), Funmi's family alerts (right), and the multi-agent log (bottom). Type a message, or use a quick-action button, and watch agent calls, device commands, family notifications, and clinician notes fan out in real time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Image understanding for vitals
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;olumide/ingestion/vision.py&lt;/code&gt; analyses photos the patient sends, perhaps a BP monitor reading or a glucometer screen, and synthesises a structured patient message that the agents process as if Bamidele had typed the values. So when he photographs his Omron after morning measurement, the agent sees: &lt;em&gt;"My BP just measured 138/86 with pulse 72."&lt;/em&gt; The &lt;code&gt;_vision_to_message()&lt;/code&gt; function in &lt;code&gt;app.py&lt;/code&gt; deliberately appends &lt;em&gt;"This looks abnormal sir, what should I do?"&lt;/em&gt; when the values cross clinical thresholds, so the red-flag keyword matchers in the clinical agent fire reliably.&lt;/p&gt;
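
&lt;p&gt;Roughly, the synthesis step looks like this; a sketch with the reading fields and the "abnormal" threshold assumed for illustration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def vision_to_message(reading):
    """Turn structured vitals from the vision model into patient-voice text."""
    msg = (f"My BP just measured {reading['systolic']}/{reading['diastolic']} "
           f"with pulse {reading['pulse']}.")
    if reading["systolic"] &gt;= 180 or reading["diastolic"] &gt;= 120:
        # phrased so the clinical agent's red-flag keyword matchers fire
        msg += " This looks abnormal sir, what should I do?"
    return msg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;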

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://youtube.com/shorts/i0DQdLbIYSw" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flq56wzm9s015gr2fki7n.jpg" alt="Watch the video" width="480" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8q2wve3slqkkm1gaxvh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr8q2wve3slqkkm1gaxvh.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bvfbc1qqiw689v7t6gd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bvfbc1qqiw689v7t6gd.png" alt=" " width="249" height="495"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdll7dqi5gn6zknq56i9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdll7dqi5gn6zknq56i9j.png" alt=" " width="226" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The 4-minute flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Morning routine.&lt;/strong&gt; &lt;em&gt;"Good morning Olumide"&lt;/em&gt; → warm greeting addressing him as "sir," anchored in his profile and yesterday's logbook entry.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vitals report.&lt;/strong&gt; &lt;em&gt;"BP 138/86, fasting glucose 122"&lt;/em&gt; → interpretation against his 14-day trend, with the small drift flagged but not alarmed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Symptom triage (the money shot).&lt;/strong&gt; &lt;em&gt;"I'm feeling small dizzy and shaky"&lt;/em&gt; → triage interview → &lt;em&gt;"glucose 68"&lt;/em&gt; → Tier 3 hypoglycemia.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Within seconds, in parallel:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The chat replies with the &lt;code&gt;mild_hypo_protocol&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;The Arduino (or &lt;code&gt;fake_bridge.py&lt;/code&gt;) prints &lt;strong&gt;CMD: ESCALATE&lt;/strong&gt;, with red LED + alert buzzer + LCD lines &lt;code&gt;LOW SUGAR&lt;/code&gt; / &lt;code&gt;TAKE SUGAR NOW&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Funmi's panel gets a Tier 3 family alert.&lt;/li&gt;
&lt;li&gt;The dashboard gains a same-day clinician note draft for Dr. Chuks.&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The reveal.&lt;/strong&gt; Switch to the multi-agent log at the bottom of &lt;code&gt;/sim&lt;/code&gt;. Show severity classification → agent selected → tier → actions dispatched → reasoning trace. Six agents, twelve tool calls, twenty seconds, all OpenClaw primitives, no custom orchestration runtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Mama Aisha use case.&lt;/strong&gt; Swap &lt;code&gt;config/patient_bamidele.yaml&lt;/code&gt; for a Mama Aisha profile. Same code, Hausa voice notes, daughter on Telegram instead of WhatsApp, weekly summary in English. Demonstrates the framework, not the persona.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Repos:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/salimcodes/olumide" rel="noopener noreferrer"&gt;&lt;code&gt;github.com/salimcodes/olumide&lt;/code&gt;&lt;/a&gt; — gateway, agents, tools, demo console&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/salimcodes/olumide-hardware" rel="noopener noreferrer"&gt;&lt;code&gt;github.com/salimcodes/olumide-hardware&lt;/code&gt;&lt;/a&gt; — Arduino firmware, wiring, protocol&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Right-sizing is harder than scaling up.&lt;/strong&gt; The first version of this idea was a continental, HMO-funded chronic-disease platform. Compressing it down to one patient on one laptop with one Arduino on the bedside table was uncomfortable, and it turned out to be the version that actually demonstrates the OpenClaw thesis. The grand version is still possible later. The small one had to come first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The skill prompts are where most of the engineering lives.&lt;/strong&gt; I went in expecting to write clever code. I ended up writing careful prose. Getting &lt;code&gt;clinical.md&lt;/code&gt; to gather the right symptom information, recognise the red flags, refuse to titrate doses, and emit clean JSON that the orchestrator can dispatch, that's a different muscle than coding, and it's the layer that determines whether the system is safe or dangerous.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't trust the LLM with numbers it can't argue with.&lt;/strong&gt; The severity classifier is an LLM call, and it's right &lt;em&gt;most&lt;/em&gt; of the time. But "most of the time" is not good enough when glucose 54 needs to be CRISIS and BP 178/118 needs to be URGENT every single time. &lt;code&gt;_vision_severity_override()&lt;/code&gt; in &lt;code&gt;app.py&lt;/code&gt; is twenty lines of &lt;code&gt;if/else&lt;/code&gt; that prevents the most dangerous failure mode in the system. Some things should be deterministic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parallel action dispatch is the OpenClaw thesis in one mechanism.&lt;/strong&gt; The moment I switched from sequential &lt;code&gt;await&lt;/code&gt; calls to &lt;code&gt;asyncio.gather()&lt;/code&gt; for the action list, the demo went from "feels like an app" to "feels alive." The buzzer beeps while the daughter's phone is buzzing while the dashboard is updating while the chat is replying. That simultaneity is what people remember.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Self-hosting is a feature, not a constraint.&lt;/strong&gt; When the patient's data lives on their own laptop in plain files they can read with &lt;code&gt;cat&lt;/code&gt;, an entire class of trust questions evaporates. The architecture itself becomes the privacy story. You can hand someone a tool that touches their health data because there's nowhere else for the data to go.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware is a peripheral nerve, not the product.&lt;/strong&gt; The Arduino sees opaque RFID UIDs and reports button presses. It doesn't know that &lt;code&gt;04A1B2C3D4&lt;/code&gt; means "amlodipine 5mg"; the mapping lives in the gateway profile. That separation is what lets the firmware be MIT-licensed and forkable without inheriting any clinical-scope concerns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Wire RC522 to 3.3V or it dies.&lt;/strong&gt; Twice.&lt;/p&gt;

&lt;h2&gt;
  
  
  ClawCon Michigan
&lt;/h2&gt;

&lt;p&gt;No.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>openclawchallenge</category>
    </item>
    <item>
      <title>Beyond the Classroom: Inspiring Careers in Open Source</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Sat, 07 Dec 2024 00:00:25 +0000</pubDate>
      <link>https://forem.com/gh-campus-experts/beyond-the-classroom-inspiring-careers-in-open-source-56ld</link>
      <guid>https://forem.com/gh-campus-experts/beyond-the-classroom-inspiring-careers-in-open-source-56ld</guid>
      <description>&lt;p&gt;Hi, I’m &lt;a href="https://githubcampus.expert/salimcodes/" rel="noopener noreferrer"&gt;Salim Oyinlola&lt;/a&gt;, a GitHub Campus Expert studying at the University of Lagos in Nigeria. As a Campus Expert, my role is to support and enrich the tech communities around me, and I’ve had the privilege of working closely with GitHub Education to make that happen. Over time, I’ve come to realize how powerful communities can be in shaping our careers—and how often students like myself don’t get to see the many paths to a career in tech available to us.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhy88e6epbf93zlr1mgb.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffhy88e6epbf93zlr1mgb.jpg" alt="A picture of Salim Oyinlola on the intro of the Beyond the Classroom: Open Source Stories documentary" width="800" height="1061"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A picture of Salim Oyinlola on the intro of the Beyond the Classroom: Open Source Stories documentary&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In my local tech community, I’ve observed that students believe there’s a single, defined route to success in tech: learn the basics via the platforms in the &lt;a href="https://gh.io/opensourcestories24" rel="noopener noreferrer"&gt;GitHub Student Developer Pack&lt;/a&gt;, do the same for Data Structures and Algorithms, grind LeetCode, host HackerRank practice marathons with friends, and nail those mock interviews to land an internship. Many see this as the ultimate pathway, but I’ve come to understand there’s another rewarding route that also works well: open source.&lt;/p&gt;

&lt;p&gt;While internships are widely recognized as the gold standard of practical experience, I believe there’s equally as much to gain—sometimes more—from getting involved in open source projects. Internships teach you about teamwork, what it’s like to be part of a larger tech stack, and introduce you to real-world working dynamics. Open source, however, offers a different kind of learning where you’re gaining these same teamwork skills but on a global scale, coding and collaborating with people from across the world. It’s where you learn not only to code but also to connect, bridging time zones, backgrounds, and skill levels.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Challenges of Building a Career in Open Source in Nigeria
&lt;/h3&gt;

&lt;p&gt;Building a career in open source isn’t easy, especially here in Nigeria. Students face challenges like scarce mentorship opportunities and low awareness about what open source even is. But despite these challenges, I’ve seen many Nigerian students dive into open source, driven by community, curiosity, and the chance to contribute to projects that impact people worldwide. I found eight students whose journeys show that open-source involvement can be both a stepping stone and a fulfilling pursuit, even with limited resources. And so, &lt;a href="https://youtube.com/playlist?list=PLmbXeEhgz7-7JZmnY8NmjXU31Q7BjbpzE&amp;amp;si=HxdgT7pOkgl4_VJu" rel="noopener noreferrer"&gt;Beyond the Classroom: Open Source Stories&lt;/a&gt; was born.&lt;/p&gt;

&lt;h3&gt;
  
  
  Beyond the Classroom: Open Source Stories
&lt;/h3&gt;

&lt;p&gt;This four-episode documentary series, released in October 2024 for HacktoberFest, shines a light on Nigerian students who have crafted tech careers outside of traditional classroom paths by immersing themselves in open source. The series features eight students from universities across Nigeria, each sharing how open source has opened doors to incredible opportunities and changed their career paths.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4okngas9c0i9dxz00zv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4okngas9c0i9dxz00zv.jpg" alt="A compilation of the eight Nigerian students who told their stories." width="800" height="800"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A compilation of the eight Nigerian students who told their stories.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Each episode explores a different stage in the open-source journey:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://youtu.be/3KoAHUrUHkI?si=dTui6LOg96s6BH8e" rel="noopener noreferrer"&gt;Episode One: Hello World&lt;/a&gt; – Here, each student talks about their first encounter with open source. They share what it means to them, their initial impressions, and the steps they took to make their first contributions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://youtu.be/aGYPI3LSnAQ?si=hfDrrCEN5CYvHrTL" rel="noopener noreferrer"&gt;Episode Two: Blooming Where You’re Planted&lt;/a&gt; – In this episode, the students discuss how they carved out a place for themselves within the open-source ecosystem. They share the obstacles, learning curves, and moments of self-discovery that helped them find their unique paths.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://youtu.be/ME29dEfO2lI?si=ktZ0K-5ul6uxHLaA" rel="noopener noreferrer"&gt;Episode Three: Money Makes the World Go Round&lt;/a&gt; – Financial realities come into play as the students reveal how they monetized their open-source passions, from Google Summer of Code (GSoC) to the MLH Fellowship and the Outreachy Internship. They explain how they balanced passion projects with financial needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://youtu.be/xWVm3jTJQT4?si=yiAbIcZtmGFAvjZL" rel="noopener noreferrer"&gt;Episode Four: It Takes a Village&lt;/a&gt; – This final episode delves into the motivations beyond money that keep these students in open source. They open up about the friendships, connections, and personal satisfaction they’ve gained from their contributions, proving that open source is as much about community as it is about code.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the documentary, a recurring theme is the importance of the GitHub Student Developer Pack, which has been a crucial resource for all these students. They each share their favorite tools in the pack, highlighting how these resources have fueled their journeys.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Power of Storytelling
&lt;/h3&gt;

&lt;p&gt;If there’s one other thing I’m passionate about, it’s storytelling. I believe in the power of great storytelling, and I’m convinced that stories resonate far more with listeners/viewers than advice alone. This documentary series isn’t just about offering advice—it’s about telling real stories from real people. When we tell stories, especially drawn from our own experiences, we’re inviting people into a world where they can learn through the lens of another person’s lived experience. While advice tends to tell people what they should do, stories show them what’s possible, what has worked or failed, and why. For me, this approach makes stories not only relatable but impactful, because they allow listeners to draw their own conclusions, spark curiosity, empathy, and self-reflection.&lt;/p&gt;

&lt;p&gt;Stories also create a deeper sense of connection that pure advice can’t. Through stories, abstract advice becomes personal and real, making the experience far more memorable. And in my view, when a story is told well, it leaves room for the listener to take away a unique lesson that resonates on a personal level.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Role of Community Partnerships
&lt;/h3&gt;

&lt;p&gt;To ensure that Beyond the Classroom reached a broad audience, I knew community partnerships would be essential. Partnering with &lt;a href="https://blog.oscafrica.org/osca-x-beyond-the-classroom-open-source-stories" rel="noopener noreferrer"&gt;Open Source Community Africa (OSCA)&lt;/a&gt; and &lt;a href="https://x.com/chaoss_africa/status/1842168055243186299/photo/1" rel="noopener noreferrer"&gt;CHAOSS Africa&lt;/a&gt; helped bring the series to life. These partnerships allowed us to leverage their networks to attract a diverse and engaged viewership, expanding our reach and impact in the open-source community.  Most importantly, the support the documentary received from GitHub Education through the Campus Experts program was instrumental in making this series possible, providing resources and a platform to connect with a broader audience dedicated to open-source collaboration and learning.&lt;/p&gt;

&lt;p&gt;Through &lt;em&gt;Beyond the Classroom&lt;/em&gt;, I hope to inspire students to see open source as a powerful avenue for career growth and personal fulfillment. This project has shown me how storytelling can inspire others, how open source can empower, and how community partnerships can amplify our voices. For students across Nigeria, this documentary is proof that there are meaningful paths outside the classroom—paths built on passion, collaboration, and a love for learning.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>storytelling</category>
      <category>githubeducation</category>
      <category>hacktoberfest</category>
    </item>
    <item>
      <title>Object-Oriented Design: Why don't you explain this to me like I'm five</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Fri, 11 Aug 2023 13:46:36 +0000</pubDate>
      <link>https://forem.com/salimcodes/object-oriented-design-why-dont-you-explain-this-to-me-like-im-five-173h</link>
      <guid>https://forem.com/salimcodes/object-oriented-design-why-dont-you-explain-this-to-me-like-im-five-173h</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the world of creating software, a strong foundation is key. Before a single line of code is written, there is a vital step called object-oriented analysis and design (OOAD). This step gives software developers a method for planning solutions deliberately, ensuring smooth development. It lets software creators build applications that aren't just functional, but also adaptable, easy to maintain, and ready for future growth. With object-oriented programming languages now being the standard for everything from web development to desktop applications, understanding and mastering object-oriented analysis and design has become essential.&lt;/p&gt;

&lt;p&gt;In this article, I will guide you through the foundational concepts and principles of Object-Oriented Design, the process that lets you break your ideas for an application into the right pieces so you know exactly what code to write, all in the form of a storytelling experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Object-Oriented Design: A relatable analogy
&lt;/h2&gt;

&lt;p&gt;Many of today's widely adopted programming languages follow the object-oriented paradigm. However, it is important to note that this isn't the only programming approach available. To appreciate the advantages of object-oriented languages, I will draw a comparison with an alternative method: procedural programming languages, such as the plain C language. In procedural coding, a program is composed of a sequence of operations to be executed. While certain portions might be structured into named functions for modularity, the primary objective is to move from Point X to Point Y to accomplish a task.&lt;/p&gt;

&lt;p&gt;Permit me to compare these approaches by drawing a parallel with a familiar analogy. Think of traditional storytelling versus interactive theater. Traditional storytelling is almost like a classic book where the author guides you through the plot, describing each event and character's actions in a linear sequence. The narrative flows step by step, following a predefined path. This linear approach can be likened to procedural programming, where a program is designed as a series of instructions, much like the chapters of a book. While some sections might be grouped for better organization, the overall structure is sequential, similar to how an author crafts a coherent story by arranging events.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foogbkwktb7rly2wejztn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foogbkwktb7rly2wejztn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A split-screen image showing a traditional storytelling scene on one side and actors performing in interactive theater on the other, representing the comparison between procedural programming and object-oriented programming.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, shift your mind to interactive theater. In this dynamic setting, instead of a single narrator guiding the story, different actors portray distinct characters, each with their own motives and actions. This can be likened to object-oriented programming, where a program is divided into objects. Much like the characters in the interactive theater, each object encapsulates its data and behavior, and these objects interact with each other, similar to actors on a stage. The characters' exchanges shape the unfolding story, just as the interactions between objects influence the software's behavior.&lt;/p&gt;

&lt;p&gt;Both traditional storytelling and interactive theater have given us compelling and captivating narratives through the ages. Similarly, both procedural and object-oriented programming have, over the years, produced plenty of functional, dare I say life-changing, software. However, the way these narratives or programs are structured and experienced differs. Just as interactive theater adds layers of complexity and engagement to the storytelling experience, object-oriented programming enhances software development by promoting reusability, adaptability, and modular design. Both paradigms have their merits, and the choice between them depends on the nature of the project. Understanding the analogy between traditional storytelling and interactive theater will help you appreciate when and how to apply object-oriented programming, leading to software that's not only functional but also flexible, maintainable, and capable of adapting to changing requirements, much like an interactive theater experience that evolves based on audience interaction.&lt;/p&gt;
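
&lt;p&gt;To ground the analogy in actual code, here is a minimal Python sketch, invented for this article rather than taken from any real codebase, of the same small task written both ways:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Procedural style: a linear sequence of steps, like a narrated story.
def greet_procedural(name, language):
    if language == "yoruba":
        print("Bawo ni, " + name + "!")
    else:
        print("Hello, " + name + "!")

greet_procedural("Salim", "yoruba")

# Object-oriented style: each "character" carries its own data and behavior.
class Greeter:
    def __init__(self, language):
        self.language = language

    def greet(self, name):
        if self.language == "yoruba":
            print("Bawo ni, " + name + "!")
        else:
            print("Hello, " + name + "!")

Greeter("yoruba").greet("Salim")
&lt;/code&gt;&lt;/pre&gt;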

&lt;h2&gt;
  
  
  Reusability: Another working analogy
&lt;/h2&gt;

&lt;p&gt;Now, let's relate this to code reusability. Imagine if a particular character or plot element in the story could only be used once and never appeared again. For sure, this would limit the possibilities for creating new stories with the same characters or elements. Similarly, in procedural programming, code segments are often tailored for a specific task and aren't easily adaptable for reuse in different contexts. Each piece of code is designed for a specific purpose, making it less flexible when applied to different scenarios.&lt;/p&gt;

&lt;p&gt;On the other hand, in interactive theater, where actors play distinct characters, each with their own actions and roles, a character from one production can seamlessly be integrated into another, bringing their unique personality and actions. Real-life instances of this include when Olivia Pope, from the American political thriller television series Scandal, showed up for an episode of How to Get Away with Murder, or when Jake Peralta, from the American police procedural comedy television series Brooklyn Nine-Nine, made a cameo appearance on New Girl. In object-oriented programming, this translates to the ability to create classes or objects that can be reused across different projects whilst encapsulating specific functionalities. This promotes efficient development, as it is like having a library of well-developed characters ready to perform in different stories. To summarize, object-oriented programming excels in code reusability due to its modular and encapsulated nature.&lt;/p&gt;

&lt;h2&gt;
  
  
  Collaboration: Another viable comparison
&lt;/h2&gt;

&lt;p&gt;Imagine having to rewrite an entire chapter of a book just to incorporate a new plot twist. That would be challenging and may disrupt the coherence of the entire story. In a similar sense, with traditional storytelling, the narrative follows a predetermined linear path - once the story is written and published, making significant changes would require altering the entire text, which can be a complex and time-consuming process. This rigidity is akin to procedural programming.&lt;/p&gt;

&lt;p&gt;On the other hand, let's consider interactive theater. In this context, to incorporate a new plot twist, say the villain's origin story, only the villain's attributes need to change, and the twist fits seamlessly into the narrative. This adaptability mirrors the flexibility of object-oriented programming, which in turn encourages collaboration.&lt;/p&gt;

&lt;p&gt;Collaboration is essential in software development as it brings multiple individuals together to collectively create high-quality software products. For instance, another person's perspective can shed new light on your code, potentially identifying flaws, optimizations, or alternative solutions that you might have missed. Because an object-oriented codebase is divided into discrete objects, each encapsulating specific behavior and data, it is easier to edit and collaborate on. This parallels how, in interactive theater, scripts can be changed more easily.&lt;/p&gt;

&lt;h2&gt;
  
  
  APIE: Using the analogy
&lt;/h2&gt;

&lt;p&gt;APIE stands for the four core principles of object-oriented programming – abstraction, polymorphism, inheritance and encapsulation. Now, in terms of the earlier analogy of interactive theater, here is what they imply:&lt;/p&gt;

&lt;h3&gt;
  
  
  Abstraction
&lt;/h3&gt;

&lt;p&gt;Abstraction involves focusing on what an object does rather than how it does it. In other words, abstraction distills the essential characteristics of an object while hiding unnecessary details. In the context of our analogy, it suggests that when crafting a compelling story, you use the essential elements while omitting unnecessary details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1n40rqi89w1hfu4jdh8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1n40rqi89w1hfu4jdh8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A diagram illustrating abstraction, with layers of details being abstracted away, leaving only the essential features.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just as a skilled storyteller focuses on the core narrative, characters, and pivotal events, abstraction in object-oriented programming involves hiding all but the relevant data about an object to reduce complexity and increase efficiency.&lt;/p&gt;
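
&lt;p&gt;As a small illustrative sketch (class names are hypothetical), an abstract base class exposes what an object does while hiding how it does it:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from abc import ABC, abstractmethod

class Storyteller(ABC):
    # Callers only ever see *what* a storyteller does ...
    @abstractmethod
    def tell_story(self):
        ...

class OralNarrator(Storyteller):
    # ... while each concrete class hides *how* it does it.
    def tell_story(self):
        return "Once upon a time, told aloud around a fire."

print(OralNarrator().tell_story())
&lt;/code&gt;&lt;/pre&gt;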

&lt;h3&gt;
  
  
  Polymorphism
&lt;/h3&gt;

&lt;p&gt;Polymorphism as a fundamental concept in object-oriented programming allows objects of different classes to be treated as if they are objects of a common base class. It enables you to write more flexible and versatile code by abstracting away the specific implementation details of each class.&lt;/p&gt;

&lt;p&gt;In the context of our analogy, imagine you're the director of a superhero interactive theater franchise and you have two iconic characters: Superman and Wonder Woman. Each of these characters has unique abilities, background stories, and characteristics that make them distinct. As the director, you have a crucial scene where both Superman and Wonder Woman need to save the day. However, you want to keep your script and scenes as flexible as possible, allowing for different superheroes to be included in the future. This is where polymorphism comes into play: You decide to create a common base concept called "Superhero" that captures the core traits of all superheroes: their ability to save the day and make a difference. This base concept includes elements like wearing a signature costume, having a secret identity, and using unique powers to combat evil.&lt;/p&gt;

&lt;p&gt;Now, in your scenes, whenever there's a moment that requires a superhero to step in and use their abilities, you simply set the stage for a "Superhero" to shine. You don't need to know in advance whether it will be Wonder Woman or Superman or any other superhero that comes along later. You trust that whoever steps into the role of the "Superhero" will embody those essential traits and characteristics.&lt;/p&gt;

&lt;p&gt;In this analogy, the "Superhero" base concept represents the polymorphic aspect. It allows you, the director, to create scenes and scenarios that are open to interpretation by different specific superheroes, each with their unique abilities and stories. The audience sees these characters as superheroes, regardless of whether they have super strength like Wonder Woman or rely on gadgets like Batman.&lt;/p&gt;

&lt;p&gt;This flexible approach not only makes your productions more adaptable for future additions to the superhero cast but also reinforces the idea that the superhero archetype is more about the core values and actions than the specific individual wearing the cape and mask. This is similar to how polymorphism in object-oriented programming allows you to interact with objects of different classes through a common interface, regardless of their specific implementations.&lt;/p&gt;
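
&lt;p&gt;Here is a minimal Python sketch of that scene, with all names invented to match the analogy:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class Superhero:
    # The common "base concept" every specific hero shares.
    def save_the_day(self):
        raise NotImplementedError

class Superman(Superhero):
    def save_the_day(self):
        return "Superman flies in with super strength."

class WonderWoman(Superhero):
    def save_the_day(self):
        return "Wonder Woman deflects the attack with her bracelets."

def climactic_scene(hero):
    # The scene relies only on the Superhero interface,
    # not on which specific hero steps onto the stage.
    print(hero.save_the_day())

for hero in (Superman(), WonderWoman()):
    climactic_scene(hero)
&lt;/code&gt;&lt;/pre&gt;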

&lt;h3&gt;
  
  
  Inheritance
&lt;/h3&gt;

&lt;p&gt;Inheritance is a mechanism that allows a new class (subclass or derived class) to inherit properties and behaviors from an existing class (superclass or base class). This promotes code reuse and hierarchy. This mimics the way characters' traits are inherited by their successors in a theatrical series.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi8spl8uvy6ttj3q0zul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi8spl8uvy6ttj3q0zul.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A visual representation of inheritance, with a hierarchy of people, showing how properties and behaviors are inherited from a base class.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Just as a new character in a show retains certain characteristics from its predecessor, subclasses inherit and extend the functionalities of their parent class, fostering code reuse and maintaining a structured hierarchy. As a test, would you say that Tariq St. Patrick, from Ghost, the spin-off and sequel to the popular crime drama television series Power, is a subclass of James St. Patrick from the original show? I think yes.&lt;/p&gt;
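
&lt;p&gt;In code, that parent-child relationship might look like this hedged sketch (the attributes are invented purely for illustration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class JamesStPatrick:
    # Base class: traits shared with any successor.
    def __init__(self, name):
        self.name = name

    def run_nightclub(self):
        return self.name + " runs the nightclub."

class TariqStPatrick(JamesStPatrick):
    # Subclass: inherits the base behavior and adds its own.
    def attend_college(self):
        return self.name + " juggles college classes on the side."

tariq = TariqStPatrick("Tariq")
print(tariq.run_nightclub())   # inherited from the base class
print(tariq.attend_college())  # defined only on the subclass
&lt;/code&gt;&lt;/pre&gt;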

&lt;h3&gt;
  
  
  Encapsulation
&lt;/h3&gt;

&lt;p&gt;Encapsulation involves bundling data and methods that operate on the data into a single unit, known as a class. This paradigm serves as a protective barrier that prevents outside code from directly accessing or modifying an object's internal state. This shields the object's internal complexities and ensures that changes to the object's implementation do not disrupt other parts of the codebase.&lt;/p&gt;

&lt;p&gt;Encapsulation is similar to the backstage management of an interactive theater production. Behind the scenes, various aspects such as costumes, props, and scripts are carefully organized and managed to ensure a smooth and cohesive performance. Just as audience members at the theater only interact with the on-stage actors and props, encapsulation restricts external access to an object's internal state, allowing interactions through well-defined interfaces. This safeguards the integrity of the object and maintains the overall stability of the codebase.&lt;/p&gt;
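
&lt;p&gt;A tiny Python sketch of the backstage idea (names hypothetical): the leading underscore marks internal state that outside code is expected to leave alone, reachable only through the public interface:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;class Production:
    def __init__(self):
        # Backstage state: costumes and props, managed internally.
        self._props = ["cape", "mask"]

    def stage_view(self):
        # The audience interacts only through this public method.
        return "On stage tonight: " + ", ".join(self._props)

show = Production()
print(show.stage_view())
&lt;/code&gt;&lt;/pre&gt;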

&lt;p&gt;In all, these principles contribute to well-structured, maintainable, and adaptable software systems, just as they enhance the dynamics and effectiveness of storytelling and theatrical performances. Finally, I admit that in some instances, the analogies might have been flawed. After all, we can rarely have two situations with the same peculiarities. However, I hope I have been able to shed a little more light on the concept of Object-Oriented Design for you.&lt;/p&gt;

</description>
      <category>oop</category>
      <category>python</category>
      <category>softwaredevelopment</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Introduction to Database Optimization</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Sun, 06 Aug 2023 20:00:59 +0000</pubDate>
      <link>https://forem.com/salimcodes/introduction-to-database-optimization-5fco</link>
      <guid>https://forem.com/salimcodes/introduction-to-database-optimization-5fco</guid>
      <description>&lt;h3&gt;
  
  
  Normalization
&lt;/h3&gt;

&lt;p&gt;During the early 1970s, Edgar Codd formulated a set of rules for arranging data within databases, collectively referred to as normalization rules. Database normalization is a cornerstone of effective data organization, ensuring reliability, consistency, and efficiency within relational databases. It is a systematic approach that guides designers in structuring database tables to minimize redundancy, prevent anomalies, and enhance data integrity. Through a series of progressive steps known as normal forms, databases are optimized for better management and query performance, with each normal form building upon the previous one to refine the database structure against well-defined rules. This article delves into the fundamental concepts of normalization, exploring the key principles of the first, second, and third normal forms, their significance, and how they contribute to the overall efficiency and reliability of a database system.&lt;/p&gt;

&lt;p&gt;The initial three regulations, denoted as the first, second, and third normal forms, serve as the standard benchmarks for optimizing business databases. The application of these principles represents a critical phase in the design of any database. These guidelines comprise a series of formal criteria, with each subsequent rule building upon the preceding one as we progress toward achieving the third normal form. &lt;/p&gt;

&lt;p&gt;While the definitions of these forms involve scholarly intricacies rooted in the mathematical underpinnings of databases, they provide intriguing insights for those inclined towards such knowledge. Once a given normalization rule has been implemented in a database, we can affirm the database's compliance with that specific normal form. Beyond the third normal form, multiple other normal forms exist; however, due to their complexity, I will not delve into them here, as they are better suited for more advanced database scenarios. The process of normalization helps preclude issues while working with data and warrants revisiting whenever modifications are introduced to the schema or the structural makeup of a database.&lt;/p&gt;

&lt;h3&gt;
  
  
  First Normal Form
&lt;/h3&gt;

&lt;p&gt;The concept of the first normal form in database design ensures the organization of data by requiring each cell to hold single, indivisible values and eliminating repetitive groups within tables. This principle promotes data integrity and reduces redundancy. It necessitates that fields within tables contain singular values, discouraging the presence of columns representing multiple instances of data within a single row. The extension of the first normal form further mandates the eradication of duplicate rows and emphasizes that the arrangement of rows and columns bears no impact on data interpretation. &lt;/p&gt;

&lt;p&gt;Ultimately, adherence to the first normal form sets the foundation for effective database optimization and data management.&lt;/p&gt;
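
&lt;p&gt;As a quick hedged sketch (the tables are hypothetical), a column holding several phone numbers at once violates the first normal form, and splitting the repeating group into its own table restores it:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import sqlite3

conn = sqlite3.connect(":memory:")

# Violates 1NF: one cell holds several values, e.g. '0801..., 0802...'.
conn.execute("CREATE TABLE customers_flat (id INTEGER PRIMARY KEY, "
             "name TEXT, phone_numbers TEXT)")

# 1NF: every cell is atomic; the repeating group gets its own table.
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE customer_phones ("
             "customer_id INTEGER REFERENCES customers(id), phone TEXT)")
&lt;/code&gt;&lt;/pre&gt;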

&lt;h3&gt;
  
  
  Second Normal Form
&lt;/h3&gt;

&lt;p&gt;The second normal form stipulates that no entry within our table should be contingent solely on a portion of a key that serves to uniquely identify a row. This implies that for any column present in the table, excluding those forming the key, each value must be grounded solely in the entirety of the key. These values must convey information about a specific row that isn't ascertainable merely from a segment of the key. This challenge often arises in scenarios involving composite keys. To delve into the requisites of the second normal form, let's focus on an events-style table as an example, sketched below.&lt;/p&gt;
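
&lt;p&gt;Since the events table itself isn't reproduced here, the sketch below uses a hypothetical version: with a composite key of (event_id, attendee_id), a column like attendee_name depends on only part of that key, which the second normal form forbids:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import sqlite3

conn = sqlite3.connect(":memory:")

# Violates 2NF: attendee_name depends only on attendee_id,
# a *part* of the composite key (event_id, attendee_id).
conn.execute("CREATE TABLE registrations_flat (event_id INTEGER, "
             "attendee_id INTEGER, attendee_name TEXT, "
             "PRIMARY KEY (event_id, attendee_id))")

# 2NF: move the partially dependent column into its own table.
conn.execute("CREATE TABLE attendees (attendee_id INTEGER PRIMARY KEY, "
             "attendee_name TEXT)")
conn.execute("CREATE TABLE registrations (event_id INTEGER, "
             "attendee_id INTEGER REFERENCES attendees(attendee_id), "
             "PRIMARY KEY (event_id, attendee_id))")
&lt;/code&gt;&lt;/pre&gt;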

&lt;h3&gt;
  
  
  Third Normal Form
&lt;/h3&gt;

&lt;p&gt;While the second normal form tells us that we should not be able to determine a value in a column from only part of a composite key, the third normal form tells us that we should not be able to determine a value in a column from anything that is not a key. The concept builds upon the principles of the first and second normal forms and addresses the issue of transitive dependencies within a relational database. The goal of the third normal form is to eliminate these transitive dependencies by decomposing the table into smaller, related tables. In a table adhering to 3NF, each non-key attribute should depend directly on the primary key, without being influenced by other non-key attributes.&lt;/p&gt;

&lt;p&gt;To achieve third normal form, a table must already satisfy the criteria of 2NF. This means that all non-key attributes are fully functionally dependent on the primary key, with no partial dependencies. Also, if there's a transitive dependency, it's necessary to decompose the table into multiple tables to ensure that each non-key attribute depends solely on the primary key. &lt;/p&gt;
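
&lt;p&gt;As a hedged sketch with hypothetical tables, a classic transitive dependency is department_name depending on department_id, which is itself not a key; 3NF splits it out:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import sqlite3

conn = sqlite3.connect(":memory:")

# Violates 3NF: department_name depends on department_id,
# a non-key column, rather than directly on the primary key.
conn.execute("CREATE TABLE employees_flat (employee_id INTEGER PRIMARY KEY, "
             "name TEXT, department_id INTEGER, department_name TEXT)")

# 3NF: the transitive dependency moves into its own table.
conn.execute("CREATE TABLE departments (department_id INTEGER PRIMARY KEY, "
             "department_name TEXT)")
conn.execute("CREATE TABLE employees (employee_id INTEGER PRIMARY KEY, "
             "name TEXT, department_id INTEGER "
             "REFERENCES departments(department_id))")
&lt;/code&gt;&lt;/pre&gt;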

&lt;h3&gt;
  
  
  Denormalization
&lt;/h3&gt;

&lt;p&gt;Although normalizing databases up to the third normal form is widely recommended, there can be instances where business demands or database performance concerns necessitate deviating from the principles of normalization. Denormalization is a process that deliberately introduces data duplication within tables, thereby contravening the rules of normalization. Importantly, denormalization is implemented after the normalization process and does not imply avoiding normalization altogether. In the context of our restaurant database, the likelihood of encountering performance bottlenecks remains low in the near future. However, it serves as an illustrative example of the denormalization concept.&lt;/p&gt;

&lt;p&gt;Denormalization revolves around making trade-offs. Often, there's an acceleration in data retrieval speed accompanied by a compromise in data consistency. The decision to denormalize hinges on evaluating your unique business requisites. While it may offer gains in query performance, it's essential to weigh this advantage against the potential reduction in data integrity. Ultimately, the choice to denormalize should align with the specific needs and priorities of your organization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;To recap, here are the three rules:&lt;/p&gt;

&lt;h4&gt;
  
  
  First Normal Form (1NF):
&lt;/h4&gt;

&lt;p&gt;1NF mandates that each cell in a table should contain only atomic (indivisible) values, eliminating repeating groups and ensuring single values per field.&lt;/p&gt;

&lt;h4&gt;
  
  
  Second Normal Form (2NF):
&lt;/h4&gt;

&lt;p&gt;2NF stipulates that non-key attributes must be fully functionally dependent on the entire primary key, eliminating partial dependencies.&lt;/p&gt;

&lt;h4&gt;
  
  
  Third Normal Form (3NF):
&lt;/h4&gt;

&lt;p&gt;3NF addresses transitive dependencies by requiring that non-key attributes be dependent solely on the primary key, eliminating dependencies on other non-key attributes.&lt;/p&gt;

</description>
      <category>database</category>
      <category>beginners</category>
      <category>programming</category>
    </item>
    <item>
      <title>AIHiveCollective: The Netflix for AI Tools</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Tue, 23 May 2023 22:07:54 +0000</pubDate>
      <link>https://forem.com/salimcodes/aihivecollective-the-netflix-for-ai-tools-535a</link>
      <guid>https://forem.com/salimcodes/aihivecollective-the-netflix-for-ai-tools-535a</guid>
      <description>&lt;h2&gt;
  
  
  What I built: AIHiveCollective: The Netflix for AI Tools
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Category Submission: Wacky Wildcards
&lt;/h3&gt;

&lt;h3&gt;
  
  
  App Link
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://allhive.azurewebsites.net/"&gt;https://allhive.azurewebsites.net/&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Screenshots
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OFXJMlWi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4rnmo54ro5kslhhv0mwk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OFXJMlWi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4rnmo54ro5kslhhv0mwk.png" alt="Image description" width="800" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the tools is shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q-Vrt3aU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1t5pjtv7islvik08os4s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q-Vrt3aU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1t5pjtv7islvik08os4s.png" alt="Image description" width="768" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Description
&lt;/h3&gt;

&lt;p&gt;AIHive is a cutting-edge Flask application designed for AI developers and enthusiasts who are eager to share and explore an array of AI tools in a seamless, integrated manner. Structured like the popular entertainment platform Netflix, in a user-friendly 'Netflix for AI Tools' fashion, AIHive Collective provides a dynamic platform where AI professionals can contribute, share, and explore an ever-growing library of AI tools. In the spirit of open source, AIHive Collective encourages its users to upload their creations, document their methodologies, and share their inventive approaches with our global community. &lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Source Code
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/HammedBabatunde/AIHiveCollective"&gt;https://github.com/HammedBabatunde/AIHiveCollective&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Background (What made you decide to build this particular app? What inspired you?)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  How I built it (How did you utilize GitHub Actions or GitHub Codespaces? Did you learn something new along the way? Pick up a new skill?)
&lt;/h3&gt;

&lt;p&gt;By creating a &lt;code&gt;.yaml&lt;/code&gt; file, I utilized GitHub Actions to automate the deployment of this fun AI project. The idea is quite simple: the project allows AI developers to make new entries by contributing to the code and adding their own AI tools. &lt;/p&gt;
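
&lt;p&gt;The post doesn't include the workflow file itself, so here is a minimal sketch of what such a deploy workflow could look like for an Azure-hosted Flask app (the action versions, app name, and secret name are illustrative, not the project's actual configuration):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;name: Deploy AIHive
on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install -r requirements.txt
      - uses: azure/webapps-deploy@v2
        with:
          app-name: allhive
          publish-profile: ${{ secrets.AZURE_WEBAPP_PUBLISH_PROFILE }}
&lt;/code&gt;&lt;/pre&gt;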

</description>
      <category>githubhack23</category>
    </item>
    <item>
      <title>Arduino-Based Smart Gate System Prototype for Vehicle Detection and Access Control</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Tue, 23 May 2023 21:57:33 +0000</pubDate>
      <link>https://forem.com/salimcodes/arduino-based-smart-gate-system-prototype-for-vehicle-detection-and-access-control-5dp0</link>
      <guid>https://forem.com/salimcodes/arduino-based-smart-gate-system-prototype-for-vehicle-detection-and-access-control-5dp0</guid>
      <description>&lt;h2&gt;
  
  
  Arduino-Based Smart Gate System Prototype for Vehicle Detection and Access Control
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Category Submission: Interesting IoT
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Screenshots
&lt;/h3&gt;

&lt;p&gt;The gate and road before the car reached the sensor:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yixb4p1potynu9ycay7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yixb4p1potynu9ycay7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The gate and road after the car reached the sensor:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwenwschdcwiqjmqv4fd9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwenwschdcwiqjmqv4fd9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The YouTube video can be found &lt;a href="https://youtu.be/FiaagaZrhX8" rel="noopener noreferrer"&gt;here&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Description
&lt;/h3&gt;

&lt;p&gt;This Arduino-based Smart Gate System is designed to automate the opening and closing of a gate in response to the presence of a vehicle approaching it. This project utilizes an Arduino Uno board along with various components such as an ultrasonic sensor, a servo motor, a buzzer, and LEDs to create an intelligent gate system. The system aims to enhance security, convenience, and efficiency by eliminating the need for manual gate operation.&lt;/p&gt;

&lt;h4&gt;
  
  
  Components:
&lt;/h4&gt;

&lt;p&gt;Arduino Uno Board: The Arduino Uno acts as the brain of the system, processing sensor data and controlling the gate mechanism.&lt;/p&gt;

&lt;p&gt;Ultrasonic Sensor: An ultrasonic sensor is employed to detect the presence of a vehicle approaching the gate. It emits ultrasonic waves and measures the time taken for the waves to bounce back after hitting an object. By calculating the distance between the gate and the vehicle, the system determines if a vehicle is within a predefined range.&lt;/p&gt;

&lt;p&gt;Servo Motor: A servo motor is used to control the gate's opening and closing mechanism. It is connected to the gate and rotates to either open or close it in response to commands from the Arduino.&lt;/p&gt;

&lt;p&gt;Buzzer: A small buzzer is incorporated to provide audio feedback to the user. It produces sound alerts to indicate different system states, such as gate opening, closing, or any errors.&lt;/p&gt;

&lt;p&gt;LEDs: LEDs are used to provide visual indications of the system's status. Different colored LEDs can be employed to signify various events, such as gate open, gate closed, or system error.&lt;/p&gt;

&lt;h4&gt;
  
  
  Working Principle:
&lt;/h4&gt;

&lt;p&gt;Initialization: Upon startup, the Arduino initializes all the components, including the ultrasonic sensor, servo motor, buzzer, and LEDs.&lt;/p&gt;

&lt;p&gt;Distance Measurement: The ultrasonic sensor continuously emits ultrasonic waves and measures the time taken for the waves to return after hitting an object. By converting the time into distance using a predefined formula, the system determines the distance between the gate and any approaching vehicle.&lt;/p&gt;

&lt;p&gt;Vehicle Detection: The Arduino compares the measured distance with a predefined threshold to determine if a vehicle is within range. If the distance is below the threshold, it signifies the presence of a vehicle approaching the gate.&lt;/p&gt;

&lt;p&gt;Gate Control: Upon detecting a vehicle, the Arduino sends a command to the servo motor to open the gate. The gate remains open for a specified duration to allow the vehicle to pass through.&lt;/p&gt;

&lt;p&gt;Gate Closing: After the specified duration, the Arduino sends another command to the servo motor to close the gate. Alternatively, the gate can also be closed manually using a push button or by sensing the departure of the vehicle.&lt;/p&gt;

&lt;p&gt;Feedback: Throughout the process, the buzzer emits different sounds to provide audio feedback to the user. The LEDs also indicate the current state of the gate, such as open or closed, and any errors that may occur.&lt;/p&gt;
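
&lt;p&gt;The firmware itself lives in the linked repository as an Arduino sketch; as a language-neutral summary, the core control logic boils down to something like the following Python sketch (pin handling omitted, with the threshold and timing values purely illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time

SPEED_OF_SOUND_CM_PER_US = 0.0343   # approximate, at room temperature
THRESHOLD_CM = 20                   # illustrative detection range
OPEN_DURATION_S = 5                 # illustrative pass-through time

def echo_time_to_distance_cm(echo_us):
    # The ultrasonic pulse travels to the vehicle and back,
    # so the one-way distance is half the round trip.
    return (echo_us * SPEED_OF_SOUND_CM_PER_US) / 2

def control_step(echo_us, open_gate, close_gate):
    distance_cm = echo_time_to_distance_cm(echo_us)
    if distance_cm &amp;lt; THRESHOLD_CM:  # vehicle detected within range
        open_gate()
        time.sleep(OPEN_DURATION_S)   # keep the gate open briefly
        close_gate()

# Example: an echo of about 580 microseconds is roughly 10 cm away.
control_step(580, lambda: print("gate opening"), lambda: print("gate closing"))
&lt;/code&gt;&lt;/pre&gt;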

&lt;h4&gt;
  
  
  Future Prospects:
&lt;/h4&gt;

&lt;p&gt;The Arduino-based Smart Gate System prototype showcases the potential for implementing an intelligent gate system at the Faculty of Engineering, University of Lagos.  Additionally, a comprehensive user interface could be developed to monitor gate status and configure system settings. The prototype serves as a foundation for a scalable and versatile smart gate solution, promising increased efficiency and security in managing vehicle access.&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Source Code
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://github.com/salimcodes/SmartGatebySalim" rel="noopener noreferrer"&gt;https://github.com/salimcodes/SmartGatebySalim&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Permissive License
&lt;/h3&gt;

&lt;h2&gt;
  
  
  Background (What made you decide to build this particular app? What inspired you?)
&lt;/h2&gt;

&lt;p&gt;As an engineering student at the University of Lagos, I have always been driven by a deep desire to utilize my knowledge and skills for the betterment of society. It was during my time on campus that I witnessed the immense burden placed on the security personnel who tirelessly opened and closed the gate of the faculty entrance. Their tireless efforts to ensure the safety of our academic community inspired me to find a solution that would alleviate their workload and enhance efficiency.&lt;/p&gt;

&lt;p&gt;Filled with determination and a passion for creating positive change, I embarked on a journey to develop a smart gate using Arduino technology. The concept was simple yet powerful: to automate the gate-opening process by integrating an ultrasonic sensor that would detect approaching vehicles and trigger the gate to open or close accordingly. This innovative solution would not only reduce the stress on the security personnel but also streamline the traffic flow, making the entire system more efficient and secure.&lt;/p&gt;

&lt;p&gt;The journey towards building this smart gate was not without its challenges. I faced numerous obstacles, from technical difficulties to time constraints. However, my unwavering dedication and belief in the transformative power of engineering fueled my determination to overcome every hurdle that came my way. I spent countless hours researching, prototyping, and refining the design, constantly pushing myself to deliver the best possible solution.&lt;/p&gt;

&lt;p&gt;But it was more than just creating a technological marvel. For me, this project was deeply personal. It was a reflection of my commitment to making a meaningful impact and leaving a lasting legacy. I wanted to showcase the potential of engineering as a force for good, addressing real-world problems and contributing to the advancement of society.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I built it (How did you utilize GitHub Actions or GitHub Codespaces? Did you learn something new along the way? Pick up a new skill?)
&lt;/h3&gt;

&lt;p&gt;GitHub Codespaces played a vital role in transforming the Arduino Uno prototype code into a commercially viable solution. The idea is simple: with its cloud-based development environment, Codespaces enabled seamless code transfer and collaboration. The power of open source and collaboration, similar to AI's recent advancements, can propel IoT to new heights. By leveraging Codespaces, developers can share code, ideas, and best practices, accelerating innovation in IoT. This collaborative approach fosters interconnected solutions that address real-world challenges and enhance our lives.&lt;/p&gt;

</description>
      <category>githubhack23</category>
      <category>iot</category>
      <category>arduino</category>
    </item>
    <item>
      <title>Building the Salim Writing Blog</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Wed, 24 Aug 2022 23:23:36 +0000</pubDate>
      <link>https://forem.com/salimcodes/building-the-salim-writing-blog-2n0b</link>
      <guid>https://forem.com/salimcodes/building-the-salim-writing-blog-2n0b</guid>
      <description>&lt;h3&gt;
  
  
  Overview of my Submission
&lt;/h3&gt;

&lt;p&gt;For my submission in the Redis Hackathon, I built a Blog Service with FastAPI, one of Python's fastest web frameworks, and Redis OM. I created a Redis database, defined database models with Redis OM, and then developed RESTful APIs with FastAPI that could interact with Redis to create, retrieve and search data. FastAPI is a modern, high-performance web framework for developing RESTful APIs in Python, known for being fast, easy to use, and for automatically generating Swagger API docs.&lt;/p&gt;
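
&lt;p&gt;To give a flavour of what that looks like, here is a trimmed sketch (the fields are illustrative; the real models live in the repository linked below) of a Redis OM model plugged into FastAPI routes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;from fastapi import FastAPI
from redis_om import HashModel

app = FastAPI()

class Blog(HashModel):
    # Illustrative fields only; see the linked repo for the real schema.
    title: str
    body: str

@app.post("/blogs")
def create_blog(blog: Blog):
    blog.save()              # persists the model to Redis as a hash
    return {"pk": blog.pk}   # Redis OM auto-generates the primary key

@app.get("/blogs/{pk}")
def read_blog(pk: str):
    return Blog.get(pk)      # fetches the model back by primary key
&lt;/code&gt;&lt;/pre&gt;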

&lt;p&gt;The inspiration for this project is simply from how much I love to write. By virtue of how much I love to write and my curiosity as a software engineer, I have always wondered what the database architecture of blogging services would be like. &lt;/p&gt;

&lt;p&gt;Here is the link to the GitHub repository where what I built is stored, containing a README and an MIT license.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/salimcodes/salim-blog"&gt;https://github.com/salimcodes/salim-blog&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Submission Category: Wacky Wildcards
&lt;/h3&gt;

&lt;p&gt;Here's a short video that explains the project and how it uses Redis:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=eLC8isM7iCE"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Tltcy6aH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://img.youtube.com/vi/eLC8isM7iCE/0.jpg" alt="IMAGE ALT TEXT HERE" width="480" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Language Used
&lt;/h3&gt;

&lt;p&gt;I used Python for my project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Code: GitHub repository containing a README and an MIT license
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--A9-wwsHG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/salimcodes"&gt;
        salimcodes
      &lt;/a&gt; / &lt;a href="https://github.com/salimcodes/salim-blog"&gt;
        salim-blog
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;h1&gt;
Salim Blog API&lt;/h1&gt;
&lt;p&gt;A Simple Blog Service created with Fast API and Redis-OM&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://user-images.githubusercontent.com/64667212/186480200-6d4d01d1-886d-4e72-91fd-61430a5bfed6.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--igSlI-Os--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480200-6d4d01d1-886d-4e72-91fd-61430a5bfed6.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://user-images.githubusercontent.com/64667212/186480270-0b4e1fcb-973f-48a0-ae6e-e40f69a6ddea.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OTI3b_m5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480270-0b4e1fcb-973f-48a0-ae6e-e40f69a6ddea.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a rel="noopener noreferrer nofollow" href="https://user-images.githubusercontent.com/64667212/186480338-c1832cea-6527-4ba6-9343-9fe97b01cfd7.png"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yudt-5kf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480338-c1832cea-6527-4ba6-9343-9fe97b01cfd7.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
Overview video&lt;/h1&gt;
&lt;p&gt;Here's a short video that explains the project and how it uses Redis:&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=eLC8isM7iCE" rel="nofollow"&gt;&lt;img src="https://camo.githubusercontent.com/b2daba4e31cf54a0c85c5ad3837af4f63f6f5362f3268249d6fd193f2ad41c81/68747470733a2f2f696d672e796f75747562652e636f6d2f76692f654c433869734d376943452f302e6a7067" alt="IMAGE ALT TEXT HERE"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
How it works&lt;/h2&gt;
&lt;p&gt;The blog is pretty simple. It has the following APIs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A &lt;code&gt;GET&lt;/code&gt; method at the home page that displays the message &lt;code&gt;Hello world, I am Salim from Africa!&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Two &lt;code&gt;POST&lt;/code&gt; methods [to create authors and blogs respectively] that users can use to create a new blog and register as an author.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The author method collects the pk, first name, last name, email address, bio of the author and the date the author joined. The schema is shown below.&lt;/p&gt;
&lt;div class="snippet-clipboard-content notranslate position-relative overflow-auto"&gt;&lt;pre class="notranslate"&gt;&lt;code&gt;"pk": "string",
  "first_name": "string",
  "last_name": "string",
  "email": "string",
  "bio": "string",
  "date_joined": "2022-08-24T16:59:09.222111"
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;A &lt;code&gt;GET&lt;/code&gt; method that retrieves the created blogs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A &lt;code&gt;PUT&lt;/code&gt; method that is capable of updating blogs.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A &lt;code&gt;DELETE&lt;/code&gt; method that makes users able to delete blogs.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
How&lt;/h3&gt;…&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/salimcodes/salim-blog"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h3&gt;
  
  
  Collaborators
&lt;/h3&gt;

&lt;p&gt;Solo project&lt;/p&gt;

&lt;h3&gt;
  
  
  Screenshots and Demos
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KsV6E8tr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480601-c3b89d36-7a11-4116-a5d8-cc7758ffe4fb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KsV6E8tr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480601-c3b89d36-7a11-4116-a5d8-cc7758ffe4fb.png" alt="image" width="438" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--igSlI-Os--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480200-6d4d01d1-886d-4e72-91fd-61430a5bfed6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--igSlI-Os--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480200-6d4d01d1-886d-4e72-91fd-61430a5bfed6.png" alt="image" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--OTI3b_m5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480270-0b4e1fcb-973f-48a0-ae6e-e40f69a6ddea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--OTI3b_m5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480270-0b4e1fcb-973f-48a0-ae6e-e40f69a6ddea.png" alt="image" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Yudt-5kf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480338-c1832cea-6527-4ba6-9343-9fe97b01cfd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Yudt-5kf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://user-images.githubusercontent.com/64667212/186480338-c1832cea-6527-4ba6-9343-9fe97b01cfd7.png" alt="image" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Check out &lt;a href="https://redis.io/docs/stack/get-started/clients/#high-level-client-libraries"&gt;Redis OM&lt;/a&gt;, client libraries for working with Redis as a multi-model database.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Use &lt;a href="https://redis.info/redisinsight"&gt;RedisInsight&lt;/a&gt; to visualize your data in Redis.&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Sign up for a &lt;a href="https://redis.info/try-free-dev-to"&gt;free Redis database&lt;/a&gt;.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>redishackathon</category>
      <category>python</category>
      <category>fastapi</category>
      <category>programming</category>
    </item>
    <item>
      <title>Speech-To-Text Technology: The mental health angle (Innovative Ideas Challenge)</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Wed, 06 Apr 2022 17:44:34 +0000</pubDate>
      <link>https://forem.com/salimcodes/speech-to-text-technology-the-mental-health-angle-innovative-ideas-challenge-n87</link>
      <guid>https://forem.com/salimcodes/speech-to-text-technology-the-mental-health-angle-innovative-ideas-challenge-n87</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;When I discovered that there was a category for innovative ideas in the Deepgram Hackathon on DEV, my joy knew no bounds. I have always believed that only with innovative and fresh ideas can technology, and by extension the world, make progress, because innovation is the cornerstone of sustained economic growth and prosperity. That is why I applied for this category of the hackathon. Although prior to the Deepgram x DEV Hackathon I had not encountered Deepgram, given my interest in Artificial Intelligence, I am not new to the concept of speech recognition technology. It is worthy of note that I am not only interested in Artificial Intelligence; my social impact activities have also made me grow an interest in the field of mental health. Mental health is the state of well-being in which individuals realize their own abilities, can cope with the normal stresses of life, can work productively and fruitfully, and above all, are able to make a contribution to their community. Without an iota of doubt, the issue of mental health is an important one that should not be treated with levity. Ergo, as much as I can, I endeavour to advocate for open conversations along with better awareness and understanding of mental health issues. This is why this particular submission resonates with me. &lt;/p&gt;

&lt;h3&gt;
  
  
  My Deepgram Use-Case
&lt;/h3&gt;

&lt;p&gt;For context, in 2021, I was privileged to be amongst the 6% of undergraduate students worldwide &lt;a href="https://twitter.com/SalimOpines/status/1486782089416646657?t=WJ-VhxRp66hbbMBMajmIyw&amp;amp;s=19"&gt;selected as a United Nations Millennium Fellow&lt;/a&gt;. The United Nations Academic Impact stipulates that selected fellows work on a project of their choice in tandem with a Sustainable Development Goal (SDG) of their choice. I worked on the third Sustainable Development Goal (Good Health and Well-being). &lt;/p&gt;

&lt;p&gt;In general, there is a global shortage of mental health workers, with demand outstripping service provision. In a country like mine (Nigeria), reports suggest that we have as few as 0.1 psychiatrists for every 1,000,000 people. As an innovative young mind, whilst many see a problem, I see an opportunity. I opine that the insufficient number of mental health workers should prompt the use of technological advancement to meet the needs of people affected by mental health conditions. Furthermore, the laissez-faire approach to mental health in some parts of the world reduces the chances of persons with mental health disorders getting treatment. &lt;/p&gt;

&lt;p&gt;It is &lt;a href="https://www.thelancet.com/journals/lanpsy/article/PIIS2215-0366(15)00505-2/fulltext"&gt;estimated&lt;/a&gt; that mental health represents around 34% of the global disease burden, and with this predicted to increase, health services such as the NHS face more pressure than ever to meet these demands. One “solution” that appears to be growing in popularity is the use of chatbots for the screening, diagnosis and treatment of mental health conditions. &lt;/p&gt;

&lt;h3&gt;
  
  
  Dive into Details
&lt;/h3&gt;

&lt;p&gt;Chatbots are systems that are able to converse and interact with human users using spoken (our use case), written and visual languages. However, my idea focuses on the spoken-language use case, mainly for the ease and anonymity it offers. My belief is that chatbots have the potential to be useful tools for individuals with mental disorders, especially those who are reluctant to seek mental health advice due to stigmatization.&lt;br&gt;
Given the growing availability of large health databases and corpora, a chatbot can be designed using several evidence-based therapies such as cognitive behavioral therapy, behavioral reinforcement and mindfulness to target symptoms of depression for users. &lt;br&gt;
The major beneficiaries of this idea are people with depression, anxiety, schizophrenia, dementia, phobic disorders, stress and eating disorders. &lt;/p&gt;

&lt;p&gt;Using Deepgram’s SDKs, which are supported for use with the Deepgram API, it becomes possible to create a chatbot that can convert users' speech input to text. This chatbot will depend only on a well-versed decision tree to generate its responses. This implementation will be a web-based chatbot as opposed to stand-alone software for two reasons. Firstly, users of web-based chatbots do not need to install a specific application on their devices, thereby reducing the risk of breaching their privacy. Secondly, web-based chatbots are more accessible than stand-alone chatbots. &lt;/p&gt;
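
&lt;p&gt;To make the "well-versed decision tree" concrete, here is a toy Python sketch (the wording and branching are entirely hypothetical); in the full idea, Deepgram would first turn the user's speech into the text answers this tree consumes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Toy decision tree: each node is either a question with
# "yes"/"no" branches or a terminal reply.
TREE = {
    "question": "Have you been feeling low for more than two weeks?",
    "yes": {
        "question": "Would you like to try a short breathing exercise?",
        "yes": {"reply": "Okay, let us begin with a one-minute exercise."},
        "no": {"reply": "No problem. I can share professional helplines."},
    },
    "no": {"reply": "Glad to hear it. You can check in again anytime."},
}

def respond(answers):
    node = TREE
    for answer in answers:        # answers come from transcribed speech
        if "reply" in node:
            break
        node = node[answer]
    return node.get("reply", node.get("question"))

print(respond([]))                # asks the opening question
print(respond(["yes", "no"]))     # walks two branches to a reply
&lt;/code&gt;&lt;/pre&gt;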

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;From participating in the Deepgram hackathon’s ‘Innovative Ideas’ challenge, I have learnt more about speech-to-text technology. Furthermore, I am super glad I could use this opportunity to advocate for open conversations about mental health issues whilst being innovative with challenges that relate to mental health.&lt;/p&gt;

</description>
      <category>hackwithdg</category>
      <category>ai</category>
      <category>mentalhealth</category>
    </item>
    <item>
      <title>Speech-to-text Technology: Tales of just another knackered Software Developer (Innovative Ideas Challenge)</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Sun, 27 Mar 2022 06:55:22 +0000</pubDate>
      <link>https://forem.com/salimcodes/speech-to-text-technology-tales-of-just-another-knackered-software-developer-innovative-ideas-challenge-463b</link>
      <guid>https://forem.com/salimcodes/speech-to-text-technology-tales-of-just-another-knackered-software-developer-innovative-ideas-challenge-463b</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;When I received the newsletter announcing the commencement of the Deepgram hackathon on DEV, I was super excited. I have always had a plethora of innovative ideas on how speech-to-text technology could help a number of people, and that is why I decided to participate in the Innovative Ideas category of the Deepgram Hackathon on DEV. I believe that with a number of my innovative ideas, millions of lives can be bettered. I am of the opinion that Deepgram can be embedded in applications in ways that would ease people’s lives. I guess it is a great thing that the hackathon allows users to make as many submissions as they would like. Although prior to the Deepgram x DEV Hackathon I had not encountered Deepgram, given my interest in Artificial Intelligence, I am not new to the concept of speech recognition technology.&lt;/p&gt;

&lt;p&gt;In my &lt;a href="https://dev.to/salimcodes/speech-to-text-technology-tales-of-just-another-knackered-college-student-innovative-ideas-challenge-17p9"&gt;first submission&lt;/a&gt;, I wrote about how Deepgram can help make students’ lives much better. In this submission, I will be talking about how Deepgram’s API can be used to help developers. I mean, the first set of people that should benefit from a developer’s API should be other developers, right? Such selfish beings, developers. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://i.giphy.com/media/11RUijoqnc1yJW/giphy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://i.giphy.com/media/11RUijoqnc1yJW/giphy.gif" width="300" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  My Deepgram Use-Case
&lt;/h3&gt;

&lt;p&gt;My idea is pretty simple: a speech-to-text application that would help programmers and software developers who blog. Apart from earning $80k to $90k per annum, programmers are often encouraged to spend some of their time on blogging. The general consensus is that it adds extra benefits to one’s career and can inspire fellow programmers and students. Although I do not earn $80k to $90k per annum, I constantly find myself &lt;a href="https://tealfeed.com/salimopines"&gt;blogging&lt;/a&gt; about my software development journey. The benefits of blogging for programmers and developers cannot be over-emphasized. A programmer’s blog provides an ample opportunity to learn and polish their skills. A programmer’s blog can be a survival guide for beginner developers. A programmer’s blog will help build relations with new developers. A programmer’s blog also creates a timeline for their growth. However, after writing hundreds (try thousands?) of lines of code, writing that 500-word blog post explaining the process can seem like a tall order. I have experienced this first-hand, and that was what inspired my submission to the Innovative Ideas category of the Deepgram Hackathon on DEV. I believe Deepgram could help with its speech-to-text technology by making sure all tired software developers have to do is speak. Without an iota of doubt, I would rather talk about the process of using a Linear Regression algorithm than stare at my screen and write (after writing code all day or all week long). &lt;/p&gt;

&lt;h3&gt;
  
  
  Dive into Details
&lt;/h3&gt;

&lt;p&gt;This innovative idea will make blogging easier and less laborious for software developers. Using Deepgram's SDKs, which are supported for use with the Deepgram API, a pre-recorded audio file can be automatically transcribed. This ensures the audio that the blogger (or software developer) pre-recorded using a voice recorder can be converted from audio to text. As such, bloggers, who are the major beneficiaries of this idea, can easily pre-record their posts. I took a look at Deepgram's documentation and realized that this is very possible. The most important Deepgram feature for this is the one that allows for the transcription of pre-recorded audio. To implement this, after creating a Deepgram account, I would need to generate a unique Deepgram API key. Since I am more accustomed to Python, I would pip install the needed third-party module using the command &lt;code&gt;pip install deepgram-sdk&lt;/code&gt;. In the terminal, I would create a new file in my project's location and populate it with code as given in the documentation. It is worthy of note that speech-to-text technology improves endurance and reduces writing fatigue by eliminating the physical act of composing on paper and keyboard. This in turn shifts focus from the physical act of writing to the expression and organization of thoughts and knowledge.&lt;/p&gt;
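
&lt;p&gt;Based on that documentation, the core of the flow might look like this sketch (the API key is a placeholder and the exact call shapes vary between SDK versions, so treat this as illustrative rather than definitive):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import asyncio
from deepgram import Deepgram   # installed via: pip install deepgram-sdk

DEEPGRAM_API_KEY = "your-api-key-here"  # placeholder, not a real key

async def transcribe(path):
    dg_client = Deepgram(DEEPGRAM_API_KEY)
    with open(path, "rb") as audio:
        source = {"buffer": audio, "mimetype": "audio/wav"}
        response = await dg_client.transcription.prerecorded(
            source, {"punctuate": True}
        )
    # Pull the first transcript out of the response payload.
    return response["results"]["channels"][0]["alternatives"][0]["transcript"]

print(asyncio.run(transcribe("blog_post_draft.wav")))
&lt;/code&gt;&lt;/pre&gt;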

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;In conclusion, from participating in the Deepgram Hackathon on DEV, I have gained a lot of insight into how speech-to-text technologies work and how they can help fields such as education, the key to development, while opening up a world of endless possibilities. Excited about my next innovative idea? Well, I am!&lt;/p&gt;

</description>
      <category>hackwithdg</category>
      <category>ai</category>
      <category>deepgram</category>
      <category>hackathon</category>
    </item>
    <item>
      <title>Speech-to-text Technology: Tales of just another knackered college student (Innovative Ideas Challenge)</title>
      <dc:creator>Salim Ọlánrewájú Oyinlọlá</dc:creator>
      <pubDate>Fri, 25 Mar 2022 11:23:14 +0000</pubDate>
      <link>https://forem.com/salimcodes/speech-to-text-technology-tales-of-just-another-knackered-college-student-innovative-ideas-challenge-17p9</link>
      <guid>https://forem.com/salimcodes/speech-to-text-technology-tales-of-just-another-knackered-college-student-innovative-ideas-challenge-17p9</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;For as long as I can remember, I have been obsessed with the idea of automation. Whether it is schedule-sending emails or walking through an automatic door, the idea of things operating on their own, without fail, after being pre-programmed really tickles my fancy. As an undergraduate student of Electrical and Electronics Engineering with a focus on Artificial Intelligence, I am constantly stressed by an immense workload: attending long classes, running laboratory experiments, preparing for exams, and writing term papers, all while managing projects. The hustle and bustle is non-stop. Add my social impact and volunteering commitments, and you might just have the perfect stress recipe. As such, I find myself constantly thinking of ways to automate as many tasks as possible, and I suspect this urge to reduce labor while increasing productivity was the genesis of my fixation on innovative ideas. Although I had not encountered Deepgram prior to the Deepgram x DEV Hackathon, given my interest in Artificial Intelligence, I am not new to the concept of speech recognition technology.&lt;/p&gt;

&lt;h3&gt;
  
  
  My Deepgram Use-Case
&lt;/h3&gt;

&lt;p&gt;The origin of this idea traces back to my last holiday. In a bid to give back to the community while staying proactive, I was tutoring a younger student (let’s call him Dipo) in preparation for his Secondary School Leaving Examinations. Dipo is quite acquainted with and fond of technological gadgets, and what he did that particular day struck a chord so hard it almost felt like a Eureka moment. I asked him the value of Planck’s constant, a fundamental physical constant used in quantum mechanics calculations, only to be answered by his phone. Now that I think of it, I don’t know what was more surprising at that point: the accuracy with which the Google Assistant returned the constant, or the fact that it heard what I said at all, considering how fast I usually talk. Perhaps I was distracted by the euphoria that hit me. I am pretty convinced that what I felt is similar to how Archimedes felt when he supposedly hopped out of his bath and ran onto the streets to tell the king, ‘I’ve found it’. I felt like a game-changer. I thought to myself, ‘If Dipo could do that to his tutor, why can’t I do that with my college professors?’ Of course, I would need their permission in my case. Right at that moment, I realized that if I harnessed reliable speech-to-text technology, I could save myself a boatload of stress, and I believe a lot of college students feel the same way. Wouldn’t life be much easier if, instead of typing as the lecturers spoke during online classes, one simply had speech-to-text technology convert the lectures into a text document?&lt;/p&gt;

&lt;h3&gt;
  
  
  Dive into Details
&lt;/h3&gt;

&lt;p&gt;Although speech recognition technology is already a part of our everyday lives, its application areas are, for now, still limited. Speech-to-text technology, and by extension Artificial Intelligence, has the potential to make far-reaching changes in the educational sector. In my innovative idea, an app that automatically transcribes a lecturer's live streaming audio in real time using Deepgram’s SDKs, which wrap the &lt;a href="https://developers.deepgram.com/api-reference/"&gt;Deepgram API&lt;/a&gt;, would improve endurance and reduce writing fatigue by eliminating the physical act of composing on paper or keyboard. This would in turn shift the focus from the mechanics of writing to the expression and organization of thoughts and knowledge. In a bid to confirm this hypothesis, I tried it locally by turning BBC’s live stream into text. Here was my result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1nXvNVLt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7u4qnn9v688i6u5m2zhk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1nXvNVLt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7u4qnn9v688i6u5m2zhk.png" alt="Nil" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A sample of my code can be found &lt;a href="https://github.com/salimcodes/devXdeepgram"&gt;here&lt;/a&gt;.   &lt;/p&gt;
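
&lt;p&gt;For reference, below is a minimal sketch in the spirit of the live-streaming example from Deepgram's Python SDK documentation at the time. The API key is a placeholder, the stream URL is illustrative, and handler and event names may vary between SDK versions.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Sketch: transcribe a live audio stream with the 2022-era Deepgram Python SDK.
# Assumes: pip install deepgram-sdk aiohttp, a valid API key, and a reachable
# audio stream URL. The key and URL below are placeholders.
import asyncio

import aiohttp
from deepgram import Deepgram

DEEPGRAM_API_KEY = 'YOUR_DEEPGRAM_API_KEY'  # placeholder
STREAM_URL = 'http://stream.live.vc.bbcmedia.co.uk/bbc_world_service'  # illustrative

async def main():
    deepgram = Deepgram(DEEPGRAM_API_KEY)
    # Open a live transcription socket; 'punctuate' adds punctuation.
    dg_socket = await deepgram.transcription.live({'punctuate': True})

    # Print each transcript event as it arrives, and note when the socket closes.
    dg_socket.registerHandler(dg_socket.event.TRANSCRIPT_RECEIVED, print)
    dg_socket.registerHandler(dg_socket.event.CLOSE,
                              lambda code: print(f'Connection closed: {code}'))

    # Read raw audio chunks from the stream and forward them to Deepgram.
    # A true live stream never ends, so this loop runs until the stream
    # stops or the process is interrupted.
    async with aiohttp.ClientSession() as session:
        async with session.get(STREAM_URL) as audio:
            while True:
                data = await audio.content.readany()
                if not data:
                    break
                dg_socket.send(data)

    await dg_socket.finish()

asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;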

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Finally, as an artificial intelligence enthusiast, I am excited by how much speech-to-text technology, a subset of Artificial Intelligence, can change the way we look at education. Through the research I did while participating in the "Innovative Ideas" challenge of the Deepgram Hackathon, I have gained a lot of insight into how speech-to-text technologies work and how they can support education, the key to development, while opening up a world of endless possibilities.&lt;/p&gt;

</description>
      <category>hackwithdg</category>
      <category>ai</category>
      <category>firstpost</category>
    </item>
  </channel>
</rss>
