<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rami Kronbi</title>
    <description>The latest articles on Forem by Rami Kronbi (@ramikronbi).</description>
    <link>https://forem.com/ramikronbi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2070476%2F614e6411-eeb8-4d3b-9079-454a7dbb3e5c.png</url>
      <title>Forem: Rami Kronbi</title>
      <link>https://forem.com/ramikronbi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ramikronbi"/>
    <language>en</language>
    <item>
      <title>We Built Sign Language AI for a Language With Almost No Dataset. Here's What That Actually Looks Like.</title>
      <dc:creator>Rami Kronbi</dc:creator>
      <pubDate>Tue, 05 May 2026 16:29:25 +0000</pubDate>
      <link>https://forem.com/ramikronbi/we-built-sign-language-ai-for-a-language-with-almost-no-dataset-heres-what-that-actually-looks-kem</link>
      <guid>https://forem.com/ramikronbi/we-built-sign-language-ai-for-a-language-with-almost-no-dataset-heres-what-that-actually-looks-kem</guid>
      <description>&lt;p&gt;&lt;em&gt;The problem starts here: a hand, in motion, carrying meaning. Teaching a machine to read it is harder than it looks.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;I was standing in a cafe in Beirut when I saw it happen.&lt;/p&gt;

&lt;p&gt;A deaf man was trying to explain something to the barista. He signed. She stared. He tried again, slower, more deliberate, like that would help. She shook her head apologetically and reached for a notepad. He took it, wrote something down, she read it, nodded. The whole exchange took maybe four minutes for something that should have taken thirty seconds.&lt;/p&gt;

&lt;p&gt;I stood there watching and thought: we have real-time translation for dozens of spoken languages in our pockets. Why not this?&lt;/p&gt;

&lt;p&gt;That question became OmniSign, a real-time Lebanese Sign Language (LSL) translator. And building it taught me things about machine learning that no paper had prepared me for, because the hardest problems weren't technical. They were human.&lt;/p&gt;




&lt;h2&gt;The Dataset Problem Nobody Talks About&lt;/h2&gt;

&lt;p&gt;When you want to train a computer vision model, the standard advice is: get more data. ImageNet has over 14 million images. Common Voice has thousands of hours of speech. Even niche spoken languages have crowdsourced datasets you can start from.&lt;/p&gt;

&lt;p&gt;Lebanese Sign Language has almost none of that.&lt;/p&gt;

&lt;p&gt;LSL is a distinct language, not a transliteration of Arabic, not a derivative of French Sign Language, though it shares some roots. It has its own grammar, its own spatial logic, its own regional quirks. And it is used by a community that has been largely invisible to the tech world.&lt;/p&gt;

&lt;p&gt;So before I could write a single line of model code, I had to figure out how to build a dataset from scratch.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is what the unglamorous middle of an ML project looks like. Every frame, reviewed. Every label, decided by a human.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgptrhx46heyc7s9149d.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgptrhx46heyc7s9149d.jpg" alt="Example of Lebanese sign language data" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Finding People Who Would Actually Help&lt;/h2&gt;

&lt;p&gt;The first challenge was access. I needed signers willing to be filmed, and not just willing, but patient enough to repeat the same sign dozens of times under different conditions, at different speeds, with different lighting. And they had to trust that this wasn't going to end up as some project that got submitted, got a grade, and disappeared.&lt;/p&gt;

&lt;p&gt;The deaf community has seen a lot of that. Technology built &lt;em&gt;about&lt;/em&gt; them, not &lt;em&gt;with&lt;/em&gt; them.&lt;/p&gt;

&lt;p&gt;Getting past that took time and relationship-building, not code. It meant showing up, explaining what the goal actually was, being honest about what the system could and couldn't do. It meant involving people in decisions, not just data collection.&lt;/p&gt;

&lt;p&gt;Once we had that trust, the filming itself was its own challenge. We recorded in different environments: different backgrounds, different light sources, indoors and outdoors, because a model trained only in a clean lab setting will fail spectacularly in a pharmacy with fluorescent lights and motion blur.&lt;/p&gt;




&lt;h2&gt;The Variation Problem&lt;/h2&gt;

&lt;p&gt;Here's something I didn't fully appreciate until I was knee-deep in footage: sign languages have dialects.&lt;/p&gt;

&lt;p&gt;Not in the same loose way people use that word. I mean real, meaningful variation. A sign that means one thing to someone from one part of Lebanon might look subtly different to someone from another region. Age matters. Individual signers develop personal style. Some people sign large and expansive; others keep everything close to the body.&lt;/p&gt;

&lt;p&gt;This is actually true of spoken languages too, but for speech recognition, you have decades of research and millions of data points to smooth out that variation. For LSL, every variation we encountered was a new challenge to solve with whatever data we had.&lt;/p&gt;

&lt;p&gt;Our solution was imperfect but pragmatic: we over-indexed on signer diversity rather than sign volume. Fewer total signs, more variation &lt;em&gt;per sign&lt;/em&gt;. The model had to learn that a sign is a category, not a specific hand shape at a specific moment.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;MediaPipe hand landmarks: 21 points per hand, tracked in real time. The model doesn't see a hand. It sees a skeleton, moving through space.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fan27yix978g1n9ek6xbm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fan27yix978g1n9ek6xbm.png" alt="MediaPipe landmark detections over human in several frames" width="800" height="520"&gt;&lt;/a&gt;&lt;/p&gt;
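
&lt;p&gt;To make that concrete, here is a minimal sketch of the kind of extraction loop this relies on: OpenCV streams frames, MediaPipe Hands returns 21 normalized (x, y, z) points per detected hand, and those keypoints, not raw pixels, are what the classifier sees. The camera index and confidence threshold are placeholders, not the exact values from our pipeline.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import cv2
import mediapipe as mp

# Minimal sketch: stream frames and pull the 21 landmarks for each detected hand.
hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB input
    result = hands.process(rgb)
    if result.multi_hand_landmarks:
        for hand in result.multi_hand_landmarks:
            # 21 normalized (x, y, z) points per hand: the "skeleton" the model sees
            keypoints = [(lm.x, lm.y, lm.z) for lm in hand.landmark]

cap.release()
&lt;/code&gt;&lt;/pre&gt;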




&lt;h2&gt;What "Good Enough" Means When There's No Benchmark&lt;/h2&gt;

&lt;p&gt;This is the question that kept me up at night: how do you know your model is good?&lt;/p&gt;

&lt;p&gt;For most ML tasks, you have benchmarks. You can compare your accuracy to the state of the art, see where you land, iterate. For LSL, there was no benchmark. No prior model to compare against. No established test set.&lt;/p&gt;

&lt;p&gt;So I had to define what success looked like from first principles, and that forced an uncomfortable honesty: the only real measure of success was whether the people who use LSL found the tool useful.&lt;/p&gt;

&lt;p&gt;We demoed the system to members of the deaf community. We watched how they used it. Where it hesitated, where it failed, where it surprised us by working. That feedback loop, messy and qualitative as it was, became more valuable than any metric I could compute.&lt;/p&gt;

&lt;p&gt;The system isn't perfect. It's probably not close to perfect. But it translated in real time, in front of real people, and some of them smiled when it worked. That felt like a more honest measure of success than an accuracy number on a test set I built myself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feza91v7ed2ln0dbl179d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feza91v7ed2ln0dbl179d.png" alt="Training and testing loss accuracy of training" width="685" height="344"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;What I'd Tell Someone Starting This Problem&lt;/h2&gt;

&lt;p&gt;Don't start with the model. Start with the community.&lt;/p&gt;

&lt;p&gt;Not because it's the ethical thing to do (though it is), but because you will build the wrong thing if you don't. The assumptions you make in isolation, about what signs to include, what variation looks like, what "correct" even means, will be wrong in ways that matter.&lt;/p&gt;

&lt;p&gt;The dataset is not a preprocessing step you get through before the real work starts. The dataset &lt;em&gt;is&lt;/em&gt; the work. In low-resource settings, every annotation decision, every filming session, every signer you include or exclude, shapes what the model can and cannot do. That deserves the same care and intention as the architecture.&lt;/p&gt;

&lt;p&gt;And finally: ship something. An imperfect tool that someone can actually use is worth more than a perfect model that lives in a notebook. The cafe moment that started all of this, that man and that barista, they don't need 99% accuracy. They need something that works well enough, right now, in the real world.&lt;/p&gt;

&lt;p&gt;That's what we were building toward. And we're not done yet.&lt;/p&gt;




&lt;p&gt;If you're working on low-resource sign language AI or have LSL data you'd like to contribute, I'd genuinely love to talk. Reach me at &lt;a href="https://ramikronbi.com" rel="noopener noreferrer"&gt;ramikronbi.com&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>deeplearning</category>
      <category>society</category>
    </item>
    <item>
      <title>Seeing in the Dark: Real-Time Thermal Super-Resolution (That Actually Runs on Edge Devices)</title>
      <dc:creator>Rami Kronbi</dc:creator>
      <pubDate>Mon, 02 Feb 2026 00:13:53 +0000</pubDate>
      <link>https://forem.com/ramikronbi/seeing-in-the-dark-real-time-thermal-super-resolution-that-actually-runs-on-edge-devices-3nc7</link>
      <guid>https://forem.com/ramikronbi/seeing-in-the-dark-real-time-thermal-super-resolution-that-actually-runs-on-edge-devices-3nc7</guid>
<description>&lt;p&gt;Thermal cameras are practically a superpower. But there's a catch: unless you have $20,000 to drop on military-grade hardware, thermal vision looks like a blurry, low-res mess.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1jgq81qoigwhisailjk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1jgq81qoigwhisailjk.jpg" alt="Example output of a high-end thermal camera" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A few months ago, I was working on a drone project. The goal was simple: strap a thermal camera to a drone and detect objects in real time. It sounds like something out of a sci-fi movie: flying at night, spotting heat signatures, perfect situational awareness.&lt;/p&gt;

&lt;p&gt;However, the "objects" in question were just glowing blobs. A person looked like a smudge; a car looked like a slightly larger smudge. The resolution on affordable thermal sensors is horribly low. For a computer vision model trying to do object detection, this is a nightmare. You can't classify what you can't see.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I had two options:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Buy a high-resolution thermal camera (and live off noodles for the rest of my life).&lt;/li&gt;
&lt;li&gt;Fix the hardware limitations with software.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I decided to build a Deep Learning model to upscale these low-res thermal images into crisp, high-definition video in real time.&lt;/p&gt;




&lt;h2&gt;The Problem: Why Standard AI Failed Me&lt;/h2&gt;

&lt;p&gt;If you've messed around with image upscaling, you've probably heard of ESRGAN or similar "Super-Resolution" (SR) models. They are fantastic at taking a tiny JPEG and turning it into a 4K wallpaper.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzzlezgqqk9otopxe2it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzzlezgqqk9otopxe2it.png" alt="Thermal image showing problems with detection on thermal imagery" width="800" height="304"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, why not just use that?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;They are too slow. Most state-of-the-art super-resolution models are heavy, with millions of parameters. On a massive GPU, they might run at 5 or 7 FPS. That's fine for photos, but for a drone flying at 15 km/h? That latency is fatal. By the time the frame is processed, the drone has already crashed into the tree it didn't see.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Thermal is not RGB. Thermal images don't have "colors" in the traditional sense; they have temperature gradients. Standard models trained on ImageNet (cats, dogs, and cars) hallucinate textures that don't exist in heat maps. They try to add "fur" to a heat blob.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I needed an architecture that was lightweight, incredibly fast, and understood the physics of heat.&lt;/p&gt;




&lt;h2&gt;The Solution: Enter IMDN (and a lot of coffee)&lt;/h2&gt;

&lt;p&gt;I settled on an architecture called IMDN (Information Multi-Distillation Network).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfwrnif4je5po0hkg48v.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyfwrnif4je5po0hkg48v.webp" alt="Example IMDN performance, retrieved from original IMDN paper" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Without getting too bogged down in the math (the code is on GitHub if you want the nerdy details), the brilliance of IMDN is that it doesn't try to reconstruct the entire image at every single layer.&lt;/p&gt;

&lt;p&gt;Instead, it uses a "distillation" process. It extracts features, keeps what's useful, and passes the rest down the line. This drastically reduces the computational cost.&lt;/p&gt;

&lt;p&gt;What is also interesting about this model is that you can train it to upscale by whatever factor you want (2x, 3x, 4x, 5x, and so on). You aren't locked into a single fixed factor, which gives you the flexibility to balance resolution and speed exactly how your project needs it.&lt;/p&gt;

&lt;p&gt;Implementing the architecture was tricky. The hardest part was adapting the Information Distillation Blocks (IDB) to handle single-channel thermal data without losing the high-frequency details (the sharp edges where hot meets cold).&lt;/p&gt;
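
&lt;p&gt;To give a feel for what that distillation looks like in code, here is a minimal PyTorch sketch of an IMDN-style block: each convolution splits its output into a small slice that is kept ("distilled") and a larger slice that gets refined further, and the kept slices are fused at the end. The channel count, split ratio, and activation slope are illustrative defaults, not the exact values from my implementation.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch
import torch.nn as nn

class IMDBlock(nn.Module):
    """Sketch of an information multi-distillation block for 2D feature maps."""
    def __init__(self, channels=64, distill_ratio=0.25):
        super().__init__()
        self.d = int(channels * distill_ratio)   # slice kept ("distilled") at each step
        self.r = channels - self.d               # slice passed on for further refinement
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(self.r, channels, 3, padding=1)
        self.conv3 = nn.Conv2d(self.r, channels, 3, padding=1)
        self.conv4 = nn.Conv2d(self.r, self.d, 3, padding=1)
        self.act = nn.LeakyReLU(0.05)
        self.fuse = nn.Conv2d(self.d * 4, channels, 1)   # fuse the four distilled slices

    def forward(self, x):
        d1, r1 = torch.split(self.act(self.conv1(x)), [self.d, self.r], dim=1)
        d2, r2 = torch.split(self.act(self.conv2(r1)), [self.d, self.r], dim=1)
        d3, r3 = torch.split(self.act(self.conv3(r2)), [self.d, self.r], dim=1)
        d4 = self.act(self.conv4(r3))
        out = self.fuse(torch.cat([d1, d2, d3, d4], dim=1))
        return out + x   # residual connection keeps the low-frequency content intact

# A stack of these sits between a 1-channel head conv and a pixel-shuffle upsampler.
features = IMDBlock(64)(torch.randn(1, 64, 120, 160))
&lt;/code&gt;&lt;/pre&gt;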

&lt;p&gt;But the architecture was only half the battle.&lt;/p&gt;




&lt;h2&gt;The Secret Struggle: The Data Nightmare&lt;/h2&gt;

&lt;p&gt;In Deep Learning, everyone talks about the model, but the real war is won in the dataset.&lt;/p&gt;

&lt;p&gt;There is no "ImageNet for Thermal Super-Resolution" that you can just download and hit train. I had to get creative. I spent weeks pulling data from widely different sources, and manually curating a massive mixed dataset.&lt;/p&gt;

&lt;p&gt;This was the hardest part of the project. Thermal data is noisy and the resolutions vary wildly. I had to clean, normalize, and align thousands of images to create a "Ground Truth" that the model could actually learn from.&lt;/p&gt;
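
&lt;p&gt;As one concrete example of that curation, here is a sketch of the pairing step: once a high-res thermal frame is cleaned, crop it so the scale factor divides evenly and downsample it to produce the matching low-res input. The paths, the x3 factor, and the plain area-interpolation degradation are placeholders; the real pipeline needed far more cleanup and alignment than this.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import glob
import os

import cv2

SCALE = 3   # placeholder upscaling factor
os.makedirs("dataset/lr", exist_ok=True)

for path in glob.glob("dataset/hr/*.png"):
    hr = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = hr.shape
    hr = hr[: h - h % SCALE, : w - w % SCALE]        # crop so the scale divides evenly
    lr = cv2.resize(hr, (w // SCALE, h // SCALE), interpolation=cv2.INTER_AREA)
    cv2.imwrite(path, hr)                            # overwrite with the evenly-cropped HR frame
    cv2.imwrite(os.path.join("dataset/lr", os.path.basename(path)), lr)
&lt;/code&gt;&lt;/pre&gt;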

&lt;p&gt;I also used a transfer learning trick: leveraging weights from RGB domains and "teaching" them to interpret thermal gradients, which gave the model a head start on understanding edges and shapes.&lt;/p&gt;
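
&lt;p&gt;A minimal sketch of what that kind of weight surgery can look like in PyTorch: take the pretrained 3-channel first convolution and collapse it to a single thermal channel by averaging the RGB kernels, so the learned edge and shape filters carry over. The helper name and variables are illustrative, not from my codebase.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch
import torch.nn as nn

def adapt_first_conv(conv_rgb):
    """Turn a pretrained 3-channel conv into a 1-channel conv for thermal input."""
    conv_thermal = nn.Conv2d(
        in_channels=1,
        out_channels=conv_rgb.out_channels,
        kernel_size=conv_rgb.kernel_size,
        stride=conv_rgb.stride,
        padding=conv_rgb.padding,
        bias=conv_rgb.bias is not None,
    )
    with torch.no_grad():
        # Average the RGB kernels so the pretrained filters survive the channel change.
        conv_thermal.weight.copy_(conv_rgb.weight.mean(dim=1, keepdim=True))
        if conv_rgb.bias is not None:
            conv_thermal.bias.copy_(conv_rgb.bias)
    return conv_thermal
&lt;/code&gt;&lt;/pre&gt;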




&lt;h2&gt;The Results: Breaking the Real-Time Barrier&lt;/h2&gt;

&lt;p&gt;After weeks of training and tweaking the loss functions to prioritize thermal contrast, the results were… honestly, better than I expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwaa850uokig2xd4eruq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwaa850uokig2xd4eruq1.png" alt="x3 Enhanced image using my custom model" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;The Metrics&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;PSNR: 34.2 dB (peak signal-to-noise ratio; anything above 30 dB is generally considered excellent quality).&lt;/li&gt;
&lt;li&gt;SSIM: 0.840 (structural similarity; the upscaled image actually looks like the original scene, not a hallucination).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can see the difference immediately. The "blob" on the left becomes a distinct object with edges and shape on the right.&lt;/p&gt;
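
&lt;p&gt;If you want to compute the same metrics on your own image pairs, both are a couple of lines with scikit-image; the file names below are placeholders:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Both images are single-channel thermal frames of the same scene, same resolution.
ground_truth = cv2.imread("gt_thermal.png", cv2.IMREAD_GRAYSCALE)
upscaled = cv2.imread("sr_thermal.png", cv2.IMREAD_GRAYSCALE)

psnr = peak_signal_noise_ratio(ground_truth, upscaled, data_range=255)
ssim = structural_similarity(ground_truth, upscaled, data_range=255)
print(f"PSNR: {psnr:.1f} dB  SSIM: {ssim:.3f}")
&lt;/code&gt;&lt;/pre&gt;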

&lt;h3&gt;The Speed Test&lt;/h3&gt;

&lt;p&gt;This is where the IMDN architecture shines. On my laptop (RTX 3070), the model achieves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;~130 FPS at 2x scale&lt;/li&gt;
&lt;li&gt;~60 FPS at 4x scale&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is absurdly fast. That's not just "real-time"; that's "faster than the camera can record."&lt;/p&gt;
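
&lt;p&gt;For anyone who wants to sanity-check numbers like these on their own hardware, this is roughly how such a benchmark looks: warm up the GPU, then time repeated inference on a fixed low-res frame with CUDA synchronized around the timer. The model variable, frame size, and iteration counts are assumptions, not my exact benchmark script.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import time
import torch

device = torch.device("cuda")
model = model.to(device).eval()                       # trained SR network, assumed in scope
frame = torch.randn(1, 1, 120, 160, device=device)    # one low-res single-channel thermal frame

with torch.no_grad():
    for _ in range(20):            # warm-up so CUDA initialization doesn't skew the timing
        model(frame)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(200):
        model(frame)
    torch.cuda.synchronize()

fps = 200 / (time.perf_counter() - start)
print(f"{fps:.0f} FPS")
&lt;/code&gt;&lt;/pre&gt;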




&lt;h2&gt;The "Whoa" Moment: Edge Deployment&lt;/h2&gt;

&lt;p&gt;Here's the thing: the drone can't carry my laptop :) It did, however, carry an NVIDIA Jetson Orin.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9c85363fs4uvm1w51u1t.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9c85363fs4uvm1w51u1t.jpeg" alt="Drone application with thermal camera" width="646" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before delving into how it ran on the Jetson, it is important to note that "real time" means something different for thermal imagery than for RGB. A thermal camera has an acquisition rate of at best about 20 FPS, so running the model at 20–30 FPS counts as real time: you're already using all the bandwidth the camera can deliver.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To achieve 20–30 FPS on the Jetson, I made the following tweaks:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The pipeline was implemented in C++.&lt;/li&gt;
&lt;li&gt;The model was converted to TensorRT (retaining roughly 97% of the original accuracy).&lt;/li&gt;
&lt;li&gt;Inference was multithreaded, with some additional optimizations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;30 FPS on an edge device is the holy grail. It means you can run this super-resolution model inline with your object detection model. The drone sees the low-res thermal frame, upscales it to HD, and detects the object, all in less than 20 milliseconds.&lt;/p&gt;
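
&lt;p&gt;For the TensorRT conversion step, a common route is exporting the trained model to ONNX and then building the engine with trtexec on the Jetson. The sketch below assumes that path; the file names, input size, and opset are placeholders rather than the exact settings I used.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import torch

model = model.cpu().eval()              # trained SR network, assumed in scope
dummy = torch.randn(1, 1, 120, 160)     # one low-res single-channel thermal frame
torch.onnx.export(
    model, dummy, "thermal_sr.onnx",
    input_names=["lr_frame"], output_names=["sr_frame"],
    opset_version=17,
)

# Then, on the Jetson, build the engine (FP16 gives a large speedup on Orin):
#   trtexec --onnx=thermal_sr.onnx --saveEngine=thermal_sr.engine --fp16
&lt;/code&gt;&lt;/pre&gt;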




&lt;h2&gt;Why This Matters&lt;/h2&gt;

&lt;p&gt;This isn't just about making cooler-looking images. This is about accessibility.&lt;/p&gt;

&lt;p&gt;High-resolution thermal cameras cost a fortune. By using efficient AI, we can take a cheap, low-res sensor and simulate the performance of a sensor that costs 10x as much.&lt;/p&gt;

&lt;p&gt;For search and rescue drones, autonomous vehicles driving at night, or industrial monitoring, this is a game changer. We can finally have high-fidelity thermal vision without the high-fidelity price tag.&lt;/p&gt;




&lt;h2&gt;Attribution&lt;/h2&gt;

&lt;p&gt;I have open-sourced the code for this; a bit of attribution would be nice :)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Portfolio: &lt;a href="https://ramikronbi.com" rel="noopener noreferrer"&gt;Rami Kronbi&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;LinkedIn: &lt;a href="https://linkedin.com/in/rami-kronbi" rel="noopener noreferrer"&gt;Rami Kronbi&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/Kronbii" rel="noopener noreferrer"&gt;Kronbii&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Source code: &lt;a href="https://github.com/Kronbii/thermal-super-resolution" rel="noopener noreferrer"&gt;Thermal Super Resolution&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>deeplearning</category>
      <category>iot</category>
      <category>machinelearning</category>
      <category>performance</category>
    </item>
    <item>
      <title>AI Should Serve Society - Not Just Industry and Billionaires</title>
      <dc:creator>Rami Kronbi</dc:creator>
      <pubDate>Fri, 09 Jan 2026 22:16:28 +0000</pubDate>
      <link>https://forem.com/ramikronbi/ai-should-serve-society-not-just-industry-and-billionaires-37c9</link>
      <guid>https://forem.com/ramikronbi/ai-should-serve-society-not-just-industry-and-billionaires-37c9</guid>
      <description>&lt;p&gt;&lt;em&gt;Build with purpose. Others will follow.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4px1q22zk4pwcj21cvj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4px1q22zk4pwcj21cvj.png" alt="Ai and Power" width="735" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI is moving fast. It is moving faster than our laws, faster than our ethics, and often, faster than our collective sense of responsibility.&lt;/p&gt;

&lt;p&gt;The real question isn’t how powerful AI can become.&lt;/p&gt;

&lt;p&gt;The question that keeps me up at night is &lt;strong&gt;&lt;em&gt;who does it actually serve?&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Right now, the vast majority of AI innovation is optimized for three things: &lt;strong&gt;scale, profit, and market dominance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And look — that’s not inherently wrong. Businesses need to grow.&lt;br&gt;&lt;br&gt;
But it is incomplete.&lt;/p&gt;

&lt;p&gt;When AI serves only industry titans and billionaires, we miss out on its most profound potential: &lt;strong&gt;the ability to uplift society at large.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;Technology Is Never Neutral&lt;/h2&gt;

&lt;p&gt;We have to stop pretending that algorithms are objective.&lt;/p&gt;

&lt;p&gt;Every model we train, every dataset we curate, and every deployment strategy we choose reflects a human choice.&lt;/p&gt;

&lt;p&gt;We are constantly answering silent questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;What problem are we solving?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Who actually benefits from this solution?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;And who gets left behind?&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When we design purely for efficiency or revenue, technology naturally gravitates toward the people who already have money and power. It follows the path of least resistance.&lt;/p&gt;

&lt;p&gt;But society’s hardest problems — &lt;strong&gt;accessibility, safety, healthcare gaps, educational inequality, and the climate crisis&lt;/strong&gt; — rarely sit at the top of a revenue roadmap.&lt;/p&gt;

&lt;p&gt;That is exactly why leadership matters.&lt;/p&gt;




&lt;h2&gt;Using AI for Society Is a Choice&lt;/h2&gt;

&lt;p&gt;Building AI for social good doesn’t mean abandoning technical excellence or innovation. It means redirecting that brilliance with intention.&lt;/p&gt;

&lt;p&gt;True leadership in this space — whether you are an engineer, a researcher, or a founder — isn’t about who builds the biggest model.&lt;/p&gt;

&lt;p&gt;It’s about choosing problems that matter, even if they don’t scale immediately.&lt;/p&gt;

&lt;p&gt;It’s about designing systems people can understand and trust, rather than black boxes that alienate them.&lt;/p&gt;

&lt;p&gt;It’s about measuring success by the impact you leave on a community, not just the valuation you raise in a seed round.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yd4rqnib2qg0gc0eqil.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3yd4rqnib2qg0gc0eqil.jpeg" alt="Human Centered AI Illustration" width="735" height="443"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;How We Actually Do This&lt;/h2&gt;

&lt;p&gt;So how do we move this from an abstract ideal to reality?&lt;/p&gt;

&lt;p&gt;It starts by getting out of the bubble — building from real human pain points, not tech-first ideas looking for a problem.&lt;/p&gt;

&lt;p&gt;It means collaborating outside the tech echo chamber: sitting down with educators, doctors, and community leaders who understand the nuance of the problems we’re trying to solve.&lt;/p&gt;

&lt;p&gt;It means designing for constraints and accessibility, not just for the ideal user with the fastest internet connection.&lt;/p&gt;

&lt;p&gt;Sometimes, the most revolutionary thing you can do is ship a smaller, focused solution that solves one real problem incredibly well.&lt;/p&gt;




&lt;h2&gt;Building Where Nothing Existed: A Personal Example&lt;/h2&gt;

&lt;p&gt;My team and I experienced this firsthand when we built &lt;strong&gt;OmniSign&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We saw a massive gap in accessibility for the Deaf community in Lebanon — there was no real-time tool to bridge the communication barrier.&lt;/p&gt;

&lt;p&gt;But when we started, we hit a wall. There wasn’t even a dataset for Lebanese Sign Language.&lt;/p&gt;

&lt;p&gt;The resources didn’t exist because the market hadn’t deemed it “profitable enough” to build them.&lt;/p&gt;

&lt;p&gt;We could have stopped there.&lt;/p&gt;

&lt;p&gt;Instead, we realized that if we wanted AI to serve this community, we had to do the heavy lifting ourselves.&lt;/p&gt;

&lt;p&gt;We built the dataset from scratch and developed the model to translate Lebanese Sign Language in real time.&lt;/p&gt;

&lt;p&gt;We didn’t wait for permission. We didn’t wait for big tech.&lt;br&gt;&lt;br&gt;
We built it because it was necessary.&lt;/p&gt;

&lt;p&gt;For more insight, visit the &lt;a href="https://laythayache.com/projects/omnisign" rel="noopener noreferrer"&gt;official project website&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bfwoq72mc15shsuvc0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bfwoq72mc15shsuvc0r.png" alt="Real-time Sign Language Translation" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;The Kind of AI We Should Be Proud Of&lt;/h2&gt;

&lt;p&gt;The AI worth building isn’t just faster — it’s safer.&lt;/p&gt;

&lt;p&gt;It doesn’t blindly replace people. It empowers them.&lt;/p&gt;

&lt;p&gt;It reaches those usually written off as “not the target market.”&lt;/p&gt;

&lt;p&gt;This isn’t about rejecting industry. It’s about expanding our definition of responsibility.&lt;/p&gt;




&lt;h2&gt;Final Thought&lt;/h2&gt;

&lt;p&gt;AI is going to shape society whether we intend it to or not.&lt;/p&gt;

&lt;p&gt;The difference between a future of exploitation and one of empowerment is &lt;strong&gt;who leads the conversation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you are building AI today, you are already shaping that future.&lt;/p&gt;

&lt;p&gt;The real power move isn’t optimizing for the top 1%.&lt;br&gt;&lt;br&gt;
It’s choosing to build for the rest of us.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Build with purpose. Others will follow.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>showdev</category>
      <category>software</category>
    </item>
  </channel>
</rss>
