<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Pascal Clément</title>
    <description>The latest articles on Forem by Pascal Clément (@umbrincraft).</description>
    <link>https://forem.com/umbrincraft</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3817884%2Fb40e0ed9-f86c-4d8c-be2a-5db003d17be4.jpg</url>
      <title>Forem: Pascal Clément</title>
      <link>https://forem.com/umbrincraft</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/umbrincraft"/>
    <language>en</language>
    <item>
      <title>How I built a fully automated Asian tech news pipeline with AI</title>
      <dc:creator>Pascal Clément</dc:creator>
      <pubDate>Wed, 01 Apr 2026 08:13:56 +0000</pubDate>
      <link>https://forem.com/umbrincraft/-how-i-built-a-fully-automated-asian-tech-news-pipeline-with-ai-4oe3</link>
      <guid>https://forem.com/umbrincraft/-how-i-built-a-fully-automated-asian-tech-news-pipeline-with-ai-4oe3</guid>
      <description>&lt;p&gt;Most Western developers have no idea what's happening in Asian tech. Not because nothing's happening — quite the opposite. The problem is the language barrier.&lt;/p&gt;

&lt;p&gt;So I built AsiafeedTech: a fully automated pipeline that ingests, translates, scores, and turns Asian tech news into YouTube Shorts — without any human intervention.&lt;/p&gt;

&lt;p&gt;Here's how it works:&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pipeline
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. RSS Ingestion&lt;/strong&gt;&lt;br&gt;
25+ sources from China, Japan, Korea and SEA. Runs 3x daily via Spring Boot scheduler.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. AI Translation &amp;amp; Filtering&lt;/strong&gt;&lt;br&gt;
Each article goes through Gemini with a carefully crafted prompt that translates, categorises, detects sponsored content, and filters non-tech articles in one call.&lt;/p&gt;
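
&lt;p&gt;A sketch of what consuming that single structured response might look like (the JSON field names here are illustrative assumptions, not the actual schema):&lt;/p&gt;

```python
import json

# Hypothetical parse step for the single Gemini call: one JSON response
# carrying the translation, category, sponsored flag and tech-relevance flag.
def parse_article_response(raw):
    data = json.loads(raw)
    if data["is_sponsored"] or not data["is_tech"]:
        return None  # filtered out before it ever reaches the site
    return {"title_en": data["title_en"], "category": data["category"]}

raw = (
    '{"title_en": "Toyota tests solid-state battery", '
    '"category": "EV", "is_sponsored": false, "is_tech": true}'
)
article = parse_article_response(raw)
```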

&lt;p&gt;&lt;strong&gt;3. Viral Scoring&lt;/strong&gt;&lt;br&gt;
Gemini scores each article 1-10. We add category bonuses (AI +3, EV +2, Robotics +2) to surface the most relevant stories.&lt;/p&gt;
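
&lt;p&gt;The weighting step reduces to a few lines; the bonus table mirrors the numbers above, while the cap and field names are illustrative assumptions:&lt;/p&gt;

```python
# Hypothetical sketch of the weighting step: Gemini returns a 1-10 score,
# then a category bonus is added before ranking (AI +3, EV +2, Robotics +2).
CATEGORY_BONUS = {"AI": 3, "EV": 2, "ROBOTICS": 2}

def final_score(ai_score, category):
    bonus = CATEGORY_BONUS.get(category.upper(), 0)
    return min(ai_score + bonus, 13)  # 10 plus the largest bonus

articles = [
    {"title": "New humanoid robot", "score": 7, "category": "Robotics"},
    {"title": "Chip fab rumor", "score": 8, "category": "Hardware"},
]
ranked = sorted(
    articles,
    key=lambda a: final_score(a["score"], a["category"]),
    reverse=True,
)
```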

&lt;p&gt;&lt;strong&gt;4. YouTube Shorts Generation&lt;/strong&gt;&lt;br&gt;
Top articles → Gemini script → Gemini TTS → Imagen 4.0 images → FFmpeg assembly → Whisper captions → YouTube upload.&lt;/p&gt;

&lt;p&gt;The hardest part was getting consistent quality at every step. Each AI call needs precise prompting, output validation, and fallback logic.&lt;/p&gt;
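
&lt;p&gt;The validate-retry-fallback pattern could be sketched like this (the retry count and helper names are assumptions, not the production code):&lt;/p&gt;

```python
# Hypothetical retry/fallback wrapper: validate each AI response, retry a
# few times, then fall back to a safe default instead of failing the run.
def call_with_fallback(call, validate, fallback, retries=3):
    for _ in range(retries):
        try:
            result = call()
            if validate(result):
                return result
        except Exception:
            pass  # treat API errors like invalid output and retry
    return fallback

# Toy model call that fails twice before returning valid JSON.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    return "" if attempts["n"] in (1, 2) else '{"title": "ok"}'

result = call_with_fallback(flaky_call, lambda r: r.startswith("{"), fallback="{}")
```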

&lt;p&gt;Full writeup coming soon. Live at: &lt;a href="https://asiafeedtech.com" rel="noopener noreferrer"&gt;https://asiafeedtech.com&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>workflow</category>
      <category>automated</category>
    </item>
    <item>
      <title>We just upgraded the AsiaFeedTech content engine ⚡️</title>
      <dc:creator>Pascal Clément</dc:creator>
      <pubDate>Mon, 30 Mar 2026 13:41:51 +0000</pubDate>
      <link>https://forem.com/umbrincraft/we-just-upgraded-the-asiafeedtech-content-engine-3j96</link>
      <guid>https://forem.com/umbrincraft/we-just-upgraded-the-asiafeedtech-content-engine-3j96</guid>
      <description>&lt;p&gt;Here’s what’s new:&lt;/p&gt;

&lt;p&gt;✅ Semantic deduplication (no more repeated news)&lt;br&gt;
🎙️ Random AI voices (Kore / Charon)&lt;br&gt;
🏷️ AI-generated tags&lt;br&gt;
📺 Dynamic video length&lt;br&gt;
🟥 ASS captions with highlight&lt;br&gt;
🔔 Auto outro with subscribe&lt;/p&gt;
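
&lt;p&gt;The semantic deduplication can be sketched as "drop anything too close to an already accepted embedding"; the 0.9 threshold and the toy 2-d vectors below are assumptions:&lt;/p&gt;

```python
import math

# Hypothetical semantic deduplication: keep a story only if its embedding
# is not too similar (cosine) to anything already accepted.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def dedupe(embedded_items, threshold=0.9):
    kept = []
    for title, vec in embedded_items:
        best = max((cosine(vec, kv) for _, kv in kept), default=0.0)
        if best >= threshold:
            continue  # near-duplicate of an accepted story
        kept.append((title, vec))
    return [title for title, _ in kept]

items = [
    ("Sony posts record earnings", [1.0, 0.0]),
    ("Sony reports record quarterly results", [0.98, 0.05]),
    ("New EV battery unveiled", [0.0, 1.0]),
]
unique = dedupe(items)
```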

&lt;p&gt;Cleaner. Smarter. More engaging.&lt;/p&gt;

&lt;p&gt;Visit us at &lt;a href="https://www.asiafeedtech.com" rel="noopener noreferrer"&gt;https://www.asiafeedtech.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a1eolvgp0qyhp8n2biq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a1eolvgp0qyhp8n2biq.png" alt=" " width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>contentcreation</category>
      <category>tech</category>
    </item>
    <item>
      <title>Caption &amp; Rendering Engine Upgrade</title>
      <dc:creator>Pascal Clément</dc:creator>
      <pubDate>Sat, 28 Mar 2026 17:12:43 +0000</pubDate>
      <link>https://forem.com/umbrincraft/caption-rendering-engine-upgrade-3aah</link>
      <guid>https://forem.com/umbrincraft/caption-rendering-engine-upgrade-3aah</guid>
      <description>&lt;p&gt;Update: Caption &amp;amp; Rendering Engine Upgrade&lt;/p&gt;

&lt;p&gt;We improved the video pipeline with a focus on stability and readability:&lt;/p&gt;

&lt;p&gt;Removed heavy zoom effects → lower memory usage&lt;br&gt;
Optimized FFmpeg pipeline → fewer crashes (OOM fixes)&lt;br&gt;
Introduced ASS-based captions → precise styling control&lt;br&gt;
Improved contrast + font rendering for mobile&lt;/p&gt;

&lt;p&gt;Result:&lt;br&gt;
More stable rendering + significantly better readability.&lt;/p&gt;

&lt;p&gt;New videos start in: 8h&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://youtube.com/AsiaFeedTech/shorts" rel="noopener noreferrer"&gt;https://youtube.com/AsiaFeedTech/shorts&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devlog</category>
      <category>discord</category>
      <category>ai</category>
      <category>workflow</category>
    </item>
    <item>
      <title>90% of Asian tech news never reaches the West.</title>
      <dc:creator>Pascal Clément</dc:creator>
      <pubDate>Thu, 26 Mar 2026 08:42:37 +0000</pubDate>
      <link>https://forem.com/umbrincraft/90-of-asian-tech-news-never-reaches-the-west-3k1f</link>
      <guid>https://forem.com/umbrincraft/90-of-asian-tech-news-never-reaches-the-west-3k1f</guid>
      <description>&lt;p&gt;90% of Asian tech news never reaches the West.&lt;br&gt;
I built an AI Workflow that changes that:&lt;/p&gt;

&lt;p&gt;📡 RSS from 25+ Asian sources&lt;br&gt;
🤖 Translate → Script → Voice → Video → YouTube&lt;br&gt;
Fully automated. Zero manual work.&lt;/p&gt;

&lt;p&gt;👉 asiafeedtech.com&lt;br&gt;
📺 youtube.com/@asiafeedtech&lt;/p&gt;

&lt;p&gt;#AI #AIWorkflow #TechNews #Asia #buildinpublic&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How we built a fully automated Asian tech news pipeline with AI</title>
      <dc:creator>Pascal Clément</dc:creator>
      <pubDate>Mon, 23 Mar 2026 19:28:20 +0000</pubDate>
      <link>https://forem.com/umbrincraft/-how-we-built-a-fully-automated-asian-tech-news-pipeline-with-ai-3ajd</link>
      <guid>https://forem.com/umbrincraft/-how-we-built-a-fully-automated-asian-tech-news-pipeline-with-ai-3ajd</guid>
      <description>

&lt;p&gt;Much of the world's most interesting tech news originates in Asia, and most of it never makes it to English-speaking audiences.&lt;/p&gt;

&lt;p&gt;We built AsiafeedTech to fix that.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;Every 8 hours, our system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ingests 25+ RSS feeds from China, Japan, Korea and Southeast Asia&lt;/li&gt;
&lt;li&gt;Translates and filters content using Gemini AI&lt;/li&gt;
&lt;li&gt;Scores each article for viral potential (AI score + category weighting)&lt;/li&gt;
&lt;li&gt;Surfaces only the best stories at asiafeedtech.com&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The video pipeline
&lt;/h2&gt;

&lt;p&gt;The top articles don't just become text posts — they become short-form videos, fully automated:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Gemini generates a YouTube script&lt;/li&gt;
&lt;li&gt;Gemini TTS produces the voiceover&lt;/li&gt;
&lt;li&gt;Pexels clips are sourced based on article context&lt;/li&gt;
&lt;li&gt;FFmpeg assembles the 9:16 video&lt;/li&gt;
&lt;li&gt;Whisper burns in captions&lt;/li&gt;
&lt;li&gt;Branding overlay is applied&lt;/li&gt;
&lt;li&gt;Upload to YouTube&lt;/li&gt;
&lt;/ol&gt;
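
&lt;p&gt;Step 4 might be invoked roughly like this; the flags, filter values, and filenames are illustrative assumptions, not the production command:&lt;/p&gt;

```python
# Hypothetical FFmpeg invocation for a 9:16 Short: scale and pad the visuals
# to 1080x1920, mux in the TTS voiceover, and stop at the shorter stream.
def build_ffmpeg_args(video_in, audio_in, out_path):
    return [
        "ffmpeg", "-y",
        "-i", video_in,
        "-i", audio_in,
        "-vf",
        "scale=1080:1920:force_original_aspect_ratio=decrease,"
        "pad=1080:1920:(ow-iw)/2:(oh-ih)/2",
        "-c:v", "libx264",
        "-c:a", "aac",
        "-shortest",
        out_path,
    ]

args = build_ffmpeg_args("clips.mp4", "voiceover.wav", "short.mp4")
# subprocess.run(args, check=True) would perform the actual render
```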

&lt;p&gt;The interesting engineering challenge: making a multi-step AI pipeline reliable, where quality compounds (or degrades) at each stage.&lt;/p&gt;

&lt;p&gt;We'll be writing more about the architecture soon.&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://asiafeedtech.com" rel="noopener noreferrer"&gt;https://asiafeedtech.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;#ai #buildinpublic #webdev #opensource&lt;/p&gt;

</description>
      <category>asiafeedtech</category>
      <category>ai</category>
      <category>workflow</category>
    </item>
    <item>
      <title>AsiafeedTech – Asian tech news translated to English daily</title>
      <dc:creator>Pascal Clément</dc:creator>
      <pubDate>Thu, 19 Mar 2026 11:54:16 +0000</pubDate>
      <link>https://forem.com/umbrincraft/asiafeedtech-asian-tech-news-translated-to-english-daily-1b74</link>
      <guid>https://forem.com/umbrincraft/asiafeedtech-asian-tech-news-translated-to-english-daily-1b74</guid>
      <description>&lt;p&gt;I got tired of missing out on tech innovations coming out of Asia simply because of the language barrier. So I built AsiafeedTech.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Aggregates 25+ RSS feeds from Chinese, Japanese and Korean tech publications&lt;/li&gt;
&lt;li&gt;Translates articles daily using Google Gemini 2.5 Flash&lt;/li&gt;
&lt;li&gt;Categorizes into AI, EV, Robotics, Startups, Hardware...&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; Angular 19 with SSR&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Backend:&lt;/strong&gt; Spring Boot 3.5 + PostgreSQL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI:&lt;/strong&gt; Gemini 2.5 Flash via REST API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure:&lt;/strong&gt; Docker&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Interesting challenges
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Parallel RSS ingestion with Spring's &lt;code&gt;@Async&lt;/code&gt; (5 threads)&lt;/li&gt;
&lt;li&gt;Prompt stored in DB – editable without redeployment&lt;/li&gt;
&lt;li&gt;"Less like this" feedback updates the AI prompt automatically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Live: &lt;a href="https://asiafeedtech.com" rel="noopener noreferrer"&gt;https://asiafeedtech.com&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kafka FinOps: How to Do Chargeback Reporting</title>
      <dc:creator>Pascal Clément</dc:creator>
      <pubDate>Wed, 11 Mar 2026 06:55:18 +0000</pubDate>
      <link>https://forem.com/umbrincraft/kafka-finops-how-to-do-chargeback-reporting-8g8</link>
      <guid>https://forem.com/umbrincraft/kafka-finops-how-to-do-chargeback-reporting-8g8</guid>
      <description>&lt;p&gt;If you run Kafka as shared infrastructure, you've probably faced this question at some point: &lt;strong&gt;who is responsible for this topic, and what does it cost us?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the core problem that Kafka FinOps tries to solve. In this post I'll explain what chargeback reporting means in a Kafka context, why it's hard, and how we implemented it in &lt;a href="https://partitionpilot.com" rel="noopener noreferrer"&gt;PartitionPilot&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Chargeback Reporting?
&lt;/h2&gt;

&lt;p&gt;Chargeback is a practice borrowed from cloud FinOps: instead of treating infrastructure costs as a single shared line item, you break them down by team, service, or product — and charge each one for what they actually use.&lt;/p&gt;

&lt;p&gt;In AWS or GCP this is relatively straightforward. Cloud providers give you cost allocation tags. But Kafka has no native cost model. It doesn't know about teams, budgets, or ownership.&lt;/p&gt;

&lt;p&gt;That's where chargeback reporting for Kafka comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Two Cost Drivers in Kafka
&lt;/h2&gt;

&lt;p&gt;Before you can do chargeback, you need to understand what actually costs money in Kafka:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storage&lt;/strong&gt; — every message written to a topic is stored on disk until it expires (based on retention settings). A topic with a 7-day retention and high throughput can consume hundreds of gigabytes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traffic&lt;/strong&gt; — every byte written to (bytes-in) and read from (bytes-out) a topic generates network traffic. On AWS MSK or Confluent Cloud, this traffic is billed directly.&lt;/p&gt;

&lt;p&gt;Both can be measured via Prometheus JMX metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;kafka.log:type=Log,name=Size&lt;/code&gt; → storage per topic-partition&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec&lt;/code&gt; → inbound traffic per topic&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec&lt;/code&gt; → outbound traffic per topic&lt;/li&gt;
&lt;/ul&gt;
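
&lt;p&gt;As a sketch, a per-topic view of the traffic metric is a single PromQL query summed across brokers; the exporter-renamed metric name below is an assumption that depends on your JMX-exporter rules:&lt;/p&gt;

```python
import urllib.parse

# Hypothetical per-topic traffic query: sum the bytes-in rate across all
# brokers. The metric name depends on your JMX-exporter renaming rules.
PROMQL = "sum by (topic) (rate(kafka_server_brokertopicmetrics_bytesin_total[5m]))"

def query_url(base, promql):
    return base + "/api/v1/query?" + urllib.parse.urlencode({"query": promql})

url = query_url("http://prometheus:9090", PROMQL)
```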

&lt;h2&gt;
  
  
  The Missing Piece: Ownership
&lt;/h2&gt;

&lt;p&gt;Metrics alone aren't enough for chargeback. You also need to know &lt;strong&gt;who owns each topic&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In most Kafka deployments, topic ownership is tribal knowledge. It lives in someone's head, in a Confluence page that's three years out of date, or nowhere at all.&lt;/p&gt;

&lt;p&gt;For chargeback to work, you need a system that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Tracks which team or person owns each topic&lt;/li&gt;
&lt;li&gt;Links cost metrics to that ownership&lt;/li&gt;
&lt;li&gt;Produces a report that finance or engineering management can actually use&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How PartitionPilot Implements This
&lt;/h2&gt;

&lt;p&gt;PartitionPilot connects to your Prometheus endpoint and takes periodic cost snapshots. Each snapshot captures storage and traffic per topic, stamped with a timestamp.&lt;/p&gt;

&lt;p&gt;On top of that, it lets you assign an owner to each topic and consumer group. Ownership is stored in a PostgreSQL database alongside the cost data.&lt;/p&gt;

&lt;p&gt;The result: a chargeback report in CSV format that looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Owner       | Topic                  | Storage (GB) | Traffic In (GB) | Traffic Out (GB) | Estimated Cost
------------+------------------------+--------------+-----------------+------------------+---------------
Team A      | orders.v2              | 12.4         | 45.2            | 180.8            | CHF 23.40
Team B      | user-events            | 8.1          | 120.3           | 360.9            | CHF 41.20
Team C      | analytics.raw          | 95.2         | 890.1           | 2670.3           | CHF 312.80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This report can be exported and shared with engineering managers or finance teams on a monthly basis.&lt;/p&gt;
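
&lt;p&gt;Generating that per-owner CSV is mostly a group-by; a minimal sketch with made-up rows (the column names follow the table above):&lt;/p&gt;

```python
import csv
import io
from collections import defaultdict

# Hypothetical chargeback export: sum estimated cost per owner, write CSV.
rows = [
    {"owner": "Team A", "topic": "orders.v2", "cost": 23.40},
    {"owner": "Team C", "topic": "analytics.raw", "cost": 312.80},
    {"owner": "Team A", "topic": "orders.v3", "cost": 10.00},
]

per_owner = defaultdict(float)
for r in rows:
    per_owner[r["owner"]] += r["cost"]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["owner", "estimated_cost"])
for owner in sorted(per_owner):
    writer.writerow([owner, f"{per_owner[owner]:.2f}"])
report = buf.getvalue()
```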

&lt;h2&gt;
  
  
  Why This Is Harder Than It Sounds
&lt;/h2&gt;

&lt;p&gt;A few things make Kafka chargeback tricky in practice:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Topics are shared.&lt;/strong&gt; A single topic can be written to by one team and consumed by three others. Who pays for the outbound traffic — the producer or the consumers? There's no universal answer. PartitionPilot lets you assign separate ownership for producer and consumer sides.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retention makes storage non-obvious.&lt;/strong&gt; The cost of a topic depends not just on throughput, but on retention settings. A low-traffic topic with 30-day retention can cost more than a high-traffic topic with 1-hour retention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Metrics need aggregation.&lt;/strong&gt; Raw Prometheus metrics are per-broker, per-partition. You need to aggregate them per topic across all brokers to get meaningful numbers.&lt;/p&gt;
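
&lt;p&gt;A minimal sketch of that aggregation, assuming one size sample per broker and partition (replicas included, since each replica occupies disk):&lt;/p&gt;

```python
from collections import defaultdict

# Hypothetical aggregation: kafka.log Size arrives per broker and partition;
# sum across both to get one storage figure per topic.
def storage_per_topic(samples):
    totals = defaultdict(int)
    for s in samples:
        totals[s["topic"]] += s["size_bytes"]
    return dict(totals)

samples = [
    {"broker": "b1", "topic": "orders.v2", "partition": 0, "size_bytes": 4000},
    {"broker": "b2", "topic": "orders.v2", "partition": 0, "size_bytes": 4000},
    {"broker": "b1", "topic": "orders.v2", "partition": 1, "size_bytes": 2000},
    {"broker": "b1", "topic": "user-events", "partition": 0, "size_bytes": 9000},
]
totals = storage_per_topic(samples)
```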

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;PartitionPilot is self-hosted via Docker Compose. You can start a free 30-day trial at &lt;a href="https://partitionpilot.com" rel="noopener noreferrer"&gt;partitionpilot.com&lt;/a&gt; — no credit card required.&lt;/p&gt;

&lt;p&gt;If your team is running Kafka as shared infrastructure and you want to start doing proper cost allocation, give it a try.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Pascal Clément — founder of PartitionPilot&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>finops</category>
      <category>devops</category>
      <category>cloudcost</category>
    </item>
    <item>
      <title>How I Built Kafka Cost Tracking with Prometheus JMX</title>
      <dc:creator>Pascal Clément</dc:creator>
      <pubDate>Wed, 11 Mar 2026 06:24:10 +0000</pubDate>
      <link>https://forem.com/umbrincraft/how-i-built-kafka-cost-tracking-with-prometheus-jmx-354i</link>
      <guid>https://forem.com/umbrincraft/how-i-built-kafka-cost-tracking-with-prometheus-jmx-354i</guid>
      <description>&lt;p&gt;When you run Kafka at scale, you quickly realize that not all topics are created equal. Some topics consume gigabytes of storage. Others generate massive traffic. But without tooling, you have no idea which ones — or which team owns them.&lt;/p&gt;

&lt;p&gt;This is the problem I ran into while building &lt;a href="https://partitionpilot.com" rel="noopener noreferrer"&gt;PartitionPilot&lt;/a&gt;, a Kafka cost management platform. In this post I'll explain how we track storage and traffic costs per topic using Prometheus JMX metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Kafka Has No Native Cost View
&lt;/h2&gt;

&lt;p&gt;Kafka gives you offsets, consumer lag, and partition counts. What it doesn't give you is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How much storage each topic is using&lt;/li&gt;
&lt;li&gt;How much traffic (bytes in/out) each topic generates&lt;/li&gt;
&lt;li&gt;Which team or service owns a topic&lt;/li&gt;
&lt;li&gt;What that costs you per month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For small clusters this doesn't matter much. But once you have dozens of teams and hundreds of topics, the question "who is responsible for this 500GB topic?" becomes very real.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: Prometheus JMX Metrics
&lt;/h2&gt;

&lt;p&gt;Kafka exposes JMX metrics that can be scraped by Prometheus. The two most useful metrics for cost tracking are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;kafka.log:type=Log,name=Size&lt;/code&gt;&lt;/strong&gt; — the size in bytes of each topic-partition log on disk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;kafka.server:type=BrokerTopicMetrics,name=BytesInPerSec&lt;/code&gt;&lt;/strong&gt; and &lt;strong&gt;&lt;code&gt;BytesOutPerSec&lt;/code&gt;&lt;/strong&gt; — the traffic rate per topic.&lt;/p&gt;

&lt;p&gt;With these two metrics you can calculate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Storage cost&lt;/strong&gt;: bytes stored × your storage price per GB&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Traffic cost&lt;/strong&gt;: bytes transferred × your network price per GB&lt;/li&gt;
&lt;/ul&gt;
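
&lt;p&gt;In code, the two formulas reduce to a few lines; the unit prices below are placeholders, not real rates:&lt;/p&gt;

```python
GB = 1024 ** 3

# Placeholder unit prices -- substitute your provider's real rates.
STORAGE_PRICE_PER_GB = 0.10  # per GB-month
NETWORK_PRICE_PER_GB = 0.05  # per GB transferred

def topic_cost(stored_bytes, bytes_in, bytes_out):
    storage = stored_bytes / GB * STORAGE_PRICE_PER_GB
    traffic = (bytes_in + bytes_out) / GB * NETWORK_PRICE_PER_GB
    return round(storage + traffic, 2)

cost = topic_cost(stored_bytes=50 * GB, bytes_in=100 * GB, bytes_out=300 * GB)
```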

&lt;h2&gt;
  
  
  How PartitionPilot Uses These Metrics
&lt;/h2&gt;

&lt;p&gt;PartitionPilot connects to your Prometheus endpoint (the one scraping Kafka JMX) and takes periodic snapshots. Each snapshot captures the current storage and traffic rates per topic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Kafka JMX → Prometheus → PartitionPilot → Cost snapshot (PostgreSQL)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From these snapshots we calculate a rolling cost per topic, per day, per month. The result is a dashboard where you can see at a glance which topics are your "top talkers" — the ones driving most of your Kafka bill.&lt;/p&gt;
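
&lt;p&gt;A rough sketch of turning snapshots into a daily storage figure; the simple averaging and the 30-day month are simplifying assumptions:&lt;/p&gt;

```python
GB = 1024 ** 3
MONTHLY_PRICE_PER_GB = 0.10  # placeholder rate

# Hypothetical rolling cost: average a day's storage snapshots, then price
# them at a daily fraction of the monthly per-GB rate.
def daily_storage_cost(snapshot_bytes):
    avg_bytes = sum(snapshot_bytes) / len(snapshot_bytes)
    return round(avg_bytes / GB * MONTHLY_PRICE_PER_GB / 30, 4)

# Three snapshots of one topic, taken 8 hours apart.
cost = daily_storage_cost([90 * GB, 100 * GB, 110 * GB])
```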

&lt;h2&gt;
  
  
  Assigning Ownership
&lt;/h2&gt;

&lt;p&gt;Cost numbers alone aren't enough. You also need to know who owns each topic.&lt;/p&gt;

&lt;p&gt;PartitionPilot lets you assign an owner (a person or team) to each topic and consumer group. Once ownership is assigned, you can generate a chargeback report: a CSV that breaks down Kafka costs by owner, ready to share with engineering managers or finance teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exporting Metrics via /metrics
&lt;/h2&gt;

&lt;p&gt;PartitionPilot also exposes a &lt;code&gt;/metrics&lt;/code&gt; endpoint in Prometheus format. This means you can scrape PartitionPilot itself from your existing Prometheus setup and build Grafana dashboards on top of the cost and ownership data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Works with Apache Kafka, AWS MSK, and Confluent
&lt;/h2&gt;

&lt;p&gt;Any Kafka distribution that exposes JMX metrics via Prometheus is compatible. This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Apache Kafka&lt;/strong&gt; (self-hosted)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS MSK&lt;/strong&gt; (Managed Streaming for Apache Kafka)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Confluent Platform&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No agents or Kafka plugins are required — just a Prometheus scrape URL.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;PartitionPilot is self-hosted via Docker Compose. You can start a free 30-day trial at &lt;a href="https://partitionpilot.com" rel="noopener noreferrer"&gt;partitionpilot.com&lt;/a&gt; — no credit card required.&lt;/p&gt;

&lt;p&gt;If you're running Kafka and want to understand what it actually costs, give it a try. I'd love to hear your feedback.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Pascal Clément — founder of PartitionPilot&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kafka</category>
      <category>devops</category>
      <category>finops</category>
      <category>prometheus</category>
    </item>
  </channel>
</rss>
