<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Paulo Victor Leite Lima Gomes</title>
    <description>The latest articles on Forem by Paulo Victor Leite Lima Gomes (@pvgomes).</description>
    <link>https://forem.com/pvgomes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F109646%2F27accb17-594d-4776-b421-db7cca109bfe.jpg</url>
      <title>Forem: Paulo Victor Leite Lima Gomes</title>
      <link>https://forem.com/pvgomes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/pvgomes"/>
    <language>en</language>
    <item>
      <title>server-side sharded watch is Kubernetes admitting the control plane has a data-scale problem</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Fri, 08 May 2026 00:01:28 +0000</pubDate>
      <link>https://forem.com/pvgomes/server-side-sharded-watch-is-kubernetes-admitting-the-control-plane-has-a-data-scale-problem-38dc</link>
      <guid>https://forem.com/pvgomes/server-side-sharded-watch-is-kubernetes-admitting-the-control-plane-has-a-data-scale-problem-38dc</guid>
      <description>&lt;p&gt;Kubernetes scalability conversations usually start with the obvious stuff.&lt;/p&gt;

&lt;p&gt;How many nodes? How many pods? How large is the cluster? How much etcd pain can one organization spiritually endure before someone says “maybe we should split this thing” in a meeting and everyone pretends they were already thinking it?&lt;/p&gt;

&lt;p&gt;Fair questions.&lt;/p&gt;

&lt;p&gt;But I think &lt;a href="https://kubernetes.io/blog/2026/05/06/kubernetes-v1-36-server-side-sharded-list-and-watch/" rel="noopener noreferrer"&gt;Kubernetes v1.36 server-side sharded list and watch&lt;/a&gt; points at a quieter, more interesting problem:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;the control plane is becoming a data distribution system, and the hard part is no longer only storing objects. It is feeding every client that wants to continuously know what changed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That sounds boring. Good. Boring infrastructure primitives are where the real architecture leaks out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapx6f29x7ibau69ccxs4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapx6f29x7ibau69ccxs4.gif" alt="too many watches" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;watch was always the magic trick&lt;/h2&gt;

&lt;p&gt;One of the reasons Kubernetes feels so powerful is the watch model.&lt;/p&gt;

&lt;p&gt;Controllers do not constantly ask, “hey, did anything happen?” like an anxious intern refreshing a dashboard. They list the current state, then watch for changes. The API server becomes the coordination point for a lot of little loops trying to move reality toward desired state.&lt;/p&gt;
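&lt;p&gt;A minimal sketch of that list-then-watch loop, in plain Python. Real controllers use client-go informers against the API server; the in-memory "API server" below is purely illustrative:&lt;/p&gt;

```python
# Minimal sketch of the Kubernetes list-then-watch pattern.
# Illustrative only: a plain dict stands in for cluster state.

class FakeAPIServer:
    def __init__(self):
        self.resource_version = 0
        self.objects = {}   # name -> spec
        self.events = []    # (resourceVersion, type, name, spec)

    def apply(self, name, spec):
        self.resource_version += 1
        event_type = "MODIFIED" if name in self.objects else "ADDED"
        self.objects[name] = spec
        self.events.append((self.resource_version, event_type, name, spec))

    def list_objects(self):
        # A LIST returns a snapshot plus the resourceVersion
        # the client should resume watching from.
        return dict(self.objects), self.resource_version

    def watch(self, since_rv):
        # A WATCH replays only events newer than the snapshot.
        return [e for e in self.events if e[0] > since_rv]

api = FakeAPIServer()
api.apply("web", {"replicas": 2})
snapshot, rv = api.list_objects()   # controller builds its local cache
api.apply("web", {"replicas": 5})   # change lands after the snapshot
deltas = api.watch(since_rv=rv)     # only the change, not a full re-list
```

&lt;p&gt;The point: after one LIST, a client only pays for deltas. That is exactly why every tool wants a watch.&lt;/p&gt;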

&lt;p&gt;That model is elegant.&lt;/p&gt;

&lt;p&gt;It is also everywhere.&lt;/p&gt;

&lt;p&gt;Your deployment controller watches Deployments and ReplicaSets. Your autoscaler watches workloads and metrics-adjacent signals. Your policy engine watches resources. Your GitOps controller watches cluster state. Your service mesh watches endpoints and config. Your observability stack watches things. Your custom controllers watch things. Your shiny AI platform controller, written during a suspiciously optimistic sprint, also watches things.&lt;/p&gt;

&lt;p&gt;Eventually the cluster has fewer “users” than “watchers.”&lt;/p&gt;

&lt;p&gt;And that is the part people undercount.&lt;/p&gt;

&lt;p&gt;A mature Kubernetes environment is not just a pile of workloads. It is a pile of clients trying to keep a local mental model of the cluster.&lt;/p&gt;

&lt;h2&gt;the API server is not just an API anymore&lt;/h2&gt;

&lt;p&gt;We still call it the Kubernetes API server, which is technically correct, but incomplete.&lt;/p&gt;

&lt;p&gt;At small scale, it feels like an API. You send requests. You get responses. Nice.&lt;/p&gt;

&lt;p&gt;At serious scale, it behaves more like a shared event distribution system with strong consistency expectations, historical state, authorization checks, fan-out pressure, and a very opinionated data model sitting behind it.&lt;/p&gt;

&lt;p&gt;That is why server-side sharded list/watch matters.&lt;/p&gt;

&lt;p&gt;The simple version: instead of forcing a client to list or watch a large resource set through one giant stream of objects, the server can split the work into shards. Clients can consume partitions of the dataset, and the system can distribute load more intelligently.&lt;/p&gt;
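&lt;p&gt;As a rough mental model (the actual v1.36 mechanics may differ; this is a generic hash-partitioning sketch, not the real implementation):&lt;/p&gt;

```python
# Generic hash-sharding sketch of how a server might split one large
# list/watch stream into independent shards. Stable partitioning is
# the idea being shown, not Kubernetes' actual algorithm.
import hashlib

NUM_SHARDS = 4

def shard_for(object_key):
    # Stable hash so the same object always maps to the same shard,
    # across restarts and across clients.
    digest = hashlib.sha256(object_key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

keys = [f"kube-system/pod-{i}" for i in range(1000)]
shards = {}
for key in keys:
    shards.setdefault(shard_for(key), []).append(key)
# Each shard can now be listed or watched independently instead of
# draining the entire object stream through one connection.
```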

&lt;p&gt;The exact implementation details are less important than the admission behind the feature:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;large Kubernetes clusters have a data-scale problem at the watch layer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not a “Kubernetes is broken” problem. More like a “Kubernetes succeeded so hard that its coordination model is now carrying everybody’s automation habits” problem.&lt;/p&gt;

&lt;p&gt;That is a very different vibe.&lt;/p&gt;

&lt;h2&gt;controllers are cheap until they are not&lt;/h2&gt;

&lt;p&gt;The platform engineering era made controllers feel cheap.&lt;/p&gt;

&lt;p&gt;Need policy? Add a controller.&lt;br&gt;
Need sync? Add a controller.&lt;br&gt;
Need drift correction? Add a controller.&lt;br&gt;
Need to turn a YAML wish into some cloud-side reality? Controller.&lt;br&gt;
Need your internal platform to look declarative? Another controller.&lt;/p&gt;

&lt;p&gt;I like controllers. They are one of the best ideas in modern infrastructure. But they are not free.&lt;/p&gt;

&lt;p&gt;Every controller needs to observe. Every observer consumes API server capacity, cache memory, network bandwidth, authorization checks, and operational attention. A single controller can be harmless. A platform full of controllers becomes an ecosystem of little data subscribers.&lt;/p&gt;

&lt;p&gt;This is where the accounting gets fuzzy.&lt;/p&gt;

&lt;p&gt;Teams are usually pretty good at counting pods and nodes. They are less good at counting control-plane pressure created by automation.&lt;/p&gt;

&lt;p&gt;A GitOps tool might look like “just one more platform component.” A policy engine might look like “just one more safety layer.” An operator installed by a vendor might look like “just how the product works.”&lt;/p&gt;

&lt;p&gt;Individually, sure.&lt;/p&gt;

&lt;p&gt;Together, they become a read-amplification machine pointed at the API server.&lt;/p&gt;

&lt;h2&gt;AI agents will make this worse, obviously&lt;/h2&gt;

&lt;p&gt;I do not mean “obviously” as in panic. I mean it as in: look at the pattern.&lt;/p&gt;

&lt;p&gt;AI coding agents, platform agents, remediation bots, incident assistants, internal developer portals, MCP-style tools, and automation loops all want context. They want to inspect state. They want to understand what exists before acting. They want to subscribe to change, detect drift, summarize, explain, fix, and sometimes confidently invent a root cause because the logs looked lonely.&lt;/p&gt;

&lt;p&gt;Some of those systems will talk directly to Kubernetes. Some will talk through platform APIs. Some will sit behind gateways. But the demand shape is the same: more machine clients wanting fresher operational state.&lt;/p&gt;

&lt;p&gt;That means control-plane scalability becomes less about human kubectl usage and more about automated consumers.&lt;/p&gt;

&lt;p&gt;The future cluster is not busy because one engineer ran &lt;code&gt;kubectl get pods -A&lt;/code&gt; too many times.&lt;/p&gt;

&lt;p&gt;It is busy because fifty systems are trying to keep themselves synchronized with reality.&lt;/p&gt;

&lt;p&gt;Server-side sharded watch is the kind of primitive you need when “watching the world” becomes normal behavior.&lt;/p&gt;

&lt;h2&gt;this changes what platform teams should measure&lt;/h2&gt;

&lt;p&gt;If the control plane is a data product, platform teams need better product metrics for it.&lt;/p&gt;

&lt;p&gt;Not just cluster size. Not just API server CPU. Not just etcd latency after everything is already sad.&lt;/p&gt;

&lt;p&gt;I would want to know things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;which clients are opening the most watches?&lt;/li&gt;
&lt;li&gt;which resource types create the most list/watch pressure?&lt;/li&gt;
&lt;li&gt;which controllers reconnect too aggressively?&lt;/li&gt;
&lt;li&gt;how many clients are watching cluster-wide scopes when they only need a few namespaces?&lt;/li&gt;
&lt;li&gt;which internal tools repeatedly list everything because nobody designed a narrower contract?&lt;/li&gt;
&lt;li&gt;what happens to watch latency during deployments, outages, or large reconciliations?&lt;/li&gt;
&lt;li&gt;which teams are adding control-plane load as a hidden dependency of their product?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last one matters.&lt;/p&gt;

&lt;p&gt;A platform feature that adds ten pods is easy to reason about. A platform feature that adds ten high-cardinality watchers across multiple clusters is harder to see, but sometimes more important.&lt;/p&gt;

&lt;p&gt;This is the same kind of hidden tax we keep rediscovering in distributed systems: the expensive part is not always where the YAML makes noise.&lt;/p&gt;
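&lt;p&gt;Most of those questions can be approximated by aggregating audit-style request records per client. A hedged sketch, with invented field names rather than a real audit-log schema:&lt;/p&gt;

```python
# Sketch of turning audit-style records into per-client watch-pressure
# metrics. The fields ("user", "verb", "resource", "scope") are
# illustrative, not a real audit-log schema.
from collections import Counter

records = [
    {"user": "gitops-controller", "verb": "watch", "resource": "applications", "scope": "cluster"},
    {"user": "gitops-controller", "verb": "watch", "resource": "secrets",      "scope": "cluster"},
    {"user": "policy-engine",     "verb": "watch", "resource": "pods",         "scope": "cluster"},
    {"user": "team-a-operator",   "verb": "watch", "resource": "pods",         "scope": "namespace"},
    {"user": "dev-human",         "verb": "list",  "resource": "pods",         "scope": "namespace"},
]

# Who opens the most watches, and who watches cluster-wide
# when a namespace scope might do?
watches_per_client = Counter(r["user"] for r in records if r["verb"] == "watch")
broad_watchers = sorted({r["user"] for r in records
                         if r["verb"] == "watch" and r["scope"] == "cluster"})
```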

&lt;h2&gt;sharding is not a license to be lazy&lt;/h2&gt;

&lt;p&gt;There is a trap with scalability features.&lt;/p&gt;

&lt;p&gt;A system gets a better primitive, and everyone treats it as permission to continue the same behavior with slightly more confidence.&lt;/p&gt;

&lt;p&gt;That would be the wrong lesson here.&lt;/p&gt;

&lt;p&gt;Server-side sharded watch is useful because it gives Kubernetes a better way to serve large-scale clients. But it should also make platform teams more honest about their client design.&lt;/p&gt;

&lt;p&gt;If your controller only needs a subset of objects, do not watch the universe.&lt;br&gt;
If your tool can tolerate stale summaries, do not demand live cluster truth every second.&lt;br&gt;
If your platform API can provide a curated view, do not leak raw Kubernetes watches to every consumer.&lt;br&gt;
If your automation acts on changes, make sure it has backpressure, jitter, retry discipline, and boring failure behavior.&lt;/p&gt;
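&lt;p&gt;That last rule is the easiest one to skip and the cheapest one to do. A sketch of capped exponential backoff with full jitter, so a fleet of reconnecting watchers spreads out in time instead of stampeding the API server in lockstep:&lt;/p&gt;

```python
# Sketch of reconnect discipline for a watcher: capped exponential
# backoff with full jitter. After an API server blip, each client
# draws a random delay inside a growing window, so retries decorrelate.
import random

def backoff_delay(attempt, base=1.0, cap=60.0):
    # Exponential growth up to a cap, then a uniform draw inside
    # the window ("full jitter").
    window = min(cap, base * (2 ** attempt))
    return random.uniform(0, window)

delays = [backoff_delay(a) for a in range(8)]
```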

&lt;p&gt;Basically: do not turn the API server into Kafka because you were too busy to design an event contract.&lt;/p&gt;

&lt;p&gt;Kubernetes watch is a great primitive. It is not a substitute for thinking.&lt;/p&gt;

&lt;h2&gt;the real control-plane problem is social too&lt;/h2&gt;

&lt;p&gt;The awkward part is that this is not only technical.&lt;/p&gt;

&lt;p&gt;Control-plane pressure is often created by organizational boundaries.&lt;/p&gt;

&lt;p&gt;One team installs an operator. Another adds policy. Another adds observability. Another adds security scanning. Another adds an internal platform abstraction. Nobody owns the combined shape until the API server starts sweating.&lt;/p&gt;

&lt;p&gt;Then suddenly everyone discovers they are “just a client.”&lt;/p&gt;

&lt;p&gt;This is why mature platform engineering needs ownership over control-plane consumption, not only control-plane availability. The platform team should not merely keep Kubernetes alive. It should define what good citizenship looks like for clients that depend on Kubernetes state.&lt;/p&gt;

&lt;p&gt;That means documentation, defaults, metrics, review patterns, and sometimes saying no to tools that treat the API server like an infinite free database.&lt;/p&gt;

&lt;p&gt;Not because platform teams enjoy being annoying.&lt;/p&gt;

&lt;p&gt;Because shared control planes become tragedy-of-the-commons machines when every client optimizes locally.&lt;/p&gt;

&lt;h2&gt;my take&lt;/h2&gt;

&lt;p&gt;Server-side sharded list/watch is not the flashiest Kubernetes feature. It will not produce a thousand conference keynotes with lasers.&lt;/p&gt;

&lt;p&gt;But it is one of those features that reveals where the real system is going.&lt;/p&gt;

&lt;p&gt;Kubernetes is not just scheduling containers anymore. It is the coordination substrate for platforms, policies, agents, operators, and automation loops. That means the API server is not merely serving requests. It is distributing operational truth.&lt;/p&gt;

&lt;p&gt;And once operational truth has many subscribers, data-scale problems show up.&lt;/p&gt;

&lt;p&gt;So yes, sharded watch is a scalability feature.&lt;/p&gt;

&lt;p&gt;But it is also a warning label.&lt;/p&gt;

&lt;p&gt;If your platform keeps adding automation, controllers, agents, and “smart” tools, you are also adding readers of reality. Those readers have cost. They have failure modes. They have ownership questions.&lt;/p&gt;

&lt;p&gt;The cluster does not only run workloads.&lt;/p&gt;

&lt;p&gt;It runs everybody’s need to know what the workloads are doing.&lt;/p&gt;

&lt;p&gt;That may be the more interesting scalability problem now.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>platformengineering</category>
      <category>devops</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Your Terminal Is Becoming a Governed AI Runtime</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Thu, 07 May 2026 00:03:59 +0000</pubDate>
      <link>https://forem.com/pvgomes/your-terminal-is-becoming-a-governed-ai-runtime-1md</link>
      <guid>https://forem.com/pvgomes/your-terminal-is-becoming-a-governed-ai-runtime-1md</guid>
      <description>&lt;p&gt;There was a time when the terminal felt like the last private corner of software development.&lt;/p&gt;

&lt;p&gt;The browser got enterprise controls. The IDE got plugins, telemetry, policy, and procurement drama. The CI pipeline was always a tiny bureaucracy with YAML. But the terminal? The terminal was where developers went to be weird in peace.&lt;/p&gt;

&lt;p&gt;Aliases. Half-remembered shell scripts. &lt;code&gt;curl | jq&lt;/code&gt; rituals. SSH sessions with the emotional stability of a raccoon in a server room.&lt;/p&gt;

&lt;p&gt;Now GitHub has announced &lt;a href="https://github.blog/changelog/2026-05-06-enterprise-managed-plugins-in-github-copilot-cli-are-now-in-public-preview" rel="noopener noreferrer"&gt;enterprise-managed plugins for GitHub Copilot CLI&lt;/a&gt;, and I think the interesting part is not “Copilot can do more things in the terminal.”&lt;/p&gt;

&lt;p&gt;The interesting part is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;the terminal is becoming an AI action surface, and AI action surfaces eventually become governed runtimes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not because vendors are evil. Not because platform teams are control freaks. Because once an assistant can touch tools, repositories, cloud accounts, secrets, and deployment paths, “just let developers use it” stops being serious.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapx6f29x7ibau69ccxs4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapx6f29x7ibau69ccxs4.gif" alt="terminal chaos" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;the terminal used to be personal space&lt;/h2&gt;

&lt;p&gt;The terminal has always been powerful, but its power was mostly mediated through the person typing.&lt;/p&gt;

&lt;p&gt;If I ran a destructive command, that was on me. If I installed a sketchy CLI, that was on me. If I glued five tools together with a shell pipeline and vibes, at least the blast radius moved at human typing speed.&lt;/p&gt;

&lt;p&gt;AI changes that shape.&lt;/p&gt;

&lt;p&gt;A CLI assistant is not just another autocomplete. It can interpret intent, discover commands, call tools, chain steps, edit files, summarize errors, propose fixes, and sometimes take actions faster than the developer fully reviews each intermediate decision.&lt;/p&gt;

&lt;p&gt;That does not make it bad. It makes it operational.&lt;/p&gt;

&lt;p&gt;The moment the assistant can say “I will create the branch, update the config, run the migration, open the PR, and fix CI,” the terminal has stopped being only a personal workspace. It has become a runtime for delegated work.&lt;/p&gt;

&lt;p&gt;And delegated work needs rules.&lt;/p&gt;

&lt;h2&gt;plugins are where the governance starts&lt;/h2&gt;

&lt;p&gt;Enterprise-managed plugins sound like a boring admin feature. That is why they matter.&lt;/p&gt;

&lt;p&gt;A plugin system answers very practical questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;which tools can the assistant call?&lt;/li&gt;
&lt;li&gt;who approved those tools?&lt;/li&gt;
&lt;li&gt;which teams can use them?&lt;/li&gt;
&lt;li&gt;how are they updated?&lt;/li&gt;
&lt;li&gt;what permissions do they imply?&lt;/li&gt;
&lt;li&gt;where does auditability start and end?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the same movie we watched with browser extensions, IDE extensions, Kubernetes admission controllers, CI marketplace actions, and Terraform modules. At first, the ecosystem is fun and chaotic. Then a few incidents happen. Then someone asks why a random package had access to production-adjacent credentials. Then the company discovers governance.&lt;/p&gt;

&lt;p&gt;The AI version will be faster because the assistant is not only installing plugins. It is using them on behalf of a human.&lt;/p&gt;

&lt;p&gt;That distinction matters.&lt;/p&gt;

&lt;p&gt;A normal CLI plugin waits for me to make mistakes. An AI-enabled CLI plugin can help me make mistakes at scale.&lt;/p&gt;

&lt;h2&gt;the real product is not the chat. it is the permission boundary.&lt;/h2&gt;

&lt;p&gt;Every AI coding demo wants to show the assistant doing useful work. Fair enough. Demos need movement.&lt;/p&gt;

&lt;p&gt;But in production, the valuable questions are much less cinematic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;can the assistant read this repository?&lt;/li&gt;
&lt;li&gt;can it modify infrastructure code?&lt;/li&gt;
&lt;li&gt;can it call cloud APIs?&lt;/li&gt;
&lt;li&gt;can it open pull requests?&lt;/li&gt;
&lt;li&gt;can it inspect secrets?&lt;/li&gt;
&lt;li&gt;can it trigger deployments?&lt;/li&gt;
&lt;li&gt;can it run commands against customer data?&lt;/li&gt;
&lt;li&gt;can it install a new plugin because the task seems to require it?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the real interface.&lt;/p&gt;

&lt;p&gt;The chat window is just how the human expresses intent. The permission boundary is where architecture happens.&lt;/p&gt;

&lt;p&gt;This is why GitHub’s adjacent MCP security announcements also matter: &lt;a href="https://github.blog/changelog/2026-05-05-secret-scanning-with-github-mcp-server-is-now-generally-available" rel="noopener noreferrer"&gt;secret scanning with GitHub MCP Server is generally available&lt;/a&gt;, and &lt;a href="https://github.blog/changelog/2026-05-05-dependency-scanning-with-github-mcp-server-is-in-public-preview" rel="noopener noreferrer"&gt;dependency scanning with GitHub MCP Server is in public preview&lt;/a&gt;. The direction is obvious: agents and assistants are being connected to tool ecosystems, and the security model is trying to catch up.&lt;/p&gt;

&lt;p&gt;Good.&lt;/p&gt;

&lt;p&gt;Because a world where agents can use tools but organizations cannot reason about tool permissions is not developer empowerment. It is unattended automation with nicer copywriting.&lt;/p&gt;

&lt;h2&gt;we are rebuilding internal platforms inside developer machines&lt;/h2&gt;

&lt;p&gt;The funny part is that this looks new, but the organizational pattern is old.&lt;/p&gt;

&lt;p&gt;Platform teams spent years building internal developer platforms so teams would not have to remember every scary detail of infrastructure. Golden paths, templates, policy checks, paved roads, deployment workflows, observability defaults. All the boring stuff that makes delivery repeatable.&lt;/p&gt;

&lt;p&gt;Now AI assistants are moving some of that action back into the developer’s local loop.&lt;/p&gt;

&lt;p&gt;The terminal becomes a place where the assistant can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;query internal docs&lt;/li&gt;
&lt;li&gt;call approved service APIs&lt;/li&gt;
&lt;li&gt;generate infrastructure changes&lt;/li&gt;
&lt;li&gt;run validation commands&lt;/li&gt;
&lt;li&gt;open tickets or pull requests&lt;/li&gt;
&lt;li&gt;inspect CI failures&lt;/li&gt;
&lt;li&gt;apply team-specific workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is convenient. But it also means the local developer environment is becoming a thin edge of the internal platform.&lt;/p&gt;

&lt;p&gt;If that edge is unmanaged, every laptop becomes a snowflake platform.&lt;/p&gt;

&lt;p&gt;If that edge is overmanaged, developers will route around it with their own tools.&lt;/p&gt;

&lt;p&gt;The hard part is the middle: enough governance to make AI actions safe, not so much governance that the assistant becomes a slower way to file a ticket.&lt;/p&gt;

&lt;h2&gt;the boring design rule: approve capabilities, not vibes&lt;/h2&gt;

&lt;p&gt;If I were designing this inside a company, I would avoid starting with a giant “AI policy” document that nobody reads.&lt;/p&gt;

&lt;p&gt;I would start with capabilities.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the assistant may summarize logs, but not access raw customer PII&lt;/li&gt;
&lt;li&gt;the assistant may draft Terraform, but not apply it&lt;/li&gt;
&lt;li&gt;the assistant may open a pull request, but not merge it&lt;/li&gt;
&lt;li&gt;the assistant may run tests, but not deploy to production&lt;/li&gt;
&lt;li&gt;the assistant may query dependency risk, but not auto-upgrade critical packages without review&lt;/li&gt;
&lt;li&gt;the assistant may use approved internal plugins, but not install arbitrary external ones during a task&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is much clearer than “use AI responsibly.”&lt;/p&gt;

&lt;p&gt;Responsible according to whom? Under what permissions? With what audit trail? In which repositories? Against what data?&lt;/p&gt;

&lt;p&gt;Vibes do not scale. Capability boundaries do.&lt;/p&gt;

&lt;p&gt;And once you define capabilities, enterprise-managed plugins start to make sense. They are not just a catalog feature. They are a way to package what the assistant is allowed to do.&lt;/p&gt;
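&lt;p&gt;A sketch of what that packaging could look like as data. The capability names mirror the list above; the format is hypothetical, not any vendor's real policy schema:&lt;/p&gt;

```python
# Minimal sketch of a capability policy: explicit allow / review /
# deny per action instead of one global "AI allowed" switch.
# Capability names are illustrative.
POLICY = {
    "summarize_logs":          "allow",
    "draft_terraform":         "allow",
    "apply_terraform":         "deny",
    "open_pull_request":       "allow",
    "merge_pull_request":      "deny",
    "run_tests":               "allow",
    "deploy_production":       "deny",
    "use_approved_plugin":     "allow",
    "install_external_plugin": "deny",
}

def check(action):
    # Unknown actions default to deny: new capabilities get approved
    # explicitly, never discovered accidentally mid-task.
    return POLICY.get(action, "deny")
```

&lt;p&gt;The interesting design choice is the default: deny-by-default turns "the assistant found a new tool" from an incident into a review.&lt;/p&gt;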

&lt;h2&gt;developers still need escape hatches&lt;/h2&gt;

&lt;p&gt;There is a trap here, though.&lt;/p&gt;

&lt;p&gt;If companies turn AI-in-the-terminal into another locked-down enterprise sadness machine, developers will hate it, and they will be right.&lt;/p&gt;

&lt;p&gt;The terminal is powerful because it supports exploration. Sometimes you need to run a weird command, inspect a strange failure, test a new tool, or build a tiny script that would never survive a platform review meeting.&lt;/p&gt;

&lt;p&gt;So the goal is not to make the terminal sterile. The goal is to separate exploration from delegated authority.&lt;/p&gt;

&lt;p&gt;A human experimenting locally is one risk shape. An assistant calling tools with organization-approved permissions is another. Good platforms understand that difference. They allow local weirdness, but put review, audit, and ownership around actions that touch shared systems.&lt;/p&gt;

&lt;p&gt;Not one giant allow button.&lt;/p&gt;

&lt;h2&gt;this is senior engineering work now&lt;/h2&gt;

&lt;p&gt;This is where I think the career conversation gets more interesting than “will AI replace developers?”&lt;/p&gt;

&lt;p&gt;Somebody has to define the boundaries.&lt;/p&gt;

&lt;p&gt;Somebody has to decide which commands are safe for an assistant to run. Somebody has to package internal workflows as plugins. Somebody has to make sure generated changes leave durable artifacts. Somebody has to connect audit logs to reality. Somebody has to notice when the assistant is technically allowed to do a thing but organizationally should not.&lt;/p&gt;

&lt;p&gt;That is engineering work.&lt;/p&gt;

&lt;p&gt;Not glamorous, maybe. But very real.&lt;/p&gt;

&lt;p&gt;The future of developer productivity is not only better models. It is better delegation contracts.&lt;/p&gt;

&lt;p&gt;The assistant can be brilliant, but if every useful action ends in “please ask an admin,” nobody will use it. If every useful action is silently allowed, eventually it will do something expensive, unsafe, or deeply annoying.&lt;/p&gt;

&lt;p&gt;The valuable layer is the one that makes the right action easy, the risky action explicit, and the forbidden action impossible.&lt;/p&gt;

&lt;p&gt;That is platform engineering with an AI accent.&lt;/p&gt;

&lt;h2&gt;the punchline&lt;/h2&gt;

&lt;p&gt;Enterprise-managed Copilot CLI plugins are not just a GitHub feature checkbox. They are a signal that the terminal is being pulled into the same governance story as the rest of the engineering system.&lt;/p&gt;

&lt;p&gt;That was inevitable.&lt;/p&gt;

&lt;p&gt;Once AI assistants can operate tools, the question is no longer “which assistant gives the best answer?”&lt;/p&gt;

&lt;p&gt;The question is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What is this assistant allowed to do when the answer becomes an action?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is the line between a neat demo and a production system.&lt;/p&gt;

&lt;p&gt;The terminal is still going to be weird. I hope it stays weird. Software would be worse if every shell session had the personality of an expense report.&lt;/p&gt;

&lt;p&gt;But the parts of the terminal that act on behalf of the company are going to become governed, packaged, permissioned, and audited.&lt;/p&gt;

&lt;p&gt;Not because the terminal lost its soul.&lt;/p&gt;

&lt;p&gt;Because AI gave it hands.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devtools</category>
      <category>platformengineering</category>
      <category>governance</category>
    </item>
    <item>
      <title>AI Tools Have Shorter Half-Lives Than the Workflows They Automate</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Wed, 06 May 2026 00:04:16 +0000</pubDate>
      <link>https://forem.com/pvgomes/ai-tools-have-shorter-half-lives-than-the-workflows-they-automate-43d8</link>
      <guid>https://forem.com/pvgomes/ai-tools-have-shorter-half-lives-than-the-workflows-they-automate-43d8</guid>
      <description>&lt;p&gt;There is a very boring announcement that engineering teams should take more seriously than most AI product launches.&lt;/p&gt;

&lt;p&gt;AWS announced the &lt;a href="https://aws.amazon.com/blogs/devops/amazon-q-developer-end-of-support-announcement/" rel="noopener noreferrer"&gt;end of support for Amazon Q Developer in the AWS Console mobile app&lt;/a&gt;. GitHub recently announced the &lt;a href="https://github.blog/changelog/2026-05-01-upcoming-deprecation-of-gpt-5-2-and-gpt-5-2-codex/" rel="noopener noreferrer"&gt;upcoming deprecation of GPT-5.2 and GPT-5.2-Codex&lt;/a&gt;. Every week another AI product gets renamed, bundled, sunset, repriced, rate-limited, or quietly converted from “the future of software engineering” into “please migrate before June.”&lt;/p&gt;

&lt;p&gt;This is architecture.&lt;/p&gt;

&lt;p&gt;The thing we pretend is stable — the AI tool — is often the most disposable part of the system.&lt;/p&gt;

&lt;p&gt;The thing we treat as informal — the workflow around it — is usually the part that survives.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwrm9bybqi04vxmes9wp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwrm9bybqi04vxmes9wp.gif" alt="AI tooling churn" width="550" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My take is simple: &lt;strong&gt;engineering teams should design AI-assisted workflows as if every specific assistant, model name, IDE integration, and hosted feature has a short half-life.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because it does.&lt;/p&gt;

&lt;h2&gt;tools churn faster than habits&lt;/h2&gt;

&lt;p&gt;Developers love tools, but organizations run on habits.&lt;/p&gt;

&lt;p&gt;A tool can be replaced in a procurement cycle. A habit gets embedded in onboarding docs, pull request norms, incident response, security review, and the weird tribal rules people only learn after breaking production once.&lt;/p&gt;

&lt;p&gt;That is why AI tooling churn matters.&lt;/p&gt;

&lt;p&gt;If a team uses an assistant to summarize logs, generate test cases, write migration plans, or review Terraform, the dangerous dependency is not only the vendor API. It is the assumption that the tool-shaped workflow will keep existing in the same form.&lt;/p&gt;

&lt;p&gt;Today the workflow is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;open IDE plugin&lt;/li&gt;
&lt;li&gt;select code&lt;/li&gt;
&lt;li&gt;ask model X&lt;/li&gt;
&lt;li&gt;paste result into PR&lt;/li&gt;
&lt;li&gt;hope reviewer notices the spooky part&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Tomorrow the IDE plugin is deprecated, model X is renamed, the context window changes, the pricing changes, the security team disables paste access, and the assistant now lives inside a chat tab with a different memory model.&lt;/p&gt;

&lt;p&gt;The work did not disappear.&lt;/p&gt;

&lt;p&gt;Only the interface did.&lt;/p&gt;

&lt;p&gt;That is the annoying part. AI assistants are marketed like durable coworkers, but they currently behave more like SaaS features during a land grab. Some will become stable products. Many will not. They will keep changing faster than your engineering process should.&lt;/p&gt;

&lt;h2&gt;model names are not architecture&lt;/h2&gt;

&lt;p&gt;One smell I keep seeing is teams documenting workflows around specific model names.&lt;/p&gt;

&lt;p&gt;“Use Claude X for refactors.”&lt;br&gt;
“Use GPT Y for test generation.”&lt;br&gt;
“Use Amazon Q for AWS questions.”&lt;br&gt;
“Use Copilot for pull request summaries.”&lt;/p&gt;

&lt;p&gt;That is fine as a preference. It is not fine as architecture.&lt;/p&gt;

&lt;p&gt;A model name is a versioned implementation detail.&lt;/p&gt;

&lt;p&gt;The architectural object should be the capability you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;propose a small refactor with constraints&lt;/li&gt;
&lt;li&gt;summarize an incident timeline from logs&lt;/li&gt;
&lt;li&gt;explain a cloud bill anomaly&lt;/li&gt;
&lt;li&gt;generate tests from observed behavior&lt;/li&gt;
&lt;li&gt;check a migration plan for rollback gaps&lt;/li&gt;
&lt;li&gt;classify dependency risk before merge&lt;/li&gt;
&lt;li&gt;produce a human-readable design review draft&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those capabilities can be routed through different assistants, models, prompts, policies, and environments over time. If the workflow is written around the capability, you can swap the tool. If it is written around the product surface, every vendor change becomes a tiny migration project.&lt;/p&gt;
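&lt;p&gt;A capability layer makes the swap point explicit. A tiny sketch, with placeholder backend names rather than real products:&lt;/p&gt;

```python
# Sketch: stable capability names routed to swappable backends. The
# backend identifiers are placeholders; the point is that replacing a
# vendor means editing one table, not every workflow that calls it.
ROUTES = {
    "refactor":         "assistant-a",
    "test_generation":  "assistant-b",
    "incident_summary": "assistant-a",
}

def run_capability(capability, payload):
    backend = ROUTES[capability]   # a vendor swap changes only this table
    return f"[{backend}] {capability}: {payload}"
```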

&lt;p&gt;This is the same lesson we learned with cloud services, CI providers, observability tools, and message queues. The abstraction should not pretend the implementation does not matter, but it should make the replaceable part obvious.&lt;/p&gt;

&lt;p&gt;With AI tooling, that replaceable part is very often the branded assistant.&lt;/p&gt;

&lt;h2&gt;
  
  
  the stable unit is the engineering contract
&lt;/h2&gt;

&lt;p&gt;So what should be stable?&lt;/p&gt;

&lt;p&gt;Not the model.&lt;br&gt;
Not the chat UI.&lt;br&gt;
Not the plugin.&lt;br&gt;
Not the button named “auto mode.”&lt;/p&gt;

&lt;p&gt;The stable unit should be the engineering contract around the workflow.&lt;/p&gt;

&lt;p&gt;If your team uses AI to help with database migrations, the contract might be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the assistant can draft the migration plan&lt;/li&gt;
&lt;li&gt;the plan must include rollback steps&lt;/li&gt;
&lt;li&gt;it must identify locking risks&lt;/li&gt;
&lt;li&gt;it must include expected runtime and blast radius&lt;/li&gt;
&lt;li&gt;it must cite the schema diff it used&lt;/li&gt;
&lt;li&gt;a human owner must approve it before execution&lt;/li&gt;
&lt;li&gt;the final artifact lives in the repo, not inside chat history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That contract can survive a tool migration.&lt;/p&gt;

&lt;p&gt;Maybe today it runs through one assistant. Next quarter it goes through another. Later it becomes a GitHub Action or an internal platform feature with model routing, policy checks, and audit logs.&lt;/p&gt;

&lt;p&gt;Good. That is how useful automation grows up.&lt;/p&gt;

&lt;p&gt;The mistake is letting the first convenient UI become the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  approval gates are the real product
&lt;/h2&gt;

&lt;p&gt;This is why I find the industry’s recent obsession with “autonomous coding” slightly funny.&lt;/p&gt;

&lt;p&gt;The demo always wants to show the agent doing everything. The production system usually becomes interesting at the approval gates.&lt;/p&gt;

&lt;p&gt;Who can let the agent modify infrastructure? Who can approve package upgrades? Which paths can it edit without review? When does it need a test run or security approval? What does it do when CI fails? Where is the audit trail?&lt;/p&gt;

&lt;p&gt;That is the real product surface.&lt;/p&gt;

&lt;p&gt;Not the cute animation where an agent opens twelve files and looks busy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxuvg2xf9pcylz0tcxnu.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxuvg2xf9pcylz0tcxnu.gif" alt="approval gate energy" width="400" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a specific AI tool goes away, the team that built around approval gates, artifacts, and clear ownership can migrate. The team that built around vibes has to rediscover its process under pressure.&lt;/p&gt;

&lt;p&gt;This is also where senior engineers should spend more attention. Not “which AI tool writes the best boilerplate this month?” That question decays quickly.&lt;/p&gt;

&lt;p&gt;The better question is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Which parts of our engineering process become safer, faster, or more observable if an AI can draft work but not silently change the contract?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  keep the artifacts outside the assistant
&lt;/h2&gt;

&lt;p&gt;If I had to give one practical rule, it would be this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Never let the assistant be the only place where the work exists.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Prompts, generated plans, evaluations, test outputs, review notes, and operational decisions should end up somewhere durable when they matter: repository files, PR comments, design docs, tickets, incident timelines, runbooks, audit logs.&lt;/p&gt;

&lt;p&gt;Chat history is not a system of record. It is a scratchpad with better autocomplete.&lt;/p&gt;

&lt;p&gt;AI tools do not just disappear. They mutate. Memory formats change. Export behavior changes. Enterprise retention settings change. Context windows change. Integrations get rebuilt. A workflow that depends on “the assistant remembers” has amnesia scheduled for a future date.&lt;/p&gt;

&lt;p&gt;For small personal tasks, who cares. Let the chat be messy.&lt;/p&gt;

&lt;p&gt;For engineering work that affects production, compliance, money movement, customer data, or infrastructure, the artifact needs to survive the tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  design for boring replacement
&lt;/h2&gt;

&lt;p&gt;The healthiest AI-assisted engineering stacks will probably look less magical than the demos.&lt;/p&gt;

&lt;p&gt;They will have boring properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;prompts versioned near the code they affect&lt;/li&gt;
&lt;li&gt;model and vendor configuration separated from workflow logic&lt;/li&gt;
&lt;li&gt;output schemas for important generated artifacts&lt;/li&gt;
&lt;li&gt;tests and policy checks around agent-written changes&lt;/li&gt;
&lt;li&gt;approval gates for destructive or expensive actions&lt;/li&gt;
&lt;li&gt;audit trails for who asked what and what changed&lt;/li&gt;
&lt;li&gt;fallback paths when a model or provider is unavailable&lt;/li&gt;
&lt;li&gt;enough documentation that a human can perform the workflow manually&lt;/li&gt;
&lt;/ul&gt;
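&lt;p&gt;As a sketch, "model and vendor configuration separated from workflow logic" can be as small as a config file next to the workflow definition. The shape below is illustrative, not any specific tool's format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;workflow: migration-plan-draft
  capability:    draft a migration plan with rollback checks
  provider:      see providers.yaml          ── the replaceable part
  output_schema: migration_plan.schema.json
  approval:      human owner required
  artifact:      committed to the repo, not chat history
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Swapping vendors then means editing the provider config, not rewriting the workflow.&lt;/p&gt;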

&lt;p&gt;None of this is glamorous. That is why it is probably correct.&lt;/p&gt;

&lt;p&gt;The teams that win with AI will not be the ones who bet the company on one assistant being magical forever. They will be the ones who turn useful AI behavior into replaceable workflow components.&lt;/p&gt;

&lt;p&gt;That does not mean all AI tools are bad. Some are excellent. I use them constantly. The point is almost the opposite: because they are useful, we should stop treating them like toys.&lt;/p&gt;

&lt;p&gt;Adult architecture assumes dependencies change.&lt;/p&gt;

&lt;h2&gt;
  
  
  the uncomfortable vendor lesson
&lt;/h2&gt;

&lt;p&gt;Vendors are going to keep moving quickly because the market is still unstable. Model costs change, safety requirements change, partnerships change, enterprise controls change, and product teams are still figuring out what people actually use after the demo high wears off.&lt;/p&gt;

&lt;p&gt;So yes, use the tools.&lt;/p&gt;

&lt;p&gt;But do not confuse vendor velocity with platform stability.&lt;/p&gt;

&lt;p&gt;If an assistant becomes central to your delivery process, ask the same boring questions you would ask about any other critical dependency: can we export the artifacts, switch providers, audit usage after an incident, and keep releasing when a model is deprecated or unavailable?&lt;/p&gt;

&lt;p&gt;Boring questions are where production lives.&lt;/p&gt;

&lt;h2&gt;
  
  
  the punchline
&lt;/h2&gt;

&lt;p&gt;The end of support for one Amazon Q surface is not the end of the world. A GitHub model deprecation is not a crisis. Most individual AI tooling changes are small.&lt;/p&gt;

&lt;p&gt;But the pattern matters. AI developer tools are still in a fast-churn phase, and engineering teams should stop acting surprised when fast-churn things churn.&lt;/p&gt;

&lt;p&gt;The durable investment is not memorizing this month’s assistant UI. It is building workflows where AI can help, humans can approve, artifacts can survive, and vendors can be swapped without turning delivery into archaeology.&lt;/p&gt;

&lt;p&gt;The assistant is not the architecture.&lt;/p&gt;

&lt;p&gt;The workflow is.&lt;/p&gt;

&lt;p&gt;And if the workflow only works while one branded assistant exists in exactly its current shape, it is not a workflow yet.&lt;/p&gt;

&lt;p&gt;It is a demo with a calendar invite for future pain.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devtools</category>
      <category>architecture</category>
      <category>software</category>
    </item>
    <item>
      <title>How to build an accrual-based credit ledger</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Tue, 05 May 2026 15:04:26 +0000</pubDate>
      <link>https://forem.com/pvgomes/how-to-build-an-accrual-based-credit-ledger-1dpj</link>
      <guid>https://forem.com/pvgomes/how-to-build-an-accrual-based-credit-ledger-1dpj</guid>
      <description>&lt;p&gt;Ledgers are the heartbeat of any financial companies, fintech or old school financial. Not the API gateway, not the mobile app, not the underwriting model. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The ledger&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oc5e0yhryqwmpmk36sf.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oc5e0yhryqwmpmk36sf.gif" alt="trust me" width="498" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Banks have known this for centuries. Fintechs sometimes have to rediscover it the hard way, or they learn it along the way. &lt;/p&gt;

&lt;p&gt;In the fintech industry, &lt;strong&gt;Revolut&lt;/strong&gt; runs multi-currency, multi-product financial infrastructure across 35+ countries and counting 🚀... &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stripe&lt;/strong&gt; moves money and extends credit infrastructure for millions of businesses; often you don't even know it, but Stripe is there.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ojbqiaasiqbsyl405rn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ojbqiaasiqbsyl405rn.gif" alt="stripe everywhere" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nubank&lt;/strong&gt; serves more than 130 million customers 🤯 and had to make credit work at Latin American scale, starting with the complexity and competitive banking system of Brazil, my beloved country. Seriously, Nubank is just unique.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstw1tb4xvo0zbw4nehd7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstw1tb4xvo0zbw4nehd7.gif" alt="nubank is unique" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Chime&lt;/strong&gt; built credit-builder products on top of a US neobank model. How do you compete with that? Just use them 😌.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Klarna&lt;/strong&gt; and &lt;strong&gt;Affirm&lt;/strong&gt; made deferred payments mainstream, which also means ledger complexity at global BNPL scale. They didn't just change finance; they scaled and enabled retail growth.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb93i2i5lk2208s3ygswn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb93i2i5lk2208s3ygswn.gif" alt="delivery everywhere" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point you get the idea, right? Different products, different geographies. But you know what they have in common? Their accounting HAS to be in good shape. So they share the same pressure: &lt;strong&gt;move fast without corrupting the books.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is the uncomfortable thing about ledgers. The product team wants speed. The regulator wants auditability (is this a word?). Finance wants reconciliation. Engineering wants evolvability without piling up technical debt. &lt;/p&gt;

&lt;p&gt;Customers just want their balance to be correct.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rn174ck4vkmdb7u2wfu.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8rn174ck4vkmdb7u2wfu.gif" alt="customer balance" width="360" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And ledger decisions made in year one become the ceiling in year five. The model that worked for 10,000 users becomes the bottleneck at 10 million. Imagine 100 million? &lt;strong&gt;(Hello Nubank!!!)&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;The cron job that looked pragmatic becomes the reason nobody trusts end-of-day balances. &lt;/p&gt;

&lt;p&gt;The table that was “good enough for MVP” becomes the thing auditors stare at for three months.&lt;/p&gt;

&lt;p&gt;The famous sharding infrastructure worked well until the first few million users; beyond that point, sharding and infrastructure scaling are not necessarily the problem.&lt;/p&gt;

&lt;p&gt;The Synapse collapse in 2024 was a brutal reminder of this. When money movement companies cannot clearly reconcile who owns what, the failure is not only technical. It becomes customer harm. Millions of dollars can end up disputed, delayed, or unreconciled because the ledger was not strong enough to be the source of truth.&lt;/p&gt;

&lt;p&gt;This article is about designing a credit ledger before that pain arrives.&lt;/p&gt;

&lt;p&gt;One important boundary: this is intentionally about a &lt;strong&gt;single-geography credit ledger&lt;/strong&gt;. No multi-country posting rules, no cross-currency accounting, no global consistency model. Those are a different beast. I will cover them in the next article.&lt;/p&gt;
&lt;h2&gt;
  
  
  The evolution most fintechs go through
&lt;/h2&gt;

&lt;p&gt;Most credit products do not start with a beautiful accrual ledger. They start with a product requirement:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“We need to charge interest.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then a team adds a job. Later, events. Later, after enough incidents, a proper accrual model.&lt;/p&gt;

&lt;p&gt;That evolution is normal. The trick is not pretending the first version is the final one.&lt;/p&gt;
&lt;h2&gt;
  
  
  Stage 1: the job-based ledger
&lt;/h2&gt;

&lt;p&gt;The simplest ledger is a scheduled job.&lt;/p&gt;

&lt;p&gt;Every hour, every night, or every billing cycle, a cron job scans accounts and creates bookkeeping movements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;00:00 ───────────────────────────────────────────────► time

10:03  purchase happens
11:18  payment happens
14:42  fee is incurred

23:59  ledger_job runs
      ├─ posts purchase movement
      ├─ posts payment movement
      └─ posts fee movement
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works surprisingly well at small scale. It is easy to reason about. You can query the database, calculate what changed, and insert rows into a ledger table. Many early fintech systems start here because it lets the team ship.&lt;/p&gt;

&lt;p&gt;The problem is that job-based ledgers mix three different concepts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;when something happened,&lt;/li&gt;
&lt;li&gt;when the system noticed it happened,&lt;/li&gt;
&lt;li&gt;when accounting was posted.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At low volume, the difference is invisible. At scale, it becomes the whole problem.&lt;/p&gt;

&lt;p&gt;A customer pays at 11:18, but the ledger does not reflect it until 23:59. A fee is incurred at 14:42, but the job fails and retries at 01:10. Two jobs overlap and double-post. A reprocessing script tries to fix yesterday but accidentally changes today.&lt;/p&gt;
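&lt;p&gt;Concretely, one payment can carry three different timestamps, and a job-based ledger quietly collapses them into the last one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;happened:  11:18  payment received            (business time)
noticed:   23:59  nightly job scans accounts  (system time)
posted:    23:59  ledger row created          (accounting time)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;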

&lt;p&gt;The core issues are predictable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Delayed consistency&lt;/strong&gt;: balances are stale by design.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Race conditions&lt;/strong&gt;: two jobs can process the same account or overlapping time windows.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Poor auditability&lt;/strong&gt;: “what was the balance at 14:00?” becomes hard if entries are posted later without the correct effective date.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Painful reprocessing&lt;/strong&gt;: fixing a bad job means deciding whether to delete, update, or overwrite ledger rows. All three are dangerous.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scaling by brute force&lt;/strong&gt;: when volume increases, you make the job faster. Eventually the job becomes a monster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The job-based ledger is not evil. It is just a starting point. The mistake is letting it become the foundation of a serious credit platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 2: the event-based ledger
&lt;/h2&gt;

&lt;p&gt;The next step is to post ledger entries when business events happen.&lt;/p&gt;

&lt;p&gt;A payment is received. A charge is applied. A late fee is created. A statement closes. Instead of waiting for a batch job, the system reacts immediately.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Domain event                         Ledger reaction
────────────                         ───────────────
PaymentReceived ───────────────────► post payment entry
PurchaseAuthorized ────────────────► post authorization entry
FeeIncurred ───────────────────────► post fee entry
StatementClosed ───────────────────► post billing entries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is much better. The ledger is closer to real time, and the accounting logic sits near the business event that created it. Martin Fowler’s accounting patterns make this connection explicit: domain events and accounting entries should be linked, because accounting is a record of business reality, not an isolated reporting table.&lt;/p&gt;

&lt;p&gt;But event-based does not automatically mean robust.&lt;/p&gt;

&lt;p&gt;A common implementation looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;API request
  ├─ update product state
  ├─ publish event
  └─ write ledger entry
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or worse:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;API request
  ├─ update product state
  ├─ call ledger service synchronously
  └─ return success to customer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the ledger is coupled to the transaction path. If the ledger service is slow, the product is slow. If the event publish fails after the product state changes, the books are wrong. If the same event is delivered twice, you double-post. If events arrive out of order, the ledger reflects a reality that never existed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;        ┌──────────────┐
        │ Credit API   │
        └──────┬───────┘
               │ emits
               ▼
        ┌──────────────┐        tight coupling risk
        │ Domain event │ ──────────────────────────┐
        └──────┬───────┘                           │
               │ consumed                           ▼
               ▼                            ┌──────────────┐
        ┌──────────────┐                    │ User request │
        │ Ledger write │                    │ latency path │
        └──────────────┘                    └──────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Event-based ledgering is the middle ground. It is a necessary evolution from cron jobs, but it still needs three things to become safe:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;immutable events,&lt;/li&gt;
&lt;li&gt;idempotent processing,&lt;/li&gt;
&lt;li&gt;deterministic replay.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without those, you do not have a ledger architecture. You have a distributed system hoping the happy path stays happy.&lt;/p&gt;
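&lt;p&gt;Those three properties fit in a few lines of pseudocode (same caveat as everywhere in this article: this is the shape, not production code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on_event(event):                          ── events are immutable inputs
    key = idempotency_key_for(event)
    if ledger.has_entries(key):
        return                            ── duplicate delivery: safe no-op
    entries = posting_rules.apply(event)  ── pure and deterministic
    ledger.append(entries, key)           ── replay rebuilds the same ledger
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;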

&lt;h2&gt;
  
  
  Stage 3: the accrual-based ledger
&lt;/h2&gt;

&lt;p&gt;A credit ledger should not only record when cash moves. It should record when value is &lt;strong&gt;earned or incurred&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That is the core idea of accrual accounting.&lt;/p&gt;

&lt;p&gt;Interest is earned daily. Fees are incurred when the customer triggers them. A settlement is a separate event from the revenue already earned. Billing is a presentation and collection mechanism, not the moment the economics magically appear.&lt;/p&gt;

&lt;p&gt;For a credit product, this distinction matters a lot.&lt;/p&gt;

&lt;p&gt;If a customer carries a balance for 20 days, the platform is earning interest across those 20 days. Waiting until the statement closes to create one giant interest entry may look simpler, but it hides the actual economics. It also makes “balance as of” queries, partial reversals, mid-cycle adjustments, and audit trails harder than they need to be.&lt;/p&gt;

&lt;p&gt;An accrual-based credit ledger treats accounting like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Day 1      Day 2      Day 3      ...      Cycle close        Payment
│          │          │                    │                  │
▼          ▼          ▼                    ▼                  ▼
Accrue     Accrue     Accrue               Bill accrued       Settle cash
interest   interest   interest             interest           receivable

Accounting view:
- daily: debit interest receivable, credit interest income
- cycle close: move accrued amounts into statement balance
- payment: debit cash/settlement account, credit customer receivable
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is how accounting actually works. Fintechs that skip it often end up retrofitting it later, usually after finance, risk, or regulators start asking questions the old model cannot answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Double-entry is the checksum
&lt;/h2&gt;

&lt;p&gt;At the center of this design is double-entry bookkeeping.&lt;/p&gt;

&lt;p&gt;Every ledger entry has two sides. One account is debited. Another account is credited. The total must balance.&lt;/p&gt;

&lt;p&gt;For a daily interest accrual, the entry might be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Debit:  Interest receivable      1.25 USD
Credit: Interest income          1.25 USD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The customer owes more. The company earned revenue. Two sides of the same fact.&lt;/p&gt;
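&lt;p&gt;For a sense of where a number like 1.25 USD comes from, here is the arithmetic, with figures chosen to keep the division clean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;daily_interest = balance × annual_rate / days_in_year
               = 2500.00 × 0.1825 / 365
               = 1.25 USD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;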

&lt;p&gt;This is not accounting ceremony. It is a system invariant. If debits and credits do not net to zero, the ledger rejects the entry. That gives you a built-in checksum for every financial movement.&lt;/p&gt;

&lt;p&gt;A single-sided balance table can tell you what you think a customer owes. A double-entry ledger can tell you whether the books still make sense.&lt;/p&gt;

&lt;p&gt;That difference is everything.&lt;/p&gt;
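&lt;p&gt;That checksum can be enforced at posting time, before anything reaches storage. A pseudocode sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;post(entries):
    if sum(e.debit_amount for e in entries) != sum(e.credit_amount for e in entries):
        reject("unbalanced posting")      ── an invariant, not a warning
    store.append(entries)

── and across the whole ledger, signed account balances net to zero
assert sum(signed_balance(a) for a in all_accounts) == 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;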

&lt;h2&gt;
  
  
  The ledger should be append-only
&lt;/h2&gt;

&lt;p&gt;A serious ledger does not update old entries. It does not delete them either.&lt;/p&gt;

&lt;p&gt;It appends.&lt;/p&gt;

&lt;p&gt;If you posted the wrong amount, you post a reversal and then a corrected entry. The history remains sacred.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;entry_id   type        debit_account          credit_account       amount
────────   ────────    ─────────────────      ─────────────────    ──────
E1         ACCRUAL     interest_receivable    interest_income      10.00
E2         REVERSAL    interest_income        interest_receivable  10.00
E3         ACCRUAL     interest_receivable    interest_income       9.50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That pattern feels annoying when you are moving fast. It is also the reason you can answer audit questions later.&lt;/p&gt;

&lt;p&gt;Why was the customer balance different yesterday? Read the log.&lt;/p&gt;

&lt;p&gt;What did the system believe at 10:35? Query entries with &lt;code&gt;effective_date &amp;lt;= 10:35&lt;/code&gt; and &lt;code&gt;created_at &amp;lt;= investigation_cutoff&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Can we reproduce last month’s statement? Replay the events and ledger entries as they existed then.&lt;/p&gt;
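&lt;p&gt;With an entry shape like the one later in this article, the "what did the system believe at 10:35" question becomes a plain filter over the append-only log. A SQL-ish sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT SUM(CASE WHEN debit_account  = :account THEN  amount
                WHEN credit_account = :account THEN -amount END)
FROM   ledger_entries
WHERE  (debit_account = :account OR credit_account = :account)
  AND  effective_date &amp;lt;= :as_of
  AND  created_at &amp;lt;= :investigation_cutoff
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The sign convention here treats debits as positive for the queried account; real charts of accounts are more nuanced, but the query stays a pure read over immutable rows.&lt;/p&gt;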

&lt;p&gt;This is where Mettle’s Write Once Double Entry pattern is a great real-world reference. WODE is basically the grown-up version of what many fintech teams eventually learn: write once, balance always, correct by appending, and make the ledger boring enough to trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Event sourcing is the backbone
&lt;/h2&gt;

&lt;p&gt;Accrual ledgering and event sourcing fit naturally together.&lt;/p&gt;

&lt;p&gt;The product emits immutable domain events:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;PurchasePosted&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;PaymentSettled&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;InterestAccrued&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;LateFeeIncurred&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;StatementClosed&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ChargeReversed&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The ledger consumes those events and produces immutable accounting entries. Each entry points back to the event that caused it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────┐
│ Domain command  │
│ "apply payment" │
└────────┬────────┘
         ▼
┌─────────────────┐
│ Domain event    │
│ PaymentSettled  │
└────────┬────────┘
         ▼
┌─────────────────┐
│ Accrual /       │
│ posting rules   │
└────────┬────────┘
         ▼
┌────────────────────────────────────────────┐
│ Immutable double-entry ledger entries       │
│ source_event_id = PaymentSettled.event_id   │
└────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives you a clean rule: product state is derived from product events, and accounting state is derived from accounting entries produced from those events.&lt;/p&gt;

&lt;p&gt;If you replay all events through the same posting rules, you should get the same ledger. If you do not, either the rules are not deterministic or the event log is incomplete. Both are bugs worth finding early.&lt;/p&gt;

&lt;p&gt;Event sourcing also changes how you think about failures. A failed projection is not a data-loss incident if the event is still there. You fix the processor, replay, and rebuild. That is the difference between a recoverable system and a spreadsheet with APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Eventual consistency is fine. Wrong money is not.
&lt;/h2&gt;

&lt;p&gt;A common objection is: “But if the ledger is event-driven, it may be slightly behind.”&lt;/p&gt;

&lt;p&gt;Yes. That is usually fine for credit.&lt;/p&gt;

&lt;p&gt;A credit ledger being 200 milliseconds behind the authorization system is not a disaster. A ledger double-posting interest is. A missing payment is. A fee applied before the balance it depends on is. An out-of-order reversal that leaves the customer owing money they do not owe is.&lt;/p&gt;

&lt;p&gt;The trade-off is not strong consistency versus chaos. The trade-off is where you need immediate consistency and where you need deterministic eventual consistency.&lt;/p&gt;

&lt;p&gt;For most credit ledgers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the customer-facing authorization path may need immediate answers,&lt;/li&gt;
&lt;li&gt;the ledger may process asynchronously,&lt;/li&gt;
&lt;li&gt;the processing must be idempotent,&lt;/li&gt;
&lt;li&gt;events must have ordering guarantees per account or credit line,&lt;/li&gt;
&lt;li&gt;every ledger entry must carry an idempotency key,&lt;/li&gt;
&lt;li&gt;replay must not create duplicate entries.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An idempotency key can be simple and powerful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;interest-accrual:{credit_account_id}:{accrual_date}:{rate_version}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the processor retries, the ledger sees the same key and refuses to post the same economic fact twice.&lt;/p&gt;

&lt;p&gt;That is the difference between eventual consistency and eventual regret.&lt;/p&gt;
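&lt;p&gt;Mechanically, "refuses to post" can be as boring as a unique constraint on the key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ledger_entries: UNIQUE (idempotency_key)

append(entry):
    insert entry
    on conflict (idempotency_key): do nothing   ── a retry becomes a no-op
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;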

&lt;h2&gt;
  
  
  A practical ledger entry shape
&lt;/h2&gt;

&lt;p&gt;You can make the schema more sophisticated later, but a useful starting point looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ledger_entries(
  entry_id,
  type,
  debit_account,
  credit_account,
  amount,
  currency,
  effective_date,
  created_at,
  idempotency_key,
  source_event_id
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few fields matter more than they look:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;effective_date&lt;/code&gt; is when the economic event applies.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;created_at&lt;/code&gt; is when the system recorded it.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;idempotency_key&lt;/code&gt; prevents duplicate posting.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;source_event_id&lt;/code&gt; links accounting back to business reality.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;currency&lt;/code&gt; should be explicit even in a single-geography system. You will thank yourself later.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For a serious platform, you will also add account types, journal batches, metadata, posting rule versions, actor/service identifiers, and partition keys. But do not lose the core shape: debit, credit, amount, currency, time, idempotency, source.&lt;/p&gt;

&lt;h2&gt;
  
  
  Daily interest accrual pseudocode
&lt;/h2&gt;

&lt;p&gt;Here is the kind of logic I would expect in a first version. Not production code, but the shape is right.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for credit_account in active_credit_accounts:
    accrual_date = today_in_product_timezone()

    balance = principal_balance_as_of(
        credit_account.id,
        accrual_date.start
    )

    if balance &amp;lt;= 0:
        continue

    rate = interest_rate_for(
        credit_account.id,
        accrual_date
    )

    daily_interest = round_money(
        balance * rate.annual_percentage / days_in_year(accrual_date)
    )

    if daily_interest == 0:
        continue

    idempotency_key = "interest-accrual:" +
        credit_account.id + ":" +
        accrual_date + ":" +
        rate.version

    post_double_entry(
        type = "INTEREST_ACCRUAL",
        debit_account = account("interest_receivable", credit_account.id),
        credit_account = account("interest_income"),
        amount = daily_interest,
        currency = credit_account.currency,
        effective_date = accrual_date,
        source_event_id = current_accrual_event.id,
        idempotency_key = idempotency_key
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There are many details hidden here: day-count convention, rounding policy, grace periods, promotional rates, delinquency state, local regulation, charge-off treatment. Those are product and accounting rules, not reasons to avoid the accrual model.&lt;/p&gt;

&lt;p&gt;Actually, they are reasons to prefer it. Complex rules are easier to manage when every economic fact has a precise entry and a source event.&lt;/p&gt;
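To make one of those hidden details concrete, here is a sketch of the daily interest calculation under two assumed policies: a day count supplied by the caller (actual/365 by default) and banker's rounding on minor units. The function name and signature are illustrative, not a real API, and neither policy is the only correct choice:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# One possible concretization of the hidden details: caller-supplied
# day count and banker's rounding. Both are product/accounting policy
# choices; names here are illustrative.
def daily_interest_cents(balance_cents, annual_rate, days_in_year=365):
    interest = (
        Decimal(balance_cents) * Decimal(str(annual_rate)) / Decimal(days_in_year)
    )
    # Round exactly once, at posting time, to whole minor units.
    return int(interest.quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))
```

Whichever convention you pick, the important property is that rounding happens exactly once, at posting time, so a replay of the same events produces byte-identical entries.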

&lt;h2&gt;
  
  
  Correction entries, not edits
&lt;/h2&gt;

&lt;p&gt;Imagine the system accrued 10.00 USD of interest, but later you discover the correct amount was 9.50 USD because a payment was effective one day earlier.&lt;/p&gt;

&lt;p&gt;Do not update the original row.&lt;/p&gt;

&lt;p&gt;Post this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Original:
  Dr interest_receivable  10.00
  Cr interest_income      10.00

Reversal:
  Dr interest_income      10.00
  Cr interest_receivable  10.00

Corrected:
  Dr interest_receivable   9.50
  Cr interest_income       9.50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the ledger tells the truth twice:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;what the system originally believed,&lt;/li&gt;
&lt;li&gt;how that belief was corrected.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That second part matters. A ledger that only stores the latest truth is not an audit trail. It is a mutable cache.&lt;/p&gt;
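The reversal pattern above can be sketched as a small helper. Here `post` is a stand-in for whatever append-only write your ledger exposes; the entry shape and the derived key suffixes are illustrative assumptions:

```python
# Sketch of correction-by-reversal: two appended entries, zero updates.
# "post" is a stand-in for the ledger's append-only write; field names
# and key suffixes are illustrative.
def correct_entry(post, original, corrected_amount_cents):
    # 1. Reverse the original by swapping the debit and credit sides.
    post(
        debit_account=original["credit_account"],
        credit_account=original["debit_account"],
        amount_cents=original["amount_cents"],
        idempotency_key=original["idempotency_key"] + ":reversal",
    )
    # 2. Post the corrected amount in the original direction.
    post(
        debit_account=original["debit_account"],
        credit_account=original["credit_account"],
        amount_cents=corrected_amount_cents,
        idempotency_key=original["idempotency_key"] + ":corrected",
    )
```

Deriving the reversal and correction keys from the original idempotency key keeps the correction itself idempotent: retrying it cannot double-reverse the entry.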

&lt;h2&gt;
  
  
  Where teams usually get hurt
&lt;/h2&gt;

&lt;p&gt;The hardest part of building a credit ledger is not the table design. It is resisting shortcuts that feel harmless.&lt;/p&gt;

&lt;p&gt;Shortcuts like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;storing only customer balances instead of entries,&lt;/li&gt;
&lt;li&gt;treating billing as the moment revenue is earned,&lt;/li&gt;
&lt;li&gt;letting jobs update historical rows,&lt;/li&gt;
&lt;li&gt;using timestamps without separating &lt;code&gt;effective_date&lt;/code&gt; from &lt;code&gt;created_at&lt;/code&gt;,&lt;/li&gt;
&lt;li&gt;allowing ledger writes without source events,&lt;/li&gt;
&lt;li&gt;ignoring idempotency because “the event only fires once,”&lt;/li&gt;
&lt;li&gt;building reconciliation after launch instead of as part of the ledger.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every one of those shortcuts is understandable. Every one becomes expensive.&lt;/p&gt;

&lt;p&gt;The better architecture is boring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Domain events are immutable.
Posting rules are deterministic.
Ledger entries are double-entry.
Ledger writes are append-only.
Corrections are reversals.
Consumers are eventually consistent.
Reconciliation is continuous.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is not overengineering. That is the minimum foundation for money.&lt;/p&gt;
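The double-entry and continuous-reconciliation lines can be sketched as a minimal invariant check: every journal must balance, and replaying all legs yields a trial balance. The journal/leg shape here is an assumption for illustration, not a standard schema:

```python
from collections import defaultdict

# Minimal sketch of the double-entry invariant behind continuous
# reconciliation. A journal is a list of legs; each leg has an
# "account", a "side" ("debit" or "credit"), and "amount_cents".
def journal_balances(journal):
    debits = sum(leg["amount_cents"] for leg in journal if leg["side"] == "debit")
    credits = sum(leg["amount_cents"] for leg in journal if leg["side"] == "credit")
    return debits == credits

def trial_balance(journals):
    # Replay every leg: debits add, credits subtract.
    balances = defaultdict(int)
    for journal in journals:
        for leg in journal:
            sign = 1 if leg["side"] == "debit" else -1
            balances[leg["account"]] += sign * leg["amount_cents"]
    return dict(balances)
```

A continuous reconciliation job would run checks like these over every batch, rather than waiting for month-end close to discover an imbalance.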

&lt;h2&gt;
  
  
  Build versus buy
&lt;/h2&gt;

&lt;p&gt;There is a reason ledger-as-a-service companies and open-source ledgers are getting attention. Formance and Blnk are examples of the community moving toward purpose-built ledger infrastructure instead of every fintech reinventing the same accounting core. Temporal’s work on high-performance ledger patterns also points in the same direction: reliability, replayability, and operational resilience are not optional features.&lt;/p&gt;

&lt;p&gt;My opinion: most fintechs should not casually build a ledger from scratch. If the ledger is not a core differentiator, buying or adopting a proven ledger engine is rational.&lt;/p&gt;

&lt;p&gt;But credit platforms often have enough product-specific accounting behavior that the team still needs to deeply understand the model. Even if you buy the ledger infrastructure, you still own the posting rules. You still own correctness.&lt;/p&gt;

&lt;p&gt;A vendor can provide the engine. It cannot decide your economics.&lt;/p&gt;

&lt;h2&gt;
  
  
  The simple standard
&lt;/h2&gt;

&lt;p&gt;A good credit ledger should let you answer these questions without panic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What does this customer owe right now?&lt;/li&gt;
&lt;li&gt;What did they owe at the end of last month?&lt;/li&gt;
&lt;li&gt;Which event created this entry?&lt;/li&gt;
&lt;li&gt;Did every debit have a matching credit?&lt;/li&gt;
&lt;li&gt;Can we reverse this without deleting history?&lt;/li&gt;
&lt;li&gt;Can we replay the ledger from events?&lt;/li&gt;
&lt;li&gt;Can finance reconcile cash, receivables, income, fees, and adjustments?&lt;/li&gt;
&lt;li&gt;Can auditors see both the original mistake and the correction?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the answer is no, the platform is not ready for scale. It may still work as a product. It may even grow quickly. But the ledger is already putting a ceiling on the company.&lt;/p&gt;

&lt;p&gt;That ceiling always gets lower as volume grows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thought
&lt;/h2&gt;

&lt;p&gt;The right ledger architecture will not make your fintech move slower. It will let you keep moving after the product becomes serious.&lt;/p&gt;

&lt;p&gt;Job-based ledgers help you start. Event-based ledgers help you react. Accrual-based ledgers help you tell the economic truth.&lt;/p&gt;

&lt;p&gt;For credit, that truth matters every day interest is earned, every time a fee is incurred, every time a customer pays, and every time finance needs to close the books.&lt;/p&gt;

&lt;p&gt;This article deliberately stayed inside a single-geography model. Multi-geography ledgers, currency conversion at the ledger layer, local accounting rules, and global consistency are out of scope here.&lt;/p&gt;

&lt;p&gt;That is the next article... I'm tired...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59fpq4m6o08udlcxh3gd.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59fpq4m6o08udlcxh3gd.gif" alt="Im tired" width="500" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;My experience 👨🏻‍💻🇧🇷😀&lt;/li&gt;
&lt;li&gt;Yahoo Finance, &lt;a href="https://finance.yahoo.com/personal-finance/banking/article/synapse-bankruptcy-fintech-safety-183845965.html" rel="noopener noreferrer"&gt;The downfall of Synapse&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Martin Fowler, &lt;a href="https://martinfowler.com/eaaDev/AccountingNarrative.html" rel="noopener noreferrer"&gt;Accounting Narrative&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Microsoft Azure Architecture Center, &lt;a href="https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing" rel="noopener noreferrer"&gt;Event Sourcing pattern&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Mettle, &lt;a href="https://www.mettle.co.uk/blog/innovation-at-mettle-double-entry-and-event-sourcing" rel="noopener noreferrer"&gt;Innovation at Mettle: Double Entry and Event Sourcing&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Temporal, &lt;a href="https://temporal.io/blog/designing-high-performance-financial-ledgers-with-temporal" rel="noopener noreferrer"&gt;Designing high-performance financial ledgers with Temporal&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Apideck, &lt;a href="https://www.apideck.com/blog/money-movement-infrastructure-fintech-ledger-as-a-service" rel="noopener noreferrer"&gt;Money movement infrastructure: fintech ledger as a service&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Martin Fowler, &lt;a href="https://martinfowler.com/eaaDev/AccountingEntry.html" rel="noopener noreferrer"&gt;Accounting Entry&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Formance, &lt;a href="https://github.com/formancehq/ledger" rel="noopener noreferrer"&gt;Open-source programmable ledger&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Blnk Finance, &lt;a href="https://www.blnkfinance.com" rel="noopener noreferrer"&gt;Open-source immutable ledger&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>softwareengineering</category>
      <category>opinion</category>
      <category>fintech</category>
    </item>
    <item>
      <title>Tactical Debt Is the Silent Killer of Engineering Velocity</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Tue, 05 May 2026 09:03:22 +0000</pubDate>
      <link>https://forem.com/pvgomes/tactical-debt-is-the-silent-killer-of-engineering-velocity-gpj</link>
      <guid>https://forem.com/pvgomes/tactical-debt-is-the-silent-killer-of-engineering-velocity-gpj</guid>
      <description>&lt;p&gt;There is a kind of engineering debt that does not show up in static analysis, does not trigger your test suite, and will not be fixed by refactoring a bad class.&lt;/p&gt;

&lt;p&gt;It lives in the way the team operates.&lt;/p&gt;

&lt;p&gt;Nobody owns the service. Deployments need a human ritual. The sprint plan dies on Tuesday because production is on fire again. The one person who understands payments is on vacation and suddenly everyone is politely pretending not to panic.&lt;/p&gt;

&lt;p&gt;This is tactical debt.&lt;/p&gt;

&lt;p&gt;And I think it destroys more engineering velocity than most teams are willing to admit.&lt;/p&gt;

&lt;p&gt;Technical debt gets all the attention because it is visible to engineers. You can point to the messy module. You can complain about the missing tests. You can open an IDE and feel the pain directly.&lt;/p&gt;

&lt;p&gt;Tactical debt is sneakier. It is the boulder chained to the team while everyone is still debating whether the codebase is clean enough.&lt;/p&gt;

&lt;p&gt;The cruel part is that tactical debt absorbs productivity gains.&lt;/p&gt;

&lt;p&gt;You can improve your test suite. You can adopt better tooling. You can hire strong engineers. You can use AI to write code faster.&lt;/p&gt;

&lt;p&gt;But if the path from idea to production is full of unclear ownership, manual steps, approval mazes, status theater, and tribal knowledge, the extra productivity just gets converted into waiting.&lt;/p&gt;

&lt;p&gt;AI may help you write the pull request in twenty minutes.&lt;/p&gt;

&lt;p&gt;Tactical debt makes sure it still takes three weeks to reach production.&lt;/p&gt;

&lt;h2&gt;
  
  
  tactical debt is not technical debt
&lt;/h2&gt;

&lt;p&gt;This distinction matters.&lt;/p&gt;

&lt;p&gt;Technical debt is about the shape and quality of the software system. Tactical debt is about the shape and quality of the operating system around the software system.&lt;/p&gt;

&lt;p&gt;They are related, but they are not the same thing.&lt;/p&gt;

&lt;p&gt;You can have ugly code and a high-velocity team if ownership is clear, deployment is automated, decisions are fast, and operational feedback loops are healthy.&lt;/p&gt;

&lt;p&gt;You can also have a beautifully designed codebase trapped inside an organization where every change requires five meetings, three approvals, a release coordinator, and someone named Rodrigo who is the only person allowed to touch production because of an incident in 2021.&lt;/p&gt;

&lt;p&gt;The code may be clean.&lt;/p&gt;

&lt;p&gt;The team is still slow.&lt;/p&gt;

&lt;p&gt;This is why some technical-debt programs fail. Teams spend a quarter improving internals and then wonder why delivery still feels stuck. The answer is uncomfortable: the bottleneck was never only in the code.&lt;/p&gt;

&lt;p&gt;The bottleneck was in the way work moves.&lt;/p&gt;

&lt;p&gt;Technical debt usually asks: “Is the system easy to change?”&lt;/p&gt;

&lt;p&gt;Tactical debt asks: “Can the team actually get a change safely into the hands of users?”&lt;/p&gt;

&lt;p&gt;Those are different questions. Mature engineering organizations need to care about both.&lt;/p&gt;

&lt;h2&gt;
  
  
  unclear ownership turns small questions into archaeology
&lt;/h2&gt;

&lt;p&gt;A healthy system has an answer to a boring question: who owns this?&lt;/p&gt;

&lt;p&gt;Not in an abstract Confluence sense. In the practical sense.&lt;/p&gt;

&lt;p&gt;Who gets paged? Who approves risky changes? Who understands the business constraints? Who can decide whether this API behavior is intentional or accidental?&lt;/p&gt;

&lt;p&gt;When ownership is unclear, simple work becomes detective work.&lt;/p&gt;

&lt;p&gt;A product manager asks for a small change in a service. The engineer opens the repository, checks the README, sees that it was last meaningfully updated two years ago, asks in Slack, gets three conflicting answers, and eventually discovers that the service was built by a team that no longer exists.&lt;/p&gt;

&lt;p&gt;Now the ticket is not “change this behavior.”&lt;/p&gt;

&lt;p&gt;The ticket is “perform organizational archaeology until someone is brave enough to say yes.”&lt;/p&gt;

&lt;p&gt;This is tactical debt.&lt;/p&gt;

&lt;p&gt;And it is expensive because it taxes every future change, not just the first one.&lt;/p&gt;

&lt;h2&gt;
  
  
  manual work is latency disguised as caution
&lt;/h2&gt;

&lt;p&gt;Manual work often enters the system with good intentions.&lt;/p&gt;

&lt;p&gt;A production deploy needed care. A migration was risky. A release checklist caught one important mistake once. So the team kept the ritual.&lt;/p&gt;

&lt;p&gt;Then the ritual became normal.&lt;/p&gt;

&lt;p&gt;Now a deployment requires someone to click through a checklist at midnight, update a spreadsheet, paste output into a channel, wait for a human approval, run a script from their laptop, and hope the VPN behaves.&lt;/p&gt;

&lt;p&gt;Everyone agrees this is “safer.”&lt;/p&gt;

&lt;p&gt;Often it is not.&lt;/p&gt;

&lt;p&gt;Manual work is inconsistent automation executed by tired humans.&lt;/p&gt;

&lt;p&gt;It adds latency, creates hidden dependencies, and makes releases emotionally expensive. When deployment hurts, teams batch changes. When teams batch changes, releases get riskier. When releases get riskier, the organization adds more manual control.&lt;/p&gt;

&lt;p&gt;The loop feeds itself.&lt;/p&gt;

&lt;p&gt;Good automation is not about moving fast recklessly. It is about making the safe path the easy path.&lt;/p&gt;

&lt;p&gt;If shipping requires heroics, velocity will eventually collapse into negotiation.&lt;/p&gt;

&lt;h2&gt;
  
  
  status meetings are usually a symptom
&lt;/h2&gt;

&lt;p&gt;I am not anti-meeting. Some meetings are useful. Talking to humans is not a failure mode.&lt;/p&gt;

&lt;p&gt;But many status meetings are not coordination. They are compensation.&lt;/p&gt;

&lt;p&gt;They exist because the actual system for communicating progress does not work.&lt;/p&gt;

&lt;p&gt;The board is stale. The tickets are vague. Decisions are buried in private chats. Dependencies are invisible. Nobody trusts async updates. So the organization creates a recurring ceremony where everyone verbally reconstructs reality.&lt;/p&gt;

&lt;p&gt;This is synchronous theater.&lt;/p&gt;

&lt;p&gt;It feels productive because people are talking. But the output is often just temporary shared awareness that expires after lunch.&lt;/p&gt;

&lt;p&gt;The fix is rarely “cancel all meetings.” The fix is making the work legible.&lt;/p&gt;

&lt;p&gt;Clear ownership. Written decisions. Useful tickets. Visible dependencies. Async updates that people actually trust.&lt;/p&gt;

&lt;p&gt;Once the system is legible, meetings can return to what they are good for: judgment, conflict, alignment, and decisions.&lt;/p&gt;

&lt;p&gt;Not reading the Jira board out loud.&lt;/p&gt;

&lt;h2&gt;
  
  
  hero dependencies are not a compliment
&lt;/h2&gt;

&lt;p&gt;Every company has the person.&lt;/p&gt;

&lt;p&gt;The one who understands payments. Or Kubernetes. Or the pricing engine. Or why settlement fails on the last business day of the month in one specific market.&lt;/p&gt;

&lt;p&gt;At first, this person looks like an asset. They are fast, helpful, reliable, and terrifyingly knowledgeable.&lt;/p&gt;

&lt;p&gt;Then they go on vacation.&lt;/p&gt;

&lt;p&gt;Suddenly the whole organization discovers that “we have documentation” meant “we have a Slack thread from March and maybe a diagram in someone’s Google Drive.”&lt;/p&gt;

&lt;p&gt;Hero dependencies are tactical debt with a friendly face.&lt;/p&gt;

&lt;p&gt;They feel good because heroes save the day. They are dangerous because the system learns to depend on being saved.&lt;/p&gt;

&lt;p&gt;The answer is not to punish strong engineers. The answer is to stop turning competence into a single point of failure.&lt;/p&gt;

&lt;p&gt;Pair on risky domains. Rotate operational ownership. Write down the weird parts. Make onboarding to critical systems a real activity. Reward people for making themselves less required, not more indispensable.&lt;/p&gt;

&lt;p&gt;A team that cannot survive one engineer taking a holiday is not high-performing.&lt;/p&gt;

&lt;p&gt;It is fragile.&lt;/p&gt;

&lt;h2&gt;
  
  
  reactive fire drills make planning fictional
&lt;/h2&gt;

&lt;p&gt;A sprint plan is a hypothesis.&lt;/p&gt;

&lt;p&gt;In some teams, it is also fiction.&lt;/p&gt;

&lt;p&gt;Not because engineers are lazy. Not because product is bad at prioritization. But because the organization has accepted constant interruption as normal.&lt;/p&gt;

&lt;p&gt;Production is always on fire. Incidents are frequent. Customer escalations bypass prioritization. Leadership asks for “quick checks” that become full projects. Every week starts with a plan and ends with a pile of emergency context switches.&lt;/p&gt;

&lt;p&gt;This is one of the most brutal forms of tactical debt because it attacks deep work.&lt;/p&gt;

&lt;p&gt;Software engineering needs sustained attention. You cannot design well in twenty-minute fragments between escalations. You cannot reason about distributed systems while half your brain is waiting for the next alert.&lt;/p&gt;

&lt;p&gt;Fire drills also create a nasty management illusion: the team looks busy, responsive, and important.&lt;/p&gt;

&lt;p&gt;But responsiveness is not the same as progress.&lt;/p&gt;

&lt;p&gt;If every week is exceptional, nothing is exceptional. The system is just underdesigned.&lt;/p&gt;

&lt;p&gt;Fixing this usually requires boring operational discipline: incident review, error budgets, better alert quality, platform investment, clearer escalation paths, and the courage to say that not every interruption deserves immediate engineering attention.&lt;/p&gt;

&lt;p&gt;Velocity is not how fast you react to chaos.&lt;/p&gt;

&lt;p&gt;Velocity is how much chaos you no longer create.&lt;/p&gt;

&lt;h2&gt;
  
  
  poor handoffs are where context goes to die
&lt;/h2&gt;

&lt;p&gt;Bad handoffs are incredibly expensive because they usually happen at boundaries: between product and engineering, engineering and QA, development and operations, one team and another team.&lt;/p&gt;

&lt;p&gt;The classic version is “it works on my machine.”&lt;/p&gt;

&lt;p&gt;But the more subtle version is worse.&lt;/p&gt;

&lt;p&gt;A team finishes a project and throws it over the wall with incomplete context. The receiving team gets code but not the assumptions. A migration plan but not the failure modes. An API contract but not the customer promises. A dashboard but not the meaning of the alerts.&lt;/p&gt;

&lt;p&gt;Then everyone acts surprised when the handoff creates rework.&lt;/p&gt;

&lt;p&gt;Good handoffs are not ceremonies. They are context transfer.&lt;/p&gt;

&lt;p&gt;What changed? Why? What tradeoffs were made? What should the next team watch? What is safe to modify? What is load-bearing and ugly but intentional?&lt;/p&gt;

&lt;p&gt;If that information is not transferred, the receiving team pays the tax later.&lt;/p&gt;

&lt;p&gt;Usually during an incident.&lt;/p&gt;

&lt;h2&gt;
  
  
  information silos turn companies into slow databases
&lt;/h2&gt;

&lt;p&gt;Information silos are not just a documentation problem. They are a query problem.&lt;/p&gt;

&lt;p&gt;In a healthy organization, an engineer can ask, “Why does this work this way?” and find a reliable answer.&lt;/p&gt;

&lt;p&gt;In a siloed organization, the answer exists somewhere, but the retrieval mechanism is social luck.&lt;/p&gt;

&lt;p&gt;Maybe it is in a private channel. Maybe it was discussed in a meeting that was never documented. Maybe it lives in the head of someone who moved teams. Maybe it was in an ADR, but the ADR folder has thirteen conflicting versions and no one knows which one matters.&lt;/p&gt;

&lt;p&gt;So engineers query the company like a bad distributed database.&lt;/p&gt;

&lt;p&gt;They ask around. They wait. They get partial answers. They merge gossip with code reading. Then they make a decision with low confidence.&lt;/p&gt;

&lt;p&gt;This is slow, but worse than slow: it makes teams conservative.&lt;/p&gt;

&lt;p&gt;When people cannot understand the system, they avoid changing it. When they must change it, they over-escalate. When they over-escalate, the organization adds process. The boulder gets heavier.&lt;/p&gt;

&lt;p&gt;Documentation helps, but only if it is part of the work, not a guilt ritual after the work.&lt;/p&gt;

&lt;p&gt;A useful rule: if a decision will matter in three months, write it where future engineers will actually look.&lt;/p&gt;

&lt;h2&gt;
  
  
  excess approvals create responsibility without authority
&lt;/h2&gt;

&lt;p&gt;Approvals are seductive.&lt;/p&gt;

&lt;p&gt;They make risk feel managed. They create a trail. They give leadership a sense that important changes are being controlled.&lt;/p&gt;

&lt;p&gt;Sometimes approvals are necessary. Regulated systems, financial flows, security-sensitive changes, and irreversible actions need real governance.&lt;/p&gt;

&lt;p&gt;But excess approvals are different. They are what happens when an organization does not trust its own operating model.&lt;/p&gt;

&lt;p&gt;Instead of giving teams clear guardrails and authority, it inserts humans into every meaningful step.&lt;/p&gt;

&lt;p&gt;Architecture review. Security review. Platform review. Product review. Release review. Manager review. Sometimes each one is valid in isolation. Together, they create a system where everyone is responsible and nobody is empowered.&lt;/p&gt;

&lt;p&gt;The cost is not only waiting time.&lt;/p&gt;

&lt;p&gt;The cost is learned helplessness.&lt;/p&gt;

&lt;p&gt;Engineers stop making decisions because decisions will be relitigated anyway. Teams optimize for approval rather than outcomes. Documents become defensive. Meetings become political. Velocity becomes the speed at which consensus can be manufactured.&lt;/p&gt;

&lt;p&gt;Good governance should make the safe decisions obvious and the risky decisions explicit.&lt;/p&gt;

&lt;p&gt;Bad governance makes every decision slow.&lt;/p&gt;

&lt;h2&gt;
  
  
  tribal knowledge is a loan with terrible interest
&lt;/h2&gt;

&lt;p&gt;Tribal knowledge is not automatically bad. Every team has local knowledge. Some things are easier to learn from people than from documents.&lt;/p&gt;

&lt;p&gt;The problem starts when tribal knowledge becomes the primary storage layer for critical information.&lt;/p&gt;

&lt;p&gt;Why do we deploy on Thursdays? Ask Mariana.&lt;/p&gt;

&lt;p&gt;Why does this job run twice? Ask Ahmed.&lt;/p&gt;

&lt;p&gt;Why is this field nullable? Ask the person who left last year.&lt;/p&gt;

&lt;p&gt;At that point, the organization is not saving time by avoiding documentation. It is taking a loan.&lt;/p&gt;

&lt;p&gt;The interest is paid by every future engineer who has to rediscover the same context.&lt;/p&gt;

&lt;p&gt;AI makes this more interesting, and not always in a good way.&lt;/p&gt;

&lt;p&gt;Generated code can move faster than organizational memory. If your team cannot explain why the system works the way it does, AI will happily produce changes that look reasonable and violate assumptions nobody wrote down.&lt;/p&gt;

&lt;p&gt;The problem is not that AI is bad.&lt;/p&gt;

&lt;p&gt;The problem is that AI amplifies the quality of the surrounding system.&lt;/p&gt;

&lt;p&gt;If the surrounding system is full of tribal knowledge, unclear ownership, and manual release paths, AI will not save velocity. It will generate more work waiting to get stuck.&lt;/p&gt;

&lt;h2&gt;
  
  
  why tactical debt compounds
&lt;/h2&gt;

&lt;p&gt;The dangerous thing about tactical debt is that each piece reinforces the others.&lt;/p&gt;

&lt;p&gt;Unclear ownership creates more meetings.&lt;/p&gt;

&lt;p&gt;Manual deployments require more approvals.&lt;/p&gt;

&lt;p&gt;Hero dependencies create poor handoffs.&lt;/p&gt;

&lt;p&gt;Poor handoffs create fire drills.&lt;/p&gt;

&lt;p&gt;Fire drills prevent documentation.&lt;/p&gt;

&lt;p&gt;Missing documentation strengthens tribal knowledge.&lt;/p&gt;

&lt;p&gt;Tribal knowledge makes ownership harder to clarify.&lt;/p&gt;

&lt;p&gt;And around we go.&lt;/p&gt;

&lt;p&gt;This is why tactical debt feels normal from the inside. It does not arrive as one catastrophic decision. It accumulates as reasonable local compromises.&lt;/p&gt;

&lt;p&gt;One manual step.&lt;/p&gt;

&lt;p&gt;One exception process.&lt;/p&gt;

&lt;p&gt;One undocumented decision.&lt;/p&gt;

&lt;p&gt;One “let’s just ask Ana.”&lt;/p&gt;

&lt;p&gt;One urgent escalation that bypasses the roadmap.&lt;/p&gt;

&lt;p&gt;None of these feel fatal. Together, they become the operating system of the team.&lt;/p&gt;

&lt;p&gt;Then leadership asks why engineering velocity is down.&lt;/p&gt;

&lt;p&gt;The answer is chained to the team in plain sight.&lt;/p&gt;

&lt;h2&gt;
  
  
  the fix is operational design
&lt;/h2&gt;

&lt;p&gt;You do not fix tactical debt by telling engineers to “move faster.”&lt;/p&gt;

&lt;p&gt;You fix it by redesigning how work flows.&lt;/p&gt;

&lt;p&gt;A few questions are more useful than most maturity models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can we name the owner of every production service?&lt;/li&gt;
&lt;li&gt;Can a normal change reach production without heroics?&lt;/li&gt;
&lt;li&gt;Can someone understand the current state of work without attending a meeting?&lt;/li&gt;
&lt;li&gt;Can critical people take vacation without the team freezing?&lt;/li&gt;
&lt;li&gt;Do incidents produce system improvements, or just exhaustion?&lt;/li&gt;
&lt;li&gt;Are decisions written where future engineers will find them?&lt;/li&gt;
&lt;li&gt;Are approvals protecting real risk, or compensating for missing trust?&lt;/li&gt;
&lt;li&gt;Can a new engineer safely change an important system without a treasure hunt?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the answer to most of these is no, the problem is not motivation.&lt;/p&gt;

&lt;p&gt;It is tactical debt.&lt;/p&gt;

&lt;p&gt;And the best teams treat it as engineering work.&lt;/p&gt;

&lt;p&gt;They automate the manual path. They clarify ownership. They make decisions searchable. They reduce approval scope. They rotate knowledge. They improve handoffs. They measure interruption. They design communication channels instead of letting them decay into notification soup.&lt;/p&gt;

&lt;p&gt;This work is not glamorous. It rarely produces a launch announcement. Nobody gets promoted because the deployment checklist got deleted.&lt;/p&gt;

&lt;p&gt;But this is the work that makes every future feature cheaper.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI does not remove the boulder
&lt;/h2&gt;

&lt;p&gt;This is the part I think many companies are about to learn painfully.&lt;/p&gt;

&lt;p&gt;AI can increase code production. It can help write tests, generate scaffolding, explain unfamiliar code, draft migrations, and accelerate repetitive engineering tasks.&lt;/p&gt;

&lt;p&gt;That is useful.&lt;/p&gt;

&lt;p&gt;But velocity is not code output.&lt;/p&gt;

&lt;p&gt;Velocity is validated change delivered safely to users.&lt;/p&gt;

&lt;p&gt;If your organization cannot absorb change, faster code generation just creates a larger queue in front of the same bottlenecks.&lt;/p&gt;

&lt;p&gt;More pull requests waiting for unclear owners.&lt;/p&gt;

&lt;p&gt;More generated changes waiting for manual deployment.&lt;/p&gt;

&lt;p&gt;More code paths nobody understands because the original assumptions lived in someone’s head.&lt;/p&gt;

&lt;p&gt;More review burden on the same heroes.&lt;/p&gt;

&lt;p&gt;More incidents because the system got changed faster than the operating model improved.&lt;/p&gt;

&lt;p&gt;AI does not magically fix tactical debt. In many teams, it will expose it.&lt;/p&gt;

&lt;p&gt;That is not a reason to avoid AI. It is a reason to stop pretending that engineering productivity is only about typing speed.&lt;/p&gt;

&lt;p&gt;The bottleneck was rarely the keyboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  remove weight before adding horsepower
&lt;/h2&gt;

&lt;p&gt;If a team is chained to a boulder, buying a faster bicycle is not the first-order fix.&lt;/p&gt;

&lt;p&gt;You need to remove weight.&lt;/p&gt;

&lt;p&gt;Not all at once. Not with a grand transformation program that creates three more steering committees and somehow makes the boulder larger.&lt;/p&gt;

&lt;p&gt;Start smaller and sharper.&lt;/p&gt;

&lt;p&gt;Pick one painful release process and automate it.&lt;/p&gt;

&lt;p&gt;Pick one orphaned service and assign real ownership.&lt;/p&gt;

&lt;p&gt;Pick one recurring status meeting and replace it with a written operating rhythm people trust.&lt;/p&gt;

&lt;p&gt;Pick one hero dependency and deliberately spread the knowledge.&lt;/p&gt;

&lt;p&gt;Pick one approval step and ask what risk it actually controls.&lt;/p&gt;

&lt;p&gt;Pick one incident pattern and make it less likely to happen again.&lt;/p&gt;

&lt;p&gt;The point is not process minimalism for its own sake. Some process is good. Adults need coordination.&lt;/p&gt;

&lt;p&gt;The point is to make the organization lighter.&lt;/p&gt;

&lt;p&gt;Because engineering velocity is not only a property of engineers. It is a property of the system engineers work inside.&lt;/p&gt;

&lt;p&gt;And tactical debt is what happens when that system quietly gets heavier every month.&lt;/p&gt;

&lt;p&gt;Clean code helps.&lt;/p&gt;

&lt;p&gt;AI helps.&lt;/p&gt;

&lt;p&gt;Better tools help.&lt;/p&gt;

&lt;p&gt;But if the boulder stays chained to the team, every productivity gain will be absorbed by the same old drag.&lt;/p&gt;

&lt;p&gt;The best engineering organizations I have seen are not the ones with zero debt. That does not exist.&lt;/p&gt;

&lt;p&gt;They are the ones that can tell the difference between code that needs refactoring and an operating model that needs repair.&lt;/p&gt;

&lt;p&gt;Then they repair both.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Martin Fowler, &lt;a href="https://martinfowler.com/bliki/TechnicalDebt.html" rel="noopener noreferrer"&gt;TechnicalDebt&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Google SRE Book, &lt;a href="https://sre.google/sre-book/postmortem-culture/" rel="noopener noreferrer"&gt;Postmortem Culture: Learning from Failure&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Accelerate / DORA research, &lt;a href="https://dora.dev/guides/dora-metrics-four-keys/" rel="noopener noreferrer"&gt;Four Keys metrics&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Team Topologies, &lt;a href="https://teamtopologies.com/" rel="noopener noreferrer"&gt;Team Topologies patterns&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>opinion</category>
      <category>devops</category>
    </item>
    <item>
      <title>Pod-Level Resources Are Kubernetes Admitting Containers Were the Wrong Accounting Unit</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Tue, 05 May 2026 00:03:42 +0000</pubDate>
      <link>https://forem.com/pvgomes/pod-level-resources-are-kubernetes-admitting-containers-were-the-wrong-accounting-unit-1id4</link>
      <guid>https://forem.com/pvgomes/pod-level-resources-are-kubernetes-admitting-containers-were-the-wrong-accounting-unit-1id4</guid>
      <description>&lt;p&gt;Kubernetes has a funny habit of announcing very practical features that accidentally say something philosophical.&lt;/p&gt;

&lt;p&gt;Pod-level resource management is one of those.&lt;/p&gt;

&lt;p&gt;On paper, the Kubernetes v1.36 updates are straightforward: &lt;a href="https://kubernetes.io/blog/2026/05/01/kubernetes-v1-36-feature-pod-level-resource-managers-alpha/" rel="noopener noreferrer"&gt;pod-level resource managers are now alpha&lt;/a&gt;, &lt;a href="https://kubernetes.io/blog/2026/04/30/kubernetes-v1-36-inplace-pod-level-resources-beta/" rel="noopener noreferrer"&gt;in-place vertical scaling for pod-level resources is beta&lt;/a&gt;, and the kubelet is getting better at treating the pod as a shared resource boundary instead of only a bag of independent containers.&lt;/p&gt;

&lt;p&gt;That sounds like release-note plumbing.&lt;/p&gt;

&lt;p&gt;But I think the bigger story is this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Kubernetes is quietly admitting that the container was never quite the right accounting unit for modern workloads.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not the wrong isolation unit. Not the wrong packaging unit. Containers are still extremely useful.&lt;/p&gt;

&lt;p&gt;But for budgeting CPU, memory, locality, sidecars, and operational responsibility, the pod is increasingly the unit that actually matches reality.&lt;/p&gt;

&lt;p&gt;And once you see that, a lot of current platform pain starts making more sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  containers were the clean story
&lt;/h2&gt;

&lt;p&gt;The clean version of the container story was beautiful.&lt;/p&gt;

&lt;p&gt;A service goes in a container. The container declares its CPU and memory. The scheduler finds a node. The kubelet enforces limits. Everyone pretends this is tidy.&lt;/p&gt;

&lt;p&gt;For simple workloads, it mostly is.&lt;/p&gt;

&lt;p&gt;The problem is that production pods are not always simple workloads anymore. They are little neighborhoods.&lt;/p&gt;

&lt;p&gt;You have the main application container. Then maybe a service mesh sidecar. A log shipper. A metrics exporter. A backup helper. An init container. A data loader. Some agent-ish helper process that definitely started as “temporary” and is now load-bearing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhzgu3hiae5nlit5zn7l.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhzgu3hiae5nlit5zn7l.gif" alt="everything is fine in the pod" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If every container needs its own resource story, the accounting gets weird fast.&lt;/p&gt;

&lt;p&gt;The main workload may need exclusive CPUs, NUMA alignment, predictable memory, or tight latency behavior. The sidecars often do not. They need enough room to not fall over, but dedicating the same kind of premium resource treatment to every little helper container is wasteful.&lt;/p&gt;

&lt;p&gt;Before pod-level resource managers, Kubernetes made that tradeoff awkward for performance-sensitive pods. If you wanted the pod to land in the right QoS and topology behavior, you could end up over-specifying resources for sidecars just to keep the whole thing eligible for the performance guarantees the primary workload needed.&lt;/p&gt;
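
&lt;p&gt;A concrete sketch of that paperwork (container names and numbers are illustrative): to land in the Guaranteed QoS class, every container in the pod, sidecars included, has to pin requests equal to limits.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Guaranteed QoS requires requests == limits in every container
containers:
- name: database            # the container that actually needs guarantees
  resources:
    requests: { cpu: "4", memory: 8Gi }
    limits:   { cpu: "4", memory: 8Gi }
- name: metrics-exporter    # needs almost nothing, pays premium anyway
  resources:
    requests: { cpu: "1", memory: 1Gi }
    limits:   { cpu: "1", memory: 1Gi }
&lt;/code&gt;&lt;/pre&gt;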

&lt;p&gt;That is not elegant resource management.&lt;/p&gt;

&lt;p&gt;That is paperwork.&lt;/p&gt;

&lt;h2&gt;
  
  
  the pod is the real budget envelope
&lt;/h2&gt;

&lt;p&gt;The useful mental model is simple: the pod is becoming the budget envelope.&lt;/p&gt;

&lt;p&gt;Kubernetes v1.36 pod-level resource managers extend CPU, memory, and topology management so the kubelet can reason about &lt;code&gt;.spec.resources&lt;/code&gt; at the pod level. Instead of treating every container as a totally separate accounting island, Kubernetes can allocate a pod-level budget and then let different containers consume from that envelope in more flexible ways.&lt;/p&gt;
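
&lt;p&gt;A minimal sketch of that envelope (names, images, and values are hypothetical, and the exact shape may shift while the feature is alpha):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Pod
metadata:
  name: budget-envelope-demo
spec:
  resources:                # pod-level budget: the shared envelope
    requests: { cpu: "2", memory: 2Gi }
    limits:   { cpu: "2", memory: 2Gi }
  containers:
  - name: app               # primary workload draws from the envelope
    image: example.com/app:latest
  - name: log-shipper       # sidecar shares whatever is left
    image: example.com/shipper:latest
&lt;/code&gt;&lt;/pre&gt;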

&lt;p&gt;For a latency-sensitive database pod, that means the database container can get the exclusive, NUMA-aligned slices it actually needs, while metrics or backup sidecars share from the pod’s remaining pool.&lt;/p&gt;

&lt;p&gt;For an ML training pod, the training container can get the serious locality and CPU treatment, while a service mesh sidecar can stay in a more generic shared pool.&lt;/p&gt;

&lt;p&gt;That distinction matters because it separates two ideas we often accidentally merge:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;which container needs premium placement or isolation&lt;/li&gt;
&lt;li&gt;how much total budget the workload should be allowed to consume&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Those are not the same question.&lt;/p&gt;

&lt;p&gt;The container is still a useful boundary for packaging and process-level isolation. But the pod is often the better boundary for cost, scheduling intent, performance shape, and ownership.&lt;/p&gt;

&lt;p&gt;That is the part platform teams should care about.&lt;/p&gt;

&lt;h2&gt;
  
  
  sidecars made the old model leak
&lt;/h2&gt;

&lt;p&gt;Sidecars are the obvious pressure point here.&lt;/p&gt;

&lt;p&gt;A lot of Kubernetes architecture assumes sidecars are auxiliary, but operationally they are not free. They consume CPU. They consume memory. They affect startup time. They participate in failure modes. They turn a “single service” into a small distributed system inside one pod.&lt;/p&gt;

&lt;p&gt;Service meshes made this visible years ago. AI workloads are making it louder.&lt;/p&gt;

&lt;p&gt;An AI-era workload may include data movement, model serving, telemetry, policy checks, local caches, sandbox helpers, and control processes around the actual thing the business cares about. Treating each container as if it deserves a fully independent resource negotiation can be both too rigid and too noisy.&lt;/p&gt;

&lt;p&gt;The pod-level model is basically Kubernetes saying: yes, we know these containers are related. Yes, we know some are more important than others. Yes, we know the old per-container model made people choose between performance guarantees and waste.&lt;/p&gt;

&lt;p&gt;Good.&lt;/p&gt;

&lt;p&gt;Because the platform should model the shape of the work, not force the work to cosplay as a simpler architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  in-place resizing makes this more than a cleaner YAML shape
&lt;/h2&gt;

&lt;p&gt;The other v1.36 signal is in-place vertical scaling for pod-level resources moving to beta.&lt;/p&gt;

&lt;p&gt;That matters because resource accounting is not only a deployment-time problem. Workloads change while they are running.&lt;/p&gt;

&lt;p&gt;If a pod has a shared CPU budget and demand increases, being able to adjust the pod-level envelope without recreating the whole pod is a much better operational primitive. Kubernetes can track resize conditions, check node feasibility, and coordinate cgroup updates without forcing every change through the old “kill it and let the replacement be different” path.&lt;/p&gt;

&lt;p&gt;This is especially useful for pods where containers inherit their effective boundaries from the pod-level budget. Instead of recalculating every container limit by hand, the platform can grow or shrink the shared envelope.&lt;/p&gt;
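
&lt;p&gt;As a sketch, growing the envelope could look like a patch against the pod’s resize subresource (beta semantics; the pod name and values are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# resize.yaml: grow the shared pod-level envelope in place
spec:
  resources:
    requests: { cpu: "3", memory: 3Gi }
    limits:   { cpu: "3", memory: 3Gi }

# applied without recreating the pod, e.g.:
#   kubectl patch pod budget-envelope-demo --subresource resize --patch-file resize.yaml
&lt;/code&gt;&lt;/pre&gt;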

&lt;p&gt;That sounds small until you operate systems where restart behavior is expensive, state is warm, caches matter, or the workload is tied to GPU allocation and data locality.&lt;/p&gt;

&lt;p&gt;Then it sounds like basic hygiene.&lt;/p&gt;

&lt;p&gt;The interesting future piece is Vertical Pod Autoscaler integration. Once VPA can recommend and actuate pod-level changes more naturally, the platform starts getting closer to how people actually describe capacity needs:&lt;/p&gt;

&lt;p&gt;“This workload needs more room.”&lt;/p&gt;

&lt;p&gt;Not:&lt;/p&gt;

&lt;p&gt;“Please individually adjust seven containers, three of which exist because our observability stack had opinions.”&lt;/p&gt;

&lt;h2&gt;
  
  
  this is also a cost conversation
&lt;/h2&gt;

&lt;p&gt;Resource units are never just technical units. They become accounting units.&lt;/p&gt;

&lt;p&gt;That is why this feature feels bigger than it looks.&lt;/p&gt;

&lt;p&gt;If your cost model is container-centric but your ownership model is pod-centric, reporting gets weird. If your autoscaling is container-centric but your performance objective is pod-centric, tuning gets weird. If your platform charges teams for usage, and every sidecar is part of a shared platform decision, the fairness conversation gets weird.&lt;/p&gt;

&lt;p&gt;Who pays for the service mesh sidecar?&lt;/p&gt;

&lt;p&gt;The application team because it is in their pod?&lt;/p&gt;

&lt;p&gt;The platform team because they required it?&lt;/p&gt;

&lt;p&gt;The security team because mutual TLS was mandatory?&lt;/p&gt;

&lt;p&gt;This sounds like finance trivia until cloud bills arrive with enough zeros to make everyone suddenly philosophical.&lt;/p&gt;

&lt;p&gt;Pod-level resources will not solve chargeback by themselves. But they do point toward a better abstraction: budget the workload as the thing a team actually owns, then make internal container-level details visible without pretending they are the main economic contract.&lt;/p&gt;

&lt;p&gt;That is healthier.&lt;/p&gt;

&lt;h2&gt;
  
  
  the container is not dead; it just got demoted
&lt;/h2&gt;

&lt;p&gt;I know the internet likes clean replacement stories.&lt;/p&gt;

&lt;p&gt;VMs are dead. Containers are dead. Kubernetes is dead. Serverless is dead. Everything is dead except whatever vendor keynote starts in five minutes.&lt;/p&gt;

&lt;p&gt;Reality is more boring and more interesting.&lt;/p&gt;

&lt;p&gt;Containers are not dead. They are still the packaging and runtime primitive that made modern platform engineering possible.&lt;/p&gt;

&lt;p&gt;But the industry is learning, again, that packaging units, security units, scheduling units, ownership units, and billing units do not have to be identical.&lt;/p&gt;

&lt;p&gt;In fact, when they are forced to be identical, systems get awkward.&lt;/p&gt;

&lt;p&gt;The pod was always Kubernetes’ way of saying “these containers belong together.” What is changing is that more of Kubernetes resource management is catching up with that original idea.&lt;/p&gt;

&lt;p&gt;The pod is not just a deployment convenience. It is becoming the place where the platform expresses workload economics.&lt;/p&gt;

&lt;h2&gt;
  
  
  platform teams should pay attention now
&lt;/h2&gt;

&lt;p&gt;Because this is alpha and beta territory, I would not rush to rebuild production assumptions around it tomorrow morning.&lt;/p&gt;

&lt;p&gt;But I would absolutely start paying attention.&lt;/p&gt;

&lt;p&gt;The direction is clear:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;pods are getting stronger as resource boundaries&lt;/li&gt;
&lt;li&gt;sidecar-heavy workloads need less wasteful accounting&lt;/li&gt;
&lt;li&gt;in-place resizing is becoming a more serious operational tool&lt;/li&gt;
&lt;li&gt;performance-sensitive workloads need pod-aware locality and isolation&lt;/li&gt;
&lt;li&gt;AI infrastructure will make these problems more common, not less&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you run a platform, this is the kind of Kubernetes evolution that matters more than a flashy new abstraction. It is not glamorous. It is the scheduler, kubelet, cgroups, QoS, and topology slowly becoming less naive about how production workloads are shaped.&lt;/p&gt;

&lt;p&gt;That is usually where the real platform shifts happen.&lt;/p&gt;

&lt;p&gt;Not in the keynote.&lt;/p&gt;

&lt;p&gt;In the accounting model.&lt;/p&gt;

&lt;p&gt;And right now Kubernetes seems to be moving the accounting model one level up.&lt;/p&gt;

&lt;p&gt;That is a small API change with a very large smell of inevitability.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>cloud</category>
      <category>ai</category>
    </item>
    <item>
      <title>Hyped and Overhyped Programming Languages in 2026</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Mon, 04 May 2026 14:05:42 +0000</pubDate>
      <link>https://forem.com/pvgomes/hyped-and-overhyped-programming-languages-in-2026-1ahc</link>
      <guid>https://forem.com/pvgomes/hyped-and-overhyped-programming-languages-in-2026-1ahc</guid>
      <description>&lt;p&gt;Every few years the industry rediscovers that programming languages are not religions.&lt;/p&gt;

&lt;p&gt;Then we immediately behave like they are religions.&lt;/p&gt;

&lt;p&gt;Someone posts a benchmark. Someone else says memory safety. Someone says developer experience. A distributed systems person appears from under a bridge and whispers “Erlang solved this in 1998.” A startup founder announces they are rewriting their CRUD app in Rust because “performance.” A senior engineer quietly opens another Java service and gets paid.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhjbc01d3kndr3k0yprh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhjbc01d3kndr3k0yprh.gif" alt="hype train leaving the station" width="370" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So let’s talk honestly about programming language hype in 2026.&lt;/p&gt;

&lt;p&gt;Not “which language should I learn?”&lt;/p&gt;

&lt;p&gt;That question is usually a proxy for anxiety, not engineering strategy.&lt;/p&gt;

&lt;p&gt;The better question is:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;which languages are getting real adoption for new work, which ones are quietly useful, and which ones are mostly powered by conference talks, nostalgia, or logo slides?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My bias up front: a language being used by a big company does not mean the language is growing. It often just means the company has old systems, large teams, and enough money to keep an ecosystem alive inside its own walls.&lt;/p&gt;

&lt;p&gt;That distinction matters.&lt;/p&gt;

&lt;p&gt;Scala at LinkedIn, Ruby at Shopify, PHP at Meta historically, Erlang at Ericsson, COBOL at banks — all real. Also not the same signal.&lt;/p&gt;

&lt;p&gt;A big logo means “someone important has code in this language.”&lt;/p&gt;

&lt;p&gt;It does &lt;strong&gt;not&lt;/strong&gt; mean “you should start a new project in it in 2026.”&lt;/p&gt;

&lt;h2&gt;
  
  
  the signals that actually matter
&lt;/h2&gt;

&lt;p&gt;When I look at a language, I care about four signals:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Hiring volume&lt;/strong&gt;: are companies hiring for it outside of a few specialist niches?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;New project energy&lt;/strong&gt;: are people choosing it for greenfield work?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ecosystem growth&lt;/strong&gt;: libraries, tooling, docs, package quality, deployment paths.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational fit&lt;/strong&gt;: does it make production easier, or just make the code look impressive during review?&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Logo collection is much lower on the list.&lt;/p&gt;

&lt;p&gt;This is where a lot of language debates go wrong. Engineers love saying “Company X uses language Y.” Sure. Company X also has internal frameworks, staff engineers, migration budgets, old decisions, and a Slack channel called &lt;code&gt;#why-is-this-still-running&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You are not Company X.&lt;/p&gt;

&lt;h2&gt;
  
  
  the 2026 scoreboard
&lt;/h2&gt;

&lt;p&gt;Here is the rough landscape I would use when talking to an engineering team in 2026. The status is intentionally opinionated, not a scientific taxonomy.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;2026 status&lt;/th&gt;
&lt;th&gt;What it is used for&lt;/th&gt;
&lt;th&gt;Notable companies / ecosystems&lt;/th&gt;
&lt;th&gt;My read&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Python&lt;/td&gt;
&lt;td&gt;📈 Dominant&lt;/td&gt;
&lt;td&gt;AI, data, backend, automation&lt;/td&gt;
&lt;td&gt;Google, Meta, OpenAI&lt;/td&gt;
&lt;td&gt;The default language of AI-era glue. Not elegant everywhere, but unavoidable.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;TypeScript&lt;/td&gt;
&lt;td&gt;📈 Dominant&lt;/td&gt;
&lt;td&gt;Web apps, full-stack apps, tooling&lt;/td&gt;
&lt;td&gt;Microsoft, Slack, Airbnb&lt;/td&gt;
&lt;td&gt;The web won. TypeScript is JavaScript after it went to therapy.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JavaScript&lt;/td&gt;
&lt;td&gt;🧊 Ubiquitous&lt;/td&gt;
&lt;td&gt;Web, scripting, edge runtimes&lt;/td&gt;
&lt;td&gt;Everyone, unfortunately and fortunately&lt;/td&gt;
&lt;td&gt;Still everywhere, but serious teams increasingly want TypeScript.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Java&lt;/td&gt;
&lt;td&gt;🧊 Strong&lt;/td&gt;
&lt;td&gt;Enterprise backend, Android legacy, data systems&lt;/td&gt;
&lt;td&gt;Amazon, Uber, Netflix&lt;/td&gt;
&lt;td&gt;Boring, employable, fast enough, operationally understood. Never bet against boring.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C#&lt;/td&gt;
&lt;td&gt;🧊 Strong&lt;/td&gt;
&lt;td&gt;Enterprise, gaming, backend&lt;/td&gt;
&lt;td&gt;Microsoft, Unity ecosystem&lt;/td&gt;
&lt;td&gt;Quietly excellent if you live in the Microsoft universe.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Go&lt;/td&gt;
&lt;td&gt;📈 Strong&lt;/td&gt;
&lt;td&gt;Cloud backend, infra, CLIs&lt;/td&gt;
&lt;td&gt;Google, Uber, Dropbox&lt;/td&gt;
&lt;td&gt;Still one of the best choices when you want boring concurrency and cheap operations.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rust&lt;/td&gt;
&lt;td&gt;📈 Rising&lt;/td&gt;
&lt;td&gt;Systems, infra, security-sensitive services&lt;/td&gt;
&lt;td&gt;Microsoft, Cloudflare, Amazon&lt;/td&gt;
&lt;td&gt;Real adoption, real value, also real overuse in places that wanted Go.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kotlin&lt;/td&gt;
&lt;td&gt;📈 Strong&lt;/td&gt;
&lt;td&gt;Android, backend&lt;/td&gt;
&lt;td&gt;Google, Pinterest, Square&lt;/td&gt;
&lt;td&gt;Excellent language, but its hype is tied to Android and JVM shops.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Swift&lt;/td&gt;
&lt;td&gt;🧊 Strong niche&lt;/td&gt;
&lt;td&gt;Apple apps, some server-side experiments&lt;/td&gt;
&lt;td&gt;Apple ecosystem&lt;/td&gt;
&lt;td&gt;Great if your world is Apple. Less relevant outside it.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C++&lt;/td&gt;
&lt;td&gt;🧊 Strong&lt;/td&gt;
&lt;td&gt;Performance systems, games, infra&lt;/td&gt;
&lt;td&gt;Google, Meta, Adobe&lt;/td&gt;
&lt;td&gt;Still critical. Still dangerous. Still paying mortgages.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C&lt;/td&gt;
&lt;td&gt;🧊 Strong&lt;/td&gt;
&lt;td&gt;OS, embedded, runtimes&lt;/td&gt;
&lt;td&gt;Linux Foundation, Intel, AMD&lt;/td&gt;
&lt;td&gt;The floorboards of computing. You do not hype floorboards; you depend on them.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Zig&lt;/td&gt;
&lt;td&gt;📈 Emerging&lt;/td&gt;
&lt;td&gt;Systems, tooling, C replacement experiments&lt;/td&gt;
&lt;td&gt;Bun ecosystem, systems hackers&lt;/td&gt;
&lt;td&gt;Interesting and pragmatic. Early, but not vapor.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mojo&lt;/td&gt;
&lt;td&gt;🔥 Hyped early&lt;/td&gt;
&lt;td&gt;AI kernels, Python-adjacent performance&lt;/td&gt;
&lt;td&gt;Modular ecosystem&lt;/td&gt;
&lt;td&gt;Promising, but adoption signal is still tiny compared with the noise.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gleam&lt;/td&gt;
&lt;td&gt;🌱 Emerging niche&lt;/td&gt;
&lt;td&gt;Typed BEAM services&lt;/td&gt;
&lt;td&gt;BEAM community&lt;/td&gt;
&lt;td&gt;Small but genuinely tasteful. I like it more than the market currently does.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Elixir&lt;/td&gt;
&lt;td&gt;🧊 Niche&lt;/td&gt;
&lt;td&gt;Distributed systems, realtime apps&lt;/td&gt;
&lt;td&gt;Discord, PepsiCo, Bleacher Report&lt;/td&gt;
&lt;td&gt;Small, serious, productive. Not mainstream, and that is fine.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Erlang&lt;/td&gt;
&lt;td&gt;🧊 Niche&lt;/td&gt;
&lt;td&gt;Telecom, messaging, fault-tolerant systems&lt;/td&gt;
&lt;td&gt;Ericsson, WhatsApp, Klarna&lt;/td&gt;
&lt;td&gt;Less fashionable than Elixir, still ridiculously good at what it was built for.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Clojure&lt;/td&gt;
&lt;td&gt;🧊 Niche&lt;/td&gt;
&lt;td&gt;Backend, data pipelines&lt;/td&gt;
&lt;td&gt;Walmart, Nubank, CircleCI&lt;/td&gt;
&lt;td&gt;A small language used by people who tend to know exactly why they chose it.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Haskell&lt;/td&gt;
&lt;td&gt;🧊 Niche&lt;/td&gt;
&lt;td&gt;Finance, compilers, verification-heavy systems&lt;/td&gt;
&lt;td&gt;Standard Chartered, Meta, IOHK&lt;/td&gt;
&lt;td&gt;Brilliant, intimidating, not a general hiring strategy.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scala&lt;/td&gt;
&lt;td&gt;📉 Declining hype&lt;/td&gt;
&lt;td&gt;Data platforms, JVM services&lt;/td&gt;
&lt;td&gt;LinkedIn, Twitter legacy, Airbnb&lt;/td&gt;
&lt;td&gt;Important historically. Hard sell for greenfield backend work now.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ruby&lt;/td&gt;
&lt;td&gt;📉 Declining hype&lt;/td&gt;
&lt;td&gt;Web apps, Rails products&lt;/td&gt;
&lt;td&gt;Shopify, GitHub, Basecamp&lt;/td&gt;
&lt;td&gt;Still productive. Less fashionable. Rails is mature, not dead.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PHP&lt;/td&gt;
&lt;td&gt;📉 Declining hype&lt;/td&gt;
&lt;td&gt;Web, CMS, Laravel, WordPress&lt;/td&gt;
&lt;td&gt;Wikipedia, WordPress, Meta historically&lt;/td&gt;
&lt;td&gt;The internet’s old plumbing. Mock it carefully; it is probably serving your page.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dart&lt;/td&gt;
&lt;td&gt;📉 Mixed&lt;/td&gt;
&lt;td&gt;Flutter apps&lt;/td&gt;
&lt;td&gt;Google, Alibaba, BMW&lt;/td&gt;
&lt;td&gt;Flutter keeps it alive. Outside Flutter, the air gets thin fast.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;R&lt;/td&gt;
&lt;td&gt;🧊 Niche&lt;/td&gt;
&lt;td&gt;Statistics, research, pharma&lt;/td&gt;
&lt;td&gt;Pfizer, Novartis, Roche&lt;/td&gt;
&lt;td&gt;Still strong where statisticians, not backend engineers, run the room.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MATLAB&lt;/td&gt;
&lt;td&gt;🧊 Niche&lt;/td&gt;
&lt;td&gt;Engineering, simulation&lt;/td&gt;
&lt;td&gt;NASA, Siemens, Boeing&lt;/td&gt;
&lt;td&gt;Expensive, domain-specific, deeply entrenched.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Groovy&lt;/td&gt;
&lt;td&gt;📉 Declining&lt;/td&gt;
&lt;td&gt;Build scripts, Gradle history&lt;/td&gt;
&lt;td&gt;Gradle, Atlassian, Netflix historically&lt;/td&gt;
&lt;td&gt;Mostly not where new language energy is going.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Perl&lt;/td&gt;
&lt;td&gt;🪦 Legacy&lt;/td&gt;
&lt;td&gt;Legacy infra, text processing&lt;/td&gt;
&lt;td&gt;Booking.com legacy, IMDb, cPanel&lt;/td&gt;
&lt;td&gt;The duct tape is still there. Please do not add more tape.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;COBOL&lt;/td&gt;
&lt;td&gt;🪦 Critical legacy&lt;/td&gt;
&lt;td&gt;Banking, insurance, mainframes&lt;/td&gt;
&lt;td&gt;JPMorgan Chase, Bank of America, IBM clients&lt;/td&gt;
&lt;td&gt;Not hype. More like archaeological load-bearing concrete.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The shape is clear: Python and TypeScript are the center of gravity, Java remains the enterprise cockroach in the best possible way, Go is still extremely sensible, and Rust is the most legitimate “new serious systems language” story of the last decade.&lt;/p&gt;

&lt;p&gt;Everything else needs context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Python is not hyped; Python is infrastructure now
&lt;/h2&gt;

&lt;p&gt;Python’s current dominance is not just beginner tutorials and data science notebooks anymore.&lt;/p&gt;

&lt;p&gt;The AI wave made Python even more central because the entire machine learning ecosystem already lived there: PyTorch, TensorFlow, Hugging Face, notebooks, evaluation scripts, data pipelines, glue code, SDKs, model serving wrappers, and a million tiny tools with names like &lt;code&gt;convert_final_final_v2.py&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Stack Overflow’s 2025 survey showed Python jumping significantly, with 57.9% of all respondents reporting extensive use. GitHub’s Octoverse 2025 headline was even louder: AI helped push TypeScript to number one, but Python remained one of the core languages of the AI/open-source wave.&lt;/p&gt;

&lt;p&gt;Python is overused in some production systems, yes.&lt;/p&gt;

&lt;p&gt;But it is not fake hype.&lt;/p&gt;

&lt;p&gt;It is the language people reach for when they want to connect things, test ideas, automate workflows, call models, parse weird files, and make the machine do something before lunch.&lt;/p&gt;

&lt;p&gt;That has enormous value.&lt;/p&gt;

&lt;p&gt;My criticism of Python is not adoption. It is operational laziness. Python makes prototypes cheap, then teams pretend the prototype is a platform. That is how you end up with a critical revenue job running from a notebook nobody owns.&lt;/p&gt;

&lt;p&gt;Python is dominant. Python is useful. Python also needs adult supervision.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpqqai5gqi4fow8om981.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpqqai5gqi4fow8om981.gif" alt="this is fine" width="478" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  TypeScript is the safest hype bet in 2026
&lt;/h2&gt;

&lt;p&gt;TypeScript is barely “hype” now. It is the default serious language of the web.&lt;/p&gt;

&lt;p&gt;The interesting thing is that GitHub Octoverse 2025 said TypeScript reached number one on GitHub, driven by AI, agents, and typed languages. That tracks with what I see in practice: when agents generate code, types become more valuable, not less.&lt;/p&gt;

&lt;p&gt;A type system is not magic, but it is friction against nonsense.&lt;/p&gt;

&lt;p&gt;In a world where humans and LLMs are both producing code, friction against nonsense is underrated.&lt;/p&gt;

&lt;p&gt;TypeScript won because it gave JavaScript teams a migration path instead of a purity lecture. That is why it beat most “better language for the web” dreams. It did not ask the industry to move house. It renovated the messy house everyone already lived in.&lt;/p&gt;

&lt;p&gt;Would I start a web product in plain JavaScript in 2026?&lt;/p&gt;

&lt;p&gt;No.&lt;/p&gt;

&lt;p&gt;Not because JavaScript is dead. Because TypeScript is the cheapest safety upgrade you can buy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go is underrated because it is boring on purpose
&lt;/h2&gt;

&lt;p&gt;Go is one of those languages that makes programming language enthusiasts slightly sad and production engineers quietly productive.&lt;/p&gt;

&lt;p&gt;It is not expressive in the way Scala people want.&lt;br&gt;
It is not clever in the way Haskell people want.&lt;br&gt;
It is not as safe as Rust.&lt;br&gt;
It is not as batteries-included as Java.&lt;/p&gt;

&lt;p&gt;And yet Go keeps winning in cloud infrastructure because it has the correct personality for a lot of backend work: simple binaries, fast builds, good concurrency, decent performance, easy deployment, easy hiring, low ceremony.&lt;/p&gt;

&lt;p&gt;That is not sexy. That is useful.&lt;/p&gt;

&lt;p&gt;A lot of engineering organizations do not need a language that lets the smartest person on the team feel brilliant. They need a language where the tired on-call engineer at 2 a.m. can understand the service quickly.&lt;/p&gt;

&lt;p&gt;Go is very good at that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rust is both genuinely important and definitely over-prescribed
&lt;/h2&gt;

&lt;p&gt;Rust has earned its hype.&lt;/p&gt;

&lt;p&gt;Memory safety without a garbage collector is a big deal. The security story is real. The tooling is good. The community has produced serious infrastructure. Microsoft, Amazon, Cloudflare, and others are not playing with it for vibes.&lt;/p&gt;

&lt;p&gt;Rust belongs in systems programming, performance-sensitive infrastructure, security-sensitive components, runtimes, networking, CLI tools, embedded work, and places where C or C++ bugs become expensive.&lt;/p&gt;

&lt;p&gt;But.&lt;/p&gt;

&lt;p&gt;There is always a but.&lt;/p&gt;

&lt;p&gt;Rust is also being recommended for things that do not need Rust.&lt;/p&gt;

&lt;p&gt;If your team wants to build a normal JSON-over-HTTP internal service and nobody on the team is already fluent in Rust, choosing Rust because “performance” can be architectural cosplay.&lt;/p&gt;

&lt;p&gt;You are not building a browser engine. You are returning invoices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapx6f29x7ibau69ccxs4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapx6f29x7ibau69ccxs4.gif" alt="dramatic engineering meeting" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rust is a serious language. It is also a language where the learning curve is part of the cost model. If the cost is worth it, great. If not, Go, Java, Kotlin, C#, or TypeScript probably get you to production faster with fewer emotional support threads in Slack.&lt;/p&gt;

&lt;h2&gt;
  
  
  Java is the language everyone keeps predicting will die while it keeps getting paid
&lt;/h2&gt;

&lt;p&gt;Java is not fashionable. Java does not need to be fashionable.&lt;/p&gt;

&lt;p&gt;Java has hiring volume, libraries, observability, deployment patterns, mature frameworks, JVM performance, and decades of production scar tissue encoded in boring defaults.&lt;/p&gt;

&lt;p&gt;That matters.&lt;/p&gt;

&lt;p&gt;The RedMonk January 2026 ranking still had Java in the top three. Stack Overflow 2025 had Java around 29% among all respondents and nearly the same among professional developers. TIOBE still places Java near the top. You can argue with the methodology of each index — and you should — but when every imperfect index says “this thing is still very large,” believe the direction.&lt;/p&gt;

&lt;p&gt;Would I choose Java for every new backend? No.&lt;/p&gt;

&lt;p&gt;Would I be worried joining a serious company with a large Java estate? Also no.&lt;/p&gt;

&lt;p&gt;Java is not hype. Java is employment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scala and Ruby are the warning labels for logo-based thinking
&lt;/h2&gt;

&lt;p&gt;Scala and Ruby are the perfect examples of why “big companies use it” is not enough.&lt;/p&gt;

&lt;p&gt;Scala had a very real moment. It gave JVM teams functional programming, expressive types, and a path into big data tooling. Spark mattered. Twitter mattered. LinkedIn mattered.&lt;/p&gt;

&lt;p&gt;But the ecosystem became heavy. The learning curve was real. Build times hurt. The community split between “powerful language” and “why is this implicit doing crimes?” And many teams that wanted pragmatic backend work eventually chose Java, Kotlin, Go, or TypeScript instead.&lt;/p&gt;

&lt;p&gt;Scala is not dead. But its hype declined.&lt;/p&gt;

&lt;p&gt;Ruby is different. Ruby on Rails changed web development. Shopify, GitHub, and Basecamp are not fake examples. Rails remains productive and mature.&lt;/p&gt;

&lt;p&gt;But Ruby no longer feels like where the center of new backend energy is moving. It is a great way to build certain products if the team already likes Rails. It is not the obvious default in 2026 for a company trying to maximize hiring pool, cloud-native tooling, AI integration, and long-term ecosystem growth.&lt;/p&gt;

&lt;p&gt;This is the pattern:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;large installed base ≠ expanding frontier.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can make money in declining-hype ecosystems. Sometimes a lot of money. But do not confuse maintenance gravity with adoption energy.&lt;/p&gt;

&lt;h2&gt;
  
  
  PHP is funny until you remember the web runs on it
&lt;/h2&gt;

&lt;p&gt;PHP is easy to mock because PHP spent years earning the jokes.&lt;/p&gt;

&lt;p&gt;And yet, PHP is still everywhere.&lt;/p&gt;

&lt;p&gt;WordPress alone makes PHP impossible to ignore. Laravel keeps modern PHP much more pleasant than outsiders assume. Wikipedia exists. Huge amounts of commerce, content, and internal tooling still run on PHP.&lt;/p&gt;

&lt;p&gt;But would I start a greenfield backend platform in PHP in 2026?&lt;/p&gt;

&lt;p&gt;Probably not, unless the team had a strong PHP/Laravel advantage or the product lived naturally in that ecosystem.&lt;/p&gt;

&lt;p&gt;PHP is not dead. It is just not where I would go looking for broad new-language momentum.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clojure, Elixir, Erlang, and Haskell are small for grown-up reasons
&lt;/h2&gt;

&lt;p&gt;Niche does not mean unserious.&lt;/p&gt;

&lt;p&gt;This is where beginner language discourse becomes useless.&lt;/p&gt;

&lt;p&gt;Clojure, Elixir, Erlang, and Haskell are not mainstream hiring monsters. Stack Overflow’s 2025 numbers put Elixir and Scala around the low single digits, Erlang lower, and Haskell small enough that most recruiters will never accidentally find you.&lt;/p&gt;

&lt;p&gt;But these languages are often chosen by experienced teams for specific reasons.&lt;/p&gt;

&lt;p&gt;Clojure is excellent when you want a small language, immutable data, REPL-driven development, and a very different mental model from mainstream object-oriented backend work. Nubank using Clojure is not random. It is a company-level bet on leverage and simplicity at scale.&lt;/p&gt;

&lt;p&gt;Elixir and Erlang live on the BEAM, which remains one of the best runtime stories for fault tolerance, concurrency, and long-running distributed systems. Discord’s Elixir usage is a real signal, not a toy case study.&lt;/p&gt;

&lt;p&gt;Haskell is not “hard because academics.” Haskell is hard because it forces you to be explicit about things many codebases prefer to leave as runtime surprises. In finance, compilers, and correctness-heavy systems, that can be worth it.&lt;/p&gt;

&lt;p&gt;Would I recommend these languages to a random startup as default choices?&lt;/p&gt;

&lt;p&gt;No.&lt;/p&gt;

&lt;p&gt;Would I dismiss them because they are niche?&lt;/p&gt;

&lt;p&gt;Also no.&lt;/p&gt;

&lt;p&gt;Some niches are where the adults are.&lt;/p&gt;

&lt;h2&gt;
  
  
  Zig, Mojo, and Gleam: interesting, but bring a microscope
&lt;/h2&gt;

&lt;p&gt;New entrants are where hype is most dangerous.&lt;/p&gt;

&lt;p&gt;Zig is the most interesting of the current systems-language challengers. It is pragmatic, C-interoperable, simpler than Rust in important ways, and already visible in real tooling conversations. RedMonk noted Zig’s deliberate climb in its 2026 ranking, with stronger GitHub signal than Stack Overflow signal. That is exactly the kind of mismatch worth watching.&lt;/p&gt;

&lt;p&gt;Mojo is the spicy one because it attaches itself to the AI/Python performance story. The pitch is strong: Python-like ergonomics with systems-level performance potential for AI workloads. But Stack Overflow 2025 still showed Mojo at only 0.4% among all respondents and 0.3% among professional developers. That is not “ignore it.” That is “do not bet your hiring plan on it yet.”&lt;/p&gt;

&lt;p&gt;Gleam is tiny but tasteful: typed, friendly, BEAM-based, and refreshingly pragmatic. I would not call it mainstream. I would call it one of the few small languages where the design taste seems better than the hype machine.&lt;/p&gt;

&lt;p&gt;My rule for emerging languages: experiment freely, adopt carefully.&lt;/p&gt;

&lt;p&gt;A side project can be brave. Payroll systems should be boring.&lt;/p&gt;

&lt;h2&gt;
  
  
  the languages I think are overhyped in 2026
&lt;/h2&gt;

&lt;p&gt;Here is the spicy part.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rust, when used as a personality test
&lt;/h3&gt;

&lt;p&gt;Rust is excellent. Rust as a default answer to every backend problem is not excellent.&lt;/p&gt;

&lt;p&gt;If your problem is memory safety, performance, secure infrastructure, or replacing C/C++, Rust deserves serious consideration.&lt;/p&gt;

&lt;p&gt;If your problem is “we need a CRUD API by next quarter,” maybe stop trying to impress Hacker News.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mojo, relative to adoption
&lt;/h3&gt;

&lt;p&gt;Mojo might become important. I hope it does, because AI infrastructure needs better performance ergonomics.&lt;/p&gt;

&lt;p&gt;But right now the ratio of promise to production adoption is very high. That is literally what hype is.&lt;/p&gt;

&lt;h3&gt;
  
  
  Anything marketed as “the language for AI agents”
&lt;/h3&gt;

&lt;p&gt;No.&lt;/p&gt;

&lt;p&gt;Agents mostly need boring integration surfaces: HTTP, files, queues, logs, auth, sandboxing, tests, traceability. They do not need a magical new syntax for vibes.&lt;/p&gt;

&lt;p&gt;If a language claims it is special because agents will write it better, I want receipts.&lt;/p&gt;

&lt;h2&gt;
  
  
  the languages I think are underrated
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Go
&lt;/h3&gt;

&lt;p&gt;Because boring operational wins compound.&lt;/p&gt;

&lt;h3&gt;
  
  
  Elixir
&lt;/h3&gt;

&lt;p&gt;Because most web engineers underestimate how good the BEAM is for realtime and distributed systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Clojure
&lt;/h3&gt;

&lt;p&gt;Because small teams with high leverage can do ridiculous things with it, if they can hire and maintain the discipline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Java
&lt;/h3&gt;

&lt;p&gt;Yes, Java. I know.&lt;/p&gt;

&lt;p&gt;But the modern JVM ecosystem is far better than the 2008 trauma many engineers still carry around. If your opinion of Java is based on old enterprise XML nightmares, update the dependency in your head.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gleam
&lt;/h3&gt;

&lt;p&gt;Not because it is big. Because it is small in an interesting direction.&lt;/p&gt;

&lt;h2&gt;
  
  
  the languages I would be very careful starting new projects with
&lt;/h2&gt;

&lt;p&gt;This does not mean “bad.” It means “the burden of proof is high.”&lt;/p&gt;

&lt;h3&gt;
  
  
  Scala
&lt;/h3&gt;

&lt;p&gt;I would only choose Scala today if the team already had strong Scala expertise and a specific reason. Otherwise Kotlin, Java, Go, or TypeScript will usually create less organizational friction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ruby
&lt;/h3&gt;

&lt;p&gt;Rails is still productive, but I would want a team and product reason, not nostalgia.&lt;/p&gt;

&lt;h3&gt;
  
  
  PHP
&lt;/h3&gt;

&lt;p&gt;Laravel can be good. WordPress is massive. But for a general-purpose new backend platform, I would need a strong ecosystem-specific argument.&lt;/p&gt;

&lt;h3&gt;
  
  
  Perl
&lt;/h3&gt;

&lt;p&gt;No.&lt;/p&gt;

&lt;p&gt;I respect Perl historically. I also respect asbestos historically as a material with useful properties. That does not mean I want more of it in the building.&lt;/p&gt;

&lt;h3&gt;
  
  
  COBOL
&lt;/h3&gt;

&lt;p&gt;Learn COBOL if you want a specific legacy/mainframe career niche. Do not pick it for a new product unless your product is a museum with uptime requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqqyv1r0h01j9vcq298j.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqqyv1r0h01j9vcq298j.gif" alt="nope" width="399" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  my practical advice
&lt;/h2&gt;

&lt;p&gt;If I were advising a team in 2026, my default recommendations would be boring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Web product&lt;/strong&gt;: TypeScript.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI/data/glue work&lt;/strong&gt;: Python, with production discipline.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cloud backend / infrastructure service&lt;/strong&gt;: Go, Java, Kotlin, or C# depending on team context.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance/security-sensitive systems&lt;/strong&gt;: Rust, C++, C, or Zig experiments depending on maturity and risk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Realtime distributed systems&lt;/strong&gt;: Elixir/Erlang if the team can support it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-leverage small expert team&lt;/strong&gt;: Clojure is still worth considering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Statistical research&lt;/strong&gt;: R is fine. Stop forcing statisticians to cosplay backend engineers.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The language is not the architecture. But it shapes hiring, libraries, failure modes, deployment, debugging, and how much cleverness your team can afford before production starts charging interest.&lt;/p&gt;

&lt;p&gt;That is the part hype usually hides.&lt;/p&gt;

&lt;p&gt;A programming language is not just syntax.&lt;/p&gt;

&lt;p&gt;It is a labor market, a package ecosystem, a runtime, a set of defaults, a debugging culture, a deployment story, and a thousand small decisions already made for you.&lt;/p&gt;

&lt;p&gt;Choose the language where those defaults match the system you are actually building.&lt;/p&gt;

&lt;p&gt;Not the language with the best conference talk.&lt;br&gt;
Not the language with the biggest logo slide.&lt;br&gt;
Not the language that makes your team feel briefly smarter.&lt;/p&gt;

&lt;p&gt;The right language is usually the one that lets you ship, hire, operate, and sleep.&lt;/p&gt;

&lt;p&gt;Everything else is merchandise.&lt;/p&gt;

&lt;h2&gt;
  
  
  references
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://octoverse.github.com/" rel="noopener noreferrer"&gt;GitHub Octoverse 2025: The state of open source&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://survey.stackoverflow.co/2025/technology" rel="noopener noreferrer"&gt;Stack Overflow Developer Survey 2025: Technology&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://redmonk.com/sogrady/2026/04/14/language-rankings-1-26/" rel="noopener noreferrer"&gt;RedMonk Programming Language Rankings: January 2026&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.tiobe.com/tiobe-index/" rel="noopener noreferrer"&gt;TIOBE Index&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pypl.github.io/PYPL.html" rel="noopener noreferrer"&gt;PYPL PopularitY of Programming Language index&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>opinion</category>
      <category>devops</category>
    </item>
    <item>
      <title>Why today's AI skepticism mirrors yesterday's distrust of statistics</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Sun, 03 May 2026 09:00:57 +0000</pubDate>
      <link>https://forem.com/pvgomes/why-todays-ai-skepticism-mirrors-yesterdays-distrust-of-statistics-3pcf</link>
      <guid>https://forem.com/pvgomes/why-todays-ai-skepticism-mirrors-yesterdays-distrust-of-statistics-3pcf</guid>
      <description>&lt;p&gt;AI. It's the buzzword on everyone's lips, the technology promising to revolutionize… well, everything. And, predictably, it's met with a healthy dose of skepticism, if not outright disdain. "It's unreliable," some say. "It hallucinates," others lament. "It's a crutch for those who don't understand the real work."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqdbn996sse3ge7mazdp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvqdbn996sse3ge7mazdp.gif" alt="Funny statistics GIF" width="400" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sound familiar? It should. Because this isn't the first time humanity has grappled with a transformative tool that dared to challenge our most deeply held assumptions. We’ve seen this movie before, starring none other than… statistics.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Numbers Were Suspect: A Brief History of Skepticism
&lt;/h3&gt;

&lt;p&gt;Imagine a time when saying "the data shows" was met with suspicion, not respect. That was the early 20th century for statistics. It wasn't just a niche concern; it was mainstream. Eminent figures like Ernest Rutherford, the father of nuclear physics, famously declared, "If your experiment needs statistics, you ought to have done a better experiment." Ouch.&lt;/p&gt;

&lt;p&gt;And who could forget Mark Twain's biting quip, popularized but not originated by him: "There are three kinds of lies: lies, damned lies, and statistics." This wasn't a fringe sentiment; it reflected a widespread belief that statistics were, at best, a dubious simplification of complex realities, and at worst, a tool for deception. It was seen as reductive, dangerous to "real" science, and certainly not something a serious intellectual would rely on.&lt;/p&gt;

&lt;p&gt;The shift in perception didn't happen because people suddenly changed their minds about the inherent trustworthiness of numbers. It happened because the results became undeniable.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Pioneers Who Made Statistics Indispensable
&lt;/h3&gt;

&lt;p&gt;The real power of statistics was demonstrated not by eloquent arguments, but by people who wielded it to solve real-world problems, saving lives and shaping policy along the way.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Florence Nightingale&lt;/strong&gt; wasn't just a nurse; she was a data visionary. During the Crimean War, she used statistical graphics – her famous "rose diagrams" – to demonstrate that more soldiers died from preventable diseases in unsanitary hospitals than from battle wounds. Her data wasn't just interesting; it was a damning indictment of the military's medical practices, leading to fundamental reforms that saved countless lives. She didn't just care for the sick; she proved &lt;em&gt;why&lt;/em&gt; they were sick using numbers.&lt;/p&gt;

&lt;p&gt;Then came &lt;strong&gt;Ronald A. Fisher&lt;/strong&gt;, the undisputed architect of modern mathematical statistics. Fisher developed the bedrock concepts we now take for granted: hypothesis testing, p-values, and the rigorous principles of experimental design. Without his foundational work, modern medicine, agriculture, and countless scientific disciplines would lack a credible methodology. His "Statistical Methods for Research Workers," published in 1925, laid the groundwork for evidence-based everything.&lt;/p&gt;

&lt;p&gt;And to bring it home with a critical public health example, consider &lt;strong&gt;Richard Doll and Austin Bradford Hill&lt;/strong&gt;. In the 1950s, their groundbreaking statistical studies definitively proved the link between smoking and lung cancer. While anecdotal evidence had hinted at it, statistics provided the irrefutable proof: over 90% of lung cancer patients were smokers. This was a truth that individual intuition and observation struggled to grasp, but statistics, with its macroscopic view, made plain.&lt;/p&gt;

&lt;h3&gt;
  
  
  The AI Mirror: Same Tune, Different Instrument
&lt;/h3&gt;

&lt;p&gt;Fast forward to today, and the chorus of AI skepticism sings a remarkably similar song:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;"It's unreliable."&lt;/strong&gt; Funny, statistics was pretty unreliable too when misused, misinterpreted, or applied without understanding its underlying assumptions. Garbage in, garbage out, and all that.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;"It makes mistakes / It hallucinates."&lt;/strong&gt; Every tool, especially in its early, immature stages, makes mistakes. Recall the early days of personal computers, or even the first versions of your favorite programming language. Perfection isn't born; it's engineered, iterated upon, and refined.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;"It can be manipulated to say anything."&lt;/strong&gt; This is a classic statistical critique! You can torture data until it confesses to anything, as the saying goes. Yet, we didn't ban statistics; we developed ethical guidelines, best practices, and statistical literacy to combat misuse.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;"It's a crutch for people who don't understand the real work."&lt;/strong&gt; Hello, Rutherford! This echoes the sentiment that AI replaces understanding rather than augmenting it. The fear is that AI-generated code, text, or insights will become a substitute for genuine expertise.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The real issue, then and now, isn't fundamentally about the tool itself. It's about what the tool &lt;em&gt;forces us to confront&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Statistics forced people to accept that their intuition, however strong, is often wrong when faced with large-scale data.&lt;/li&gt;
&lt;li&gt;  AI is forcing people to accept that cognition itself, the very act of thinking, creating, and problem-solving, can be partially automated and augmented.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both challenge the same deeply held assumption: that human judgment, intuition, and intellectual effort are somehow uniquely irreplaceable in &lt;em&gt;all&lt;/em&gt; contexts.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Unseen Cost of Dismissal
&lt;/h3&gt;

&lt;p&gt;Imagine dismissing statistics in 1900. You'd miss the entirety of modern medicine, epidemiology, the development of evidence-based policy, and the scientific rigor that defines our world today. You'd miss the discovery of countless disease vectors, the efficacy of vaccines, and the understanding of public health on a societal scale. That's not just a missed opportunity; it's a catastrophic failure of foresight.&lt;/p&gt;

&lt;p&gt;So, what does wholesale dismissal of AI in 2026 cost us? We might miss breakthroughs in drug discovery, personalized medicine, climate modeling, material science, and even entirely new forms of creativity and problem-solving. We risk being the generation that clung to old paradigms while the world accelerated past us, powered by tools we refused to engage with.&lt;/p&gt;

&lt;p&gt;Being a critical engineer means understanding limitations, scrutinizing outputs, and building robust systems. It does &lt;em&gt;not&lt;/em&gt; mean burying our heads in the sand, rejecting powerful instruments out of hand, or repeating the historical mistakes of those who couldn't see past their skepticism. Let's learn from history, shall we?&lt;/p&gt;




&lt;h3&gt;
  
  
  References:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Nightingale, F. (1858). &lt;em&gt;Notes on Matters Affecting the Health of the British Army&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt;  Fisher, R. A. (1925). &lt;em&gt;Statistical Methods for Research Workers&lt;/em&gt;. Oliver and Boyd.&lt;/li&gt;
&lt;li&gt;  Doll, R., &amp;amp; Hill, A. B. (1950). Smoking and Carcinoma of the Lung. &lt;em&gt;British Medical Journal&lt;/em&gt;, 2(4682), 739–748.&lt;/li&gt;
&lt;li&gt;  Gigerenzer, G., Swijtink, Z., Porter, T., Daston, L., Beatty, J., &amp;amp; Krüger, L. (1989). &lt;em&gt;The Empire of Chance: How Probability Changed Science and Everyday Life&lt;/em&gt;. Cambridge University Press.&lt;/li&gt;
&lt;li&gt;  Porter, T. M. (1986). &lt;em&gt;The Rise of Statistical Thinking, 1820–1900&lt;/em&gt;. Princeton University Press.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>opinion</category>
      <category>devops</category>
    </item>
    <item>
      <title>tokens are now more expensive than juniors, and less predictable</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Fri, 01 May 2026 09:05:02 +0000</pubDate>
      <link>https://forem.com/pvgomes/tokens-are-now-more-expensive-than-juniors-and-less-predictable-ei5</link>
      <guid>https://forem.com/pvgomes/tokens-are-now-more-expensive-than-juniors-and-less-predictable-ei5</guid>
      <description>&lt;p&gt;I think a lot of companies are still telling themselves a very comforting story about AI costs.&lt;/p&gt;

&lt;p&gt;The story goes like this:&lt;/p&gt;

&lt;p&gt;Tokens are cheap.&lt;br&gt;
Models keep getting better.&lt;/p&gt;

&lt;p&gt;A few copilots here, a few agents there, maybe a chatbot for support, maybe some code generation in CI. Let's buy Mac minis and put everything on hermes and openclaw; it doesn't matter if they eat all our tokens, because somehow this all stays in the “software subscription” bucket.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro7bqupfo76ua4zqqwzj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fro7bqupfo76ua4zqqwzj.gif" alt="keep buying" width="473" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Software engineers like me stopped buying that story years ago. Now it is starting to hit non-technical people as well. Some of them? Investors...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2lk1o2h34k1sbpi1c3w.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl2lk1o2h34k1sbpi1c3w.gif" alt="investors" width="640" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My take is simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tokens are starting to behave less like a cheap productivity feature and more like a volatile labor line item.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And in a growing number of workflows, they are already expensive enough to compete with what companies would happily pay for junior humans. Not just junior developers. Junior assistants too!&lt;/p&gt;

&lt;p&gt;The worst part is not even the absolute price. It is the unpredictability.&lt;/p&gt;

&lt;p&gt;A junior hire has a salary.&lt;br&gt;
A token budget has moods.&lt;/p&gt;

&lt;p&gt;When a human makes a mistake, they learn from it and try to get it right the next time. An LLM? At best you might get an apology. In other words, someone “triggered it to behave this way”; it is not really the AI's mistake. It is just a tool, a powerful and revolutionary one, but still a tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltsypg74dfb450l3lmgx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltsypg74dfb450l3lmgx.gif" alt="just a tool" width="480" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  the spreadsheet starts lying very early
&lt;/h2&gt;

&lt;p&gt;On paper, token prices still look harmless.&lt;br&gt;
They are quoted per million tokens, which is a wonderful way to make real usage feel abstract.&lt;/p&gt;

&lt;p&gt;A few examples from current public pricing pages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenAI GPT-5.4: &lt;strong&gt;$2.50 / 1M input&lt;/strong&gt; and &lt;strong&gt;$15 / 1M output&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Anthropic Claude Sonnet 4.6: &lt;strong&gt;$3 / 1M input&lt;/strong&gt; and &lt;strong&gt;$15 / 1M output&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Google Gemini 2.5 Pro: &lt;strong&gt;$1.25 / 1M input&lt;/strong&gt; and &lt;strong&gt;$10 / 1M output&lt;/strong&gt; for prompts up to 200k tokens, then &lt;strong&gt;$2.50 input&lt;/strong&gt; and &lt;strong&gt;$15 output&lt;/strong&gt; beyond that threshold&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That still sounds cheap if you are thinking about a few prompts in a playground.&lt;br&gt;
It stops sounding cheap the moment AI stops being a toy and starts becoming part of your operating model.&lt;/p&gt;

&lt;p&gt;Let’s do slightly less fake math.&lt;/p&gt;

&lt;p&gt;Imagine a team with 10 people using coding agents, document summarizers, support drafting, and internal automation.&lt;br&gt;
Nothing science-fiction here.&lt;br&gt;
Just normal “we adopted AI everywhere” behavior.&lt;/p&gt;

&lt;p&gt;Assume each seat consumes &lt;strong&gt;5 million input tokens and 2 million output tokens per workday&lt;/strong&gt;.&lt;br&gt;
That is not tiny, but it is also not insane once you include long contexts, retries, tool traces, generated code, explanations, and review loops.&lt;/p&gt;

&lt;p&gt;Here is what that looks like over roughly 22 workdays:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider/model&lt;/th&gt;
&lt;th&gt;Approx monthly cost for 10 seats&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI GPT-5.4&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$9,350&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Claude Sonnet 4.6&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$9,900&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini 2.5 Pro&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$7,150 to $9,350&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;That range on Gemini is already part of the point.&lt;br&gt;
The same team can pay very different numbers depending on prompt size behavior.&lt;/p&gt;
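&lt;p&gt;The arithmetic behind that table is straightforward to sketch. The numbers below (5M input and 2M output tokens per seat-day, 10 seats, 22 workdays) are the assumptions above, not measurements:&lt;/p&gt;

```python
# Monthly token bill for a team, using the per-million-token prices
# quoted above. Workload figures are the article's assumptions.
def monthly_cost(in_price, out_price, in_m_per_day=5, out_m_per_day=2,
                 seats=10, workdays=22):
    """Dollars per month for a team at the given $/1M-token rates."""
    daily_per_seat = in_m_per_day * in_price + out_m_per_day * out_price
    return daily_per_seat * workdays * seats

gpt = monthly_cost(2.50, 15.00)     # $9,350
claude = monthly_cost(3.00, 15.00)  # $9,900
```

&lt;p&gt;The same function, fed each pricing tier, is how a range like Gemini's appears: identical workload, different rate card.&lt;/p&gt;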

&lt;p&gt;Now compare that with actual wage data.&lt;br&gt;
The U.S. Bureau of Labor Statistics lists:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;$47,460/year&lt;/strong&gt; as the 2024 median pay for secretaries and administrative assistants&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;$133,080/year&lt;/strong&gt; as the 2024 median pay for software developers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;$79,850/year&lt;/strong&gt; as the lower 10th percentile for software developers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Monthly, that works out to roughly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;$3,955/month&lt;/strong&gt; for an administrative assistant at the median&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;$6,654/month&lt;/strong&gt; for the lower 10th percentile of software developers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;$11,090/month&lt;/strong&gt; for the median software developer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So no, one engineer casually using a model is not suddenly more expensive than a junior developer.&lt;br&gt;
That would be a silly headline.&lt;/p&gt;

&lt;p&gt;But a company-wide AI workflow absolutely can become more expensive than junior labor, very fast.&lt;br&gt;
And in some cases it already is.&lt;/p&gt;

&lt;p&gt;Five heavy AI seats can outrun a median administrative assistant.&lt;br&gt;
Ten can get uncomfortably close to, or exceed, what many companies would budget for an early-career developer.&lt;br&gt;
That is before you count observability, vector databases, eval pipelines, orchestration glue, and the humans still needed to check whether the machine did something stupid.&lt;/p&gt;

&lt;h2&gt;
  
  
  token costs are worse than salaries because they are less stable
&lt;/h2&gt;

&lt;p&gt;This is the part I think many executives still do not fully internalize.&lt;/p&gt;

&lt;p&gt;A salary is expensive, yes.&lt;br&gt;
But it is legible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo904gmdqjpn8fqdotlj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo904gmdqjpn8fqdotlj.gif" alt="how much" width="326" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Token spend is worse in one important way:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;you often do not know the real cost profile until after the workflow becomes popular.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A few reasons:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. output is where the pain lives
&lt;/h3&gt;

&lt;p&gt;A lot of people anchor on input pricing because it looks small.&lt;br&gt;
That is the wrong anchor.&lt;/p&gt;

&lt;p&gt;The expensive part is often output.&lt;br&gt;
Especially when models reason longer, explain more, retry more, or emit giant blobs of code and text nobody asked them to be that verbose about.&lt;/p&gt;

&lt;p&gt;OpenAI GPT-5.4 is 6x more expensive on output than input.&lt;br&gt;
Claude Sonnet 4.6 is 5x more expensive on output than input.&lt;br&gt;
Gemini 2.5 Pro jumps hard on output too.&lt;/p&gt;

&lt;p&gt;So the team that says, “we only send a lot of context” is often missing the real bill.&lt;br&gt;
The bill usually shows up when the system starts talking back too much.&lt;/p&gt;
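&lt;p&gt;A two-line check with the GPT-5.4 rates quoted above shows why input pricing is the wrong anchor, using the same assumed 5M-in / 2M-out per-seat-day workload:&lt;/p&gt;

```python
# Output dominates the bill even when you send far more input tokens
# than you get back. Rates are the GPT-5.4 prices quoted above.
in_m, out_m = 5, 2                    # millions of tokens per seat-day
in_cost = in_m * 2.50                 # $12.50 of input
out_cost = out_m * 15.00              # $30.00 of output
out_share = out_cost / (in_cost + out_cost)  # ~0.71 of the daily spend
```

&lt;p&gt;Less than a third of the tokens, more than two thirds of the money.&lt;/p&gt;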

&lt;h3&gt;
  
  
  2. the same work can suddenly tokenize differently
&lt;/h3&gt;

&lt;p&gt;Anthropic documents that Claude Opus 4.7 uses a new tokenizer that may consume &lt;strong&gt;up to 35% more tokens for the same fixed text&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That should make every finance person mildly uncomfortable.&lt;/p&gt;

&lt;p&gt;Imagine paying 35% more for the same semantic workload because the tokenizer changed.&lt;br&gt;
Not because your product changed.&lt;br&gt;
Not because customers changed.&lt;br&gt;
Just because the vendor changed how text gets counted.&lt;/p&gt;

&lt;p&gt;That is not labor-like.&lt;br&gt;
That is utility-bill-like.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. thresholds and modes quietly change the bill
&lt;/h3&gt;

&lt;p&gt;Gemini 2.5 Pro charges one rate for prompts up to 200k tokens and a higher one above that.&lt;br&gt;
Anthropic has regional multipliers and a fast mode with premium pricing.&lt;br&gt;
OpenAI offers batch discounts, but also a data residency premium.&lt;/p&gt;
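&lt;p&gt;A minimal sketch of how one threshold moves the bill, using the Gemini 2.5 Pro rates quoted above. Treating the tier as repricing the whole request once the prompt crosses 200k tokens is an assumption about how the tiering applies:&lt;/p&gt;

```python
# Tiered pricing sketch: requests whose prompt exceeds the threshold
# are billed at the higher rate (an assumed tiering behavior).
THRESHOLD = 200_000  # prompt tokens

def request_cost(input_tokens, output_tokens):
    """Dollars for one request at the tiered $/1M-token rates above."""
    if input_tokens <= THRESHOLD:
        in_rate, out_rate = 1.25, 10.00
    else:
        in_rate, out_rate = 2.50, 15.00
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

small = request_cost(150_000, 5_000)  # prompt under the threshold
large = request_cost(250_000, 5_000)  # ~1.7x the input, ~3x the cost
```

&lt;p&gt;Nothing about the product changed between those two calls. The prompt just grew past a line on a pricing page.&lt;/p&gt;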

&lt;p&gt;So even if the application behavior looks “the same” from the outside, the internal billing shape can move around because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;prompts got longer&lt;/li&gt;
&lt;li&gt;cache hit rates dropped&lt;/li&gt;
&lt;li&gt;a team enabled a faster mode&lt;/li&gt;
&lt;li&gt;a product shifted regions&lt;/li&gt;
&lt;li&gt;grounding or search got added&lt;/li&gt;
&lt;li&gt;the model started generating more output than last month&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not predictable staffing.&lt;br&gt;
That is spend drift.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. agents multiply hidden tokens
&lt;/h3&gt;

&lt;p&gt;This gets worse with agents.&lt;/p&gt;

&lt;p&gt;A normal chat interaction is one thing.&lt;br&gt;
An agent loop is another beast entirely.&lt;/p&gt;

&lt;p&gt;Now you are paying for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the original prompt&lt;/li&gt;
&lt;li&gt;tool schemas&lt;/li&gt;
&lt;li&gt;tool results&lt;/li&gt;
&lt;li&gt;chain-of-thought-adjacent reasoning budgets, depending on platform semantics&lt;/li&gt;
&lt;li&gt;retries&lt;/li&gt;
&lt;li&gt;file context&lt;/li&gt;
&lt;li&gt;summaries of prior turns&lt;/li&gt;
&lt;li&gt;review passes&lt;/li&gt;
&lt;li&gt;self-correction loops&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;People love saying “the agent did this task in 8 minutes.”&lt;br&gt;
Cool.&lt;br&gt;
What they often do not say is that the agent may have consumed the token equivalent of several ordinary interactions to get there.&lt;/p&gt;

&lt;p&gt;That means your marginal cost per useful result is often much blurrier than the dashboard suggests.&lt;/p&gt;
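&lt;p&gt;Tallying one run makes the blurriness concrete. The categories below mirror the list above; the counts are invented for illustration:&lt;/p&gt;

```python
# Sketch: what one agent run actually consumed, by category.
# The token counts are made up, but the shape is typical of agent loops.
run = {
    "original_prompt": 1_200,
    "tool_schemas": 3_500,
    "tool_results": 8_000,
    "reasoning_budget": 12_000,
    "retries": 4_000,
    "file_context": 15_000,
    "turn_summaries": 2_500,
    "review_passes": 5_000,
}
total = sum(run.values())          # 51,200 tokens for one "simple" task

# Compared against an ordinary single-turn chat, the run quietly cost
# the equivalent of dozens of normal interactions.
ordinary_chat = 2_000
multiple = total / ordinary_chat   # 25.6x
```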

&lt;h2&gt;
  
  
  this does not mean “stop using AI”
&lt;/h2&gt;

&lt;p&gt;To be clear, I am not making the boomer argument here.&lt;/p&gt;

&lt;p&gt;I am not saying, “AI is too expensive, go back to doing everything manually.”&lt;br&gt;
That would be dumb.&lt;/p&gt;

&lt;p&gt;AI is real leverage.&lt;br&gt;
It is already useful.&lt;br&gt;
It can absolutely make a strong person much stronger.&lt;/p&gt;

&lt;p&gt;But I think companies need to stop treating token spend as if it were automatically better than human spend.&lt;/p&gt;

&lt;p&gt;Sometimes it is.&lt;br&gt;
Sometimes it is not.&lt;br&gt;
And sometimes it is only better if a human is still clearly in charge of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;scope&lt;/li&gt;
&lt;li&gt;review&lt;/li&gt;
&lt;li&gt;escalation&lt;/li&gt;
&lt;li&gt;quality control&lt;/li&gt;
&lt;li&gt;budget discipline&lt;/li&gt;
&lt;li&gt;model selection&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The winning pattern is not “replace juniors with tokens.”&lt;br&gt;
The winning pattern is more like:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;use tokens to amplify good people, while good people remain the owners of correctness, cost, and consequences.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is a much more boring sentence.&lt;br&gt;
It is also the one that survives contact with finance.&lt;/p&gt;

&lt;h2&gt;
  
  
  my opinionated version
&lt;/h2&gt;

&lt;p&gt;I think a lot of AI adoption right now is being sold with the same bad habit we saw in early cloud conversations.&lt;/p&gt;

&lt;p&gt;People love the upside story.&lt;br&gt;
Nobody wants to dwell on the bill shape.&lt;/p&gt;

&lt;p&gt;So teams say things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“it is only a few dollars per million tokens”&lt;/li&gt;
&lt;li&gt;“the model is cheap enough”&lt;/li&gt;
&lt;li&gt;“we will optimize later”&lt;/li&gt;
&lt;li&gt;“let’s just let everyone use the best model for now”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is exactly how small variable costs become strategic costs.&lt;/p&gt;

&lt;p&gt;And unlike hiring, token spend can get uglier without any emotionally obvious moment.&lt;br&gt;
You do not interview a token.&lt;br&gt;
You do not onboard a token.&lt;br&gt;
You do not notice 14 small workflow expansions the same way you notice one new headcount request.&lt;/p&gt;

&lt;p&gt;That is why this category is dangerous.&lt;br&gt;
It slips past normal management instincts.&lt;/p&gt;

&lt;p&gt;You would debate a junior hire.&lt;br&gt;
You might not debate a bunch of “helpful” agent workflows until the invoice starts looking like a small payroll category.&lt;/p&gt;

&lt;h2&gt;
  
  
  what smart companies should do instead?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzr7y6v4vu3pb9p1m0l98.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzr7y6v4vu3pb9p1m0l98.gif" alt="calm down" width="320" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My recommendation is not anti-AI.&lt;br&gt;
It is anti-delusion. 🌈&lt;/p&gt;

&lt;p&gt;If you are serious about using models across the company, then do a few boring things early:&lt;/p&gt;

&lt;h3&gt;
  
  
  price workflows, not prompts
&lt;/h3&gt;

&lt;p&gt;Do not benchmark one cute demo request.&lt;br&gt;
Measure the full workflow: retries, context growth, tool calls, review passes, and average output length.&lt;/p&gt;

&lt;h3&gt;
  
  
  assign model tiers intentionally
&lt;/h3&gt;

&lt;p&gt;Not every task deserves the frontier model.&lt;br&gt;
Most companies are massively overpaying because they use the most expensive reasoning setup for work that could be routed to a cheaper model.&lt;/p&gt;

&lt;h3&gt;
  
  
  put humans on the acceptance boundary
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3j7a9hbgxphh4mdlung.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3j7a9hbgxphh4mdlung.gif" alt="checking" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Do not use expensive models as a management substitute.&lt;br&gt;
If the output matters, a human should still own acceptance.&lt;br&gt;
Otherwise you are paying for generation and then paying again for the fallout.&lt;/p&gt;

&lt;h3&gt;
  
  
  treat token budgets like cloud budgets
&lt;/h3&gt;

&lt;p&gt;Tag them.&lt;br&gt;
Attribute them.&lt;br&gt;
Alert on them.&lt;br&gt;
Set hard ceilings where needed.&lt;/p&gt;

&lt;p&gt;Cloud taught us this already.&lt;br&gt;
Variable spend is only “efficient” when someone is actually watching it.&lt;/p&gt;
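&lt;p&gt;In code, "tag, attribute, alert, and set ceilings" can be as small as this. The tags and dollar amounts are hypothetical; the point is that spend gets attributed per workflow and a hard ceiling refuses further spend instead of drifting:&lt;/p&gt;

```python
# Sketch: treating token spend like tagged cloud spend.
# Tags and ceilings below are hypothetical examples.
class TokenBudget:
    def __init__(self, ceiling_usd):
        self.ceiling = ceiling_usd
        self.spend = {}  # tag -> dollars, for per-workflow attribution

    def record(self, tag, usd):
        self.spend[tag] = self.spend.get(tag, 0.0) + usd
        if self.total() > self.ceiling:
            # Hard ceiling: fail loudly instead of letting spend drift.
            raise RuntimeError(f"token budget exceeded by tag {tag!r}")

    def total(self):
        return sum(self.spend.values())

budget = TokenBudget(ceiling_usd=100.0)
budget.record("support-bot", 40.0)
budget.record("code-review-agent", 55.0)  # total 95.0, still under ceiling
```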

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgp7y4hfvngpbnac45gtg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgp7y4hfvngpbnac45gtg.gif" alt="budget" width="480" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  optimize for controlled leverage
&lt;/h3&gt;

&lt;p&gt;The right comparison is not “AI versus humans.”&lt;br&gt;
It is “AI plus one good human versus the old way of working.”&lt;/p&gt;

&lt;p&gt;That framing usually leads to better architecture and more honest economics.&lt;/p&gt;

&lt;h2&gt;
  
  
  my take
&lt;/h2&gt;

&lt;p&gt;Tokens are still useful. Sometimes incredibly useful! I get it.&lt;/p&gt;

&lt;p&gt;But they are no longer a cute rounding error.&lt;br&gt;
And they are definitely not predictable enough to treat as a harmless software snack.&lt;/p&gt;

&lt;p&gt;For many teams, token spend is becoming a real labor-adjacent budget category.&lt;br&gt;
In some workflows it is already expensive enough to beat junior human cost.&lt;br&gt;
In many more, it is at least expensive enough that the comparison should happen before the rollout, not after the invoice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So no, of course I would not stop using AI. That would be madness, losing good optimization.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I would just stop pretending that tokens are magically cheaper than people.&lt;br&gt;
They are often cheaper than some kinds of work.&lt;br&gt;
That is different.&lt;/p&gt;

&lt;p&gt;And unlike people, tokens come with a billing model that can change under your feet, a cost profile that explodes with usage patterns, and a nasty habit of looking cheap right until they are not.&lt;/p&gt;

&lt;p&gt;That is why my current default is simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;use AI aggressively, but never let the token budget operate without adult supervision.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  references
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;OpenAI, &lt;em&gt;API Pricing&lt;/em&gt; — &lt;a href="https://openai.com/api/pricing/" rel="noopener noreferrer"&gt;https://openai.com/api/pricing/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Anthropic, &lt;em&gt;Claude pricing&lt;/em&gt; — &lt;a href="https://docs.anthropic.com/en/docs/about-claude/pricing" rel="noopener noreferrer"&gt;https://docs.anthropic.com/en/docs/about-claude/pricing&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Google, &lt;em&gt;Gemini Developer API pricing&lt;/em&gt; — &lt;a href="https://ai.google.dev/gemini-api/docs/pricing" rel="noopener noreferrer"&gt;https://ai.google.dev/gemini-api/docs/pricing&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;U.S. Bureau of Labor Statistics, &lt;em&gt;Software Developers, Quality Assurance Analysts, and Testers&lt;/em&gt; — &lt;a href="https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm" rel="noopener noreferrer"&gt;https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;U.S. Bureau of Labor Statistics, &lt;em&gt;Secretaries and Administrative Assistants&lt;/em&gt; — &lt;a href="https://www.bls.gov/ooh/office-and-administrative-support/secretaries-and-administrative-assistants.htm" rel="noopener noreferrer"&gt;https://www.bls.gov/ooh/office-and-administrative-support/secretaries-and-administrative-assistants.htm&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>opinion</category>
      <category>programming</category>
    </item>
    <item>
      <title>controller staleness is the hidden tax of platform automation</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Fri, 01 May 2026 00:02:16 +0000</pubDate>
      <link>https://forem.com/pvgomes/controller-staleness-is-the-hidden-tax-of-platform-automation-45e</link>
      <guid>https://forem.com/pvgomes/controller-staleness-is-the-hidden-tax-of-platform-automation-45e</guid>
      <description>&lt;p&gt;I think a lot of platform engineering discourse still has a very annoying habit.&lt;/p&gt;

&lt;p&gt;We keep treating automation as if the main risk is not having enough of it.&lt;/p&gt;

&lt;p&gt;Not enough controllers.&lt;br&gt;
Not enough reconcilers.&lt;br&gt;
Not enough policy engines.&lt;br&gt;
Not enough workflows.&lt;br&gt;
Not enough AI copilots orchestrating the orchestrators.&lt;/p&gt;

&lt;p&gt;And sure, sometimes that is true.&lt;br&gt;
But once a system gets a bit serious, the failure mode changes.&lt;br&gt;
The problem is usually not that you lack automation.&lt;br&gt;
The problem is that you now have automation making decisions from a stale mental model of reality.&lt;/p&gt;

&lt;p&gt;That is why the Kubernetes v1.36 work on &lt;strong&gt;staleness mitigation and observability for controllers&lt;/strong&gt; is more important than it sounds.&lt;br&gt;
It is not just a controller-author quality-of-life improvement.&lt;br&gt;
It is a small but very clear signal about the next platform pain point.&lt;/p&gt;

&lt;p&gt;My take is simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;controller staleness is the hidden tax of platform automation, and the more teams automate, the more expensive that tax gets.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  automation is only smart if its view of the world is fresh enough
&lt;/h2&gt;

&lt;p&gt;A lot of infrastructure automation depends on a pretty fragile assumption:&lt;br&gt;
that the thing making a decision is acting on an acceptably current view of the system.&lt;/p&gt;

&lt;p&gt;That sounds obvious when you say it out loud.&lt;br&gt;
But a surprising amount of platform logic quietly assumes it anyway.&lt;/p&gt;

&lt;p&gt;Controllers watch resources, build a cached view of cluster state, and then reconcile toward some desired outcome.&lt;br&gt;
That model is powerful because it scales much better than constant direct reads.&lt;br&gt;
It is also exactly where the subtle bugs show up.&lt;/p&gt;

&lt;p&gt;Kubernetes described the problem pretty bluntly in the v1.36 post: controller staleness can lead to controllers taking incorrect actions, often because the author made assumptions that only fail once the cache falls behind reality.&lt;br&gt;
And that is the nasty part.&lt;br&gt;
These issues often do not look dramatic at first.&lt;br&gt;
They look like occasional weirdness.&lt;br&gt;
A duplicate action here.&lt;br&gt;
A delayed correction there.&lt;br&gt;
A reconciliation loop that technically succeeds while doing the wrong thing for a few minutes.&lt;/p&gt;

&lt;p&gt;That is why staleness is such a good platform topic.&lt;br&gt;
It sits right in the uncomfortable zone between “works fine in normal demos” and “causes expensive production behavior.”&lt;/p&gt;

&lt;h2&gt;
  
  
  the hard part of automation is not execution. it is timing and truth
&lt;/h2&gt;

&lt;p&gt;I think this is where a lot of modern platform thinking gets too romantic.&lt;/p&gt;

&lt;p&gt;People love the idea of automated systems because automated systems feel decisive.&lt;br&gt;
A desired state exists, a controller sees drift, the controller corrects it, everyone goes home happy.&lt;/p&gt;

&lt;p&gt;Real life is more annoying.&lt;/p&gt;

&lt;p&gt;In real systems, automation is constantly negotiating with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;partial visibility&lt;/li&gt;
&lt;li&gt;event delays&lt;/li&gt;
&lt;li&gt;retries&lt;/li&gt;
&lt;li&gt;caches&lt;/li&gt;
&lt;li&gt;race conditions&lt;/li&gt;
&lt;li&gt;eventual consistency&lt;/li&gt;
&lt;li&gt;competing controllers&lt;/li&gt;
&lt;li&gt;humans making changes at inconvenient times&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the real challenge is not only “can the system act?”&lt;br&gt;
It is “can the system act based on a trustworthy-enough view of reality?”&lt;/p&gt;

&lt;p&gt;That distinction matters a lot.&lt;br&gt;
Because if your automation gets stronger while your freshness guarantees stay fuzzy, you are not really scaling trust.&lt;br&gt;
You are scaling the blast radius of outdated assumptions.&lt;/p&gt;

&lt;p&gt;That is the hidden tax.&lt;br&gt;
Not the compute bill.&lt;br&gt;
Not the YAML sprawl.&lt;br&gt;
The cognitive and operational cost of having more autonomous behavior than your observability and consistency model can safely support.&lt;/p&gt;

&lt;h2&gt;
  
  
  this is not just a kubernetes problem
&lt;/h2&gt;

&lt;p&gt;Kubernetes controllers make the issue easy to see, but the pattern is much broader.&lt;/p&gt;

&lt;p&gt;You can find the same shape everywhere now:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;internal platform workflows acting on lagging state from APIs&lt;/li&gt;
&lt;li&gt;cost automation reacting to yesterday’s data as if it were real time&lt;/li&gt;
&lt;li&gt;deployment systems assuming their inventory view is current when it is already drifting&lt;/li&gt;
&lt;li&gt;security automation revoking or granting based on incomplete propagation&lt;/li&gt;
&lt;li&gt;AI agents chaining actions across tools with a stale understanding of what the previous step actually changed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last one is where this gets especially relevant.&lt;br&gt;
A lot of “agentic” demos look impressive because they show automation doing more steps.&lt;br&gt;
Very few of them spend enough time on whether the agent is acting on fresh, verified state between steps.&lt;/p&gt;

&lt;p&gt;Honestly, that is why I keep being skeptical of the shallow version of AI platform enthusiasm.&lt;br&gt;
We are adding more decision-making loops into systems that already struggle with stale state in much simpler automation.&lt;br&gt;
The problem does not disappear because the interface got friendlier.&lt;br&gt;
It usually gets harder to see.&lt;/p&gt;

&lt;h2&gt;
  
  
  observability for controllers is really observability for trust
&lt;/h2&gt;

&lt;p&gt;One thing I like about the Kubernetes v1.36 direction here is that it treats staleness as something you should not just tolerate silently.&lt;br&gt;
You should be able to detect it, reason about it, and design around it.&lt;/p&gt;

&lt;p&gt;That sounds small.&lt;br&gt;
It is not.&lt;/p&gt;

&lt;p&gt;A lot of platform incidents happen because the system was technically doing what it was built to do, but under conditions the builders were not properly measuring.&lt;br&gt;
A stale controller is a great example.&lt;br&gt;
The logic might be correct.&lt;br&gt;
The intent might be correct.&lt;br&gt;
The action might still be wrong because the world moved and the automation did not notice fast enough.&lt;/p&gt;

&lt;p&gt;That means the observability question is bigger than metrics trivia.&lt;br&gt;
It is really a trust question:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;how stale can this controller become before its actions are unsafe?&lt;/li&gt;
&lt;li&gt;which reconciliations depend on fresh reads versus eventually consistent cache views?&lt;/li&gt;
&lt;li&gt;where are we assuming ordering that the platform does not really guarantee?&lt;/li&gt;
&lt;li&gt;which automation loops should refuse to act when their view of state is too old?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is the grown-up version of platform automation.&lt;br&gt;
Not “make it autonomous and hope.”&lt;br&gt;
More like “make it autonomous inside clearly observed truth boundaries.”&lt;/p&gt;
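&lt;p&gt;A toy version of that boundary, in plain Python rather than a real controller runtime (the cache shape and freshness bound are invented, not a Kubernetes API): the loop refuses to act when its cached view is older than the staleness it is allowed to trust.&lt;/p&gt;

```python
# Sketch: a reconcile step with an explicit freshness boundary.
# Hypothetical cache shape; not the Kubernetes controller-runtime API.
import time

MAX_STALENESS_S = 30  # how old the cached view may be before we refuse to act

def reconcile(cache, now=None):
    """Return an action, or refuse when the cached view is too stale."""
    now = time.time() if now is None else now
    age = now - cache["last_synced"]
    if age > MAX_STALENESS_S:
        # Safe no-op: requeue instead of acting on an outdated world view.
        return {"action": "requeue", "reason": f"cache is {age:.0f}s stale"}
    if cache["observed"] != cache["desired"]:
        return {"action": "update", "to": cache["desired"]}
    return {"action": "noop"}

fresh = {"last_synced": 1000.0, "observed": "v1", "desired": "v2"}
stale = {"last_synced": 900.0,  "observed": "v1", "desired": "v2"}
# With the same drift, the fresh view acts and the stale view refuses.
```

The interesting design choice is the refusal branch: the controller does less, not more, when its confidence in the world drops.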

&lt;h2&gt;
  
  
  platform teams should think less about magic and more about control surfaces
&lt;/h2&gt;

&lt;p&gt;This is also why I think the most valuable platform engineering work right now is weirdly unglamorous.&lt;/p&gt;

&lt;p&gt;Not the giant internal developer portal launch.&lt;br&gt;
Not the seventh wrapper around LLM tool invocation.&lt;br&gt;
Not the architectural diagram where every box sounds intelligent.&lt;/p&gt;

&lt;p&gt;The valuable work is often things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;defining where freshness matters more than throughput&lt;/li&gt;
&lt;li&gt;making state lag visible before it becomes user-visible damage&lt;/li&gt;
&lt;li&gt;deciding which control loops need hard safeguards&lt;/li&gt;
&lt;li&gt;building reconciliation logic that can prove it is acting on current-enough information&lt;/li&gt;
&lt;li&gt;teaching teams that “eventually consistent” is not a decorative phrase&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is not as sexy as talking about fully autonomous platforms.&lt;br&gt;
But it is much closer to what keeps systems from becoming haunted.&lt;/p&gt;

&lt;p&gt;And yes, I said haunted.&lt;br&gt;
Because stale automation has exactly that vibe.&lt;br&gt;
Something changed.&lt;br&gt;
Some controller noticed too late.&lt;br&gt;
Another system reacted to the wrong intermediate state.&lt;br&gt;
And now everyone is trying to explain why the system behaved like it believed in ghosts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia.tenor.com%2F1T2mQK4h5vAAAAAC%2Fconfused-math.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmedia.tenor.com%2F1T2mQK4h5vAAAAAC%2Fconfused-math.gif" alt="haunted automation energy" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  more automation means more responsibility to constrain automation
&lt;/h2&gt;

&lt;p&gt;I think this is the part many teams still underestimate.&lt;/p&gt;

&lt;p&gt;When you increase automation, you do not only gain leverage.&lt;br&gt;
You also take on a stronger obligation to define the conditions under which that automation is trustworthy.&lt;/p&gt;

&lt;p&gt;That means automation design has to include things like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;freshness assumptions&lt;/li&gt;
&lt;li&gt;backoff behavior&lt;/li&gt;
&lt;li&gt;conflict handling&lt;/li&gt;
&lt;li&gt;idempotency&lt;/li&gt;
&lt;li&gt;safe no-op conditions&lt;/li&gt;
&lt;li&gt;clear refusal modes when state confidence is too low&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is one reason I think platform engineering is slowly becoming less about tooling assembly and more about operational philosophy.&lt;br&gt;
What do we allow the machine to do automatically?&lt;br&gt;
Under what evidence?&lt;br&gt;
With what rollback path?&lt;br&gt;
With what visibility?&lt;/p&gt;

&lt;p&gt;Those are not secondary implementation details anymore.&lt;br&gt;
They are the real product decisions of the platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  my take
&lt;/h2&gt;

&lt;p&gt;The Kubernetes controller staleness work matters because it highlights a problem that a lot of modern infrastructure is about to feel more sharply.&lt;/p&gt;

&lt;p&gt;As platforms add more controllers, more policy engines, more automation layers, and more AI-shaped orchestration, the scarce resource is not only compute or developer time.&lt;br&gt;
It is &lt;strong&gt;trustworthy system awareness&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If the automation loop cannot see reality clearly enough, then adding more automation does not reliably create more control.&lt;br&gt;
Sometimes it just creates faster confusion.&lt;/p&gt;

&lt;p&gt;That is why I think controller staleness is the hidden tax of platform automation.&lt;br&gt;
It is the price teams pay when automated systems are allowed to act with more confidence than their view of the world deserves.&lt;/p&gt;

&lt;p&gt;The next generation of strong platform teams will not just ask, “what can we automate?”&lt;br&gt;
They will ask a better question:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;how fresh does the truth need to be before we let the machine touch anything important?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is a much less flashy question.&lt;br&gt;
And a much more useful one.&lt;/p&gt;

&lt;h2&gt;
  
  
  references
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Kubernetes, &lt;em&gt;Kubernetes v1.36: Staleness Mitigation and Observability for Controllers&lt;/em&gt; — &lt;a href="https://kubernetes.io/blog/2026/04/28/kubernetes-v1-36-staleness-mitigation-for-controllers/" rel="noopener noreferrer"&gt;https://kubernetes.io/blog/2026/04/28/kubernetes-v1-36-staleness-mitigation-for-controllers/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Kubernetes, &lt;em&gt;Gateway API v1.5: Moving features to Stable&lt;/em&gt; — &lt;a href="https://kubernetes.io/blog/2026/04/24/gateway-api-v1-5/" rel="noopener noreferrer"&gt;https://kubernetes.io/blog/2026/04/24/gateway-api-v1-5/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Martin Fowler, &lt;em&gt;Structured-Prompt-Driven Development (SPDD)&lt;/em&gt; — &lt;a href="https://martinfowler.com/articles/structured-prompt-driven/" rel="noopener noreferrer"&gt;https://martinfowler.com/articles/structured-prompt-driven/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>platformengineering</category>
      <category>automation</category>
      <category>ai</category>
    </item>
    <item>
      <title>your second brain should not be a folder full of markdown</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Thu, 30 Apr 2026 15:02:24 +0000</pubDate>
      <link>https://forem.com/pvgomes/your-second-brain-should-not-be-a-folder-full-of-markdown-ka0</link>
      <guid>https://forem.com/pvgomes/your-second-brain-should-not-be-a-folder-full-of-markdown-ka0</guid>
      <description>&lt;p&gt;I like markdown.&lt;br&gt;
I really do.&lt;/p&gt;

&lt;p&gt;Markdown is simple, portable, git-friendly, easy to back up, and great for writing.&lt;br&gt;
But I also think a lot of “second brain” tools quietly fall apart at the exact moment they are supposed to become useful.&lt;/p&gt;

&lt;p&gt;They work nicely while your memory is small.&lt;br&gt;
Then one day you want to find that one thing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the exact hour you went to the gym three Tuesdays ago&lt;/li&gt;
&lt;li&gt;the bakery you liked in a city you visited once&lt;/li&gt;
&lt;li&gt;the detailed advice a friend gave you in a long conversation six months ago&lt;/li&gt;
&lt;li&gt;that random insight you had during a walk and saved from your phone&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And suddenly your “second brain” is just a polite pile of files.&lt;/p&gt;

&lt;p&gt;That is the moment where I think most markdown-first memory systems reveal their real limitation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;they are optimized for storing text, not for retrieving memory.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is why I find &lt;a href="https://github.com/brazanation/jurupari" rel="noopener noreferrer"&gt;Jurupari&lt;/a&gt; interesting.&lt;br&gt;
Not because it adds more note-taking ceremony.&lt;br&gt;
Quite the opposite.&lt;br&gt;
Because it strips the idea down to what actually matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;store memories in a real database&lt;/li&gt;
&lt;li&gt;search them semantically&lt;/li&gt;
&lt;li&gt;expose them through MCP and HTTP&lt;/li&gt;
&lt;li&gt;let any AI tool read and write them for you&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is much closer to what a real second brain should be.&lt;/p&gt;
&lt;h2&gt;
  
  
  the problem with markdown second brains
&lt;/h2&gt;

&lt;p&gt;A folder full of notes feels smart at first.&lt;br&gt;
Engineers especially love it because it feels open and under control.&lt;br&gt;
No vendor lock-in, no weird proprietary format, just files.&lt;/p&gt;

&lt;p&gt;I get the appeal.&lt;br&gt;
I have that instinct too.&lt;/p&gt;

&lt;p&gt;But once memory stops being “a few documents I can browse manually” and becomes “an extension of my day-to-day thinking,” files start getting awkward.&lt;/p&gt;

&lt;p&gt;The problem is not that markdown is bad.&lt;br&gt;
The problem is that &lt;strong&gt;memory retrieval is a search problem&lt;/strong&gt;, and search gets much better when you treat it like a database problem instead of a filesystem hobby.&lt;/p&gt;

&lt;p&gt;If your idea of memory is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;searchable history&lt;/li&gt;
&lt;li&gt;timeline fragments&lt;/li&gt;
&lt;li&gt;personal facts&lt;/li&gt;
&lt;li&gt;recurring preferences&lt;/li&gt;
&lt;li&gt;conversation details&lt;/li&gt;
&lt;li&gt;lightweight journaling&lt;/li&gt;
&lt;li&gt;structured and unstructured recall&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;...then embedded search plus a proper data model beats filename gymnastics every time.&lt;/p&gt;
&lt;h2&gt;
  
  
  what jurupari gets right
&lt;/h2&gt;

&lt;p&gt;Jurupari is basically a very simple personal knowledge base with the right primitives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; for storage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pgvector&lt;/strong&gt; for semantic search&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP&lt;/strong&gt; so AI tools can use it directly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTP API&lt;/strong&gt; for direct integration and automation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CRUD support&lt;/strong&gt;, not just retrieval&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That last part matters a lot.&lt;br&gt;
A lot of “memory” integrations are glorified search adapters.&lt;br&gt;
They can retrieve context, maybe rank snippets, maybe inject them into a prompt.&lt;br&gt;
But they cannot really behave like a durable memory system because writing is awkward or missing.&lt;/p&gt;

&lt;p&gt;Jurupari fixes that.&lt;/p&gt;

&lt;p&gt;With MCP in front of it, memory stops being a manual note-taking ritual and becomes something much more natural:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hey, save this on my Jurupari memory.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is the right abstraction.&lt;br&gt;
I do not want to stop what I am doing, open another app, decide on a folder, decide on a title, decide on tags, and become my own archivist.&lt;br&gt;
I want memory capture to be cheap.&lt;/p&gt;

&lt;p&gt;If the system is good, I should be able to talk to Claude, GPT, OpenClaw, Hermes, n8n, or any other MCP-capable tool and say:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;save this&lt;/li&gt;
&lt;li&gt;find that&lt;/li&gt;
&lt;li&gt;update this&lt;/li&gt;
&lt;li&gt;remove that&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is a second brain.&lt;br&gt;
Not a graveyard of notes.&lt;/p&gt;
&lt;h2&gt;
  
  
  semantic search is the real feature
&lt;/h2&gt;

&lt;p&gt;The real power here is not “you can store notes in Postgres.”&lt;br&gt;
That part is almost boring.&lt;/p&gt;

&lt;p&gt;The real feature is that semantic search changes how you interact with memory.&lt;/p&gt;

&lt;p&gt;You do not need to remember the exact words you used.&lt;br&gt;
You just need to remember what you meant.&lt;/p&gt;

&lt;p&gt;That is a huge difference.&lt;/p&gt;

&lt;p&gt;A filesystem usually rewards perfect recall:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;correct filename&lt;/li&gt;
&lt;li&gt;correct folder&lt;/li&gt;
&lt;li&gt;correct keyword&lt;/li&gt;
&lt;li&gt;correct tagging habit&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A semantic memory system rewards approximate recall:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“find that thing I wrote about feeling tired after leg day”&lt;/li&gt;
&lt;li&gt;“what was the coffee place I liked near the station?”&lt;/li&gt;
&lt;li&gt;“search my memory for that conversation about changing jobs”&lt;/li&gt;
&lt;li&gt;“what did I say last month about sleep quality?”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That is much closer to how human memory actually works.&lt;/p&gt;
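&lt;p&gt;A toy sketch of approximate recall, using bag-of-words cosine similarity as a stand-in for real embeddings (Jurupari itself uses pgvector; none of this is its actual API): the query shares no filename, folder, or exact phrasing with the stored memory, and retrieval still works.&lt;/p&gt;

```python
# Toy semantic recall: word-overlap cosine similarity stands in
# for embedding search. Not Jurupari's API, just the idea.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memories = [
    "went to the gym today at 07:10",
    "talked to Daniel about leaving his job",
    "bought sourdough at the bakery at 08:35",
]

def recall(query):
    """Return the stored memory most similar to the query."""
    q = vectorize(query)
    return max(memories, key=lambda m: cosine(q, vectorize(m)))
```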
&lt;h2&gt;
  
  
  this is where a second brain becomes actually useful
&lt;/h2&gt;

&lt;p&gt;A lot of “second brain” marketing is weirdly grandiose.&lt;br&gt;
It talks like you are building a digital philosopher king inside your laptop.&lt;br&gt;
I do not think that is the useful framing.&lt;/p&gt;

&lt;p&gt;The useful framing is much simpler:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;your memory gets more valuable when it becomes easy to save and easy to find.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That means very normal things suddenly become worth recording.&lt;/p&gt;

&lt;p&gt;For example:&lt;/p&gt;
&lt;h3&gt;
  
  
  1. everyday activity logging
&lt;/h3&gt;

&lt;p&gt;You want to remember what time you did something.&lt;br&gt;
Not because it is deep or poetic, but because reality is slippery.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what time did I go to the gym?&lt;/li&gt;
&lt;li&gt;when did I stop by the bakery?&lt;/li&gt;
&lt;li&gt;what time did I take the dog out?&lt;/li&gt;
&lt;li&gt;when did I last call my parents?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Prompt examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Save this on my Jurupari memory: I went to the gym today at 07:10.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Save this memory: I went to the bakery at 08:35 and bought sourdough and two pastéis de nata.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Search my Jurupari memory for the last time I went to the gym in the morning.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. detailed conversations
&lt;/h3&gt;

&lt;p&gt;Sometimes the most useful thing to remember is not a task.&lt;br&gt;
It is context.&lt;/p&gt;

&lt;p&gt;Maybe a friend told you something important.&lt;br&gt;
Maybe you had a subtle conversation with your partner.&lt;br&gt;
Maybe someone gave you advice that only makes sense when you preserve the detail.&lt;/p&gt;

&lt;p&gt;Prompt examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Save this on my Jurupari memory: today I talked to Daniel for an hour. He said he is thinking about leaving his job because the team structure changed, he feels blocked by management, and he wants to move closer to product strategy.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Later:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Find my memory about Daniel thinking of leaving his job.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is way better than hoping you named the note &lt;code&gt;career-chat-daniel-maybe-job-change-final-final.md&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. journal entries that you can actually recover
&lt;/h3&gt;

&lt;p&gt;This is the part I like most.&lt;br&gt;
Jurupari can work like a journal, but not the kind of journal you write and then lose inside your own archive.&lt;/p&gt;

&lt;p&gt;You can keep small fragments of life:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what made you anxious today&lt;/li&gt;
&lt;li&gt;what went well this week&lt;/li&gt;
&lt;li&gt;a lesson from a hard conversation&lt;/li&gt;
&lt;li&gt;a small win you want to remember&lt;/li&gt;
&lt;li&gt;a pattern you are noticing in your energy, habits, or mood&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Prompt examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Save this memory: I felt unusually focused today after sleeping 8 hours and going for a 20-minute walk before work.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Search my memory for patterns involving focus, sleep, and walking.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is where the “second brain” idea stops being branding and starts becoming practical.&lt;/p&gt;

&lt;h2&gt;
  
  
  mcp is what makes this feel native instead of bolted on
&lt;/h2&gt;

&lt;p&gt;The reason this gets much more interesting now than a few years ago is MCP.&lt;/p&gt;

&lt;p&gt;Without MCP, a memory system is usually another app you have to remember to use.&lt;br&gt;
With MCP, memory becomes part of the interface layer of your AI tools.&lt;/p&gt;

&lt;p&gt;That changes behavior.&lt;/p&gt;

&lt;p&gt;Instead of thinking:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I should go open my note system and save this.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You think:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hey, save this.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is a much lower-friction action.&lt;br&gt;
And low friction is everything for personal memory systems.&lt;br&gt;
Because the best memory tool is not the one with the fanciest graph view.&lt;br&gt;
It is the one you actually keep feeding.&lt;/p&gt;

&lt;p&gt;Jurupari is especially nice here because it is not trying to trap you inside one product surface.&lt;br&gt;
You can plug it into:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude&lt;/li&gt;
&lt;li&gt;GPT&lt;/li&gt;
&lt;li&gt;OpenClaw&lt;/li&gt;
&lt;li&gt;Hermes&lt;/li&gt;
&lt;li&gt;n8n&lt;/li&gt;
&lt;li&gt;other MCP-capable tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So the memory follows your workflow instead of demanding a new one.&lt;/p&gt;
&lt;h2&gt;
  
  
  the real second brain is writable
&lt;/h2&gt;

&lt;p&gt;I think this is the most underrated idea in the whole space.&lt;/p&gt;

&lt;p&gt;A real second brain cannot be read-only.&lt;/p&gt;

&lt;p&gt;If an AI can search your memory but cannot update it, correct it, append to it, or save new facts when you ask, then it is not really your second brain.&lt;br&gt;
It is just a retrieval plugin.&lt;/p&gt;

&lt;p&gt;Jurupari exposing CRUD through MCP is the important design choice.&lt;br&gt;
That is what makes these flows possible:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Save this on my Jurupari memory: the plumber said he will come on Friday between 14:00 and 16:00.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Update that memory: the plumber moved it to Saturday at 10:30.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Delete the duplicate note about the bakery. Keep the one with the exact time.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That sounds small, but it is the difference between “search over notes” and “persistent memory you can manage conversationally.”&lt;/p&gt;
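&lt;p&gt;Under the hood, prompts like those map onto plain CRUD tool calls. Here is a toy JavaScript sketch of the kind of store an MCP memory server could wrap (the function names and shapes are illustrative, not Jurupari's actual API):&lt;/p&gt;

```javascript
// Toy in-memory store showing the CRUD surface an MCP memory
// server exposes. Names here are hypothetical, not Jurupari's API.
const memories = new Map();
let nextId = 1;

function saveMemory(text) {
  const id = String(nextId);
  nextId += 1;
  memories.set(id, { id, text, updatedAt: Date.now() });
  return id;
}

function updateMemory(id, text) {
  const entry = memories.get(id);
  if (!entry) throw new Error("no memory with id " + id);
  entry.text = text;
  entry.updatedAt = Date.now();
}

function deleteMemory(id) {
  memories.delete(id);
}

// "Save this on my memory: the plumber comes Friday 14:00 to 16:00."
const id = saveMemory("plumber: Friday between 14:00 and 16:00");
// "Update that memory: the plumber moved it to Saturday at 10:30."
updateMemory(id, "plumber: Saturday at 10:30");
console.log(memories.get(id).text); // "plumber: Saturday at 10:30"
```

&lt;p&gt;The point is that "update" and "delete" are first-class verbs here, not something you fake by appending a correction note next to the stale one.&lt;/p&gt;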

&lt;h2&gt;
  
  
  how to run your own jurupari
&lt;/h2&gt;

&lt;p&gt;The nice part is that this is not some giant infrastructure project.&lt;br&gt;
The repo is refreshingly direct.&lt;/p&gt;

&lt;p&gt;At a high level, the setup is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;deploy Jurupari somewhere you like&lt;/li&gt;
&lt;li&gt;point it at a PostgreSQL database with pgvector&lt;/li&gt;
&lt;li&gt;set your environment variables&lt;/li&gt;
&lt;li&gt;run the API&lt;/li&gt;
&lt;li&gt;expose MCP so your AI tools can connect to it&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From the project README, the local dev flow is basically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;span class="c"&gt;# fill DATABASE_URL, OPENAI_API_KEY, JURUPARI_TOKEN&lt;/span&gt;

docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
pnpm &lt;span class="nb"&gt;install
&lt;/span&gt;pnpm db:push
pnpm dev:api
pnpm &lt;span class="nt"&gt;--filter&lt;/span&gt; @jurupari/mcp build &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; node packages/mcp/dist/index.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And if you want a simple hosted version, the project explicitly mentions deployment on places like &lt;strong&gt;AWS&lt;/strong&gt; or &lt;strong&gt;Railway&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The mental model is straightforward:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Postgres + pgvector&lt;/strong&gt; stores and indexes your memory&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;the API&lt;/strong&gt; gives you direct application access&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;the MCP server&lt;/strong&gt; lets AI clients talk to the memory naturally&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;the token model&lt;/strong&gt; controls read/write access&lt;/li&gt;
&lt;/ul&gt;
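&lt;p&gt;To make the first bullet concrete: pgvector stores an embedding next to each row and retrieves by vector similarity inside Postgres. A toy JavaScript illustration of the same idea, with made-up 3-dimensional embeddings standing in for real model output:&lt;/p&gt;

```javascript
// Toy illustration of the Postgres + pgvector idea: each memory row
// carries an embedding, and search is nearest-neighbor by cosine
// similarity. The 3-d vectors below are invented for the example;
// real embeddings come from a model.
const rows = [
  { text: "went to the gym at 07:10", embedding: [0.9, 0.1, 0.0] },
  { text: "bakery run at 08:35", embedding: [0.1, 0.9, 0.0] },
  { text: "called my parents on Sunday", embedding: [0.7, 0.3, 0.2] },
];

function dot(a, b) {
  return a.reduce(function (sum, v, i) { return sum + v * b[i]; }, 0);
}

function cosine(a, b) {
  return dot(a, b) / Math.sqrt(dot(a, a) * dot(b, b));
}

// The query "last time I exercised in the morning", embedded:
const query = [0.85, 0.15, 0.05];
const ranked = rows
  .map(function (r) { return { text: r.text, score: cosine(query, r.embedding) }; })
  .sort(function (a, b) { return b.score - a.score; });
console.log(ranked[0].text); // "went to the gym at 07:10"
```

&lt;p&gt;In the real system the embedding and the similarity ranking both happen server-side, so it scales past what a linear scan like this can do.&lt;/p&gt;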

&lt;p&gt;There is also a nice split between remote and local MCP setups:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Remote SSE&lt;/strong&gt; for web clients and remote integrations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local stdio&lt;/strong&gt; for tools like Claude Desktop, Claude Code, or Cursor&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That means you can choose convenience or locality depending on your setup.&lt;/p&gt;
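&lt;p&gt;For the local stdio route, wiring the server into a client like Claude Desktop is usually a small JSON entry in the client's MCP configuration. A sketch, where the path and the &lt;code&gt;JURUPARI_API_URL&lt;/code&gt; variable name are illustrative assumptions (check the README for the exact keys):&lt;/p&gt;

```json
{
  "mcpServers": {
    "jurupari": {
      "command": "node",
      "args": ["/path/to/jurupari/packages/mcp/dist/index.js"],
      "env": {
        "JURUPARI_API_URL": "http://localhost:3000",
        "JURUPARI_TOKEN": "your-token-here"
      }
    }
  }
}
```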

&lt;h2&gt;
  
  
  who this is for
&lt;/h2&gt;

&lt;p&gt;I think Jurupari makes the most sense for people who:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;already use AI tools every day&lt;/li&gt;
&lt;li&gt;are tired of fragmented personal context&lt;/li&gt;
&lt;li&gt;want memory to be available across tools&lt;/li&gt;
&lt;li&gt;prefer owning their own stack&lt;/li&gt;
&lt;li&gt;understand that retrieval quality matters more than note aesthetics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Especially engineers.&lt;br&gt;
Because engineers often over-romanticize plain files and under-invest in retrieval.&lt;/p&gt;

&lt;p&gt;I say that with love.&lt;br&gt;
We do this all the time.&lt;br&gt;
We will build a beautiful directory tree and call it knowledge management, then act surprised when finding anything becomes annoying.&lt;/p&gt;

&lt;h2&gt;
  
  
  my take
&lt;/h2&gt;

&lt;p&gt;If you want a writing system, markdown is still fantastic.&lt;br&gt;
If you want a durable searchable memory that can live behind your favorite AI tools, markdown folders are usually the wrong center of gravity.&lt;/p&gt;

&lt;p&gt;That is why I think Jurupari is a much more honest version of the “second brain” idea.&lt;/p&gt;

&lt;p&gt;It does not pretend memory is about collecting pretty notes.&lt;br&gt;
It treats memory like what it actually becomes at scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a search problem&lt;/li&gt;
&lt;li&gt;a retrieval problem&lt;/li&gt;
&lt;li&gt;a write problem&lt;/li&gt;
&lt;li&gt;a data-model problem&lt;/li&gt;
&lt;li&gt;an interface problem&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And once you see it that way, the architecture becomes obvious.&lt;/p&gt;

&lt;p&gt;Use a real database.&lt;br&gt;
Use semantic search.&lt;br&gt;
Expose CRUD.&lt;br&gt;
Plug it into the tools you already talk to.&lt;/p&gt;

&lt;p&gt;That is much closer to a real second brain than a synced folder full of markdown will ever be.&lt;/p&gt;

&lt;h2&gt;
  
  
  references
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Jurupari GitHub repository — &lt;a href="https://github.com/brazanation/jurupari" rel="noopener noreferrer"&gt;https://github.com/brazanation/jurupari&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Jurupari README — &lt;a href="https://raw.githubusercontent.com/brazanation/jurupari/main/README.md" rel="noopener noreferrer"&gt;https://raw.githubusercontent.com/brazanation/jurupari/main/README.md&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>opinion</category>
      <category>devops</category>
    </item>
    <item>
      <title>Do you know about the AI transparency index? You should</title>
      <dc:creator>Paulo Victor Leite Lima Gomes</dc:creator>
      <pubDate>Thu, 30 Apr 2026 10:00:12 +0000</pubDate>
      <link>https://forem.com/pvgomes/ai-transparency-index-on-pvgomescom-2p1k</link>
      <guid>https://forem.com/pvgomes/ai-transparency-index-on-pvgomescom-2p1k</guid>
      <description>&lt;h3&gt;
  
  
  The AI transparency index numbers are uncomfortable, but you should still know them
&lt;/h3&gt;

&lt;p&gt;stanford's &lt;a href="https://crfm.stanford.edu/fmti/December-2025/index.html" rel="noopener noreferrer"&gt;foundation model transparency index&lt;/a&gt; dropped its december 2025 edition and if you build anything on top of these models, you should probably read it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;the mean score dropped 17 points.&lt;/strong&gt; from 58 to 41. meta down 29, mistral down 37, openai down 14. this is not a documentation problem — these companies have entire policy teams. it's a choice.&lt;/p&gt;

&lt;p&gt;a few things that stood out to me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ibm scored 95.&lt;/strong&gt; first place across all three years. nobody talks about this.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;open-weight ≠ transparent.&lt;/strong&gt; deepseek and alibaba release weights and still scored 32 and 26. publishing weights is not the same as being auditable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;training data is still a black box everywhere.&lt;/strong&gt; what they trained on, whether they had licenses, how they handled pii — consistently the worst-scoring subdomain, three years running.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;anthropic didn't submit a report.&lt;/strong&gt; the fmti team built one manually. anthropic ranked 2nd. good score, bad signal.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;as engineers we're the ones building on top of these systems. when something goes wrong in production, "we didn't disclose how we trained it" is not an answer you can give anyone.&lt;/p&gt;

&lt;p&gt;the index doesn't fix that. but it names who's trying to be honest versus who's retreating as market share grows. that's useful signal when choosing what to build on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why should you know about the fmti?
&lt;/h3&gt;

&lt;p&gt;most people pick their ai provider based on benchmarks, pricing, or vibes. the foundation model transparency index measures something different: how honest a company is about what they actually built. that matters more than most engineers realize.&lt;/p&gt;

&lt;p&gt;when you integrate a model into a product, you inherit its risks — biased outputs, leaked training patterns, copyright exposure, opaque safety evaluations. you can't audit what was never disclosed. and when something breaks, you're the one explaining it to stakeholders, not the lab.&lt;/p&gt;

&lt;p&gt;the fmti gives you a structured way to ask: does this provider tell me enough to reason about what i'm building on?&lt;/p&gt;

&lt;p&gt;it's not perfect. scores can be gamed, and disclosure isn't the same as safety. but it's one of the few independent, recurring attempts to hold this industry accountable before regulators do it badly.&lt;/p&gt;

&lt;p&gt;if you're doing vendor evaluation, building on llms in a regulated domain, or just tired of treating "trust us" as an architecture decision — this index is worth bookmarking.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>opinion</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
