<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Z4J</title>
    <description>The latest articles on Forem by Z4J (@z4j).</description>
    <link>https://forem.com/z4j</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3918306%2F72e61100-4663-4da9-9e81-15d0db055b76.jpg</url>
      <title>Forem: Z4J</title>
      <link>https://forem.com/z4j</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/z4j"/>
    <language>en</language>
    <item>
      <title>z4j: a self-hosted control plane for Python task queues</title>
      <dc:creator>Z4J</dc:creator>
      <pubDate>Thu, 07 May 2026 16:36:12 +0000</pubDate>
      <link>https://forem.com/z4j/z4j-a-self-hosted-control-plane-for-python-task-queues-541h</link>
      <guid>https://forem.com/z4j/z4j-a-self-hosted-control-plane-for-python-task-queues-541h</guid>
      <description>&lt;p&gt;z4j is an open-source dashboard for Python background-job systems. It connects to the task queue (or queues) running in production and gives operators a single place to observe, retry, schedule, and audit the jobs flowing through them&lt;/p&gt;

&lt;p&gt;The product page is at &lt;a href="https://z4j.com" rel="noopener noreferrer"&gt;z4j.com&lt;/a&gt;. Source lives at &lt;a href="https://github.com/z4jdev/z4j" rel="noopener noreferrer"&gt;github.com/z4jdev/z4j&lt;/a&gt;, and the umbrella package on PyPI is &lt;code&gt;z4j&lt;/code&gt;. Documentation is at &lt;a href="https://z4j.dev" rel="noopener noreferrer"&gt;z4j.dev&lt;/a&gt;, and a live demo runs at &lt;a href="https://demo.z4j.dev" rel="noopener noreferrer"&gt;demo.z4j.dev&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;The problem it addresses&lt;/h2&gt;

&lt;p&gt;Most Python applications that run background work for long enough end up with more than one queue technology under the hood. Consolidation gets discussed every other quarter and rarely happens: there is usually a library or service boundary that pre-dates the effort, and rewriting the tasks across that boundary is expensive enough that the second queue stays.&lt;/p&gt;

&lt;p&gt;The operational cost of that fragmentation is real: separate dashboards (or no dashboard at all for some engines), separate retry mechanisms, separate audit trails, separate scheduling stories. Incident response begins with figuring out which system to inspect before any actual investigation can start.&lt;/p&gt;

&lt;p&gt;z4j is built around that fragmentation. Jobs from every supported engine appear in one list. The same retry action works against Celery and Dramatiq. Schedules from every backend are editable in the same form. The audit log records every action across every engine in a single HMAC-chained sequence.&lt;/p&gt;

&lt;p&gt;Six engine adapters are supported today: Celery, RQ, Dramatiq, Huey, arq, and taskiq. Seven scheduler adapters cover APScheduler, Celery Beat, Huey periodic, RQ Scheduler, arq cron, taskiq scheduler, and z4j's own scheduler. Framework integrations exist for Django, Flask, and FastAPI. The complete public PyPI surface is nineteen packages, each one its own thin install.&lt;/p&gt;

&lt;h2&gt;Architecture&lt;/h2&gt;

&lt;p&gt;The system has two halves.&lt;/p&gt;

&lt;p&gt;The first half is the central control service: a FastAPI backend paired with a React dashboard (TanStack Start v1, React 19.2, TypeScript). Operators point a browser at it. State lives in Postgres for production, with SQLite supported for local development. The control service speaks an authenticated WebSocket protocol to every application it observes.&lt;/p&gt;
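
&lt;p&gt;To make the shape of that first half concrete, the sketch below shows roughly what an authenticated agent WebSocket endpoint could look like in FastAPI. It is illustrative only: the route path, header name, and message handling are assumptions made for the example, not z4j's actual endpoint.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Rough sketch of a control-service WebSocket endpoint, assuming FastAPI.
# The route, header name, token check, and message shape are placeholders,
# not taken from the z4j source.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
EXPECTED_TOKEN = "change-me"  # placeholder; a real service would look this up per agent

@app.websocket("/agents/ws")
async def agent_socket(websocket: WebSocket):
    # Agents connect outward to this endpoint and authenticate with a header.
    token = websocket.headers.get("authorization", "")
    if token != "Bearer " + EXPECTED_TOKEN:
        await websocket.close(code=4401)  # application-defined "unauthorized" close code
        return
    await websocket.accept()
    try:
        while True:
            # Each message could be a task lifecycle event or a command acknowledgement.
            event = await websocket.receive_json()
            print("event from agent:", event)
    except WebSocketDisconnect:
        pass
&lt;/code&gt;&lt;/pre&gt;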

&lt;p&gt;The second half is a small pip library installed into each application. It captures task lifecycle events, discovers the registered task graph at startup, and executes commands sent back over the WebSocket. Adapters are shipped as separate optional installs, so the surface is pay-as-you-go:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;pip install z4j[django,celery]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The library always connects outward to the control service. Observed applications do not need to be reachable from the control service's network. That suits the common case where workers run in private subnets or behind NAT, and it removes a class of inbound-firewall configuration entirely.&lt;/p&gt;
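
&lt;p&gt;The documentation at &lt;a href="https://z4j.dev" rel="noopener noreferrer"&gt;z4j.dev&lt;/a&gt; has the real integration API; as a purely hypothetical sketch of what wiring the library into a Celery application could look like (the module name, the &lt;code&gt;connect()&lt;/code&gt; call, and the environment variable names below are illustrative assumptions, not the published surface):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Hypothetical wiring sketch: "z4j_agent", connect(), and the environment
# variable names are illustrative assumptions, not the published z4j API.
import os

from celery import Celery
import z4j_agent  # assumed import name for the pip-installed library

app = Celery("billing", broker=os.environ["CELERY_BROKER_URL"])

# The agent dials outward to the control service over an authenticated
# WebSocket, so workers behind NAT or in private subnets need no inbound path.
z4j_agent.connect(
    celery_app=app,
    control_url=os.environ.get("Z4J_CONTROL_URL", "wss://z4j.internal.example/agents/ws"),
    api_key=os.environ["Z4J_API_KEY"],
)
&lt;/code&gt;&lt;/pre&gt;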

&lt;p&gt;A note on licensing, because this comes up early: the control service (the &lt;code&gt;z4j&lt;/code&gt; distribution) is licensed under AGPL v3, while every adapter library (&lt;code&gt;z4j-django&lt;/code&gt;, &lt;code&gt;z4j-celery&lt;/code&gt;, and the rest) is Apache 2.0. The split is intentional: dashboard forks distributed for commercial resale stay open under AGPL, while application code that imports a z4j adapter is not subject to AGPL terms. Application teams can deploy z4j without it touching their own license posture.&lt;/p&gt;

&lt;h2&gt;What works well today&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A single job list across every wired engine, with unified retry, cancel, and bulk-action controls.&lt;/li&gt;
&lt;li&gt;Persistent history. Failures from last week are still inspectable; the dashboard is not bound to whatever the broker has in memory right now.&lt;/li&gt;
&lt;li&gt;Schedule CRUD against any of the seven scheduler backends, with one shared form.&lt;/li&gt;
&lt;li&gt;HMAC-chained audit log. Every action is verifiable as untampered after the fact, which closes a class of "did someone retry that on purpose" questions during postmortems; a sketch of the chaining idea follows this list.&lt;/li&gt;
&lt;li&gt;Redaction by default on common credential patterns (&lt;code&gt;token&lt;/code&gt;, &lt;code&gt;password&lt;/code&gt;, &lt;code&gt;secret&lt;/code&gt;, &lt;code&gt;authorization&lt;/code&gt; and their conventional spellings).&lt;/li&gt;
&lt;li&gt;Single auth surface. There is no per-engine login.&lt;/li&gt;
&lt;li&gt;Outward-only connection model from observed applications. No reverse network path required from the control service to workers.&lt;/li&gt;
&lt;/ul&gt;
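
&lt;p&gt;On the audit log: an HMAC chain feeds each record's MAC into the computation of the next one, so editing or deleting any earlier record invalidates every MAC after it. A minimal illustration of the idea, not z4j's actual record format or key handling:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Minimal illustration of an HMAC-chained log; field names, serialization,
# and key handling are illustrative, not z4j's actual storage format.
import hashlib
import hmac
import json

SECRET = b"audit-signing-key"  # placeholder key

def append_entry(chain, entry):
    prev_mac = chain[-1]["mac"] if chain else ""
    payload = json.dumps(entry, sort_keys=True) + prev_mac
    mac = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    chain.append({"entry": entry, "mac": mac})

def verify(chain):
    prev_mac = ""
    for record in chain:
        payload = json.dumps(record["entry"], sort_keys=True) + prev_mac
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, record["mac"]):
            return False
        prev_mac = record["mac"]
    return True

log = []
append_entry(log, {"action": "retry", "job_id": "abc123", "actor": "alice"})
append_entry(log, {"action": "cancel", "job_id": "def456", "actor": "bob"})
print(verify(log))                    # True
log[0]["entry"]["actor"] = "mallory"  # tamper with an earlier record
print(verify(log))                    # False
&lt;/code&gt;&lt;/pre&gt;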

&lt;h2&gt;What is not solid yet&lt;/h2&gt;

&lt;p&gt;The product is honest about its rough edges and the docs flag them as well.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The schedule-edit form parses cron expressions correctly, but the ergonomics around timezone display and human-readable preview need another pass.&lt;/li&gt;
&lt;li&gt;Test coverage is thinner against unusual Celery configurations (custom result backends, signed messages, multi-vhost RabbitMQ topologies) than against the mainstream Redis and RabbitMQ paths.&lt;/li&gt;
&lt;li&gt;Dashboard mobile layout works but a couple of views (release notes, deep schedule detail) are not visually tight on narrow screens.&lt;/li&gt;
&lt;li&gt;Production hardening documentation (TLS termination, secret rotation, Kubernetes deployment with a managed Postgres) exists but is less polished than the local install path.&lt;/li&gt;
&lt;li&gt;Bulk operations have UI affordances for cancel and retry but not yet for re-prioritization across mixed engines.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Where it fits, and where it does not&lt;/h2&gt;

&lt;p&gt;z4j is most useful where there is more than one queue technology in production, or where a single auditable action and history surface across the queue infrastructure is a hard requirement. Compliance-driven environments fit naturally because of the chained audit log and the default redaction posture.&lt;/p&gt;

&lt;p&gt;For a single-engine Celery deployment that is already happy with Flower, the migration argument is weaker. Flower is purpose-built for the single-Celery case and remains a reasonable choice there. The clearest differentiation shows up either when a second engine has joined the stack, or when retention and audit requirements push beyond what an engine's own admin UI provides.&lt;/p&gt;

&lt;h2&gt;Trying it&lt;/h2&gt;

&lt;p&gt;The fastest read is the demo at &lt;a href="https://demo.z4j.dev" rel="noopener noreferrer"&gt;demo.z4j.dev&lt;/a&gt;. It is a static replay of a real install, seeded with jobs, schedules, alerts, and notification history. Every action button is wired to an in-memory mock, so the full flow can be clicked through without an install.&lt;/p&gt;

&lt;p&gt;For a local install with Postgres:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;git clone https://github.com/z4jdev/z4j.git&lt;/code&gt;&lt;br&gt;
&lt;code&gt;cd z4j&lt;/code&gt;&lt;br&gt;
&lt;code&gt;docker compose -f docker-compose.postgres.yml up -d&lt;/code&gt;&lt;br&gt;
&lt;code&gt;docker compose -f docker-compose.postgres.yml logs -f z4j&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The control service prints a one-time admin setup URL on first boot. Opening it sets the admin password and lands in the dashboard. The container is named &lt;code&gt;z4j&lt;/code&gt;, and the image published on Docker Hub is &lt;code&gt;z4jdev/z4j:latest&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;Feedback and issues&lt;/h2&gt;

&lt;p&gt;Bug reports, feature gaps, and integration questions go to &lt;a href="https://github.com/z4jdev/z4j/issues" rel="noopener noreferrer"&gt;github.com/z4jdev/z4j/issues&lt;/a&gt;. The response target on issues is 24 hours, and that target is currently being met.&lt;/p&gt;

</description>
      <category>z4j</category>
      <category>celery</category>
      <category>rq</category>
      <category>django</category>
    </item>
  </channel>
</rss>
