<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Eduardo Sanhueso</title>
    <description>The latest articles on Forem by Eduardo Sanhueso (@edu2105).</description>
    <link>https://forem.com/edu2105</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3882971%2F648a5694-9080-4251-b729-ae1f50159939.jpg</url>
      <title>Forem: Eduardo Sanhueso</title>
      <link>https://forem.com/edu2105</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/edu2105"/>
    <language>en</language>
    <item>
      <title>NiFi on Kubernetes without ZooKeeper: why your readiness probe needs to know about ordinals</title>
      <dc:creator>Eduardo Sanhueso</dc:creator>
      <pubDate>Fri, 17 Apr 2026 14:07:22 +0000</pubDate>
      <link>https://forem.com/edu2105/nifi-on-kubernetes-without-zookeeper-why-your-readiness-probe-needs-to-know-about-ordinals-85h</link>
      <guid>https://forem.com/edu2105/nifi-on-kubernetes-without-zookeeper-why-your-readiness-probe-needs-to-know-about-ordinals-85h</guid>
      <description>&lt;p&gt;&lt;em&gt;Running NiFi 2 on Kubernetes without ZooKeeper simplifies your infrastructure — but it shifts the responsibility for cluster stability onto a probe configuration most teams get wrong.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Why ZooKeeper-less?
&lt;/h2&gt;

&lt;p&gt;Apache NiFi 2 introduced &lt;code&gt;KubernetesLeaseLeaderElectionProvider&lt;/code&gt;, which lets NiFi use Kubernetes-native leader election instead of relying on an external ZooKeeper ensemble. Fewer moving parts, less infrastructure to manage, no separate ZooKeeper StatefulSet to maintain.&lt;/p&gt;
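&lt;p&gt;Enabling it comes down to selecting the provider in &lt;code&gt;nifi.properties&lt;/code&gt;. A minimal sketch, assuming a NiFi 2 cluster node (verify the exact property names against the admin guide for your NiFi version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Cluster node using Kubernetes Leases for leader election
nifi.cluster.is.node=true
nifi.cluster.leader.election.implementation=KubernetesLeaseLeaderElectionProvider
# No nifi.zookeeper.* properties are required in this mode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;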

&lt;p&gt;The tradeoff: without an external coordinator, the Pods themselves are responsible for forming quorum. That makes your &lt;code&gt;readinessProbe&lt;/code&gt; configuration far more critical than it would be in a traditional NiFi deployment.&lt;/p&gt;

&lt;p&gt;Get it wrong and you face an uncomfortable choice: rolling updates that cause service outages, or full restarts that compromise data consistency.&lt;/p&gt;




&lt;h2&gt;
  
  
  The dilemma: lenient or strict?
&lt;/h2&gt;

&lt;p&gt;When configuring the Kubernetes readiness probe for NiFi, you quickly run into a technical crossroads.&lt;/p&gt;

&lt;h3&gt;
  
  
  The risk of a lenient probe
&lt;/h3&gt;

&lt;p&gt;The first thing most people try is checking that the Jetty server responds — a simple HTTP request to the NiFi API. It seems reasonable, but in a distributed environment it's dangerous.&lt;/p&gt;
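&lt;p&gt;Concretely, that naive version is just an HTTP check (the path and port here are illustrative; adjust them to your HTTPS and auth setup):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# Lenient probe: passes as soon as Jetty answers,
# long before the node has joined the cluster
readinessProbe:
  httpGet:
    path: /nifi
    port: 8443
    scheme: HTTPS
  periodSeconds: 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;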

&lt;p&gt;During a rolling update, Kubernetes restarts pods sequentially. With a lenient probe, Kubernetes sees that the new pod (e.g. &lt;code&gt;nifi-2&lt;/code&gt;) has a responsive Jetty server and immediately marks it as ready, then proceeds to terminate the next active pod (&lt;code&gt;nifi-1&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;The problem: &lt;code&gt;nifi-2&lt;/code&gt; may have started its Jetty server but hasn't joined the cluster or synchronized the flow yet. You've killed an active pod before the replacement is actually functional. In seconds, you lose quorum and the service goes down.&lt;/p&gt;

&lt;h3&gt;
  
  
  The problem with a strict probe
&lt;/h3&gt;

&lt;p&gt;To avoid that, the logical fix is to be strict: "the pod is only &lt;em&gt;Ready&lt;/em&gt; if it's &lt;code&gt;CONNECTED&lt;/code&gt; to the cluster." This solves rolling updates but breaks full restarts.&lt;/p&gt;

&lt;p&gt;When starting from scratch with &lt;code&gt;podManagementPolicy: OrderedReady&lt;/code&gt; (the default), &lt;code&gt;nifi-0&lt;/code&gt; starts first. Alone, it can't connect to a cluster that doesn't exist yet.&lt;/p&gt;

&lt;p&gt;Here's what happens: &lt;code&gt;nifi-0&lt;/code&gt; sits isolated, waiting for peers that won't arrive, until its election timeout expires (&lt;code&gt;nifi.cluster.flow.election.max.wait.time&lt;/code&gt;, default 5 minutes). Only then does it declare itself sole leader.&lt;/p&gt;

&lt;p&gt;This doesn't just delay startup unnecessarily — it &lt;strong&gt;breaks consensus&lt;/strong&gt;. &lt;code&gt;nifi-0&lt;/code&gt; imposes its version of &lt;code&gt;flow.json.gz&lt;/code&gt; without comparing it with anyone, creating a real risk of data inconsistency or loss of recent changes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Using &lt;code&gt;podManagementPolicy: Parallel&lt;/code&gt; does allow all pods to start simultaneously on a fresh restart, but introduces its own dependencies and failure modes that deserve a separate discussion.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The solution: ordinal-aware probe logic
&lt;/h2&gt;

&lt;p&gt;The answer isn't to pick one approach or the other — it's to apply each one based on the pod's role in the StatefulSet.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;nifi-0&lt;/code&gt; and &lt;code&gt;nifi-1+&lt;/code&gt; have fundamentally different responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;nifi-0&lt;/strong&gt; must prioritize startup. We need Kubernetes to bring up its peers as quickly as possible so that leader election happens democratically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;nifi-1+&lt;/strong&gt; must prioritize stability. They should not receive traffic or allow the rollout to proceed until they are fully integrated into the cluster.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's the hybrid probe that implements this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;readinessProbe&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;exec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/bin/bash&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;-c&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;FQDN=$(hostname -f)&lt;/span&gt;
        &lt;span class="s"&gt;API_URL="https://${FQDN}:8443/nifi-api/controller/cluster"&lt;/span&gt;

        &lt;span class="s"&gt;# Step 1: base check — is the NiFi API reachable? (applies to all pods)&lt;/span&gt;
        &lt;span class="s"&gt;RESPONSE=$(curl -s -m 5 $CERT_ARGS $API_URL)&lt;/span&gt;
        &lt;span class="s"&gt;if [ $? -ne 0 ]; then exit 1; fi&lt;/span&gt;

        &lt;span class="s"&gt;# Step 2: hybrid logic based on StatefulSet ordinal&lt;/span&gt;
        &lt;span class="s"&gt;if [[ "$(hostname)" == *"nifi-0"* ]]; then&lt;/span&gt;
            &lt;span class="s"&gt;# nifi-0: Ready as soon as the API responds.&lt;/span&gt;
            &lt;span class="s"&gt;# This immediately unblocks the startup of nifi-1 and nifi-2,&lt;/span&gt;
            &lt;span class="s"&gt;# allowing democratic leader election instead of a solo timeout.&lt;/span&gt;
            &lt;span class="s"&gt;exit 0&lt;/span&gt;
        &lt;span class="s"&gt;else&lt;/span&gt;
            &lt;span class="s"&gt;# nifi-1+: strict validation.&lt;/span&gt;
            &lt;span class="s"&gt;# Must be CONNECTED before receiving traffic or allowing rollout to proceed.&lt;/span&gt;
            &lt;span class="s"&gt;echo "$RESPONSE" | grep -q "\"status\":\"CONNECTED\""&lt;/span&gt;
            &lt;span class="s"&gt;if [ $? -eq 0 ]; then exit 0; else exit 1; fi&lt;/span&gt;
        &lt;span class="s"&gt;fi&lt;/span&gt;
  &lt;span class="na"&gt;initialDelaySeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;90&lt;/span&gt;
  &lt;span class="na"&gt;periodSeconds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
  &lt;span class="na"&gt;failureThreshold&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;6&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note on &lt;code&gt;$CERT_ARGS&lt;/code&gt;:&lt;/strong&gt; this variable should contain your TLS certificate arguments for curl (e.g. &lt;code&gt;--cacert&lt;/code&gt;, &lt;code&gt;--cert&lt;/code&gt;, &lt;code&gt;--key&lt;/code&gt;). Define it in your pod environment or expand it inline based on your certificate setup.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Why this works better
&lt;/h2&gt;

&lt;h3&gt;
  
  
  During a rolling update
&lt;/h3&gt;

&lt;p&gt;Kubernetes rolls pods from highest to lowest ordinal. &lt;code&gt;nifi-2&lt;/code&gt; is restarted first and must reach &lt;code&gt;CONNECTED&lt;/code&gt; before Kubernetes touches &lt;code&gt;nifi-1&lt;/code&gt;, and so on. The probe on &lt;code&gt;nifi-1+&lt;/code&gt; acts as a gate — the rollout cannot advance until the current pod is genuinely integrated.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nifi-2 restarted
        |
        ↓
nifi-2 API up → probe passes base check
        |
        ↓
nifi-2 joins cluster → status: CONNECTED → probe passes strict check → Ready
        |
        ↓
Kubernetes proceeds to restart nifi-1
        |
        ↓
nifi-1 joins cluster → status: CONNECTED → probe passes strict check → Ready
        |
        ↓
Kubernetes proceeds to restart nifi-0
        |
        ↓
nifi-0 API up → probe passes base check → Ready (lenient rule)
        |
        ↓
Rolling update complete — zero downtime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The only side effect: you may not be able to make flow changes in the NiFi UI while a pod is joining the cluster — a minor and temporary constraint.&lt;/p&gt;

&lt;h3&gt;
  
  
  During a full restart
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Without hybrid probe:            With hybrid probe:

nifi-0 starts alone              nifi-0 starts → API up → Ready immediately
        |                                |
        ↓                                ↓
waits 5 min timeout              nifi-1 starts right after
        |                                |
        ↓                                ↓
imposes its flow.json.gz         both compare flow.json.gz
(no consensus)                   → democratic leader election
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;nifi-0&lt;/code&gt; tells Kubernetes a white lie — "I'm ready" — as soon as the API responds. This triggers the immediate startup of the rest of the cluster. With &lt;code&gt;nifi-0&lt;/code&gt; and &lt;code&gt;nifi-1&lt;/code&gt; coming up almost simultaneously, leader election and &lt;code&gt;flow.json.gz&lt;/code&gt; comparison happen by real consensus rather than a unilateral decision made after a timeout.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Running NiFi on Kubernetes without ZooKeeper is fully viable and operationally simpler — but it requires your &lt;code&gt;readinessProbe&lt;/code&gt; to be aware of the StatefulSet topology. Don't treat all your pods equally: give &lt;code&gt;nifi-0&lt;/code&gt; the freedom to start the party, and require the others to join it properly.&lt;/p&gt;

&lt;p&gt;Kubernetes has no visibility into NiFi's internal cluster state. Your readiness probe does.&lt;br&gt;
And that difference is what keeps the cluster stable.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have questions or a different approach to this problem? Happy to discuss in the comments.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>nifi</category>
      <category>devops</category>
      <category>eks</category>
    </item>
    <item>
      <title>imnot: a YAML-defined stateful API mock server for external system integrations</title>
      <dc:creator>Eduardo Sanhueso</dc:creator>
      <pubDate>Fri, 17 Apr 2026 01:11:32 +0000</pubDate>
      <link>https://forem.com/edu2105/imnot-a-yaml-defined-stateful-api-mock-server-for-external-system-integrations-27bi</link>
      <guid>https://forem.com/edu2105/imnot-a-yaml-defined-stateful-api-mock-server-for-external-system-integrations-27bi</guid>
      <description>&lt;p&gt;&lt;em&gt;imnot is an open source stateful API mock server. This is the story of why I built it.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The ticket that changes your afternoon
&lt;/h2&gt;

&lt;p&gt;A support ticket arrives: &lt;em&gt;"For this specific transaction, the integration fails with a null pointer exception."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The data that triggered the bug is in production. The exact combination of field values exists only in that one real record.&lt;/p&gt;

&lt;p&gt;The right move is to reproduce it in your staging environment. But rebuilding that exact record manually in the external system's demo UI — matching every field value — can take hours. Sometimes it's practically impossible because the external system's demo environment doesn't support all the same configurations as production.&lt;/p&gt;

&lt;p&gt;What you actually want: take the exact production payload from the support ticket, upload it to a mock that returns it verbatim, point your staging system there, and reproduce the failure in minutes. No manual reconstruction. No touching production.&lt;/p&gt;

&lt;p&gt;That's one of the core use cases that motivated &lt;code&gt;imnot&lt;/code&gt;. I work as a Lead Integration Solutions Engineer at a revenue management system (RMS) provider in the hospitality industry, where integrating with external partners — property management systems, booking platforms, channel managers — is daily work.&lt;/p&gt;




&lt;h2&gt;
  
  
  The NiFi workaround and its ceiling
&lt;/h2&gt;

&lt;p&gt;I've been using Apache NiFi for integration workflows for about six years. NiFi is a data flow orchestration tool — not designed for mocking, but flexible enough that you can build almost anything with it.&lt;/p&gt;

&lt;p&gt;Over time I built a collection of NiFi flows that simulated external system behavior. The pattern: upload a payload via HTTP, configure your application to point at the NiFi URL, and the flow responds exactly like the real external system would — including the full async sequences that some systems require. We used it to test integration changes without needing a live external environment, and to reproduce production bugs from support tickets without touching real data.&lt;/p&gt;

&lt;p&gt;The reason we used NiFi for this — rather than Postman mock servers or Mockoon — wasn't because NiFi is better at mocking. It was simply already there. We were using it for integration workflows, so when the need for mock endpoints arose, it was the natural tool to reach for.&lt;/p&gt;

&lt;p&gt;But it had a hard ceiling.&lt;/p&gt;

&lt;p&gt;Every new mock required specialist knowledge of NiFi. Building one took meaningful time, and when speed was the priority, quality suffered. The team has grown and we now have more people working with NiFi, but the underlying problem remains: the mock configuration lives inside NiFi flows, which means it isn't version-controlled alongside the integration code it tests, and it isn't accessible to anyone outside that specialist circle.&lt;/p&gt;

&lt;p&gt;When AI coding tools became widely available across our team, something clicked. People who weren't developers were suddenly building things — generating configs, automating tasks that previously required specialist knowledge. I thought: what if anyone could describe an external API and have a working mock in minutes, without knowing NiFi, without depending on a specialist?&lt;/p&gt;

&lt;p&gt;That was the seed of &lt;code&gt;imnot&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What makes it different: stateful flows in YAML
&lt;/h2&gt;

&lt;p&gt;Most mock servers handle the stateless case well — define a response for a given endpoint, return it every time. That covers a lot of ground, but it doesn't cover the patterns that appear constantly in B2B integrations.&lt;/p&gt;

&lt;p&gt;Consider a common async flow: your system POSTs a request to an external API, receives a &lt;code&gt;202 Accepted&lt;/code&gt; with a location reference, polls that location until the external system reports completion, then fetches the result. Three steps, each dependent on the previous one. The identifier generated in step one appears in the path of steps two and three. Call them out of order, and the real API rejects you.&lt;/p&gt;
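&lt;p&gt;Stripped of HTTP details, that sequence is a small state machine keyed by the generated ID. Here is a minimal Python sketch of the behavior being modeled (an illustration only, not &lt;code&gt;imnot&lt;/code&gt;'s implementation):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import uuid

class AsyncJobMock:
    """Toy model of the 202 / poll / fetch pattern described above."""

    def __init__(self):
        self.jobs = {}

    def post_job(self, payload):
        # Step 1: accept the job and mint the ID that gates steps 2 and 3
        job_id = str(uuid.uuid4())
        self.jobs[job_id] = payload
        return 202, {"Location": f"/external/jobs/{job_id}"}

    def head_job(self, job_id):
        # Step 2: polling an ID that step 1 never issued is rejected
        if job_id not in self.jobs:
            return 404, {}
        return 201, {"Status": "COMPLETED"}

    def get_job(self, job_id):
        # Step 3: return the stored payload verbatim
        if job_id not in self.jobs:
            return 404, None
        return 200, self.jobs[job_id]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Calling the poll or fetch step before the initial POST fails, which is exactly the ordering constraint the real API enforces.&lt;/p&gt;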

&lt;p&gt;WireMock and Mockoon are excellent tools, but modeling this sequence declaratively — without writing code — isn't what they're built for. &lt;code&gt;imnot&lt;/code&gt; is built specifically for this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;data-sync&lt;/span&gt;
  &lt;span class="na"&gt;pattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;async&lt;/span&gt;
  &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;step&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POST&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/external/jobs&lt;/span&gt;
      &lt;span class="na"&gt;response&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;202&lt;/span&gt;
        &lt;span class="na"&gt;generates_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="na"&gt;id_header&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Location&lt;/span&gt;
        &lt;span class="na"&gt;id_header_value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/external/jobs/{id}&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;step&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/external/jobs/{id}&lt;/span&gt;
      &lt;span class="na"&gt;response&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;201&lt;/span&gt;
        &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;Status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;COMPLETED&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;step&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
      &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;GET&lt;/span&gt;
      &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/external/jobs/{id}&lt;/span&gt;
      &lt;span class="na"&gt;response&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
        &lt;span class="na"&gt;returns_payload&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;imnot start&lt;/code&gt; reads that YAML and registers the endpoints dynamically. Point your staging system there. The mock handles the sequence, the state, and the ID propagation automatically.&lt;/p&gt;

&lt;p&gt;For the support ticket scenario: upload the exact production payload via a single API call, point your staging system at &lt;code&gt;imnot&lt;/code&gt;, and the integration processes it exactly as it would in production — in a safe, controlled environment.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI-ready by design
&lt;/h2&gt;

&lt;p&gt;The YAML schema is intentionally simple enough that Claude, ChatGPT, or Copilot can generate a valid partner definition from a plain description or an OpenAPI spec. The README ships with ready-to-use prompts for both cases.&lt;/p&gt;

&lt;p&gt;On my team, people who've never written YAML are already using &lt;code&gt;imnot&lt;/code&gt;: describe what the external API does to an AI assistant, paste its output into &lt;code&gt;imnot generate&lt;/code&gt;, and have a working mock running. No NiFi knowledge required.&lt;/p&gt;

&lt;p&gt;This felt like the right design decision — and it also felt honest, because &lt;code&gt;imnot&lt;/code&gt; itself was built with Claude Code as the primary coding tool. Using AI to build a tool designed to work well with AI seemed appropriately coherent.&lt;/p&gt;




&lt;h2&gt;
  
  
  Running in production — local and cloud
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;imnot&lt;/code&gt; runs anywhere Docker runs. For local development, three commands are all you need:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pipx &lt;span class="nb"&gt;install &lt;/span&gt;imnot
imnot init        &lt;span class="c"&gt;# scaffolds partners/ with working examples&lt;/span&gt;
imnot start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the server is running, &lt;code&gt;imnot routes&lt;/code&gt; lists all registered endpoints without restarting.&lt;/p&gt;

&lt;p&gt;For teams who want a shared instance, it deploys as a container on any cloud platform. In our case it runs in the same EKS cluster as our NiFi deployment, with its own Helm chart. Every member of the integrations team can upload payloads, reproduce bugs, and run tests against it — no local setup required, no NiFi knowledge needed.&lt;/p&gt;

&lt;p&gt;The only infrastructure requirements: a persistent volume at &lt;code&gt;/app/data&lt;/code&gt; for the SQLite session store, an &lt;code&gt;IMNOT_ADMIN_KEY&lt;/code&gt; environment variable to protect the admin endpoints, and &lt;code&gt;--host 0.0.0.0&lt;/code&gt; so the container port is reachable from outside.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
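&lt;p&gt;A minimal &lt;code&gt;docker-compose.yml&lt;/code&gt; covering those three requirements could look like this (the image name and port are assumptions; substitute your own build and values):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;services:
  imnot:
    image: imnot:latest                  # assumption: your published or local image
    command: ["imnot", "start", "--host", "0.0.0.0"]
    environment:
      IMNOT_ADMIN_KEY: change-me         # protects the admin endpoints
    ports:
      - "8000:8000"                      # assumption: adjust to the port imnot serves on
    volumes:
      - imnot-data:/app/data             # persists the SQLite session store
volumes:
  imnot-data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;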






&lt;h2&gt;
  
  
  Built with
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FastAPI&lt;/strong&gt; — HTTP server and dynamic route registration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SQLite&lt;/strong&gt; — session and payload persistence, zero infrastructure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PyYAML&lt;/strong&gt; — partner definition parsing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Click&lt;/strong&gt; — CLI (&lt;code&gt;imnot init&lt;/code&gt;, &lt;code&gt;imnot start&lt;/code&gt;, &lt;code&gt;imnot routes&lt;/code&gt;, &lt;code&gt;imnot generate&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uvicorn&lt;/strong&gt; — ASGI server&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pipx &lt;span class="nb"&gt;install &lt;/span&gt;imnot
imnot init
imnot start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The repo includes two example partner definitions — StayLink and BookingCo — demonstrating the main patterns. &lt;code&gt;partners/README.md&lt;/code&gt; has the full YAML schema reference.&lt;/p&gt;

&lt;p&gt;Once your partners are defined, &lt;code&gt;imnot export postman&lt;/code&gt; generates a Postman collection v2.1 covering all consumer and admin endpoints — useful for manual testing and sharing with QA without having to document endpoints by hand.&lt;/p&gt;

&lt;p&gt;If you work on integrations and recognize any of this — the missing staging environments, the production payload debugging, the specialist everyone depends on to build the mocks — &lt;code&gt;imnot&lt;/code&gt; was built for that situation.&lt;/p&gt;

&lt;p&gt;→ &lt;a href="https://github.com/edu2105/imnot" rel="noopener noreferrer"&gt;github.com/edu2105/imnot&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>testing</category>
      <category>devops</category>
      <category>api</category>
    </item>
  </channel>
</rss>
