<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: codelluis</title>
    <description>The latest articles on Forem by codelluis (@codelluis).</description>
    <link>https://forem.com/codelluis</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3887405%2F41485020-bc4a-4b8c-835a-232e5ff013b6.jpeg</url>
      <title>Forem: codelluis</title>
      <link>https://forem.com/codelluis</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/codelluis"/>
    <language>en</language>
    <item>
      <title>Distribute your Python app without rewriting it</title>
      <dc:creator>codelluis</dc:creator>
      <pubDate>Mon, 27 Apr 2026 15:00:00 +0000</pubDate>
      <link>https://forem.com/codelluis/distribute-your-python-app-without-rewriting-it-3e6i</link>
      <guid>https://forem.com/codelluis/distribute-your-python-app-without-rewriting-it-3e6i</guid>
      <description>&lt;p&gt;You have a Python function that processes one item. You call it in a loop over a list. The list grows. The loop slows down. The work is real — an LLM API call, an embedding, a scrape, a database query, a model inference — the kind of thing that does not get faster with prettier code.&lt;/p&gt;

&lt;p&gt;Distribution is the answer, but distribution usually means rewriting every call site to handle queues, futures, and result objects. So the loop stays slow and a progress bar gets added instead.&lt;/p&gt;

&lt;p&gt;This post is about removing the migration cost. &lt;strong&gt;One decorator. One environment variable. Five reports go from 2.51 seconds to 0.54 seconds. Zero call sites change.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The whole demo is in the &lt;a href="https://github.com/pynenc/samples/tree/main/direct_task_demo" rel="noopener noreferrer"&gt;direct_task_demo&lt;/a&gt; sample of the &lt;a href="https://github.com/pynenc/samples" rel="noopener noreferrer"&gt;pynenc samples&lt;/a&gt; repository. The example happens to generate sales reports because it needs a concrete I/O-bound function with a list-shaped input — but the pattern is the same for batch LLM calls, embedding generation, RAG indexing, web scraping, ETL enrichment, or any workload of the form "slow function, list of items, want it parallel".&lt;/p&gt;

&lt;h2&gt;
  
  
  The original code
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;tasks_original.py&lt;/code&gt; is plain Python. No decorators, no imports from any framework, no infrastructure assumptions. It does what the existing codebase already does:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# tasks_original.py
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;hashlib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;md5&lt;/span&gt;

&lt;span class="n"&gt;PERIODS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Q1-2025&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Q2-2025&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Q3-2025&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Q4-2025&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Q1-2026&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_build_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# simulates DB queries + aggregation
&lt;/span&gt;    &lt;span class="n"&gt;seed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;int&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;md5&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;()).&lt;/span&gt;&lt;span class="nf"&gt;hexdigest&lt;/span&gt;&lt;span class="p"&gt;()[:&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;revenue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;50_000&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;seed&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;950_000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;orders&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;seed&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;9_900&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;period&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;revenue&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;revenue&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;orders&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;avg_order_value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;revenue&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;orders&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;_build_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_reports&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;periods&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;_build_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;periods&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Running it produces five reports in 2.51 seconds. That is the baseline.&lt;/p&gt;
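
&lt;p&gt;For reference, the 2.51-second figure is just the plain loop under a timer. A minimal driver along these lines reproduces it (the repository's actual script may differ slightly):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Baseline timing sketch: imports the unmodified module and times the plain loop.
import time

from tasks_original import PERIODS, generate_reports

start = time.perf_counter()
reports = generate_reports(PERIODS)
elapsed = time.perf_counter() - start
print(f"{len(reports)} reports in {elapsed:.2f}s")  # ~2.5s: five sequential 0.5s builds
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;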

&lt;h2&gt;
  
  
  The migration
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;tasks.py&lt;/code&gt; is the same file with three additions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gi"&gt;+ from pynenc import Pynenc
+ app = Pynenc()
&lt;/span&gt;&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="gi"&gt;+ @app.direct_task
&lt;/span&gt;  def generate_report(period: str) -&amp;gt; dict:
      return _build_report(period)
&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="gi"&gt;+ @app.direct_task(parallel_func=_per_period, aggregate_func=_flatten)
&lt;/span&gt;  def generate_reports(periods: list[str]) -&amp;gt; list[dict]:
      return [_build_report(p) for p in periods]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Function bodies, signatures, and return types are identical. The two helpers &lt;code&gt;_per_period&lt;/code&gt; and &lt;code&gt;_flatten&lt;/code&gt; are added to support the parallel decorator — they read the caller's actual arguments rather than synthesizing anything out of thin air:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_per_period&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;tuple&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]]]:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[([&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;],)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;periods&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;_flatten&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;report&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chunks&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;report&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;_per_period&lt;/code&gt; reads the &lt;code&gt;periods&lt;/code&gt; argument the caller passed and yields one argument tuple per period, each of which becomes a sub-task on a worker. &lt;code&gt;_flatten&lt;/code&gt; collects the per-worker results back into a single list. The decorator does the routing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sync mode: the decorators are inert
&lt;/h2&gt;

&lt;p&gt;Setting &lt;code&gt;PYNENC__DEV_MODE_FORCE_SYNC_TASKS=True&lt;/code&gt; runs every decorated call inline in the caller's thread — no runner, no broker, no database writes. Behaviour is identical to &lt;code&gt;tasks_original.py&lt;/code&gt;: 5 reports in 2.52s, same values, same order. This is the strangler-fig migration pattern: decorate one function at a time, keep the env var on so existing tests stay green, then remove it in production. No call site needs to change.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ PYNENC__DEV_MODE_FORCE_SYNC_TASKS=True python sample_sync.py

Sync mode: 5 reports in 2.52s (expected ~2.5s — sequential, like the original)
  Q1-2025     revenue=$  477,381  orders=  381  AOV=$1252.97
  Q2-2025     revenue=$  798,638  orders= 7838  AOV=$101.89
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
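
&lt;p&gt;The same switch keeps an existing test suite green during the migration. One way to wire it, assuming the variable is read when the &lt;code&gt;Pynenc&lt;/code&gt; app is created (file and test names here are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# test_reports.py (sketch): force decorated tasks to run inline for this module.
# The variable must be set before the module that creates the Pynenc app is imported.
import os

os.environ["PYNENC__DEV_MODE_FORCE_SYNC_TASKS"] = "True"

from tasks import generate_report  # imported after the env var is set


def test_generate_report_runs_inline():
    report = generate_report("Q1-2025")
    assert report["period"] == "Q1-2025"
    assert report["orders"] &gt;= 100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;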



&lt;h2&gt;
  
  
  Distributed mode: the same calls, with workers
&lt;/h2&gt;

&lt;p&gt;Removing the env var and starting a &lt;code&gt;ThreadRunner&lt;/code&gt; makes the decorators distribute work over a SQLite-backed broker. The call sites do not change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python sample_distributed.py

Sequential calls on runner: 5 reports in 3.18s (each call blocks before the next starts)

Concurrent caller threads: 5 reports in 0.54s (N caller threads -&amp;gt; N workers running in parallel)
  Q1-2025     revenue=$  477,381  ...
  ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Two patterns appear here. The sequential loop is the original code, unchanged — each &lt;code&gt;generate_report(p)&lt;/code&gt; blocks before the next call starts. That is by design: &lt;code&gt;@app.direct_task&lt;/code&gt; preserves the calling contract of a regular Python function. The caller waits, gets the value back, and exception handling works as it always did. That guarantee is what makes the migration zero-cost.&lt;/p&gt;

&lt;p&gt;For caller-side concurrency, &lt;code&gt;ThreadPoolExecutor&lt;/code&gt; is the standard Python pattern, and it composes naturally:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;concurrent.futures&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ThreadPoolExecutor&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nc"&gt;ThreadPoolExecutor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_workers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PERIODS&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;reports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;generate_report&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;PERIODS&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each thread blocks on its own call; the runner processes them in parallel. Five reports in 0.54 seconds — five times faster on the same machine, with no broker change.&lt;/p&gt;

&lt;h2&gt;
  
  
  Single-call fan-out
&lt;/h2&gt;

&lt;p&gt;Sometimes the parallelism belongs inside the function rather than at the call site. The caller passes a list, expects a list back, and does not need to change a single line of code. That is what &lt;code&gt;parallel_func&lt;/code&gt; is for: a small helper that describes how to split the arguments into individual work items. Pynenc dispatches one task per item — across whatever workers are running — then reassembles the results via &lt;code&gt;aggregate_func&lt;/code&gt; before returning to the caller:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# tasks.py
&lt;/span&gt;&lt;span class="nd"&gt;@app.direct_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parallel_func&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;_per_period&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;aggregate_func&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;_flatten&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_reports&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;periods&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;_build_report&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;periods&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The caller calls it exactly as in &lt;code&gt;tasks_original.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;reports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generate_reports&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;periods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PERIODS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Behind the decorator, &lt;code&gt;_per_period&lt;/code&gt; reads &lt;code&gt;args["periods"]&lt;/code&gt; and yields one argument tuple per period. Pynenc triggers one task per tuple and routes each to an available worker. &lt;code&gt;_flatten&lt;/code&gt; collects the per-worker results back into a single list. The caller receives the same shape it always did:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ python sample_parallel.py

Parallel fan-out: 5 reports in 0.65s (one call, 5 workers running in parallel)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function signature is honest. Nothing is "ignored". The argument the caller passes is the argument &lt;code&gt;parallel_func&lt;/code&gt; reads.&lt;/p&gt;
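
&lt;p&gt;To make the routing concrete, here is what the two helpers defined in &lt;code&gt;tasks.py&lt;/code&gt; produce for a two-period input; the values follow directly from their definitions above:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Traced by hand for two periods (report dicts trimmed to the period key for brevity).
args = {"periods": ["Q1-2025", "Q2-2025"]}

_per_period(args)
# -&gt; [(["Q1-2025"],), (["Q2-2025"],)]
#    one argument tuple per period, so one sub-task per period

_flatten([[{"period": "Q1-2025"}], [{"period": "Q2-2025"}]])
# -&gt; [{"period": "Q1-2025"}, {"period": "Q2-2025"}]
#    per-worker result lists merged back into one flat list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;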

&lt;p&gt;For higher throughput, pynenc's native parallel API goes further: instead of aggregating before returning, the function exposes a result group that the caller can iterate as results arrive. Each item is available as soon as the worker that produced it finishes — no waiting for the slowest one. The &lt;code&gt;parallel_func&lt;/code&gt; pattern shown here is the zero-migration-cost option: same signature, same return type, same call site, parallelism handled entirely by the decorator.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not just use &lt;code&gt;asyncio&lt;/code&gt; / &lt;code&gt;multiprocessing&lt;/code&gt; / Celery?
&lt;/h2&gt;

&lt;p&gt;These are the obvious alternatives, and each one solves a different slice of the problem.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;asyncio.gather&lt;/code&gt;&lt;/strong&gt; parallelises async I/O on a single event loop. It works only if the function is already &lt;code&gt;async&lt;/code&gt;, only on one machine, and only for I/O-bound work. Synchronous functions need to be rewritten.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;multiprocessing.Pool.map&lt;/code&gt;&lt;/strong&gt; parallelises across CPU cores on a single host. It cannot scale beyond one machine, struggles with large arguments (everything is pickled and copied), and the call site changes from &lt;code&gt;f(x)&lt;/code&gt; to &lt;code&gt;pool.map(f, xs)&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;concurrent.futures.ThreadPoolExecutor&lt;/code&gt;&lt;/strong&gt; is a clean primitive but stops at the process boundary. With &lt;code&gt;@app.direct_task&lt;/code&gt; it composes — use it on the caller side and pynenc handles the worker side, optionally on different machines.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Celery / RQ / Dramatiq&lt;/strong&gt; scale across machines but break the calling contract: &lt;code&gt;f(x)&lt;/code&gt; becomes &lt;code&gt;f.delay(x).get()&lt;/code&gt; or similar. Every call site has to change. There is no in-process sync mode for unit tests — you run a worker or you mock.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;@app.direct_task&lt;/code&gt; is the option that gives you all three properties at once: distributed across machines, the call site does not change, and a single environment variable runs everything inline for tests and local development.&lt;/p&gt;
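
&lt;p&gt;The call-site difference is easiest to see side by side. The first two blocks show the usual shape of the alternatives (sketched, not complete programs); the last line is the code this post started with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Celery-style queue: every call site changes.
#   reports = [generate_report.delay(p).get() for p in PERIODS]

# multiprocessing: the loop becomes a pool call, single machine only.
#   with multiprocessing.Pool() as pool:
#       reports = pool.map(generate_report, PERIODS)

# @app.direct_task: the call site is the original one.
reports = [generate_report(p) for p in PERIODS]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;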

&lt;h2&gt;
  
  
  When &lt;code&gt;direct_task&lt;/code&gt; is the right tool
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;@app.direct_task&lt;/code&gt; always blocks the caller. That is the point: it preserves the calling contract that the original code already relied on. Migration is a copy-the-decorator operation, not a rewrite.&lt;/p&gt;

&lt;p&gt;For fire-and-forget semantics — enqueue work and continue without blocking — &lt;code&gt;@app.task&lt;/code&gt; is the right decorator. It returns an &lt;code&gt;Invocation&lt;/code&gt; and exposes &lt;code&gt;.result&lt;/code&gt; for explicit waiting. The two decorators are complementary; the right choice is whichever one preserves the call pattern the codebase already has.&lt;/p&gt;
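
&lt;p&gt;As a rough sketch of that side (the task name is made up; the point is the shape of the call):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Fire-and-forget with @app.task: the call returns an Invocation immediately.
@app.task
def generate_report_async(period: str) -&gt; dict:
    return _build_report(period)


invocation = generate_report_async("Q1-2025")  # enqueued; the caller keeps going
# ... other work ...
report = invocation.result  # explicit wait, only when the value is actually needed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;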

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# uv: https://docs.astral.sh/uv/getting-started/installation/&lt;/span&gt;
git clone https://github.com/pynenc/samples.git
&lt;span class="nb"&gt;cd &lt;/span&gt;samples/direct_task_demo
uv &lt;span class="nb"&gt;sync

&lt;/span&gt;uv run python tasks_original.py                                       &lt;span class="c"&gt;# baseline&lt;/span&gt;
&lt;span class="nv"&gt;PYNENC__DEV_MODE_FORCE_SYNC_TASKS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;True uv run python sample_sync.py   &lt;span class="c"&gt;# decorators inert&lt;/span&gt;
uv run python sample_distributed.py                                   &lt;span class="c"&gt;# workers, two patterns&lt;/span&gt;
uv run python sample_parallel.py                                      &lt;span class="c"&gt;# single-call fan-out&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Further reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/pynenc/pynenc" rel="noopener noreferrer"&gt;pynenc&lt;/a&gt; — the framework&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.pynenc.org/usage_guide/use_case_008_direct_task.html" rel="noopener noreferrer"&gt;direct_task usage guide&lt;/a&gt; — full documentation&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/pynenc/samples" rel="noopener noreferrer"&gt;pynenc samples&lt;/a&gt; — runnable demos&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/pynenc/pynenc/discussions" rel="noopener noreferrer"&gt;GitHub Discussions&lt;/a&gt; — open questions, feedback&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>python</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>webdev</category>
    </item>
    <item>
      <title>I Killed a Python Worker Mid-Task. Here's What Should Have Happened.</title>
      <dc:creator>codelluis</dc:creator>
      <pubDate>Sun, 19 Apr 2026 13:59:56 +0000</pubDate>
      <link>https://forem.com/codelluis/i-killed-a-python-worker-mid-task-heres-what-should-have-happened-1kpl</link>
      <guid>https://forem.com/codelluis/i-killed-a-python-worker-mid-task-heres-what-should-have-happened-1kpl</guid>
      <description>&lt;p&gt;I ran &lt;code&gt;kill -9&lt;/code&gt; on a worker that was processing three tasks. They vanished. No error. No retry. I checked the queue: empty. I checked the results: nothing. The work was just gone.&lt;/p&gt;

&lt;p&gt;This is not a bug. This is the default behavior of many Python task frameworks. A worker dies mid-execution, and whatever it was doing disappears.&lt;/p&gt;

&lt;p&gt;So I built a framework where the system heals itself. Here is what that looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem nobody talks about
&lt;/h2&gt;

&lt;p&gt;Here is what usually happens when a worker crashes in the middle of a task:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A task starts running on Worker-1.&lt;/li&gt;
&lt;li&gt;Worker-1 gets OOM-killed (or crashes, or the host dies).&lt;/li&gt;
&lt;li&gt;The task message was already acknowledged and removed from the queue.&lt;/li&gt;
&lt;li&gt;The task is gone: no record, no detection, no recovery.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Typical workarounds teams build by hand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Late acknowledgement, which reduces task loss but increases duplicate execution risk.&lt;/li&gt;
&lt;li&gt;External monitoring, which detects failures but still requires manual re-queueing.&lt;/li&gt;
&lt;li&gt;Strict idempotency layers everywhere, which are useful but still need a recovery trigger.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not complete solutions. They are patches around a missing core capability.&lt;/p&gt;

&lt;h2&gt;
  
  
  So I killed a worker. Here is what happened
&lt;/h2&gt;

&lt;p&gt;I ran the same crash scenario with &lt;a href="https://github.com/pynenc/pynenc" rel="noopener noreferrer"&gt;pynenc&lt;/a&gt;: three tasks running, then &lt;code&gt;SIGKILL&lt;/code&gt;, then a second worker.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;STEP 1: Starting Worker-1...
  Worker-1 started (PID 12345)

STEP 2: Submitting 3 long-running tasks...
  -&amp;gt; Submitted slow_task(0)
  -&amp;gt; Submitted slow_task(1)
  -&amp;gt; Submitted slow_task(2)

  Waiting for Worker-1 to pick up and start running tasks...

STEP 3: Simulating a worker crash!
  X Killing Worker-1 (PID 12345) with SIGKILL...
  X Worker-1 terminated (exit code -9)

  The in-progress task is now orphaned — no worker owns it.

STEP 4: Starting Worker-2 (the recovery worker)...
  Worker-2 started (PID 12346)

STEP 5: Waiting for recovery and task completion...
  OK slow_task completed: task_0_completed
  OK slow_task completed: task_1_completed
  OK slow_task completed: task_2_completed

  ALL 3 TASKS COMPLETED SUCCESSFULLY
  Tasks from the crashed worker were recovered automatically!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Worker-1 died mid-execution. Worker-2 detected the stale heartbeat, recovered orphaned tasks, and finished all three with zero manual intervention.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring view
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxur6pdslguc8lhkmwqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxur6pdslguc8lhkmwqm.png" alt="Pynmon monitoring view during recovery demo" width="800" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Click to open the image at full size.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the same monitoring view used during the run. From here you can inspect the timeline across runners, open the detail view for each invocation, and follow the logs around state changes to understand what happened step by step.&lt;/p&gt;

&lt;h2&gt;
  
  
  How recovery works
&lt;/h2&gt;

&lt;p&gt;Every runner sends periodic heartbeats. As long as heartbeats arrive, the runner is healthy.&lt;/p&gt;

&lt;p&gt;When heartbeats stop:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The recovery service marks the runner as stale.&lt;/li&gt;
&lt;li&gt;Orphaned running invocations are claimed safely.&lt;/li&gt;
&lt;li&gt;Tasks are re-routed to the broker.&lt;/li&gt;
&lt;li&gt;Healthy runners pick them up.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is built in. No external watcher process required.&lt;/p&gt;

&lt;p&gt;Recovery re-executes the full task, so designing tasks to be idempotent remains a best practice.&lt;/p&gt;
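
&lt;p&gt;What "idempotent" means in practice here: a second execution after recovery must leave the system in the same end state as the first. A sketch (the lookup and side-effect helpers are illustrative, not part of the demo):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Idempotent task sketch: safe to re-run if a first attempt died mid-way.
@app.task
def process_order(order_id: str) -&gt; str:
    if already_processed(order_id):   # illustrative lookup, e.g. a unique key in your DB
        return f"order_{order_id}_already_done"
    result = do_the_work(order_id)    # illustrative side effect
    mark_processed(order_id)          # record completion together with the work where possible
    return result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;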

&lt;h2&gt;
  
  
  The code
&lt;/h2&gt;

&lt;p&gt;The task:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# tasks.py (simplified — full version in the repo)
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pynenc&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Pynenc&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Pynenc&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nd"&gt;@app.task&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;slow_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task_num&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;slow_task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[slow_task(&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task_num&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;)] Starting — will run for 8 seconds&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;second&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;slow_task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;info&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[slow_task(&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task_num&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;)] progress &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;second&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;task_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;task_num&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;_completed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The demo configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="c"&gt;# pyproject.toml (key settings — full config in the repo)&lt;/span&gt;
&lt;span class="nn"&gt;[tool.pynenc]&lt;/span&gt;
&lt;span class="py"&gt;app_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"recovery_demo"&lt;/span&gt;
&lt;span class="py"&gt;orchestrator_cls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"SQLiteOrchestrator"&lt;/span&gt;
&lt;span class="py"&gt;broker_cls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"SQLiteBroker"&lt;/span&gt;
&lt;span class="py"&gt;state_backend_cls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"SQLiteStateBackend"&lt;/span&gt;
&lt;span class="py"&gt;runner_cls&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"ThreadRunner"&lt;/span&gt;

&lt;span class="c"&gt;# Fast recovery timeouts for demo purposes.&lt;/span&gt;
&lt;span class="c"&gt;# Production systems use much higher values (defaults: 10 min heartbeat, 15 min recovery cron).&lt;/span&gt;
&lt;span class="py"&gt;runner_considered_dead_after_minutes&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;          &lt;span class="c"&gt;# 6 seconds — heartbeat expiry&lt;/span&gt;
&lt;span class="py"&gt;recover_running_invocations_cron&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"* * * * *"&lt;/span&gt;      &lt;span class="c"&gt;# every minute (fastest cron resolution)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The full demo is in the public &lt;a href="https://github.com/pynenc/samples/tree/main/recovery_demo" rel="noopener noreferrer"&gt;recovery_demo&lt;/a&gt; folder of the samples repository.&lt;/p&gt;

&lt;p&gt;The entrypoint script is &lt;a href="https://github.com/pynenc/samples/blob/main/recovery_demo/sample.py" rel="noopener noreferrer"&gt;recovery_demo/sample.py&lt;/a&gt;.&lt;/p&gt;
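
&lt;p&gt;Compressed to its skeleton, the driver follows the same steps as the log above. This sketch assumes a separate worker script that starts a runner; the real entrypoint in the repo handles the details it glosses over:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;# Crash-scenario skeleton; "worker.py" stands in for whatever script starts a runner.
import os
import signal
import subprocess
import sys
import time

from tasks import slow_task

worker_1 = subprocess.Popen([sys.executable, "worker.py"])   # STEP 1: start Worker-1
invocations = [slow_task(i) for i in range(3)]                # STEP 2: submit 3 tasks
time.sleep(3)                                                 # give Worker-1 time to pick them up
os.kill(worker_1.pid, signal.SIGKILL)                         # STEP 3: simulate the crash
worker_2 = subprocess.Popen([sys.executable, "worker.py"])    # STEP 4: recovery worker
for invocation in invocations:                                # STEP 5: block until each
    print(invocation.result)                                  #         recovered task completes
worker_2.terminate()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;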

&lt;h2&gt;
  
  
  Try it yourself
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Requires uv — install: https://docs.astral.sh/uv/getting-started/installation/&lt;/span&gt;
git clone https://github.com/pynenc/samples.git
&lt;span class="nb"&gt;cd &lt;/span&gt;samples/recovery_demo
uv &lt;span class="nb"&gt;sync
&lt;/span&gt;uv run python sample.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No Docker. No Redis. No external services. One demo.&lt;/p&gt;

&lt;h2&gt;
  
  
  What teams usually build by hand
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;The problem&lt;/th&gt;
&lt;th&gt;Typical approach&lt;/th&gt;
&lt;th&gt;What pynenc does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Worker dies mid-task&lt;/td&gt;
&lt;td&gt;Lost task or duplicate retries&lt;/td&gt;
&lt;td&gt;Automatic recovery via heartbeat detection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Detecting dead workers&lt;/td&gt;
&lt;td&gt;External monitoring stack&lt;/td&gt;
&lt;td&gt;Built-in runner heartbeat checks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Re-queuing orphaned tasks&lt;/td&gt;
&lt;td&gt;Manual scripts and intervention&lt;/td&gt;
&lt;td&gt;Automatic re-routing to broker&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Recovery in clusters&lt;/td&gt;
&lt;td&gt;Custom distributed locking&lt;/td&gt;
&lt;td&gt;Atomic global recovery service&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Understanding incidents&lt;/td&gt;
&lt;td&gt;Log spelunking&lt;/td&gt;
&lt;td&gt;Invocation state history and timeline views&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  What is next
&lt;/h2&gt;

&lt;p&gt;Pynenc is open source and actively maintained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/pynenc/pynenc" rel="noopener noreferrer"&gt;pynenc&lt;/a&gt; - core framework&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/pynenc/samples" rel="noopener noreferrer"&gt;samples&lt;/a&gt; - runnable demos&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.pynenc.org" rel="noopener noreferrer"&gt;docs&lt;/a&gt; - full documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How does your team handle crashed workers today? Join the conversation in &lt;a href="https://github.com/pynenc/pynenc/discussions" rel="noopener noreferrer"&gt;GitHub Discussions&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>python</category>
      <category>backend</category>
      <category>distributedsystems</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
