<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Marijan Smetko</title>
    <description>The latest articles on Forem by Marijan Smetko (@msmetko).</description>
    <link>https://forem.com/msmetko</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F787452%2Ff7bce014-8696-4197-9a0e-037d124bbd81.jpg</url>
      <title>Forem: Marijan Smetko</title>
      <link>https://forem.com/msmetko</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/msmetko"/>
    <language>en</language>
    <item>
      <title>Improving my blog by 100x with Rust and AI</title>
      <dc:creator>Marijan Smetko</dc:creator>
      <pubDate>Sun, 01 Mar 2026 00:00:00 +0000</pubDate>
      <link>https://forem.com/msmetko/improving-my-blog-by-100x-with-rust-and-ai-86p</link>
      <guid>https://forem.com/msmetko/improving-my-blog-by-100x-with-rust-and-ai-86p</guid>
      <description>&lt;p&gt;I don't really like to wait more than absolutely necessary. Some people are OK with extra latency here and there; I try to minimize it where I can, or even where I can't. It's unclear yet whether this is because I'm a software engineer, or I'm a software engineer because of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Even once is too slow
&lt;/h2&gt;

&lt;p&gt;I had a great opportunity to optimize the latency of my blog the other day. Or, more precisely, the &lt;em&gt;prerender&lt;/em&gt; of my blog.&lt;/p&gt;

&lt;p&gt;You see, like plenty of folks out there, I write my blogposts in Markdown. However, most browsers currently cannot render pure Markdown; they need HTML. And while converting my posts written in &lt;code&gt;.md&lt;/code&gt; to HTML on the fly is doable, it's not very friendly to the poor server (if I render it on the server) or to the reader (if I interpret it on the client). To make things worse, some of my blogposts have plenty of math: my most math-heavy post talks about fitting a curve to precision/recall graphs for optimal threshold tuning (&lt;a href="https://terra-incognita.blog/posts/the-c-method" rel="noopener noreferrer"&gt;you should check it out&lt;/a&gt;, I'm kinda proud of it). For rendering said math I use KaTeX, because it's pretty and resource-friendly... but, again, not "render math on every request" friendly.&lt;/p&gt;

&lt;p&gt;That is all to say, when building the Docker images that serve my content, I do a sort-of cache step where I compile all my blogposts by prerendering them to HTML and storing that in an SQLite database. Whenever someone visits my blog, I simply fetch the post's content from that database and wrap it with frontend decorations, including the navbar and the Disqus template. This ends up being quite fast for the user, friendly to the server, and satisfying for me. &lt;/p&gt;
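&lt;p&gt;Conceptually, that build-time cache is tiny. Here is a sketch of the idea in Python (the table layout and function names are made up for illustration, not my actual schema):&lt;/p&gt;

```python
import sqlite3

def build_cache(posts, path=":memory:"):
    # posts: iterable of (slug, prerendered_html) pairs,
    # produced once while building the Docker image
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS posts (slug TEXT PRIMARY KEY, html TEXT)")
    db.executemany("INSERT OR REPLACE INTO posts VALUES (?, ?)", posts)
    db.commit()
    return db

def serve(db, slug):
    # request time: one indexed lookup, zero Markdown or KaTeX work
    row = db.execute("SELECT html FROM posts WHERE slug = ?", (slug,)).fetchone()
    return row[0] if row else None
```

&lt;p&gt;All the expensive work happens once, at build time; the request path is a single primary-key lookup.&lt;/p&gt;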

&lt;p&gt;The compilation step, though, is anything but fast: (make sure to scroll to the right)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;uv run compile.py posts/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="go"&gt;[the-c-method]                                                  Markdown: 20.9894s, Minify: 0.0609s
[average-entropy-of-a-discrete-probability-distribution-part-1] Markdown: 13.2319s, Minify: 0.0261s
[sampling-a-categorical-pmf]                                    Markdown:  3.6729s, Minify: 0.0113s
[merging-repos-with-jj]                                         Markdown:  0.0122s, Minify: 0.0020s
[learning-to-fly-through-windows-in-cloud]                      Markdown:  0.0095s, Minify: 0.0013s
[web-summer-camp-2025-behind-the-scenes]                        Markdown:  0.0089s, Minify: 0.0012s
[my-deployment-setup-part-2]                                    Markdown:  0.0057s, Minify: 0.0008s
[irab-the-millionare-next-door]                                 Markdown:  0.0044s, Minify: 0.0005s
[my-deployment-setup-part-1]                                    Markdown:  0.0043s, Minify: 0.0006s
[hello-world]                                                   Markdown:  0.0034s, Minify: 0.0004s
[irab-the-phoenix-project]                                      Markdown:  0.0025s, Minify: 0.0004s
[irab-tiago-forte-building-a-second-brain]                      Markdown:  0.0025s, Minify: 0.0003s
Grand total: 41s
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I don't have a &lt;em&gt;brand new&lt;/em&gt; PC, but my &lt;a href="https://pcpartpicker.com/user/InCogNiTo124/builds/#view=ZjNPxr" rel="noopener noreferrer"&gt;Intel Core i5-13600K 3.5 GHz 14-Core Processor&lt;/a&gt; should render my 10-ish blogposts in way less than 40 seconds total. This gets way worse in my GitHub Actions CI, by the way:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;uv run compile.py posts/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="go"&gt;[average-entropy-of-a-discrete-probability-distribution-part-1] Markdown: 92.6224s, Minify: 0.0467s
[the-c-method]                                                  Markdown: 76.2160s, Minify: 0.1315s
[sampling-a-categorical-pmf]                                    Markdown: 13.3637s, Minify: 0.0273s
[merging-repos-with-jj]                                         Markdown:  0.0291s, Minify: 0.0049s
[learning-to-fly-through-windows-in-cloud]                      Markdown:  0.0184s, Minify: 0.0027s
[web-summer-camp-2025-behind-the-scenes]                        Markdown:  0.0170s, Minify: 0.0026s
[my-deployment-setup-part-2]                                    Markdown:  0.0111s, Minify: 0.0016s
[my-deployment-setup-part-1]                                    Markdown:  0.0083s, Minify: 0.0013s
[irab-the-millionare-next-door]                                 Markdown:  0.0078s, Minify: 0.0011s
[hello-world]                                                   Markdown:  0.0057s, Minify: 0.0007s
[irab-the-phoenix-project]                                      Markdown:  0.0049s, Minify: 0.0008s
[irab-tiago-forte-building-a-second-brain]                      Markdown:  0.0049s, Minify: 0.0005s
Grand total: 190s
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Obviously, their weaker runner with 2 shared vCPUs, while the other core is busy compiling Rust in the same action, can never match local performance... but what is too much is too much, and 3 minutes is definitely too much.&lt;/p&gt;

&lt;h2&gt;
  
  
  Subprocesses walk into a bar... 96 times
&lt;/h2&gt;

&lt;p&gt;Immediately, we can see something crazy: there's a &lt;em&gt;huge&lt;/em&gt; variance in the compilation time per post. Most of them are done in &lt;em&gt;milliseconds&lt;/em&gt;, but others take &lt;em&gt;tens of seconds&lt;/em&gt;. Our first lead is that the high-latency culprits all seem to be math-heavy. Let's profile the compilation step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;uv run python &lt;span class="nt"&gt;-m&lt;/span&gt; cProfile &lt;span class="nt"&gt;-s&lt;/span&gt; cumtime compile.py posts/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="go"&gt;         3695055 function calls (3439898 primitive calls) in 38.398 seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    734/1    0.018    0.000   38.400   38.400 {built-in method builtins.exec}
&lt;/span&gt;&lt;span class="gp"&gt;        1    0.000    0.000   38.400   38.400 compile.py:1(&amp;lt;module&amp;gt;&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="go"&gt;        1    0.000    0.000   38.128   38.128 main.py:1131(__call__)
        1    0.000    0.000   38.127   38.127 core.py:1483(__call__)
        1    0.000    0.000   38.127   38.127 core.py:716(main)
        1    0.000    0.000   38.127   38.127 core.py:157(_main)
        1    0.000    0.000   38.127   38.127 core.py:1255(invoke)
        1    0.000    0.000   38.127   38.127 core.py:768(invoke)
        1    0.000    0.000   38.127   38.127 main.py:1505(wrapper)
        1    0.000    0.000   38.127   38.127 compile.py:254(main)
       12    0.001    0.000   38.091    3.174 compile.py:188(process_blog_entry_dir)
       12    0.000    0.000   37.187    3.099 core.py:315(convert)
      116    0.000    0.000   37.145    0.320 wrapper.py:126(get_bin_cmd)
      116    0.006    0.000   37.137    0.320 wrapper.py:79(_get_usr_parts)
      116    0.001    0.000   37.077    0.320 subprocess.py:423(check_output)
      116    0.001    0.000   37.076    0.320 subprocess.py:512(run)
      116    0.001    0.000   37.054    0.319 subprocess.py:1176(communicate)
      684   37.053    0.054   37.053    0.054 {method 'read' of '_io.BufferedReader' objects}
       12    0.000    0.000   36.907    3.076 extension.py:243(run)
     2167    0.002    0.000   36.907    0.017 extension.py:200(_iter_out_lines)
      115    0.000    0.000   36.900    0.321 extension.py:74(tex2html)
      115    0.001    0.000   36.900    0.321 wrapper.py:220(tex2html)
      255    0.000    0.000   36.815    0.144 wrapper.py:157(_iter_cmd_parts)
       90    0.000    0.000   28.837    0.320 extension.py:193(_make_tag_for_inline)
       90    0.000    0.000   28.836    0.320 extension.py:112(md_inline2html)
       25    0.000    0.000    8.065    0.323 extension.py:182(_make_tag_for_block)
       25    0.000    0.000    8.064    0.323 extension.py:88(md_block2html)
&lt;/span&gt;&lt;span class="gp"&gt;       62    0.001    0.000    0.535    0.009 __init__.py:1(&amp;lt;module&amp;gt;&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="go"&gt;[~truncated~]
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As I suspected, most of the time was spent rendering math. My latency obsession has now uncovered a real risk for the future: I don't mean to stop writing math, and if I continue, my build times will climb unacceptably high.&lt;/p&gt;

&lt;p&gt;The culprit seems to be the function &lt;a href="https://github.com/mbarkhau/markdown-katex/blob/f86824676cf15e3283d87b07a3fe0bc22b4154fb/src/markdown_katex/wrapper.py#L214-L260" rel="noopener noreferrer"&gt;_write_tex2html&lt;/a&gt; of the KaTeX Markdown plugin, which... looking at the code, &lt;em&gt;seems to spawn a new process for every math equation&lt;/em&gt;?&lt;/p&gt;

&lt;p&gt;Let's confirm it with strace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;strace &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;trace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;execve &lt;span class="nt"&gt;-o&lt;/span&gt; /tmp/strace_execve_orig.txt uv run compile.py posts/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;wc&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; /tmp/strace_execve_orig.txt
&lt;span class="go"&gt;1059
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's horrible. I have 96 equations so far (I counted), so that's about 11 new processes spawned for each of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deep dive
&lt;/h2&gt;

&lt;p&gt;But wait! The function I linked mentions a cache in the source, which seems to be stored in &lt;code&gt;/tmp/mdkatex&lt;/code&gt;.&lt;br&gt;
First of all, &lt;em&gt;not a fan&lt;/em&gt; of libraries writing files to my filesystem.&lt;br&gt;
On top of that, persisting the cache to &lt;code&gt;${XDG_CACHE_HOME}&lt;/code&gt;, where it wouldn't get wiped on reboot, would perhaps be a more worthwhile strategy.&lt;br&gt;
And finally: if you're writing a cache, please actually use it?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;no cache
&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;uv run compile.py posts/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="go"&gt;[the-c-method]                                                  Markdown: 25.1156s, Minify: 0.0592s
[average-entropy-of-a-discrete-probability-distribution-part-1] Markdown: 15.7168s, Minify: 0.0202s
[sampling-a-categorical-pmf]                                    Markdown:  4.2688s, Minify: 0.0111s
[merging-repos-with-jj]                                         Markdown:  0.0122s, Minify: 0.0020s
[learning-to-fly-through-windows-in-cloud]                      Markdown:  0.0094s, Minify: 0.0013s
[web-summer-camp-2025-behind-the-scenes]                        Markdown:  0.0088s, Minify: 0.0012s
[my-deployment-setup-part-2]                                    Markdown:  0.0057s, Minify: 0.0008s
[my-deployment-setup-part-1]                                    Markdown:  0.0043s, Minify: 0.0006s
[irab-the-millionare-next-door]                                 Markdown:  0.0040s, Minify: 0.0005s
[hello-world]                                                   Markdown:  0.0029s, Minify: 0.0003s
[irab-the-phoenix-project]                                      Markdown:  0.0025s, Minify: 0.0004s
[irab-tiago-forte-building-a-second-brain]                      Markdown:  0.0025s, Minify: 0.0003s
Grand total: 45s

&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;cache created
&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; /tmp/mdkatex/
&lt;span class="go"&gt;ls /tmp/mdkatex
00b97793d1fd1e36bf3a806444218f2e8d382ee5372b14d9092bd325eebc129a.html  95884ccba64a3b39d5d9d384d88c70259444c93e01d0d0baea297e1294ade559.html
07355d6832ae8bb2bcf2d2a22230c3616044bcda07ca21f4d3c1f45493d3b958.html  9595e5feb4702104674f1ec5d464b748f4ef546ff0703a772c7533b7ecbf9532.html
08290bccd697775c767c7303a6a37e4533f023f17c3022ce8f22aeb4832dfc41.html  9673832d923cdd9994bcd2cfb12bf73c0639065e4e59688a0d8629d3853d3310.html
0b44f4509fe258a30d3b5e9ae2c20118e3344a2b2388cfd040fa700a944da82a.html  971cd3143d00297f8d579e7658be2c0a2ffe08126cd828ad2a06d54924d5302c.html
0bc815bc4fc3e53e2f5744c798339af9dc7ecf9d8b51e300111fec7183be5f89.html  9937dc90af96d0d9c81710d39225d4e2402971ee5b5d4808694138eda9f96060.html
107efb2677a95d8a8bb8395c201dcd552a67ab23c9fa14e46bd969c0f35aafb3.html  9a15d9ba0b85ec3927982a870f725c5a8352e46c82e5989b8cee2b75b543723f.html
[~50 files elided~]

&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;with cache &lt;span class="o"&gt;(&lt;/span&gt;minimal differences&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;uv run compile.py posts/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="go"&gt;[the-c-method]                                                  Markdown: 22.4009s, Minify: 0.0589s
[average-entropy-of-a-discrete-probability-distribution-part-1] Markdown: 14.1009s, Minify: 0.0205s
[sampling-a-categorical-pmf]                                    Markdown:  3.7962s, Minify: 0.0111s
[merging-repos-with-jj]                                         Markdown:  0.0120s, Minify: 0.0020s
[learning-to-fly-through-windows-in-cloud]                      Markdown:  0.0094s, Minify: 0.0013s
[web-summer-camp-2025-behind-the-scenes]                        Markdown:  0.0088s, Minify: 0.0012s
[my-deployment-setup-part-2]                                    Markdown:  0.0056s, Minify: 0.0008s
[my-deployment-setup-part-1]                                    Markdown:  0.0042s, Minify: 0.0006s
[irab-the-millionare-next-door]                                 Markdown:  0.0040s, Minify: 0.0005s
[hello-world]                                                   Markdown:  0.0029s, Minify: 0.0003s
[irab-the-phoenix-project]                                      Markdown:  0.0025s, Minify: 0.0004s
[irab-tiago-forte-building-a-second-brain]                      Markdown:  0.0025s, Minify: 0.0003s
Grand total: 40s
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
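&lt;p&gt;For what it's worth, resolving a spec-compliant cache directory is a couple of lines. A sketch, assuming the library kept its &lt;code&gt;mdkatex&lt;/code&gt; subdirectory name:&lt;/p&gt;

```python
import os
import pathlib

def cache_dir(app="mdkatex"):
    # honor $XDG_CACHE_HOME if set, else fall back to the
    # spec-mandated ~/.cache default
    base = os.environ.get("XDG_CACHE_HOME") or str(pathlib.Path.home() / ".cache")
    return str(pathlib.Path(base) / app)
```

&lt;p&gt;Unlike &lt;code&gt;/tmp&lt;/code&gt;, that location survives reboots, so the rendered-equation cache would actually accumulate value over time.&lt;/p&gt;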



&lt;p&gt;I want to be a doer, not a complainer, so let's dive a bit more to see what is actually happening.&lt;/p&gt;

&lt;h2&gt;
  
  
  Binary search is not always optimal
&lt;/h2&gt;

&lt;p&gt;Turns out, the library &lt;em&gt;does&lt;/em&gt; use the cache. Sorry for yelling there! One hint is that all the timings are strictly lower than they were the first time around. I confirmed it with strace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;strace &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;trace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;openat &lt;span class="nt"&gt;-o&lt;/span&gt; /proc/self/fd/1 uv run compile.py posts/&lt;span class="k"&gt;*&lt;/span&gt; | rg /tmp/mdkatex
&lt;span class="go"&gt;22561 openat(AT_FDCWD, "/tmp/mdkatex/a8a72791face8f1e3da52e72fdd320ed91111b875e70c721755d8eaf37139f77.html", O_RDONLY|O_CLOEXEC) = 4
22561 openat(AT_FDCWD, "/tmp/mdkatex", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 4
22561 openat(AT_FDCWD, "/tmp/mdkatex/18121dc005c62089a5d8751437675bd2c1de7031402bd64b6df2e194eec1e106.html", O_RDONLY|O_CLOEXEC) = 4
22561 openat(AT_FDCWD, "/tmp/mdkatex", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 4
22561 openat(AT_FDCWD, "/tmp/mdkatex/c3ba77bd6884ce55e2f0e78eaedadd5dc11ae95ff0b3a749314be0f9f000c7c7.html", O_RDONLY|O_CLOEXEC) = 4
22561 openat(AT_FDCWD, "/tmp/mdkatex", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 4
22561 openat(AT_FDCWD, "/tmp/mdkatex/acd7cdc3243bfbdb9a78822b94e719ffc23733f1cf5bf9a7a907e5471612f4ae.html", O_RDONLY|O_CLOEXEC) = 4
22561 openat(AT_FDCWD, "/tmp/mdkatex", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 4
22561 openat(AT_FDCWD, "/tmp/mdkatex/80ef85c983e9b05a7fa847efc5190b024091ec9557bde8496428274938e67b58.html", O_RDONLY|O_CLOEXEC) = 4
[~many lines elided~]
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Turning our attention back to the profiler output, though, shows that most of the time is lost &lt;em&gt;before&lt;/em&gt; we even get to the renderer! There seems to be a subtle interplay between &lt;a href="https://github.com/mbarkhau/markdown-katex/blob/f86824676cf15e3283d87b07a3fe0bc22b4154fb/src/markdown_katex/wrapper.py#L96-L133" rel="noopener noreferrer"&gt;_get_usr_parts()&lt;/a&gt; and &lt;a href="https://github.com/mbarkhau/markdown-katex/blob/f86824676cf15e3283d87b07a3fe0bc22b4154fb/src/markdown_katex/wrapper.py#L154-L160" rel="noopener noreferrer"&gt;get_bin_cmd()&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the library ships with a bundled &lt;a href="https://github.com/mbarkhau/markdown-katex/blob/master/src/markdown_katex/bin/katex_v0.15.1_node10_x86_64-Linux" rel="noopener noreferrer"&gt;node katex binary&lt;/a&gt;, but only as a fallback. It first tries really hard to run whatever relevant binary exists on the local system. I actually &lt;em&gt;don't&lt;/em&gt; have katex installed on my system in any way, shape or form, except for whatever is in this library &lt;/li&gt;
&lt;li&gt;For every math equation, &lt;code&gt;_get_usr_parts()&lt;/code&gt; loops over many variations of &lt;code&gt;npx&lt;/code&gt; and &lt;code&gt;katex&lt;/code&gt; commands built from my &lt;code&gt;${PATH}&lt;/code&gt; to see what exists

&lt;ul&gt;
&lt;li&gt;if a test command succeeds, that success is written in the &lt;em&gt;command cache&lt;/em&gt; in that same temporary directory, so this whole dance should be only done once, thankfully&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;if no satisfactory command is found, we finally fall back to the bundled binary&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;And herein lies the problem: the fact that the fallback happened is &lt;em&gt;not cached anywhere&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;This means that, in order to process &lt;strong&gt;one&lt;/strong&gt; math equation, the library tries to start &lt;strong&gt;dozens&lt;/strong&gt; of processes, all of which fail: the ones invoking &lt;code&gt;katex&lt;/code&gt; fail immediately with &lt;code&gt;katex: no such file or directory&lt;/code&gt;, and the ones invoking &lt;code&gt;npx --no-install katex&lt;/code&gt; fail with &lt;code&gt;npm error npx canceled due to missing packages and no YES option: ["katex@0.16.33"]&lt;/code&gt; &lt;strong&gt;300 milliseconds later&lt;/strong&gt; ⁉️, because JS ecosystem, I suppose. And then the same dance repeats for &lt;em&gt;every remaining math expression&lt;/em&gt;, because the fallback decision didn't get stored anywhere. I am literally being punished for having &lt;code&gt;npx&lt;/code&gt; in my &lt;code&gt;${PATH}&lt;/code&gt; without &lt;code&gt;katex&lt;/code&gt; globally installed. &lt;/p&gt;

&lt;p&gt;Let's test this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;which npx
&lt;span class="go"&gt;/usr/bin/npx

&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;sudo mv&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; /usr/bin/npx&lt;span class="o"&gt;{&lt;/span&gt;,_&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="gp"&gt;renamed '/usr/bin/npx' -&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'/usr/bin/npx_'&lt;/span&gt;
&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;uv run compile.py posts/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="go"&gt;[the-c-method]                                                  Markdown:  2.3572s, Minify: 0.0593s
[average-entropy-of-a-discrete-probability-distribution-part-1] Markdown:  1.4653s, Minify: 0.0204s
[sampling-a-categorical-pmf]                                    Markdown:  0.3880s, Minify: 0.0113s
[merging-repos-with-jj]                                         Markdown:  0.0122s, Minify: 0.0021s
[learning-to-fly-through-windows-in-cloud]                      Markdown:  0.0095s, Minify: 0.0012s
[web-summer-camp-2025-behind-the-scenes]                        Markdown:  0.0088s, Minify: 0.0012s
[my-deployment-setup-part-2]                                    Markdown:  0.0057s, Minify: 0.0008s
[my-deployment-setup-part-1]                                    Markdown:  0.0042s, Minify: 0.0006s
[irab-the-millionare-next-door]                                 Markdown:  0.0041s, Minify: 0.0005s
[hello-world]                                                   Markdown:  0.0029s, Minify: 0.0003s
[irab-the-phoenix-project]                                      Markdown:  0.0025s, Minify: 0.0004s
[irab-tiago-forte-building-a-second-brain]                      Markdown:  0.0025s, Minify: 0.0003s
Grand total: 4.764s

&lt;/span&gt;&lt;span class="gp"&gt;#&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;second run with cache
&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;uv run compile.py posts/&lt;span class="k"&gt;*&lt;/span&gt;
&lt;span class="go"&gt;[sampling-a-categorical-pmf]                                    Markdown:  0.0394s, Minify: 0.0111s
[the-c-method]                                                  Markdown:  0.0370s, Minify: 0.0592s
[average-entropy-of-a-discrete-probability-distribution-part-1] Markdown:  0.0253s, Minify: 0.0205s
[merging-repos-with-jj]                                         Markdown:  0.0123s, Minify: 0.0020s
[learning-to-fly-through-windows-in-cloud]                      Markdown:  0.0095s, Minify: 0.0013s
[web-summer-camp-2025-behind-the-scenes]                        Markdown:  0.0089s, Minify: 0.0012s
[my-deployment-setup-part-2]                                    Markdown:  0.0057s, Minify: 0.0008s
[my-deployment-setup-part-1]                                    Markdown:  0.0044s, Minify: 0.0007s
[irab-the-millionare-next-door]                                 Markdown:  0.0041s, Minify: 0.0005s
[hello-world]                                                   Markdown:  0.0030s, Minify: 0.0003s
[irab-the-phoenix-project]                                      Markdown:  0.0026s, Minify: 0.0004s
[irab-tiago-forte-building-a-second-brain]                      Markdown:  0.0025s, Minify: 0.0003s
Grand total: 0.674s
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;😂 😂 😂  😭&lt;/p&gt;

&lt;p&gt;&lt;a href="https://terra-incognita.blog/posts/improving-my-blog-by-100x-with-rust-and-ai/mermaid_old_arch_seq.svg" rel="noopener noreferrer"&gt;The old architecure has a subtle bug that only got exposed by my specific setup.&lt;br&gt;Sequence diagram by Gemini 3.1 Pro&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This exploration resulted in &lt;a href="https://github.com/mbarkhau/markdown-katex/issues/22" rel="noopener noreferrer"&gt;an issue&lt;/a&gt; I kindly opened with the maintainer.&lt;/p&gt;

&lt;h2&gt;
  
  
  NIH (no interpreters here)
&lt;/h2&gt;

&lt;p&gt;So, I fixed this locally by simply not having the &lt;code&gt;npm&lt;/code&gt; package installed (which includes the &lt;code&gt;npx&lt;/code&gt; binary); I hope I don't need it. However, I don't even install it explicitly in my GHA CI and yet it's there, also without &lt;code&gt;katex&lt;/code&gt;, also slowing my builds tremendously.&lt;/p&gt;

&lt;p&gt;This got me thinking about my mitigation options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I can wait for the upstream fix, which may never even come. &lt;code&gt;markdown-katex&lt;/code&gt; is an MIT licensed FOSS project and I can't really demand a fix (nor would I)&lt;/li&gt;
&lt;li&gt;I could try to upstream a fix, which may never land&lt;/li&gt;
&lt;li&gt;I can hackishly rename the binary in the CI, just like I did locally, which is both 1) ugly and 2) a potential footgun that might break something completely different&lt;/li&gt;
&lt;li&gt;I could, in theory, prewrite the command cache mentioned above with the contents of the bundled binary, so the library can load it directly 😊 that &lt;em&gt;would&lt;/em&gt; work but, after about 5 minutes of being proud, I would hate myself for that gross hack.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Taking a step back and looking at this whole situation from a higher angle, though... I am &lt;em&gt;not&lt;/em&gt; happy about a new process being invoked for every math equation, especially given that CI doesn't keep the cache between runs, so I could never get its benefits there. Furthermore, the bundled binary is a Node runtime for JavaScript. And as I mentioned in the opening, I don't like to wait more than absolutely necessary, and waiting for a JS interpreter to start, interpret the math formula, and exit is not necessary. There should be a plugin library in a &lt;em&gt;compiled&lt;/em&gt; language I can use from Python that can parse KaTeX input and output HTML. Bonus points for memory safety&lt;sup id="fnref1"&gt;1&lt;/sup&gt; 🙂&lt;/p&gt;

&lt;p&gt;So, let there be a plugin library in a memory-safe compiled language that can parse KaTeX input and output HTML.&lt;/p&gt;

&lt;h2&gt;
  
  
  Python markdown plugins
&lt;/h2&gt;

&lt;p&gt;Just to be clear: while I laid out some reasonable arguments in the previous section for writing a Rust-based Python-Markdown plugin, the decision to write it was primarily driven by my curiosity and my "I wanna do it like this" attitude. It was just a perfect opportunity to try something new and learn.&lt;/p&gt;

&lt;p&gt;What surprised me the most is that the &lt;a href="https://python-markdown.github.io/extensions/api/" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; for writing python-markdown plugins is... not that bad, actually? We've all had a taste of dry, incomplete docs, but you can see the effort behind this one. The phases of markdown processing are explained clearly, and every phase is illustrated with examples.&lt;/p&gt;

&lt;p&gt;Since I had already delved deep into the source code of the original extension, I saw that its main architecture consisted of a &lt;a href="https://github.com/mbarkhau/markdown-katex/blob/master/src/markdown_katex/extension.py#L193-L268" rel="noopener noreferrer"&gt;Preprocessor&lt;/a&gt; class and a &lt;a href="https://github.com/mbarkhau/markdown-katex/blob/master/src/markdown_katex/extension.py#L281-L315" rel="noopener noreferrer"&gt;Postprocessor&lt;/a&gt; class. The Preprocessor parses the math, converts it to HTML, and yields a unique tag back into the text. The Postprocessor's single role, then, is to replace the unique tags with the previously generated HTML.&lt;sup id="fnref2"&gt;2&lt;/sup&gt; Naturally, I &lt;a href="https://fs.blog/chestertons-fence/" rel="noopener noreferrer"&gt;decided&lt;/a&gt; not to &lt;a href="https://www.reddit.com/r/AskEurope/comments/m82vac/comment/grgcqup/" rel="noopener noreferrer"&gt;reinvent warm water&lt;/a&gt;. I kept the same architecture, with the most significant change (a simplification, actually) being ditching the whole npx/katex search and going straight to Rust for the HTML generation. &lt;/p&gt;
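&lt;p&gt;Stripped of all the python-markdown plumbing, the two-phase dance can be sketched in plain Python like this (&lt;code&gt;render_math&lt;/code&gt; is a stand-in for the real renderer, and the tag format is made up):&lt;/p&gt;

```python
import re

MATH_RE = re.compile(r"\$\$(.+?)\$\$", re.DOTALL)

def render_math(tex):
    # stand-in for the actual KaTeX-to-HTML conversion
    return "RENDERED[" + tex.strip() + "]"

def preprocess(lines):
    # phase 1: swap each math span for a unique placeholder tag,
    # so downstream Markdown processing never touches raw TeX
    rendered = {}
    def repl(match):
        tag = "tmp_md_katex_" + str(len(rendered))
        rendered[tag] = render_math(match.group(1))
        return tag
    return [MATH_RE.sub(repl, line) for line in lines], rendered

def postprocess(text, rendered):
    # phase 2: swap the placeholder tags back for the rendered HTML
    for tag, html in rendered.items():
        text = text.replace(tag, html)
    return text
```

&lt;p&gt;The placeholder indirection is what lets every other Markdown plugin in the pipeline stay blissfully unaware of the math.&lt;/p&gt;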

&lt;p&gt;&lt;a href="https://terra-incognita.blog/posts/improving-my-blog-by-100x-with-rust-and-ai/mermaid_markdown_katex_rs_flow.svg" rel="noopener noreferrer"&gt;The flow of the plugin. Other downstream Markdown plugins see only unique tags. The postprocessor injects rendered HTML.&lt;br&gt;Flow diagram by Gemini 3.1 Pro&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Rust API is exceedingly simple: a single function that accepts a view-only, immutable &lt;code&gt;&amp;amp;str&lt;/code&gt;, asks &lt;a href="https://github.com/katex-rs/katex-rs" rel="noopener noreferrer"&gt;&lt;code&gt;katex-rs&lt;/code&gt;&lt;/a&gt; to convert it to a &lt;code&gt;String&lt;/code&gt; of HTML, and returns &lt;code&gt;PyResult&amp;lt;String&amp;gt;&lt;/code&gt;. The Rust part stores no data; Python is left owning all the results.&lt;/p&gt;

&lt;p&gt;This was, I think, my first &lt;em&gt;real&lt;/em&gt; use case for PyO3; I had played with it before, but only under the premise of playing and exploring. I guess the learning kind of paid for itself, for I would not have been able to think of this solution otherwise. I think it always pays off.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://terra-incognita.blog/posts/improving-my-blog-by-100x-with-rust-and-ai/mermaid_new_arch_seq.svg" rel="noopener noreferrer"&gt;The new architecure is as simple as it gets. Call Rust with input, recieve Rust's output and store it. No brainer.&lt;br&gt;Sequence diagram by Gemini 3.1 Pro&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  DIY (delegate it yourself)
&lt;/h2&gt;

&lt;p&gt;I did all of the previous analysis completely by myself, without AI, which angers some and surprises others. I do it to avoid &lt;a href="https://buttondown.com/artefacts/archive/artefact-247/" rel="noopener noreferrer"&gt;cognitive debt&lt;/a&gt;, which I saw happening to me on one occasion, and that event left me scared of losing my grip on the codebase's &lt;a href="https://pages.cs.wisc.edu/~remzi/Naur.pdf" rel="noopener noreferrer"&gt;theory&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;That said, instead of hand-crafting my code like I do every time, I decided this was a perfect opportunity to test a new vibecoding tool called Antigravity. The plan was basically to dump this whole markdown-plugin context to Gemini, together with the source of the original extension, explain some ground rules (mostly the usage of tools like &lt;a href="https://docs.astral.sh/uv/" rel="noopener noreferrer"&gt;uv&lt;/a&gt;, &lt;a href="https://www.maturin.rs/" rel="noopener noreferrer"&gt;maturin&lt;/a&gt; and &lt;a href="https://www.jj-vcs.dev" rel="noopener noreferrer"&gt;jj&lt;/a&gt;) and see where this got me.&lt;/p&gt;

&lt;p&gt;Frankly, Gemini 3.0 basically oneshotted&lt;sup id="fnref3"&gt;3&lt;/sup&gt; it.&lt;/p&gt;

&lt;p&gt;I mean, don't get me wrong, the entire project has barely 250 SLOC. It's not exactly &lt;a href="https://i.redd.it/oky9oyi8n2w11.png" rel="noopener noreferrer"&gt;rocket science&lt;/a&gt; to connect two pre-made projects into a whole; I'm reminded of my Latin teacher who called these types of problems "equations without unknowns". But after I signed off on Antigravity's plan, I expected some friction and human-in-the-loop-ing until I got a working version. What happened instead was that the model got stuck maybe once or twice, got itself out of the mess, and I had a barebones project that was &lt;em&gt;working&lt;/em&gt; pretty much immediately. I would like to believe that it was my engineering of the LLM's context that made it work so well, but just like almost everything else with these creations beyond human comprehension, it's hard to verify, so I'll never have that counterfactual.&lt;/p&gt;

&lt;p&gt;I later made the model build test cases out of my blog to verify that, at the very least, &lt;a href="https://hannahilea.com/blog/houseplant-programming/" rel="noopener noreferrer"&gt;&lt;em&gt;I have a use&lt;/em&gt;&lt;/a&gt; for the library, and then I made it explain to me in simple words how to set up GHA CI so I can publish to PyPI without much hassle, since this usually takes too much effort anyway. And I have to say, I'm glad I did, because I had prepared for a struggle with secrets and keys, but instead I got instructions for setting up &lt;a href="https://docs.github.com/en/actions/how-tos/secure-your-work/security-harden-deployments/oidc-in-pypi" rel="noopener noreferrer"&gt;OIDC&lt;/a&gt; with PyPI. I'm not so sure about AGI, but Gemini is superhuman when it comes to GHA CI best practices, if that human is me 😁&lt;/p&gt;
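&lt;p&gt;For the curious, the OIDC ("trusted publishing") setup boils down to a workflow fragment roughly like this; a hedged sketch based on PyPI's documented flow, not my repository's actual workflow, and the artifact name is illustrative:&lt;/p&gt;

```yaml
# Sketch of PyPI trusted publishing: no long-lived API tokens stored as secrets.
# Assumes the repo/workflow pair is registered as a trusted publisher
# in the PyPI project settings.
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # lets the job mint a short-lived OIDC token
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: wheels   # illustrative artifact name
          path: dist
      # Exchanges the OIDC token for a scoped upload grant and uploads dist/*
      - uses: pypa/gh-action-pypi-publish@release/v1
```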

&lt;p&gt;All in all, it took me up to 6h, with breaks, to go from exploring the problem space to the push to PyPI. 🐍🦀🎉 You can explore the codebase at &lt;a href="https://github.com/InCogNiTo124/markdown-katex-rs" rel="noopener noreferrer"&gt;InCogNiTo124/markdown-katex-rs&lt;/a&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Marking the bench
&lt;/h2&gt;

&lt;p&gt;Without further ado, let me show you how this GenAI adventure had a clear impact on my latency obsession:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;uv run compile_with_markdown_rs.py
&lt;span class="go"&gt;[sampling-a-categorical-pmf]                                    Markdown: 0.0369s, Minify: 0.0111s
[the-c-method]                                                  Markdown: 0.0257s, Minify: 0.0598s
[average-entropy-of-a-discrete-probability-distribution-part-1] Markdown: 0.0195s, Minify: 0.0203s
[merging-repos-with-jj]                                         Markdown: 0.0122s, Minify: 0.0021s
[learning-to-fly-through-windows-in-cloud]                      Markdown: 0.0095s, Minify: 0.0013s
[web-summer-camp-2025-behind-the-scenes]                        Markdown: 0.0088s, Minify: 0.0012s
[my-deployment-setup-part-2]                                    Markdown: 0.0056s, Minify: 0.0008s
[my-deployment-setup-part-1]                                    Markdown: 0.0043s, Minify: 0.0006s
[irab-the-millionare-next-door]                                 Markdown: 0.0040s, Minify: 0.0005s
[hello-world]                                                   Markdown: 0.0029s, Minify: 0.0003s
[irab-the-phoenix-project]                                      Markdown: 0.0026s, Minify: 0.0004s
[irab-tiago-forte-building-a-second-brain]                      Markdown: 0.0025s, Minify: 0.0003s
Grand total: 0.597s
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is 75x if you compare with the cache-less version, and 60x if compared with the cache-full version. If you go and compare it with the results from when I made the original library use a cache, you'll even see my library beats &lt;code&gt;fopen()&lt;/code&gt; for this use case 😁 a surprise to be sure, but a welcome one.&lt;/p&gt;
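&lt;p&gt;Per-stage numbers like the ones above can come from a very small harness; here is a minimal sketch of the idea (illustrative only: the stage functions are stand-ins, not my build script's real internals):&lt;/p&gt;

```python
import time


def timed(fn, *args):
    """Run fn(*args) once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start


# Stand-in stages; the real script calls a Markdown renderer and an HTML minifier.
render_markdown = lambda src: f"<p>{src}</p>"
minify_html = lambda html: " ".join(html.split())

html, t_md = timed(render_markdown, "hello world")
mini, t_min = timed(minify_html, html)
print(f"Markdown: {t_md:.4f}s, Minify: {t_min:.4f}s")
```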

&lt;p&gt;&lt;a href="https://terra-incognita.blog/posts/improving-my-blog-by-100x-with-rust-and-ai/old_time_ms_and_rust_time_ms.svg" rel="noopener noreferrer"&gt;Graphical demonstration of the prerendering time before and after my plugin. Notice how the Y-axis is log-scaled; that's because building times are spread over 3 orders of maginitude&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And in Github Actions, the difference is, again, even more stark:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;[sampling-a-categorical-pmf]                                    Markdown: 0.1626s, Minify: 0.0972s
[the-c-method]                                                  Markdown: 0.0955s, Minify: 0.1420s
[average-entropy-of-a-discrete-probability-distribution-part-1] Markdown: 0.0705s, Minify: 0.0848s
[merging-repos-with-jj]                                         Markdown: 0.0512s, Minify: 0.0092s
[learning-to-fly-through-windows-in-cloud]                      Markdown: 0.0329s, Minify: 0.0046s
[my-deployment-setup-part-2]                                    Markdown: 0.0207s, Minify: 0.0029s
[web-summer-camp-2025-behind-the-scenes]                        Markdown: 0.0185s, Minify: 0.0026s
[my-deployment-setup-part-1]                                    Markdown: 0.0179s, Minify: 0.0022s
[irab-the-millionare-next-door]                                 Markdown: 0.0134s, Minify: 0.0016s
[hello-world]                                                   Markdown: 0.0091s, Minify: 0.0010s
[irab-tiago-forte-building-a-second-brain]                      Markdown: 0.0083s, Minify: 0.0010s
[irab-the-phoenix-project]                                      Markdown: 0.0082s, Minify: 0.0023s
Grand total: 0.8841s
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;which is literally a 200x improvement. 😎&lt;/p&gt;

&lt;p&gt;&lt;a href="https://terra-incognita.blog/posts/improving-my-blog-by-100x-with-rust-and-ai/ci_old_time_ms_and_ci_rust_time_ms.svg" rel="noopener noreferrer"&gt;Same as above just with timings from the GHA CI. It's interesting to notice how my non-math blogposts rendered faster &lt;em&gt;before&lt;/em&gt;. I have no explanation other than "random". The math blogposts more than make up for the lost milliseconds, though.&lt;/a&gt;&lt;br&gt;
To finish this post off, I want to touch on how this impacted my iteration cycles. Usually my blogposts start in Obsidian, where I do a draft dump and then iterate and knead the words until I'm happy. Then the text and the multimedia move to the repository for final formatting changes. I used to hate this part because, well, it's mostly over, and it was a pain in the ass waiting for the compilation of posts to finish. Now that friction is &lt;em&gt;gone&lt;/em&gt;. I feel a much stronger urge to polish things up now.&lt;/p&gt;

&lt;p&gt;I think I have truly made something that both solves my problem, and is hard to beat on the &lt;code&gt;invested effort/performance&lt;/code&gt; metric.&lt;/p&gt;



&lt;p&gt;&lt;a href="https://youtu.be/TN25ghkfgQA?list=RDTN25ghkfgQA&amp;amp;t=1" rel="noopener noreferrer"&gt;Or have I&lt;/a&gt;?&lt;/p&gt;

&lt;p&gt;During one of my rewrite attempts, Gemini discovered that a package doing what I need was already among my dependencies: &lt;a href="https://facelessuser.github.io/pymdown-extensions/extensions/arithmatex/" rel="noopener noreferrer"&gt;arithmatex&lt;/a&gt;, part of the &lt;code&gt;pymdown-extensions&lt;/code&gt; "super-plugin" that packs a bunch of Markdown QoL improvements. The compilation results with &lt;code&gt;arithmatex&lt;/code&gt; seemed too good to be true, even beating my Rust-based approach by 20%:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;[sampling-a-categorical-pmf]                                    Markdown: 0.0358s, Minify: 0.0083s
[merging-repos-with-jj]                                         Markdown: 0.0126s, Minify: 0.0020s
[learning-to-fly-through-windows-in-cloud]                      Markdown: 0.0103s, Minify: 0.0013s
[average-entropy-of-a-discrete-probability-distribution-part-1] Markdown: 0.0098s, Minify: 0.0013s
[web-summer-camp-2025-behind-the-scenes]                        Markdown: 0.0095s, Minify: 0.0012s
[the-c-method]                                                  Markdown: 0.0074s, Minify: 0.0015s
[my-deployment-setup-part-2]                                    Markdown: 0.0060s, Minify: 0.0008s
[my-deployment-setup-part-1]                                    Markdown: 0.0046s, Minify: 0.0007s
[irab-the-millionare-next-door]                                 Markdown: 0.0042s, Minify: 0.0005s
[hello-world]                                                   Markdown: 0.0030s, Minify: 0.0003s
[irab-the-phoenix-project]                                      Markdown: 0.0027s, Minify: 0.0004s
[irab-tiago-forte-building-a-second-brain]                      Markdown: 0.0027s, Minify: 0.0003s
Grand total: 0.478s
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;... and that's because they were. You see, what the extension really does is wrap the TeX blocks in a &lt;code&gt;&amp;lt;div class="arithmatex"&amp;gt;&amp;lt;script type="math/tex"&amp;gt;&lt;/code&gt; and leave the rendering to the visitor's browser 🙂&lt;/p&gt;
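&lt;p&gt;In other words, the "rendering" is little more than string wrapping, which is why those compile timings look unbeatable; a sketch of the idea in Python (an illustration, not the extension's actual code):&lt;/p&gt;

```python
def arithmatex_style(tex: str) -> str:
    """Illustrative sketch: Arithmatex-style deferral wraps the raw TeX
    so a math renderer in the visitor's browser does the real work later."""
    return f'<div class="arithmatex"><script type="math/tex">{tex}</script></div>'


# The server does no math layout at all, just string concatenation.
print(arithmatex_style("E = mc^2"))
# → <div class="arithmatex"><script type="math/tex">E = mc^2</script></div>
```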

&lt;p&gt;This was one of those situations where I could have totally just gone with what the model spewed out and called it a day, saving some hours for me (and missing a learning opportunity) while ever-so-slightly inconveniencing you, my dear reader. However, I am glad I explored this solution, caught it early enough, and went with the other approach. This is a low-stakes environment where the damage wouldn't be significant, but events like this one can totally happen in a much more dangerous setting if we don't verify the output. Good thing we're all always verifying the output, right?&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Those who know me know how much I love Rust ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;If you wondered why that extra step with unique tags, mbarkhau left &lt;a href="https://github.com/mbarkhau/markdown-katex/blob/master/src/markdown_katex/extension.py#L271-L278" rel="noopener noreferrer"&gt;a comment&lt;/a&gt; explaining exactly that. Thanks for answering my questions asynchronously! ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;Oneshotted? One shot? Shooted once? English gets confusing sometimes. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>tech</category>
      <category>rust</category>
      <category>python</category>
      <category>genai</category>
    </item>
    <item>
      <title>How to merge two repositories with jj</title>
      <dc:creator>Marijan Smetko</dc:creator>
      <pubDate>Sat, 21 Feb 2026 00:00:00 +0000</pubDate>
      <link>https://forem.com/msmetko/how-to-merge-two-repositories-with-jj-4691</link>
      <guid>https://forem.com/msmetko/how-to-merge-two-repositories-with-jj-4691</guid>
      <description>&lt;p&gt;&lt;em&gt;For the impatient, click HERE&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This post assumes basic understanding of &lt;a href="https://jj-vcs.dev" rel="noopener noreferrer"&gt;Jujutsu&lt;/a&gt; version&lt;br&gt;
control system.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The need to merge two repositories such that their histories stay preserved is more common than one might think. One situation is vendoring a third-party dependency into your repository, in order to avoid working with the notorious submodules. Usually (but not necessarily) that dependency is imported into a subdirectory of the destination repository.&lt;/p&gt;

&lt;p&gt;The other situation, which I personally encountered in 2024, is migrating multiple repositories to a single monorepo... which you can think of as vendoring first party dependencies (:&lt;/p&gt;

&lt;p&gt;What we're trying to do here is have a commit that a) has two parents, one for the dependency and another for the destination repo (also known as a merge commit), and b) the contents of that commit should be the destination repo with files of the dependency in a subdirectory. And while Linus Torvalds calls merges like these &lt;a href="https://www.mail-archive.com/git@vger.kernel.org/msg73938.html" rel="noopener noreferrer"&gt;evil merges&lt;/a&gt; I actually think they're really neat for this exact purpose.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/InCogNiTo124/moonrepo/refs/heads/main/personal-blog/database/posts/merging-repos-with-jj/vcs.svg" rel="noopener noreferrer"&gt;Click here to see an animation of what we're trying to accomplish&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Side quest: doing it with git
&lt;/h2&gt;

&lt;p&gt;Let me be your personal &lt;code&gt;${SEARCH_ENGINE}&lt;/code&gt; results page and show you some of the approaches to merging two repositories' histories such that one ends up in a subdirectory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://stackoverflow.com/a/10548919" rel="noopener noreferrer"&gt;&lt;code&gt;git merge --allow-unrelated-histories&lt;/code&gt;&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Works only if you don't have conflicting file names (or you like resolving git conflicts)&lt;/li&gt;
&lt;li&gt;Avoiding conflicts, however, is exceedingly rare, since repositories tend to have many similarly named files (&lt;code&gt;README&lt;/code&gt;, &lt;code&gt;LICENSE&lt;/code&gt;, &lt;code&gt;src/&lt;/code&gt;...)&lt;/li&gt;
&lt;li&gt;Also, it just merges all the files together, so you end up with a soup of files of different origins, not another repo as a subdirectory&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://gist.github.com/x-yuri/9890ab1079cf4357d6f269d073fd9731" rel="noopener noreferrer"&gt;Use &lt;code&gt;git-filter-branch&lt;/code&gt; or its speedier cousin &lt;code&gt;git-filter-repo&lt;/code&gt;&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Rewrites the history such that one of the repositories was &lt;em&gt;always&lt;/em&gt; in a subdirectory&lt;/li&gt;
&lt;li&gt;However, we want to &lt;strong&gt;preserve&lt;/strong&gt; the history and keep all the hashes, not recalculate them&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://git-scm.com/book/en/v2/Git-Tools-Submodules" rel="noopener noreferrer"&gt;submodules&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;I don't like submodules&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.atlassian.com/git/tutorials/git-subtree" rel="noopener noreferrer"&gt;subtrees&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Frankly, I never used them.&lt;/li&gt;
&lt;li&gt;They do seem promising, but I don't really have a need to keep the subtree
as a separate project and upstream changes to it.&lt;/li&gt;
&lt;li&gt;Besides, I'm trying to showcase jj here 😛&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://avasdream.engineer/merging-multiple-repositories-into-monorepo" rel="noopener noreferrer"&gt;https://avasdream.engineer/merging-multiple-repositories-into-monorepo&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Uses plumbing commands like &lt;code&gt;git read-tree&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;It &lt;em&gt;does&lt;/em&gt; work, but seems very complex, manual, and error prone&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://blog.merovius.de/posts/2022-12-08-cleanly-merge-git-repositories/" rel="noopener noreferrer"&gt;https://blog.merovius.de/posts/2022-12-08-cleanly-merge-git-repositories/&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Same as the previous entry, but seemingly even lower level in the abstraction hierarchy 🤨&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So out of 6 entries, 2 don't work at all, 2 can be made to work, 1 just might do what I need, and 1 I'm not going anywhere near.&lt;/p&gt;

&lt;p&gt;Let's switch back to the step-by-step guide on doing it with jj. A lot has been written about jj on the Internets already, and I won't be repeating all the points, but I find its CLI really intuitive and aligned with my mental model of version control, and I truly enjoy doing graph-theory shenanigans with it. This is one of them.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step by step
&lt;/h2&gt;

&lt;p&gt;There are really only three steps to do it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;fetch all repositories locally&lt;/li&gt;
&lt;li&gt;prepare one of the repositories&lt;/li&gt;
&lt;li&gt;rebase the change to the other one&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's best to try this out with an example, so I'll show you how I migrated most of my important repositories to a monorepo. I used to have separate repositories for my personal website, my personal blog, and a third repository with shared functionality, primarily CSS and cookie storage data. I now keep everything in my &lt;a href="https://github.com/InCogNiTo124/moonrepo" rel="noopener noreferrer"&gt;monorepo&lt;/a&gt; and link that shared directory as a local dependency.&lt;/p&gt;
&lt;h3&gt;
  
  
  1. Fetch all repositories locally
&lt;/h3&gt;

&lt;p&gt;You should have both the target repository and the other repository locally under different remotes. Here's how I did it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;jj git remote add personal-website git@github.com:InCogNiTo124/personal-website.git
&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;jj git remote add personal-reusables git@github.com:InCogNiTo124/personal-reusables.git
&lt;span class="go"&gt;
&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;jj git remote list
&lt;span class="go"&gt;personal-reusables git@github.com:InCogNiTo124/personal-reusables.git
personal-website git@github.com:InCogNiTo124/personal-website.git

&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;jj git fetch &lt;span class="nt"&gt;--all-remotes&lt;/span&gt;
&lt;span class="go"&gt;remote: Enumerating objects: 202, done.
remote: Total 202 (delta 64), reused 177 (delta 51), pack-reused 0 (from 0)
remote: Enumerating objects: 1565, done.
remote: Total 1565 (delta 3), reused 0 (delta 0), pack-reused 1557 (from 1)
bookmark: master@personal-reusables                       [new] untracked
bookmark: master@personal-website                         [new] untracked
[...]
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We'll be making a new change on top of two main bookmarks, &lt;code&gt;master@personal-website&lt;/code&gt; and &lt;code&gt;master@personal-reusables&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Prepare one of the repositories
&lt;/h3&gt;

&lt;p&gt;By "preparing", in this scenario I mean to move the dependency, in my case &lt;code&gt;personal-reusables&lt;/code&gt;, into a separate directory. You might have something more complex.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;jj new master@personal-reusables
&lt;span class="go"&gt;Working copy  (@) now at: kzmmukoo 96af1229 (empty) (no description set)
Parent commit (@-)      : rtltllyx 2dfe4841 master@personal-reusables | Migrate to svelte 5
Added 42 files, modified 0 files, removed 0 files

&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;lsd &lt;span class="nt"&gt;-lah&lt;/span&gt;
&lt;span class="go"&gt;drwxr-xr-x msmetko msmetko  80 B Tue Feb 17 21:08:21 2026  .jj/
drwxr-xr-x msmetko msmetko 120 B Tue Feb 17 21:25:18 2026  lib/
drwxr-xr-x msmetko msmetko 140 B Tue Feb 17 21:25:18 2026  static/
.rw-r--r-- msmetko msmetko 350 B Tue Feb 17 21:25:18 2026  README.md

&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; personal-reusables
&lt;span class="go"&gt;mkdir: created directory 'personal-reusables'

&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nb"&gt;mv&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; lib static README.md personal-reusables/
&lt;span class="gp"&gt;renamed 'lib' -&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'personal-reusables/lib'&lt;/span&gt;
&lt;span class="gp"&gt;renamed 'static' -&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'personal-reusables/static'&lt;/span&gt;
&lt;span class="gp"&gt;renamed 'README.md' -&amp;gt;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s1"&gt;'personal-reusables/README.md'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we're finally ready to merge the repository histories.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Rebase the change
&lt;/h3&gt;

&lt;p&gt;The final moment is a bit anticlimactic as it's, &lt;em&gt;literally&lt;/em&gt;, only one command&lt;sup id="fnref1"&gt;1&lt;/sup&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;jj rebase &lt;span class="nt"&gt;-r&lt;/span&gt; @ &lt;span class="nt"&gt;-o&lt;/span&gt; @- &lt;span class="nt"&gt;-o&lt;/span&gt; master@personal-website
&lt;span class="go"&gt;Rebased 1 commits to destination
Working copy  (@) now at: kzmmukoo 0f375680 (no description set)
Parent commit (@-)      : rtltllyx 2dfe4841 master@personal-reusables | Migrate to svelte 5
&lt;/span&gt;&lt;span class="gp"&gt;Parent commit (@-)      : ovxlkzxx 0e02edf2 master@personal-website | (empty) Merge pull request #&lt;/span&gt;191 from InCogNiTo124/renovate/lock-file-maintenance
&lt;span class="go"&gt;Added 32 files, modified 0 files, removed 0 files
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's unpack the command just a little so it's a bit less magic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;jj rebase&lt;/code&gt;: command that, perhaps surprisingly, rebases a change&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-r @&lt;/code&gt;: rebase &lt;em&gt;the current&lt;/em&gt; change (&lt;code&gt;kzmmukoo&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-o @-&lt;/code&gt;: rebase &lt;em&gt;on top of&lt;/em&gt; the current change's parent.

&lt;ul&gt;
&lt;li&gt;If you think about it for a bit, @'s parent is already @- so this is, in isolation, pretty much a no-op.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;-o master@personal-website&lt;/code&gt;: &lt;strong&gt;also&lt;/strong&gt; rebase &lt;em&gt;on top of&lt;/em&gt; &lt;code&gt;master@personal-website&lt;/code&gt;.

&lt;ul&gt;
&lt;li&gt;The nice thing about &lt;code&gt;-o&lt;/code&gt; is that it can be repeated! This means that all &lt;code&gt;-o&lt;/code&gt; changes will be parents of &lt;code&gt;-r&lt;/code&gt;. Equivalently, &lt;code&gt;-r&lt;/code&gt; will be the child of all &lt;code&gt;-o&lt;/code&gt; changes. Neat!&lt;sup id="fnref2"&gt;2&lt;/sup&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;The output of the command confirms that the change &lt;code&gt;kzmmukoo&lt;/code&gt; now has two parents: one from the source repository and one from the destination repository&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;And there you have it, we created a change that has histories of two repositories as parents. You can also validate that by, for example, running&lt;br&gt;
&lt;code&gt;jj log -r 'ancestors(@, 5)'&lt;/code&gt; :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;jj log &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'ancestors(@, 5)'&lt;/span&gt;
&lt;span class="go"&gt;@    kzmmukoo msmetko@msmetko.xyz 2026-02-17 21:33:24 0f375680
├─╮  (no description set)
│ ◆    ovxlkzxx msmetko@msmetko.xyz 2024-11-10 18:42:57 master@personal-website 0e02edf2
&lt;/span&gt;&lt;span class="gp"&gt;│ ├─╮  (empty) Merge pull request #&lt;/span&gt;191 from InCogNiTo124/renovate/lock-file-maintenance
&lt;span class="go"&gt;│ │ ◆  zmrszpzu 29139614+renovate[bot]@users.noreply.github.com 2024-11-10 18:42:00 renovate/lock-file-maintenance@personal-website 5d2a6891
│ ├─╯  chore(deps): lock file maintenance
│ ◆    qyooponn msmetko@msmetko.xyz 2024-11-10 18:37:00 72f4ea0f
│ ├─╮  (empty) chore(config): migrate renovate config
│ │ ◆  kvsomwuw 29139614+renovate[bot]@users.noreply.github.com 2024-11-10 18:35:29 59a0896c
│ ├─╯  chore(config): migrate config .github/renovate.json
│ ◆    xmuwpyuo msmetko@msmetko.xyz 2024-11-10 15:05:11 2de53adb
&lt;/span&gt;&lt;span class="gp"&gt;│ ├─╮  (empty) Merge pull request #&lt;/span&gt;189 from InCogNiTo124/renovate/all
&lt;span class="go"&gt;│ │ ◆  wovwqnot 29139614+renovate[bot]@users.noreply.github.com 2024-11-10 02:41:52 09b3d5e0
│ ├─╯  chore(deps): update all dependencies
│ ◆  qkulpnzw msmetko@msmetko.xyz 2024-11-03 21:18:27 53bb0af1
&lt;/span&gt;&lt;span class="gp"&gt;│ │  (empty) Merge pull request #&lt;/span&gt;188 from InCogNiTo124/renovate/major-all
&lt;span class="go"&gt;│ ~
│
◆  rtltllyx msmetko@msmetko.xyz 2024-11-10 17:50:54 master@personal-reusables 2dfe4841
│  Migrate to svelte 5
◆    xyouttxu msmetko@msmetko.xyz 2024-08-06 10:06:43 c0814896
&lt;/span&gt;&lt;span class="gp"&gt;├─╮  (empty) Merge pull request #&lt;/span&gt;1 from InCogNiTo124/width-fix
&lt;span class="go"&gt;│ ◆  tlqmunmy msmetko@msmetko.xyz 2024-08-05 22:26:48 3a2f8bd8
├─╯  Increase content width from 650 to 720 px
◆  txsmnyvw msmetko@msmetko.xyz 2023-05-28 14:05:58 7b733cde
│  Refresh images
◆  xumusqzs amalija@netgen.io 2022-10-09 08:14:27 30dbbb97
│  use new variables
~
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sorry I don't have proper syntax highlighting :) but if you squint here, you can totally see two separate chains going on and on and on.&lt;/p&gt;

&lt;p&gt;The best part is, thanks to jj, you can totally just continue editing this change and it'll still have the same parents. For example, my next step was moving all files originating from the &lt;code&gt;personal-website&lt;/code&gt; to a &lt;code&gt;personal-website&lt;/code&gt; directory, which took a minute, and then reconfiguring the code to look at the new locations, which took hundreds more :)&lt;/p&gt;




&lt;h2&gt;
  
  
  Bonus: merging a third, secret, repository
&lt;/h2&gt;

&lt;p&gt;Let's say you're feeling &lt;em&gt;really frisky&lt;/em&gt; and that you might even be in the mood for merging &lt;em&gt;three&lt;/em&gt; repositories. I did mention I had a personal blog that shared that code so it may as well be a part of the monorepo...&lt;/p&gt;

&lt;p&gt;I won't be reproducing all the steps, but let's assume we start from a state where both projects are in their own directory and we're ready to merge with the third repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;lsd &lt;span class="nt"&gt;-lah&lt;/span&gt;
&lt;span class="go"&gt;drwxr-xr-x msmetko msmetko  80 B Tue Feb 17 21:08:21 2026  .jj
drwxr-xr-x msmetko msmetko 100 B Tue Feb 17 22:15:15 2026  personal-reusables
drwxr-xr-x msmetko msmetko 360 B Tue Feb 17 22:15:15 2026  personal-website

&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;jj log
&lt;span class="go"&gt;@    kzmmukoo msmetko@msmetko.xyz 2026-02-17 22:10:23 cfaa577d
├─╮  (no description set)
│ ◆  ovxlkzxx msmetko@msmetko.xyz 2024-11-10 18:42:57 master@personal-website 0e02edf2
&lt;/span&gt;&lt;span class="gp"&gt;│ │  (empty) Merge pull request #&lt;/span&gt;191 from InCogNiTo124/renovate/lock-file-maintenance
&lt;span class="go"&gt;│ ~  (elided revisions)
◆ │  rtltllyx msmetko@msmetko.xyz 2024-11-10 17:50:54 master@personal-reusables 2dfe4841
│ │  Migrate to svelte 5
~ │  (elided revisions)
├─╯
◆  zzzzzzzz root() 00000000

&lt;/span&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;jj git remote list
&lt;span class="go"&gt;personal-blog git@github.com:InCogNiTo124/personal-blog.git
personal-reusables git@github.com:InCogNiTo124/personal-reusables.git
personal-website git@github.com:InCogNiTo124/personal-website.git
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, simply run &lt;strong&gt;the same command&lt;/strong&gt;&lt;sup id="fnref3"&gt;3&lt;/sup&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="gp"&gt;$&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;jj rebase &lt;span class="nt"&gt;-r&lt;/span&gt; @ &lt;span class="nt"&gt;-o&lt;/span&gt; @- &lt;span class="nt"&gt;-o&lt;/span&gt; master@personal-blog
&lt;span class="go"&gt;Rebased 1 commits to destination
Working copy  (@) now at: kzmmukoo 80756c82 (no description set)
&lt;/span&gt;&lt;span class="gp"&gt;Parent commit (@-)      : ovxlkzxx 0e02edf2 master@personal-website | (empty) Merge pull request #&lt;/span&gt;191 from InCogNiTo124/renovate/lock-file-maintenance
&lt;span class="go"&gt;Parent commit (@-)      : rtltllyx 2dfe4841 master@personal-reusables | Migrate to svelte 5
Parent commit (@-)      : xxkvttlr 0ad8251f master@personal-blog | fix(posts/sampling): fix the date
Added 95 files, modified 1 files, removed 0 files
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works like a charm, right out of the box 🎉&lt;/p&gt;

&lt;p&gt;I &lt;em&gt;genuinely&lt;/em&gt; don't know either 1) how I would do that with git right off the top of my head, or 2) how many tries and do-overs I would need in order to merge three unrelated git histories. If someone has a knack for self-inflicted learning opportunities, let me know if you somehow manage to do it and I'll immortalize your attempt here!&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;This works because jj has a single &lt;code&gt;root()&lt;/code&gt; commit underlying every jj repo.&lt;br&gt;
This actually means &lt;em&gt;all jj repos have the same parent&lt;/em&gt;: my repos,&lt;br&gt;
your repos, and even repos that don't exist yet. All of them share the &lt;code&gt;zzzzzzzz&lt;/code&gt;&lt;br&gt;
parent. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;The command can be made even more compact thanks to the power of&lt;br&gt;
&lt;a href="https://docs.jj-vcs.dev/latest/revsets/" rel="noopener noreferrer"&gt;revsets&lt;/a&gt;. Instead of repeating the&lt;br&gt;
&lt;code&gt;-o&lt;/code&gt; flag, we could do &lt;code&gt;jj rebase -r '@' -o '@- | master@personal-blog'&lt;/code&gt;.&lt;br&gt;
Neater! ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;I am aware that technically that is not exactly the same command, thank you ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>tech</category>
      <category>jj</category>
    </item>
  </channel>
</rss>
