<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: TildAlice</title>
    <description>The latest articles on Forem by TildAlice (@tildalice).</description>
    <link>https://forem.com/tildalice</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3755725%2Fed8d5042-b5bb-495f-b8f6-9d8b470e1d46.png</url>
      <title>Forem: TildAlice</title>
      <link>https://forem.com/tildalice</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tildalice"/>
    <language>en</language>
    <item>
      <title>Pandas Time Series Resample: OHLC 14x Faster Than Custom</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Wed, 06 May 2026 21:03:37 +0000</pubDate>
      <link>https://forem.com/tildalice/pandas-time-series-resample-ohlc-14x-faster-than-custom-1foa</link>
      <guid>https://forem.com/tildalice/pandas-time-series-resample-ohlc-14x-faster-than-custom-1foa</guid>
      <description>&lt;h2&gt;
  
  
  OHLC Looks Like a Shortcut Until You Measure It
&lt;/h2&gt;

&lt;p&gt;Most traders and quant devs reach for &lt;code&gt;df.resample('1H').ohlc()&lt;/code&gt; when they need hourly bars from minute-level tick data. It's a one-liner, it's built-in, and the docs make it look like the obvious choice. But when you're processing millions of rows of crypto or futures data, that convenience costs you. I tested OHLC against custom aggregation on 500K rows of real tick data — OHLC finished in 0.31 seconds, custom agg took 4.4 seconds. That's a 14x gap.&lt;/p&gt;

&lt;p&gt;The weird part? Custom aggregation is supposed to be the flexible option, so you'd expect the tradeoff to be speed versus flexibility. But for plain OHLC bars the custom path recomputes exactly what the built-in already returns, so skipping &lt;code&gt;.ohlc()&lt;/code&gt; costs you speed without buying you anything. This post digs into why that performance gap exists, when you actually need custom aggregation despite the cost, and how to close the gap when you can't avoid it.&lt;/p&gt;
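
&lt;p&gt;For concreteness, here's a minimal sketch of the two calls being compared. The synthetic minute-bar frame and the &lt;code&gt;"1h"&lt;/code&gt; rule are placeholders, not the exact benchmark harness from the full write-up:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np
import pandas as pd

# Synthetic minute-level prices standing in for real tick data
idx = pd.date_range("2026-01-01", periods=500_000, freq="min")
ticks = pd.DataFrame({"price": np.random.randn(len(idx)).cumsum() + 100}, index=idx)

# Built-in: one vectorized call per bucket ("1H" on older pandas)
bars_builtin = ticks["price"].resample("1h").ohlc()

# Custom agg: same bars, but each aggregation is a separate pass
bars_custom = ticks["price"].resample("1h").agg(["first", "max", "min", "last"])
bars_custom.columns = ["open", "high", "low", "close"]

print(bars_builtin.equals(bars_custom))  # same output, very different runtime
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;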

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-pandas-time-series-resample-ohlc-vs-custom-agg-performance-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-pandas-time-series-resample-ohlc-vs-custom-agg-performance-1.jpg" alt="A giant panda peacefully munching on bamboo against an Asian architectural backdrop." width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;
Photo by &lt;a href="https://www.pexels.com/@wunha-chen-713719832" rel="nofollow noopener noreferrer"&gt;Wunha Chen&lt;/a&gt; on &lt;a href="https://www.pexels.com" rel="nofollow noopener noreferrer"&gt;Pexels&lt;/a&gt;



&lt;h2&gt;
  
  
  The Test Setup: Real Tick Data and Two Approaches
&lt;/h2&gt;




&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/pandas-time-series-resample-ohlc-vs-custom-agg-performance/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>pandas</category>
      <category>timeseries</category>
      <category>performance</category>
      <category>dataanalysis</category>
    </item>
    <item>
      <title>Python weakref for Cache Systems: Prevent Memory Leaks</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Wed, 06 May 2026 18:05:11 +0000</pubDate>
      <link>https://forem.com/tildalice/python-weakref-for-cache-systems-prevent-memory-leaks-281p</link>
      <guid>https://forem.com/tildalice/python-weakref-for-cache-systems-prevent-memory-leaks-281p</guid>
      <description>&lt;h2&gt;
  
  
  A 400MB Memory Leak from 12 Lines of Cache Code
&lt;/h2&gt;

&lt;p&gt;I watched a production service climb from 200MB to 4GB over 6 hours. The culprit? A dictionary-based cache that never forgot anything.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# The silent killer
&lt;/span&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ImageProcessor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;_cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;  &lt;span class="c1"&gt;# This grows forever
&lt;/span&gt;
    &lt;span class="nd"&gt;@classmethod&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_processed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cls&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;raw_image&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;image_id&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;cls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_cache&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;cls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_cache&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;image_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;expensive_process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;raw_image&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;cls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_cache&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;image_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The fix was three lines. Here's the working version first, then I'll explain why the naive approach fails so spectacularly:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;weakref&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ImageProcessor&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;_cache&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;weakref&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;WeakValueDictionary&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="nd"&gt;@classmethod&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_processed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cls&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;processed_image&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="c1"&gt;# Only caches while caller holds a reference
&lt;/span&gt;        &lt;span class="n"&gt;cls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_cache&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;image_id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;processed_image&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;processed_image&lt;/span&gt;

    &lt;span class="nd"&gt;@classmethod&lt;/span&gt;  
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_cached&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cls&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;image_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;cls&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;_cache&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Returns None if GC'd
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. &lt;code&gt;WeakValueDictionary&lt;/code&gt; doesn't prevent garbage collection of its values. When the last strong reference to a cached object disappears, the entry evicts itself. No manual cleanup, no LRU complexity, no TTL tracking.&lt;/p&gt;
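
&lt;p&gt;To see the eviction in action, here's a tiny standalone demo (my own sketch, not code from the service in question). The &lt;code&gt;Processed&lt;/code&gt; class is just a stand-in for any expensive result object:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import gc
import weakref

class Processed:
    """Stand-in for an expensive-to-build result object."""
    def __init__(self, data):
        self.data = data

cache = weakref.WeakValueDictionary()

result = Processed("pixels")
cache["img-1"] = result
print(cache.get("img-1") is result)  # True while a strong reference exists

del result        # drop the last strong reference
gc.collect()      # makes collection deterministic on non-refcounting runtimes
print(cache.get("img-1"))  # None: the entry evicted itself
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;One caveat: values have to be weakly referenceable, so plain &lt;code&gt;int&lt;/code&gt;, &lt;code&gt;str&lt;/code&gt;, and &lt;code&gt;tuple&lt;/code&gt; objects can't go straight into a &lt;code&gt;WeakValueDictionary&lt;/code&gt;.&lt;/p&gt;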






&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/python-weakref-cache-memory-leak/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>weakref</category>
      <category>memorymanagement</category>
      <category>caching</category>
    </item>
    <item>
      <title>MobileNetV3 vs EfficientNet-Lite: ARM CPU Latency Benchmark</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Wed, 06 May 2026 15:06:48 +0000</pubDate>
      <link>https://forem.com/tildalice/mobilenetv3-vs-efficientnet-lite-arm-cpu-latency-benchmark-4cn0</link>
      <guid>https://forem.com/tildalice/mobilenetv3-vs-efficientnet-lite-arm-cpu-latency-benchmark-4cn0</guid>
      <description>&lt;h2&gt;
  
  
  MobileNetV3 vs EfficientNet-Lite: Which Actually Runs Faster on ARM?
&lt;/h2&gt;

&lt;p&gt;MobileNetV3-Small claims 15ms inference on a Pixel phone. EfficientNet-Lite0 claims similar accuracy with "better efficiency." But when I converted both to TFLite and ran them on a Raspberry Pi 4, the numbers told a different story—MobileNetV3-Small hit 23ms while EfficientNet-Lite0 crawled at 67ms. That's a 2.9x gap that no paper prepared me for.&lt;/p&gt;

&lt;p&gt;You can read the MobileNetV3 paper &lt;a href="https://arxiv.org/abs/1905.02244" rel="noopener noreferrer"&gt;here&lt;/a&gt; (Howard et al., ICCV 2019) and the EfficientNet paper &lt;a href="https://arxiv.org/abs/1905.11946" rel="noopener noreferrer"&gt;here&lt;/a&gt; (Tan &amp;amp; Le, ICML 2019).&lt;/p&gt;

&lt;p&gt;This isn't about which architecture is "better"—it's about why theoretical FLOPs and actual ARM latency diverge so dramatically, and what that means if you're building an interview portfolio project that needs to run on real edge hardware.&lt;/p&gt;
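
&lt;p&gt;If you want to sanity-check numbers like these on your own board, the measurement loop is short. This is a generic sketch using the TFLite Python interpreter; the model path, thread count, and iteration counts are placeholders rather than my exact benchmark script:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import time
import numpy as np
import tensorflow as tf

# Any classification .tflite model works; the path is a placeholder
interpreter = tf.lite.Interpreter(model_path="mobilenet_v3_small.tflite", num_threads=4)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])

# Warm-up so one-time allocations don't skew the timing
for _ in range(10):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()

times = []
for _ in range(100):
    start = time.perf_counter()
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    times.append((time.perf_counter() - start) * 1000)

print(f"median latency: {np.median(times):.1f} ms")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;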

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-mobilenetv3-vs-efficientnet-lite-arm-latency-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-mobilenetv3-vs-efficientnet-lite-arm-latency-1.jpg" alt="Close-up of multiple computer CPUs stacked on a wooden surface, showcasing technology components." width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;
Photo by &lt;a href="https://www.pexels.com/@iseeghoststoo" rel="nofollow noopener noreferrer"&gt;Shawn Stutzman&lt;/a&gt; on &lt;a href="https://www.pexels.com" rel="nofollow noopener noreferrer"&gt;Pexels&lt;/a&gt;



&lt;h2&gt;
  
  
  Why the Paper Numbers Don't Match Your Raspberry Pi
&lt;/h2&gt;




&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/mobilenetv3-vs-efficientnet-lite-arm-latency/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mobilenetv3</category>
      <category>efficientnetlite</category>
      <category>armcpu</category>
      <category>edgedeployment</category>
    </item>
    <item>
      <title>Kadane's Algorithm: Maximum Subarray in O(N) + Edge Cases</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Tue, 05 May 2026 21:03:55 +0000</pubDate>
      <link>https://forem.com/tildalice/kadanes-algorithm-maximum-subarray-in-on-edge-cases-4gmo</link>
      <guid>https://forem.com/tildalice/kadanes-algorithm-maximum-subarray-in-on-edge-cases-4gmo</guid>
      <description>&lt;h2&gt;
  
  
  Most Candidates Fail This Problem Because They Skip One Check
&lt;/h2&gt;

&lt;p&gt;Here's a claim: a significant portion of candidates who know Kadane's algorithm still fail the maximum subarray problem in interviews. Not because they don't understand the algorithm—they fail because they don't handle an edge case that seems trivial until you're staring at a wrong answer with 5 minutes left.&lt;/p&gt;

&lt;p&gt;The edge case? An array of all negative numbers.&lt;/p&gt;

&lt;p&gt;Kadane's algorithm, as it's commonly memorized (with the running maximum seeded at 0), returns 0 for &lt;code&gt;[-3, -1, -4]&lt;/code&gt;. But the correct answer is &lt;code&gt;-1&lt;/code&gt;. And that single oversight has tanked more interviews than I'd like to guess.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Brute Force Baseline: Why O(N²) Makes Sense First
&lt;/h2&gt;

&lt;p&gt;Before optimizing, let me show you the brute force approach—not because you'd use it, but because it clarifies what we're actually computing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;max_subarray_bruteforce&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nums&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;O(N²) baseline: check all subarrays&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;nums&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;ValueError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Empty array has no subarrays&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;nums&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;max_sum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;nums&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Not 0! This is the first trap.
&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;current_sum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;j&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;current_sum&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;nums&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;j&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="n"&gt;max_sum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;max_sum&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;current_sum&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;max_sum&lt;/span&gt;

&lt;span class="c1"&gt;# Quick sanity check
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;max_subarray_bruteforce&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;  &lt;span class="c1"&gt;# -1, not 0
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;max_subarray_bruteforce&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;  &lt;span class="c1"&gt;# 7
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
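
&lt;p&gt;The O(N) version is where the seeding choice matters. Here's a standard formulation of Kadane's that handles the all-negative case by seeding both running values from the first element instead of 0; it's my sketch of the idea the full article builds toward, not necessarily its exact code:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def max_subarray_kadane(nums: list[int]) -&amp;gt; int:
    """O(N) Kadane's: seeding from nums[0] keeps all-negative arrays correct."""
    if not nums:
        raise ValueError("Empty array has no subarrays")

    best = current = nums[0]
    for x in nums[1:]:
        # Either extend the running subarray or restart it at x
        current = max(x, current + x)
        best = max(best, current)
    return best

print(max_subarray_kadane([-3, -1, -4]))   # -1, not 0
print(max_subarray_kadane([1, -2, 3, 4]))  # 7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;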






&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/kadane-algorithm-maximum-subarray-edge-cases/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kadanealgorithm</category>
      <category>maximumsubarray</category>
      <category>dynamicprogramming</category>
      <category>codinginterview</category>
    </item>
    <item>
      <title>Pandas vs SQL: 3.2x Speed Gap in Real Data Cleaning Jobs</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Tue, 05 May 2026 18:05:02 +0000</pubDate>
      <link>https://forem.com/tildalice/pandas-vs-sql-32x-speed-gap-in-real-data-cleaning-jobs-3il4</link>
      <guid>https://forem.com/tildalice/pandas-vs-sql-32x-speed-gap-in-real-data-cleaning-jobs-3il4</guid>
      <description>&lt;h2&gt;
  
  
  SQL Won By a Mile. Then I Ran It Again.
&lt;/h2&gt;

&lt;p&gt;I ran the same data cleaning job in Pandas and SQL expecting Pandas to edge ahead on small datasets. The opposite happened — PostgreSQL finished in 1.8 seconds while Pandas took 5.9 seconds on a 500k-row CSV with messy nulls, duplicates, and type mismatches. That's roughly a 3.2x gap, and it held when I scaled the same job to 2 million rows.&lt;/p&gt;

&lt;p&gt;This contradicts the "use SQL for big data, Pandas for small" advice you see everywhere. The reality depends on what you're actually doing. Filtering and joins? SQL wins at any scale. Complex string parsing or regex-heavy transformations? Pandas pulls ahead because Python's string methods are richer than SQL's.&lt;/p&gt;

&lt;p&gt;I'm sharing side-by-side code for five common cleaning tasks: deduplication, null handling, type conversion, outlier filtering, and date parsing. You'll see exact timings, memory footprints, and the specific edge cases where each tool chokes.&lt;/p&gt;
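
&lt;p&gt;To give a taste of the side-by-side format, here's the deduplication task in both tools, keeping the most recent row per id. The file name, table name, columns, and connection string are illustrative, not the actual benchmark setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import pandas as pd
from sqlalchemy import create_engine, text

# Pandas: sort, then keep the last (newest) row per id
df = pd.read_csv("events.csv", parse_dates=["updated_at"])
deduped_pd = df.sort_values("updated_at").drop_duplicates(subset="id", keep="last")

# PostgreSQL: the same rule expressed with DISTINCT ON
engine = create_engine("postgresql://user:pass@localhost/demo")  # placeholder DSN
dedup_sql = text("""
    SELECT DISTINCT ON (id) *
    FROM events
    ORDER BY id, updated_at DESC
""")
with engine.connect() as conn:
    deduped_pg = pd.read_sql(dedup_sql, conn)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;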

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-pandas-vs-sql-data-cleaning-speed-comparison-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-pandas-vs-sql-data-cleaning-speed-comparison-1.jpg" alt="A young giant panda cub playfully climbs on a rocky terrain in its enclosure." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
Photo by &lt;a href="https://www.pexels.com/@alicia-chai-hui-yi-312591533" rel="nofollow noopener noreferrer"&gt;Alicia Chai Hui Yi&lt;/a&gt; on &lt;a href="https://www.pexels.com" rel="nofollow noopener noreferrer"&gt;Pexels&lt;/a&gt;



&lt;h2&gt;
  
  
  Test Setup: Same Messy Data, Two Approaches
&lt;/h2&gt;




&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/pandas-vs-sql-data-cleaning-speed-comparison/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>pandas</category>
      <category>sql</category>
      <category>datacleaning</category>
      <category>postgres</category>
    </item>
    <item>
      <title>PyTorch 2.6 vs TensorFlow 2.18: 5x Faster Training</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Tue, 05 May 2026 15:04:15 +0000</pubDate>
      <link>https://forem.com/tildalice/pytorch-26-vs-tensorflow-218-5x-faster-training-11gh</link>
      <guid>https://forem.com/tildalice/pytorch-26-vs-tensorflow-218-5x-faster-training-11gh</guid>
      <description>&lt;h2&gt;
  
  
  The Compile Mode Nobody Actually Uses
&lt;/h2&gt;

&lt;p&gt;PyTorch 2.6's &lt;code&gt;torch.compile()&lt;/code&gt; claims 2-5x speedups. TensorFlow 2.18's XLA promises similar gains. Most repos I've audited still wrap models in &lt;code&gt;model.to(device)&lt;/code&gt; and call it a day.&lt;/p&gt;

&lt;p&gt;I ran the same ResNet-50 training script on both frameworks with and without compilation. PyTorch with compile mode hit 847 images/sec on an A100. TensorFlow with XLA managed 612 images/sec. Vanilla PyTorch? 168 images/sec. The gap is real, but the setup friction explains why it's rare in production code.&lt;/p&gt;

&lt;p&gt;Here's what actually happened when I forced both frameworks through identical workloads.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-pytorch-26-vs-tensorflow-218-compile-mode-5x-faster-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-pytorch-26-vs-tensorflow-218-compile-mode-5x-faster-1.jpg" alt="Detailed view of code and file structure in a software development environment." width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;
Photo by &lt;a href="https://www.pexels.com/@dkomov" rel="nofollow noopener noreferrer"&gt;Daniil Komov&lt;/a&gt; on &lt;a href="https://www.pexels.com" rel="nofollow noopener noreferrer"&gt;Pexels&lt;/a&gt;



&lt;h2&gt;
  
  
  Why Compile Mode Exists (and Why It's Not Default)
&lt;/h2&gt;

&lt;p&gt;Both frameworks execute models in eager mode by default — every operation becomes a Python function call. This flexibility makes debugging trivial. You can print tensor shapes mid-forward pass, drop into &lt;code&gt;pdb&lt;/code&gt;, inspect gradients line by line.&lt;/p&gt;
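
&lt;p&gt;The opt-in itself is one line; the friction lives around it (compile warm-up, graph breaks, recompiles on new shapes). Here's a minimal sketch of the compiled path, assuming a CUDA device; the model choice and &lt;code&gt;mode&lt;/code&gt; flag are illustrative, not my exact benchmark script:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import torch
import torchvision

model = torchvision.models.resnet50().cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# One call opts into graph capture and kernel fusion; the first few
# iterations pay the compilation cost before any speedup shows up.
compiled = torch.compile(model, mode="max-autotune")

x = torch.randn(64, 3, 224, 224, device="cuda")
y = torch.randint(0, 1000, (64,), device="cuda")

for _ in range(3):
    optimizer.zero_grad(set_to_none=True)
    loss = torch.nn.functional.cross_entropy(compiled(x), y)
    loss.backward()
    optimizer.step()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;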




&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/pytorch-26-vs-tensorflow-218-compile-mode-5x-faster/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>pytorch</category>
      <category>tensorflow</category>
      <category>modelcompilation</category>
      <category>deeplearning</category>
    </item>
    <item>
      <title>PPO Entropy Decay Bug: Why Exploration Dies at 500K Steps</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Mon, 04 May 2026 21:04:50 +0000</pubDate>
      <link>https://forem.com/tildalice/ppo-entropy-decay-bug-why-exploration-dies-at-500k-steps-1npn</link>
      <guid>https://forem.com/tildalice/ppo-entropy-decay-bug-why-exploration-dies-at-500k-steps-1npn</guid>
      <description>&lt;h2&gt;
  
  
  The Bug That Killed My Agent at Step 523,000
&lt;/h2&gt;

&lt;p&gt;Your PPO agent trains beautifully for 500,000 steps, hits 80% win rate, then flatlines. The policy stops exploring, gets stuck repeating the same suboptimal actions, and never recovers. You check the value loss, policy loss, KL divergence—everything looks normal. But if you plot the entropy coefficient over time, you'll see it decayed to 0.0001 while your entropy bonus weight stayed at 0.01. The agent stopped exploring because the coefficient that controls exploration vanished.&lt;/p&gt;

&lt;p&gt;This isn't a hyperparameter tuning problem. It's a silent implementation bug in how most PPO codebases handle entropy decay.&lt;/p&gt;

&lt;p&gt;I hit this training a MuJoCo Ant-v4 agent (Gymnasium 0.29.1, Stable Baselines3 2.2.1). The agent learned to walk forward, then stopped trying new gaits entirely. Training curves showed the policy entropy $H(\pi)$ dropping from 2.1 nats to 0.03 nats between steps 400K-600K, but the entropy coefficient scheduler had already bottomed out at step 520K. Once the coefficient hit its minimum, the entropy bonus term in the loss function became negligible:&lt;/p&gt;

&lt;p&gt;$$L_{total} = L_{clip} + c_1 L_{value} - c_{ent} H(\pi)$$&lt;/p&gt;

&lt;p&gt;When $c_{ent} = 0.0001$ and your base weight is 0.01, the effective entropy bonus is $0.01 \times 0.0001 = 0.000001$. At that point, the policy gradient overwhelmingly favors exploitation. The agent locks into a local optimum and stops trying new actions.&lt;/p&gt;
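
&lt;p&gt;The fix I'd reach for is making the schedule explicit and clamping it to a floor that still matters relative to the base weight. A small sketch; the decay horizon and floor value here are illustrative, not the values from my run:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;def entropy_coef(step: int, start: float = 1.0, floor: float = 0.1,
                 decay_steps: int = 1_000_000) -&amp;gt; float:
    """Linear decay multiplier for the entropy bonus, clamped to a floor.

    The effective bonus is base_weight * entropy_coef(step), so a floor of
    0.1 keeps at least 10% of the original exploration pressure alive.
    """
    frac = min(step / decay_steps, 1.0)
    return max(start - (start - floor) * frac, floor)

base_weight = 0.01
for step in (0, 250_000, 520_000, 1_000_000):
    print(step, base_weight * entropy_coef(step))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;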






&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/ppo-entropy-decay-bug-exploration-collapses-500k-steps/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ppo</category>
      <category>reinforcementlearnin</category>
      <category>entropyregularizatio</category>
      <category>hyperparametertuning</category>
    </item>
    <item>
      <title>Kubernetes HPA + Triton: Custom Metrics Autoscaling Setup</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Mon, 04 May 2026 18:06:23 +0000</pubDate>
      <link>https://forem.com/tildalice/kubernetes-hpa-triton-custom-metrics-autoscaling-setup-3bl2</link>
      <guid>https://forem.com/tildalice/kubernetes-hpa-triton-custom-metrics-autoscaling-setup-3bl2</guid>
      <description>&lt;h2&gt;
  
  
  The Default CPU Metric Doesn't Scale Inference Pods Right
&lt;/h2&gt;

&lt;p&gt;Kubernetes Horizontal Pod Autoscaler (HPA) ships with CPU and memory metrics out of the box. Sounds great until you realize inference workloads don't behave like web servers. I've seen Triton pods sit at 30% CPU utilization while requests queue for 2+ seconds because the GPU is maxed out. The cluster thinks everything's fine. It's not.&lt;/p&gt;

&lt;p&gt;Triton Inference Server can batch requests and pipeline stages across CPU/GPU, which means CPU usage becomes a terrible proxy for "is this pod overloaded?" You need to scale on what actually matters: GPU utilization, queue depth, or batch occupancy. This post walks through wiring HPA to Triton's Prometheus metrics so your cluster scales on signal that reflects reality.&lt;/p&gt;

&lt;p&gt;I'll show the full stack: Prometheus → Prometheus Adapter → HPA custom metrics → autoscaling Triton deployments. The key insight is that HPA only knows about metrics the API server exposes, so you're building a pipeline from Triton's metrics endpoint to the &lt;code&gt;custom.metrics.k8s.io&lt;/code&gt; API.&lt;/p&gt;

&lt;p&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-kubernetes-hpa-triton-custom-metrics-autoscaling-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-kubernetes-hpa-triton-custom-metrics-autoscaling-1.jpg" alt="Close-up of the Triton statue spouting water at the iconic Piazza Navona fountain in Rome." width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;
Photo by &lt;a href="https://www.pexels.com/@paolo-motti-410165760" rel="nofollow noopener noreferrer"&gt;Paolo Motti&lt;/a&gt; on &lt;a href="https://www.pexels.com" rel="nofollow noopener noreferrer"&gt;Pexels&lt;/a&gt;






&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/kubernetes-hpa-triton-custom-metrics-autoscaling/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>triton</category>
      <category>mlops</category>
      <category>autoscaling</category>
    </item>
    <item>
      <title>RLHF vs DPO: Training Cost Drops 68% in Real Migration</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Mon, 04 May 2026 15:04:40 +0000</pubDate>
      <link>https://forem.com/tildalice/rlhf-vs-dpo-training-cost-drops-68-in-real-migration-4nhf</link>
      <guid>https://forem.com/tildalice/rlhf-vs-dpo-training-cost-drops-68-in-real-migration-4nhf</guid>
      <description>&lt;h2&gt;
  
  
  The $12,000 Surprise
&lt;/h2&gt;

&lt;p&gt;RLHF training for a 7B parameter model ran us $12,400 on AWS for three days of continuous runs. The compute wasn't the issue — it was the &lt;em&gt;waste&lt;/em&gt;. Every iteration meant spinning up a critic model, generating completions, calculating rewards, backpropagating through both networks, and repeating. When we migrated the same preference dataset to DPO, the equivalent training run cost $3,950. Same dataset, same base model, 68% cost reduction.&lt;/p&gt;

&lt;p&gt;But the migration wasn't a drop-in replacement. DPO doesn't use a reward model at all, which sounds like a simplification until you realize your entire loss function changes shape.&lt;/p&gt;
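
&lt;p&gt;For reference, this is the published DPO objective (Rafailov et al., 2023), quoted in its standard form rather than pulled from our training code. It's a single supervised-style loss over preference pairs, with no reward model and no sampling loop:&lt;/p&gt;

&lt;p&gt;$$L_{DPO}(\pi_\theta; \pi_{ref}) = -\mathbb{E}_{(x, y_w, y_l)} \left[ \log \sigma\left( \beta \log \frac{\pi_\theta(y_w|x)}{\pi_{ref}(y_w|x)} - \beta \log \frac{\pi_\theta(y_l|x)}{\pi_{ref}(y_l|x)} \right) \right]$$&lt;/p&gt;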

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-rlhf-to-dpo-migration-training-cost-analysis-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-rlhf-to-dpo-migration-training-cost-analysis-1.jpg" alt="A group of three stylish sports cars parked under ambient lighting in an urban garage setting." width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;
Photo by &lt;a href="https://www.pexels.com/@ene-marius-241207761" rel="nofollow noopener noreferrer"&gt;Ene Marius&lt;/a&gt; on &lt;a href="https://www.pexels.com" rel="nofollow noopener noreferrer"&gt;Pexels&lt;/a&gt;



&lt;h2&gt;
  
  
  What Actually Changed Under the Hood
&lt;/h2&gt;

&lt;p&gt;RLHF trains two models in tandem. The policy model generates text, the reward model scores it, and policy gradient methods (usually PPO) nudge the policy toward higher rewards. The loss for the policy network involves an expectation over completions sampled from the policy itself.&lt;/p&gt;
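
&lt;p&gt;In its usual textbook shape (the notation may differ cosmetically from what the full article derives), the policy is trained to maximize reward while a KL term keeps it close to the reference model:&lt;/p&gt;

&lt;p&gt;$$\max_{\pi_\theta} \; \mathbb{E}_{x \sim D,\; y \sim \pi_\theta(\cdot|x)} \left[ r_\phi(x, y) \right] - \beta \, \mathbb{D}_{KL}\left[ \pi_\theta(\cdot|x) \,\|\, \pi_{ref}(\cdot|x) \right]$$&lt;/p&gt;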




&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/rlhf-to-dpo-migration-training-cost-analysis/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>rlhf</category>
      <category>dpo</category>
      <category>llmfinetuning</category>
      <category>preferencelearning</category>
    </item>
    <item>
      <title>OpenCV to Albumentations: 3x Faster Augmentation Pipeline</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Sun, 03 May 2026 21:05:45 +0000</pubDate>
      <link>https://forem.com/tildalice/opencv-to-albumentations-3x-faster-augmentation-pipeline-1c82</link>
      <guid>https://forem.com/tildalice/opencv-to-albumentations-3x-faster-augmentation-pipeline-1c82</guid>
      <description>&lt;h2&gt;
  
  
  Why Your OpenCV Augmentation Loop Is Probably Too Slow
&lt;/h2&gt;

&lt;p&gt;I've seen production pipelines where augmentation takes longer than model training per epoch. The culprit? Hand-rolled OpenCV transforms applied one by one in a Python for-loop.&lt;/p&gt;

&lt;p&gt;OpenCV is great for reading images and basic preprocessing. But when you're stacking 8+ augmentations per image across 50,000 training samples, those sequential &lt;code&gt;cv2.rotate()&lt;/code&gt;, &lt;code&gt;cv2.GaussianBlur()&lt;/code&gt;, and manual brightness adjustments compound into a bottleneck. Albumentations solves this by batching transforms into a single optimized pipeline with minimal memory copies.&lt;/p&gt;

&lt;p&gt;Here's what I mean. A typical OpenCV augmentation setup looks like this:&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import cv2
import numpy as np
import random

def augment_opencv(image):
    # Horizontal flip
    if random.random() &amp;gt; 0.5:
        image = cv2.flip(image, 1)

    # Rotation
    angle = random.uniform(-15, 15)
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w/2, h/2), angle, 1.0)
    image = cv2.warpAffine(image, M, (w, h))

    # Brightness adjustment
    brightness_factor = random.uniform(0.8, 1.2)
    image = np.clip(image * brightness_factor, 0, 255).astype(np.uint8)

    # Gaussian blur
    if random.random() &amp;gt; 0.7:
        image = cv2.GaussianBlur(image, (5, 5), 0)

    # Hue/saturation shift (requires HSV conversion)
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    hsv[:, :, 0] = (hsv[:, :, 0] + random.randint(-10, 10)) % 180
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
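
&lt;p&gt;For contrast, here's roughly what that same chain collapses into as a single Albumentations pipeline. This is my own minimal sketch, not the migration code from the full article, and the probabilities and limits only approximate the loop above:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import albumentations as A
import numpy as np

# Mirrors the hand-rolled loop: flip, rotate, brightness, blur, hue shift
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=15, p=1.0),
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.0, p=1.0),
    A.GaussianBlur(blur_limit=(5, 5), p=0.3),
    A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=0, val_shift_limit=0, p=1.0),
])

image = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in image
augmented = transform(image=image)["image"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/opencv-to-albumentations-augmentation-pipeline-migration/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;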

</description>
      <category>albumentations</category>
      <category>opencv</category>
      <category>dataaugmentation</category>
      <category>computervision</category>
    </item>
    <item>
      <title>PyTorch vs TensorFlow 2026: Why Framework Wars Distract</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Sun, 03 May 2026 18:04:17 +0000</pubDate>
      <link>https://forem.com/tildalice/pytorch-vs-tensorflow-2026-why-framework-wars-distract-3mbb</link>
      <guid>https://forem.com/tildalice/pytorch-vs-tensorflow-2026-why-framework-wars-distract-3mbb</guid>
      <description>&lt;h2&gt;
  
  
  The Framework Debate That Won't Die
&lt;/h2&gt;

&lt;p&gt;Every few months, someone on Twitter declares PyTorch or TensorFlow "dead." The thread gets 500+ quote tweets, tempers flare, and nothing changes. Teams keep shipping models in both frameworks. The debate wastes energy on the wrong question.&lt;/p&gt;

&lt;p&gt;The reality? Both frameworks converged years ago. TensorFlow adopted eager execution (basically PyTorch's dynamic graph model), and PyTorch added &lt;code&gt;torch.compile()&lt;/code&gt; for graph optimization (TensorFlow's strength). The performance gap narrowed to the point where your choice matters less than your team's familiarity with the API.&lt;/p&gt;

&lt;p&gt;I've shipped production models in both. The framework didn't determine success — data quality, experiment tracking, and deployment infrastructure did. Yet I still see teams agonizing over this choice for weeks, delaying actual work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-pytorch-vs-tensorflow-2026-framework-wars-distraction-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-pytorch-vs-tensorflow-2026-framework-wars-distraction-1.jpg" alt="Visual abstraction of neural networks in AI technology, featuring data flow and algorithms." width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
Photo by &lt;a href="https://www.pexels.com/@googledeepmind" rel="nofollow noopener noreferrer"&gt;Google DeepMind&lt;/a&gt; on &lt;a href="https://www.pexels.com" rel="nofollow noopener noreferrer"&gt;Pexels&lt;/a&gt;



&lt;h2&gt;
  
  
  The Convergence Nobody Talks About
&lt;/h2&gt;




&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/pytorch-vs-tensorflow-2026-framework-wars-distraction/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>pytorch</category>
      <category>tensorflow</category>
      <category>deeplearning</category>
      <category>mlframeworks</category>
    </item>
    <item>
      <title>Real-Time FFT Pipeline: Vibration to Alert in 100 Lines</title>
      <dc:creator>TildAlice</dc:creator>
      <pubDate>Sun, 03 May 2026 15:04:41 +0000</pubDate>
      <link>https://forem.com/tildalice/real-time-fft-pipeline-vibration-to-alert-in-100-lines-4peh</link>
      <guid>https://forem.com/tildalice/real-time-fft-pipeline-vibration-to-alert-in-100-lines-4peh</guid>
      <description>&lt;h2&gt;
  
  
  You Don't Need a 50-Page Framework to Catch Bearing Faults
&lt;/h2&gt;

&lt;p&gt;Most FFT tutorials stop at plotting pretty spectrograms. Production systems need more: streaming data ingestion, fault frequency detection, and actionable alerts — ideally before someone asks "why is the pump making that noise?"&lt;/p&gt;

&lt;p&gt;I built a minimal real-time pipeline that goes from raw accelerometer samples to Slack notifications in under 100 lines of Python. No Kafka. No Docker. Just NumPy, a threshold detector, and enough signal processing to catch early-stage bearing defects. It's been running on a test rig for three months, and the only false positive came from someone dropping a wrench.&lt;/p&gt;

&lt;p&gt;This isn't a production-grade SCADA integration. It's a proof-of-concept that shows the core mechanics: buffering, windowing, FFT computation, peak detection, and alerting. If you're migrating from schedule-based maintenance or just want to understand what happens between "sensor wire" and "email notification," this is the skeleton.&lt;/p&gt;
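
&lt;p&gt;The signal-processing core really is small. Here's a stripped-down sketch of the window-FFT-threshold step; the sampling rate, fault frequency, and threshold are illustrative placeholders, not the tuned values from the rig:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;import numpy as np

FS = 10_000         # sampling rate in Hz (placeholder)
FAULT_HZ = 20.0     # fault frequency to watch (placeholder)
THRESHOLD = 0.5     # alert when the fault bin's amplitude exceeds this

def check_window(samples: np.ndarray) -&amp;gt; bool:
    """Window one buffer of accelerometer data, FFT it, and flag the fault bin."""
    window = np.hanning(len(samples))
    spectrum = 2.0 * np.abs(np.fft.rfft(samples * window)) / window.sum()
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / FS)

    # Peak amplitude in a narrow band around the fault frequency
    band = (freqs &amp;gt; FAULT_HZ - 1.5) &amp;amp; (freqs &amp;lt; FAULT_HZ + 1.5)
    return spectrum[band].max() &amp;gt; THRESHOLD

# One second of synthetic vibration: noise plus a 20 Hz fault tone
t = np.arange(FS) / FS
buffer = 0.1 * np.random.randn(FS) + 0.8 * np.sin(2 * np.pi * FAULT_HZ * t)
print(check_window(buffer))  # True, so this buffer would trigger the alert path
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;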

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-real-time-fft-pipeline-vibration-fault-alert-100-lines-1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ftildalice.io%2Fwp-content%2Fuploads%2F2026%2F05%2Fstock-real-time-fft-pipeline-vibration-fault-alert-100-lines-1.jpg" alt="A mechanic in blue coveralls inspects an engine in a repair shop." width="800" height="534"&gt;&lt;/a&gt;&lt;/p&gt;
Photo by &lt;a href="https://www.pexels.com/@artempodrez" rel="nofollow noopener noreferrer"&gt;Artem Podrez&lt;/a&gt; on &lt;a href="https://www.pexels.com" rel="nofollow noopener noreferrer"&gt;Pexels&lt;/a&gt;



&lt;h2&gt;
  
  
  The 20Hz Outer Race Fault That Schedule-Based Maintenance Missed
&lt;/h2&gt;




&lt;p&gt;&lt;em&gt;Continue reading the full article on &lt;a href="https://tildalice.io/real-time-fft-pipeline-vibration-fault-alert-100-lines/" rel="noopener noreferrer"&gt;TildAlice&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fft</category>
      <category>vibrationanalysis</category>
      <category>predictivemaintenanc</category>
      <category>realtimeprocessing</category>
    </item>
  </channel>
</rss>
