<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: CARLOS ENRIQUE CASTRO LAZARO</title>
    <description>The latest articles on Forem by CARLOS ENRIQUE CASTRO LAZARO (@onceupontry).</description>
    <link>https://forem.com/onceupontry</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3883934%2F2adb262f-d81a-4a00-a20b-2a74fbb35df5.jpg</url>
      <title>Forem: CARLOS ENRIQUE CASTRO LAZARO</title>
      <link>https://forem.com/onceupontry</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/onceupontry"/>
    <language>en</language>
    <item>
      <title>green-linter: Your Project's Carbon Footprint in One Command</title>
      <dc:creator>CARLOS ENRIQUE CASTRO LAZARO</dc:creator>
      <pubDate>Sun, 19 Apr 2026 19:08:46 +0000</pubDate>
      <link>https://forem.com/onceupontry/green-linter-your-projects-carbon-footprint-in-one-command-1lig</link>
      <guid>https://forem.com/onceupontry/green-linter-your-projects-carbon-footprint-in-one-command-1lig</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for &lt;a href="https://dev.to/challenges/weekend-2026-04-16"&gt;Weekend Challenge: Earth Day Edition&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;green-linter&lt;/strong&gt; is a zero-dependency CLI tool that scans your project and tells you exactly how much computational waste you're carrying — translated into grams of CO2.&lt;/p&gt;

&lt;p&gt;No runtime analysis. No network requests. No opinions. Just measurable facts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo &lt;span class="nb"&gt;install &lt;/span&gt;green-linter
green-linter ./my-project &lt;span class="nt"&gt;--country&lt;/span&gt; USA
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/OnCeUponTry/GREEN-LINTER" rel="noopener noreferrer"&gt;OnCeUponTry/GREEN-LINTER&lt;/a&gt; | &lt;strong&gt;crates.io&lt;/strong&gt;: &lt;a href="https://crates.io/crates/green-linter" rel="noopener noreferrer"&gt;green-linter&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Every &lt;code&gt;node_modules/&lt;/code&gt; committed to a repo, every phantom dependency listed but never imported, every Docker image built on &lt;code&gt;ubuntu&lt;/code&gt; instead of &lt;code&gt;alpine&lt;/code&gt; — it all gets stored, transferred, backed up, and rebuilt across CI pipelines worldwide.&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;how much does it actually cost in carbon?&lt;/strong&gt; I couldn't find a single tool that answered this question for project structure. Tools like CodeCarbon measure runtime energy. GreenFrame measures web page loads. Eco-CI tracks pipeline energy. But nothing audits the &lt;strong&gt;static waste sitting in your repo right now&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That's the gap green-linter fills.&lt;/p&gt;




&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;green-linter walks your project tree and checks for &lt;strong&gt;12 categories of waste&lt;/strong&gt; across 4 domains:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;th&gt;Checks&lt;/th&gt;
&lt;th&gt;What It Catches&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Docker&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;13 patterns&lt;/td&gt;
&lt;td&gt;Heavy base images with size deltas, missing cache optimization, layer bloat, orphaned Dockerfiles&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Node.js&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Usage analysis&lt;/td&gt;
&lt;td&gt;Heavy deps with SAFE/PARTIAL/unused verdicts, phantom deps via peer graph, duplicate and deprecated packages&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Artifacts&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;10 dirs&lt;/td&gt;
&lt;td&gt;node_modules/, dist/, .next/, target/ committed to repo — real disk size&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Lockfiles&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2 checks&lt;/td&gt;
&lt;td&gt;Missing lockfile, conflicting lockfiles&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Every finding is a &lt;strong&gt;measurable fact with real numbers&lt;/strong&gt;. No "consider using alpine" — instead: "&lt;code&gt;ubuntu:22.04&lt;/code&gt; = 77MB. &lt;code&gt;alpine:3.19&lt;/code&gt; = 7MB. &lt;strong&gt;Delta: 70MB&lt;/strong&gt;."&lt;/p&gt;

&lt;p&gt;CO2 estimation uses peer-reviewed methodology:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Energy&lt;/strong&gt;: 0.06 kWh/GB — &lt;a href="https://doi.org/10.1111/jiec.12630" rel="noopener noreferrer"&gt;Aslan et al. 2018&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Carbon intensity&lt;/strong&gt;: per-country data from &lt;a href="https://ember-climate.org/" rel="noopener noreferrer"&gt;Ember Climate 2023&lt;/a&gt; — 209 countries, CC-BY-4.0&lt;/li&gt;
&lt;/ul&gt;
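&lt;p&gt;The arithmetic is simple enough to reproduce in a few lines. A minimal sketch of the method (function and constant names are illustrative, not green-linter's API; binary gigabytes are used here, which reproduces the demo figures below):&lt;/p&gt;

```rust
// Sketch of the estimation method, not green-linter's actual code.
// Energy per GiB moved/stored (Aslan et al. 2018) times the grid's
// carbon intensity (Ember 2023) gives grams of CO2.
const KWH_PER_GIB: f64 = 0.06;

fn co2_grams(bytes: u64, grid_g_per_kwh: f64) -> f64 {
    let gib = bytes as f64 / (1024.0 * 1024.0 * 1024.0);
    gib * KWH_PER_GIB * grid_g_per_kwh
}

fn main() {
    // 701,753,344 wasted bytes on Peru's grid (291.77 gCO2/kWh)
    // lands on the ~11.44 g figure shown in the demo output.
    println!("{:.4} gCO2", co2_grams(701_753_344, 291.77));
}
```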




&lt;h2&gt;
  
  
  Demo: Real Output
&lt;/h2&gt;

&lt;p&gt;Here's green-linter scanning a real React + NestJS project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;green-linter v0.2.0

Scanning: /home/user/my-project
Detected: Node.js project (package.json)
Country:  PER (291.77 gCO2/kWh)

Found 18 finding(s):
14 Phantom Dependency | 2 Build Artifact | 1 Heavy Dependency | 1 Lockfile Conflict

 1. [N] Heavy Dependency (package.json)
    Heavy dependency: axios (~450KB) — PARTIAL usage
    Consider native fetch (0KB). Potential savings: ~450KB
    ~ 0.0073 gCO2

 2. [N] Phantom Dependency (package.json)
    Phantom dependency: @radix-ui/react-alert-dialog
    Listed in dependencies but not imported in any source file

17. [N] Build Artifact
    node_modules/ in repository (667.4 MB)
    ~ 10.8 gCO2

--- CO2 Impact Summary ---
  Total waste measured: 669.2 MB
  Estimated CO2:       11.4413 gCO2
  Method: 0.06 kWh/GB (Aslan 2018) x grid intensity (Ember 2023)
  Like keeping a 9W LED on for ~4.4 hours
  RAGEST (669.2 MB) — Your project weighs more than my first laptop.

  This scan's footprint: ~0.002g CO2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What Makes It Different
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Peer Graph Analysis — Zero False Positives
&lt;/h3&gt;

&lt;p&gt;The hardest problem in dependency auditing: false positives. If your project lists &lt;code&gt;@radix-ui/react-popover&lt;/code&gt;, which internally requires &lt;code&gt;@radix-ui/react-portal&lt;/code&gt; as a peer dependency — is &lt;code&gt;react-portal&lt;/code&gt; a phantom dependency?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No.&lt;/strong&gt; green-linter reads every dependency's &lt;code&gt;package.json&lt;/code&gt; inside &lt;code&gt;node_modules/&lt;/code&gt; and builds a &lt;strong&gt;peer dependency graph&lt;/strong&gt;. If a listed dependency is required as a peer by any other installed package, it's not phantom. This eliminated 4 false positives in my test suite across 6 real-world projects.&lt;/p&gt;
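&lt;p&gt;The peer-graph rule reduces to a small set operation. An illustrative reconstruction (not the actual implementation), with the filesystem walk replaced by in-memory maps:&lt;/p&gt;

```rust
use std::collections::{HashMap, HashSet};

// Illustrative reconstruction: a listed dependency is "phantom" only
// if it is neither imported in source nor named as a peer dependency
// by any other installed package.
fn phantom_deps(
    listed: &HashSet<String>,
    imported: &HashSet<String>,
    peers: &HashMap<String, Vec<String>>, // pkg -> its peerDependencies
) -> Vec<String> {
    // Every package named as a peer by any installed package.
    let peer_targets: HashSet<&String> = peers.values().flatten().collect();
    listed
        .iter()
        .filter(|d| !imported.contains(*d) && !peer_targets.contains(d))
        .cloned()
        .collect()
}

fn main() {
    let listed: HashSet<String> = ["@radix-ui/react-portal", "left-pad"]
        .iter().map(|s| s.to_string()).collect();
    let imported = HashSet::new();
    let mut peers = HashMap::new();
    peers.insert(
        "@radix-ui/react-popover".to_string(),
        vec!["@radix-ui/react-portal".to_string()],
    );
    // react-portal is a peer of an installed package, so not phantom;
    // only left-pad survives the filter.
    println!("{:?}", phantom_deps(&listed, &imported, &peers));
}
```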

&lt;p&gt;The same logic extends to Docker: &lt;strong&gt;multi-stage awareness&lt;/strong&gt;. If your Dockerfile has a production stage that runs &lt;code&gt;npm install --production&lt;/code&gt;, green-linter won't flag devDependencies in that stage — because they're correctly excluded.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Result: 0 false positives across 37 findings in 6 projects tested.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  The LED Bulb Equivalent
&lt;/h3&gt;

&lt;p&gt;CO2 grams are abstract. So green-linter converts waste into something tangible: &lt;strong&gt;hours of a 9W LED bulb&lt;/strong&gt;. The clever part? This metric is &lt;strong&gt;country-independent&lt;/strong&gt; — the grid carbon intensity cancels out in the ratio &lt;code&gt;(waste_CO2 / LED_CO2)&lt;/code&gt;, because both use the same grid. It works the same in Norway (low carbon) and Poland (high carbon).&lt;/p&gt;
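&lt;p&gt;The cancellation is easy to verify on paper. A sketch with illustrative names (a 9 W bulb draws 0.009 kWh per hour):&lt;/p&gt;

```rust
// Why the LED metric is grid-independent: intensity multiplies both
// the waste's CO2 and the bulb's CO2, so it cancels in the ratio:
//   hours = (E_waste * I) / (0.009 * I) = E_waste / 0.009
fn led_hours(waste_gib: f64, kwh_per_gib: f64) -> f64 {
    (waste_gib * kwh_per_gib) / 0.009
}

fn main() {
    // The demo's 701,753,344 bytes at 0.06 kWh/GiB gives ~4.36 hours,
    // matching the scan summary, with no intensity term anywhere.
    let gib = 701_753_344.0 / (1024.0 * 1024.0 * 1024.0);
    println!("{:.3} h", led_hours(gib, 0.06));
}
```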

&lt;h3&gt;
  
  
  The RAGE Waste Scale
&lt;/h3&gt;

&lt;p&gt;Every scan ends with a waste profile — with brutally honest feedback:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Level&lt;/th&gt;
&lt;th&gt;Range&lt;/th&gt;
&lt;th&gt;Sample Message&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;😌 CHILL&lt;/td&gt;
&lt;td&gt;&amp;lt; 100 MB&lt;/td&gt;
&lt;td&gt;"Nice! You actually read the docs."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;😠 ANGRY&lt;/td&gt;
&lt;td&gt;100–500 MB&lt;/td&gt;
&lt;td&gt;"node_modules just sent a distress signal."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;💀 RAGEST&lt;/td&gt;
&lt;td&gt;&amp;gt; 500 MB&lt;/td&gt;
&lt;td&gt;"Delete node_modules. Breathe. Start over."&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;9 messages total, deterministically selected — same project always gets the same message.&lt;/p&gt;
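&lt;p&gt;One way to get that determinism (a hypothetical selector, not necessarily what green-linter does): hash a stable project attribute and index into the level's message pool:&lt;/p&gt;

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical selector: hashing something stable about the project
// means the same project always maps to the same message within its
// waste level's pool.
fn pick_message<'a>(project_path: &str, pool: &[&'a str]) -> &'a str {
    let mut h = DefaultHasher::new();
    project_path.hash(&mut h);
    pool[h.finish() as usize % pool.len()]
}

fn main() {
    let ragest_pool = [
        "Delete node_modules. Breathe. Start over.",
        "Your project weighs more than my first laptop.",
        "node_modules has achieved sentience.",
    ];
    // Same input path, same message, every run.
    println!("{}", pick_message("/home/user/my-project", &ragest_pool));
}
```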

&lt;h3&gt;
  
  
  Disk Accuracy: 99.6%
&lt;/h3&gt;

&lt;p&gt;Most tools use &lt;code&gt;metadata.len()&lt;/code&gt; (logical file size). green-linter uses &lt;code&gt;blocks() * 512&lt;/code&gt; (actual disk allocation) on Unix systems. For directories with thousands of small files (like &lt;code&gt;node_modules/&lt;/code&gt;), this closes a &lt;strong&gt;~28% gap&lt;/strong&gt; between reported and real disk usage. Verified against &lt;code&gt;du -sh&lt;/code&gt;.&lt;/p&gt;
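&lt;p&gt;The measurement itself is one std call away on Unix. A minimal sketch (&lt;code&gt;st_blocks&lt;/code&gt; is defined in 512-byte units regardless of the filesystem's block size):&lt;/p&gt;

```rust
// Unix-only sketch: logical size vs. actual allocation for one file.
// A directory scanner would sum the allocated figure across the tree.
#[cfg(unix)]
fn sizes(path: &std::path::Path) -> std::io::Result<(u64, u64)> {
    use std::os::unix::fs::MetadataExt;
    let m = std::fs::metadata(path)?;
    Ok((m.len(), m.blocks() * 512)) // (logical bytes, bytes on disk)
}

#[cfg(unix)]
fn main() {
    let p = std::env::temp_dir().join("disk_usage_demo.txt");
    std::fs::write(&p, b"hello").unwrap();
    let (logical, on_disk) = sizes(&p).unwrap();
    println!("logical {logical} B, allocated {on_disk} B");
    std::fs::remove_file(&p).unwrap();
}

#[cfg(not(unix))]
fn main() {
    eprintln!("Unix-only example");
}
```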




&lt;h2&gt;
  
  
  Built With GitHub Copilot — Pure Terminal, Zero IDE
&lt;/h2&gt;

&lt;p&gt;This entire project was built using &lt;strong&gt;GitHub Copilot CLI&lt;/strong&gt; in a Linux terminal. No VS Code. No GUI. No IDE. Just a shell, Copilot, and a conversation.&lt;/p&gt;

&lt;p&gt;Here's what that looked like in practice — real pair programming, not "AI wrote my code":&lt;/p&gt;

&lt;h3&gt;
  
  
  Structured Definition Before Code
&lt;/h3&gt;

&lt;p&gt;Before writing a single line of Rust, we went through a &lt;strong&gt;6-phase MVP definition&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;E2E scope&lt;/strong&gt;: 7 mandatory features, anything else goes to MEJORAS.md&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uniqueness search&lt;/strong&gt;: analyzed 7 existing tools — found the niche (static structure audit)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scope questionnaire&lt;/strong&gt;: decided offline-only, 2 ecosystems, CLI-only&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improvements log&lt;/strong&gt;: 12 ideas captured but excluded from MVP&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gap analysis&lt;/strong&gt;: 6 gaps identified with explicit strategies (RESOLVE / SCOPE-REDUCE / DEFER)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pre-implementation review&lt;/strong&gt;: verified architecture covers all features, no contradictions&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Only then did we start coding.&lt;/strong&gt; This structure prevented scope creep and kept the weekend timeline achievable.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Pair Programming Dynamic
&lt;/h3&gt;

&lt;p&gt;This wasn't "prompt and paste." I set quality standards; Copilot proposed implementations; I challenged the proposals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;I detected&lt;/strong&gt; that phantom dependency detection had false positives on framework packages → &lt;strong&gt;Copilot implemented&lt;/strong&gt; peer graph analysis by walking &lt;code&gt;node_modules/*/package.json&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;I noticed&lt;/strong&gt; that &lt;code&gt;metadata.len()&lt;/code&gt; underreported &lt;code&gt;node_modules/&lt;/code&gt; size by ~28% → &lt;strong&gt;Copilot found&lt;/strong&gt; &lt;code&gt;std::os::unix::fs::MetadataExt::blocks() * 512&lt;/code&gt; for disk-accurate measurement&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Copilot proposed&lt;/strong&gt; the LED bulb equivalent → &lt;strong&gt;I validated&lt;/strong&gt; that grid intensity cancels out, making it country-independent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;I required&lt;/strong&gt; zero false positives → &lt;strong&gt;Copilot built&lt;/strong&gt; multi-stage Docker awareness and the import index (O(1) lookup per dependency)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every architectural decision was debated, not delegated.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terminal Workflow
&lt;/h3&gt;

&lt;p&gt;The entire lifecycle happened in Copilot CLI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Architecture&lt;/strong&gt;: discussed and documented in DEFINICION.md (6 phases)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Implementation&lt;/strong&gt;: Rust code written, compiled, tested — all via terminal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compilation&lt;/strong&gt;: &lt;code&gt;cargo build&lt;/code&gt; via SSH to a build server, results analyzed in the same session&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing&lt;/strong&gt;: ran against 6 real projects, verified 0 false positives, benchmarked at 56ms median&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Publishing&lt;/strong&gt;: &lt;code&gt;git commit&lt;/code&gt;, &lt;code&gt;git push&lt;/code&gt;, &lt;code&gt;cargo publish&lt;/code&gt; to crates.io — from the same terminal session&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;This post&lt;/strong&gt;: drafted, reviewed, and published via Copilot CLI + dev.to API — even this writing was pair-programmed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No file was ever opened in an IDE. The git log tells the story:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;e4e10eb v0.2.0: disk size accuracy, LED bulb equivalent, RAGE waste profile
2511345 green-linter v0.1.0: static project auditor for computational waste
71a6442 Add GPL-3.0-or-later license
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three focused commits. Two published versions. One weekend.&lt;/p&gt;




&lt;h2&gt;
  
  
  JSON for CI/CD
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;green-linter &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--country&lt;/span&gt; USA &lt;span class="nt"&gt;--json&lt;/span&gt; | jq &lt;span class="s1"&gt;'.summary'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"total_findings"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;18&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"total_wasted_bytes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;701753344&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"total_co2_grams"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;11.441&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"lightbulb_hours"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;4.357&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"waste_profile"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"emoji"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"RAGEST"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"label"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"RAGEST"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Your project weighs more than my first laptop."&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Exit code 1 when waste is detected — plug it into any CI pipeline to catch bloat before it ships.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prize Categories
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Best Use of GitHub Copilot&lt;/strong&gt; — green-linter was built entirely through pair programming with GitHub Copilot CLI in a pure terminal environment. Every line of Rust, every architectural decision, every test, and every publication step — including this very post — was pair-programmed in Copilot CLI. No IDE was used at any point.&lt;/p&gt;




&lt;h2&gt;
  
  
  Technical Stats
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Build time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~8 hours (single day, from zero to published)&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Language&lt;/td&gt;
&lt;td&gt;Rust (855KB static binary)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dependencies&lt;/td&gt;
&lt;td&gt;4 (clap, colored, serde, serde_json)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scan speed&lt;/td&gt;
&lt;td&gt;56ms median (real project, 61 source files)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Countries&lt;/td&gt;
&lt;td&gt;209 (carbon data embedded, offline)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;False positives&lt;/td&gt;
&lt;td&gt;0 across 6 projects, 37 findings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Disk accuracy&lt;/td&gt;
&lt;td&gt;99.6% vs &lt;code&gt;du -sh&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scan footprint&lt;/td&gt;
&lt;td&gt;~0.002g CO2 (5,720x less than typical project waste)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;License&lt;/td&gt;
&lt;td&gt;GPL-3.0-or-later&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install&lt;/span&gt;
cargo &lt;span class="nb"&gt;install &lt;/span&gt;green-linter

&lt;span class="c"&gt;# Scan your project&lt;/span&gt;
green-linter &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--country&lt;/span&gt; USA

&lt;span class="c"&gt;# JSON output for CI&lt;/span&gt;
green-linter &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="nt"&gt;--country&lt;/span&gt; USA &lt;span class="nt"&gt;--json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your project's waste is probably 5,720x more expensive than running this scan. Find out how much.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;green-linter v0.2.0 — &lt;a href="https://github.com/OnCeUponTry/GREEN-LINTER" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; | &lt;a href="https://crates.io/crates/green-linter" rel="noopener noreferrer"&gt;crates.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>weekendchallenge</category>
      <category>rust</category>
      <category>cli</category>
    </item>
    <item>
      <title>I built nftguard: atomic nftables versioning with instant rollback</title>
      <dc:creator>CARLOS ENRIQUE CASTRO LAZARO</dc:creator>
      <pubDate>Sat, 18 Apr 2026 06:06:47 +0000</pubDate>
      <link>https://forem.com/onceupontry/i-built-nftguard-atomic-nftables-versioning-with-instant-rollback-1pn4</link>
      <guid>https://forem.com/onceupontry/i-built-nftguard-atomic-nftables-versioning-with-instant-rollback-1pn4</guid>
      <description>&lt;p&gt;I manage multiple Linux servers. Each one has its own nftables firewall config — some with 50 rules, some with 200+. And for years, my "versioning system" was a mix of &lt;code&gt;.bak&lt;/code&gt; files, commented-out lines, and the vague hope that I'd remember what changed last Tuesday.&lt;/p&gt;

&lt;p&gt;Then one night I fat-fingered a flush command and locked myself out of a production box via SSH. Recovery took 40 minutes. The fix took 10 seconds — I just needed the previous ruleset. But I didn't have it.&lt;/p&gt;

&lt;p&gt;So I built &lt;strong&gt;nftguard&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes it different
&lt;/h2&gt;

&lt;p&gt;There's nothing else like this. Seriously — I searched. There are iptables backup scripts. There are Ansible playbooks that template firewall configs. But there is &lt;strong&gt;zero tooling&lt;/strong&gt; for atomic nftables versioning with rollback. nftguard is the first.&lt;/p&gt;

&lt;p&gt;Here's what it actually does that nothing else can:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. SHA-256 fingerprinted rule tracking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every rule gets individually hashed after normalization (counters and handles stripped). This means nftguard detects &lt;em&gt;semantic&lt;/em&gt; changes — not just text diffs. If you reorder rules but the logic is identical, it knows.&lt;/p&gt;
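&lt;p&gt;The normalize-then-hash idea looks roughly like this. A sketch with std's &lt;code&gt;DefaultHasher&lt;/code&gt; standing in for SHA-256 so it runs without extra crates (nftguard itself uses SHA-256), and a deliberately simplified normalizer:&lt;/p&gt;

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Strip the volatile parts of a rule as printed by `nft list ruleset`:
// live counter values and the trailing "# handle N" comment. Two rules
// that differ only in those tokens then hash identically.
fn normalize(rule: &str) -> String {
    let rule = rule.split(" # handle").next().unwrap_or(rule);
    let mut out = Vec::new();
    let mut toks = rule.split_whitespace().peekable();
    while let Some(t) = toks.next() {
        out.push(t);
        // "packets N" / "bytes N": keep the keyword, drop the count.
        if (t == "packets" || t == "bytes")
            && toks.peek().map_or(false, |n| n.parse::<u64>().is_ok())
        {
            toks.next();
        }
    }
    out.join(" ")
}

// DefaultHasher stands in for SHA-256 here to avoid a crate dependency.
fn fingerprint(rule: &str) -> u64 {
    let mut h = DefaultHasher::new();
    normalize(rule).hash(&mut h);
    h.finish()
}

fn main() {
    let a = "tcp dport 22 counter packets 10 bytes 840 accept # handle 7";
    let b = "tcp dport 22 counter packets 99 bytes 123456 accept # handle 12";
    // Semantically identical rules, identical fingerprints.
    assert_eq!(fingerprint(a), fingerprint(b));
    println!("fingerprint: {:x}", fingerprint(a));
}
```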

&lt;p&gt;&lt;strong&gt;2. Retention guard (the "oops" detector)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your new config would delete more than 40% of existing rules, nftguard stops and asks. This catches the most common disaster: accidentally applying a minimal test config over your production ruleset.&lt;/p&gt;
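&lt;p&gt;The guard itself is just a ratio check. A sketch with illustrative names (not nftguard's API); deleting more than 40% of rules means retention falls below 60%:&lt;/p&gt;

```rust
// Refuse to proceed when the share of surviving rules drops below the
// retention threshold (more than 40% deleted = less than 60% kept).
fn retention_ok(old_rules: usize, kept_rules: usize, min_retention: f64) -> bool {
    if old_rules == 0 {
        return true; // fresh machine: nothing to protect yet
    }
    kept_rules as f64 / old_rules as f64 >= min_retention
}

fn main() {
    // A normal edit keeps well above the threshold.
    assert!(retention_ok(142, 138, 0.60));
    // A minimal test config applied over production: stop and ask.
    assert!(!retention_ok(142, 10, 0.60));
    println!("retention guard ok");
}
```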

&lt;p&gt;&lt;strong&gt;3. Selective table flush&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most tools do &lt;code&gt;nft flush ruleset&lt;/code&gt; then &lt;code&gt;nft -f config&lt;/code&gt;. That creates a window — maybe 50ms, maybe 500ms — where your firewall has zero rules. nftguard only flushes the tables that appear in your new config, preserving everything else. No gap.&lt;/p&gt;
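&lt;p&gt;The selective flush can be sketched as a pass over the incoming config (illustrative only; real nftables parsing is more involved than line matching):&lt;/p&gt;

```rust
// Collect the table declarations in the new config and flush only
// those, leaving every other table's rules live during the reload.
fn selective_flush(new_config: &str) -> Vec<String> {
    new_config
        .lines()
        .filter_map(|l| l.trim().strip_prefix("table "))
        .map(|decl| format!("flush table {}", decl.trim_end_matches('{').trim()))
        .collect()
}

fn main() {
    let cfg = "table inet filter {\n  chain input { }\n}\ntable ip nat {\n}";
    // Only `inet filter` and `ip nat` get flushed; a `table inet docker`
    // defined elsewhere on the box stays untouched.
    for stmt in selective_flush(cfg) {
        println!("{stmt}");
    }
}
```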

&lt;p&gt;&lt;strong&gt;4. Cascading boot recovery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At boot, before any network interface comes up, nftguard loads your firewall. If the latest snapshot is corrupted, it tries the previous one. Then the one before that. Then the conf file. Five layers of fallback before giving up.&lt;/p&gt;
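&lt;p&gt;The cascade reduces to "first candidate that loads wins". A sketch where the loader and paths are hypothetical:&lt;/p&gt;

```rust
// Try snapshots newest-first, then the conf file; return the first
// source that loads cleanly, or None if every layer fails.
fn first_loadable(
    candidates: &[&str],
    load: impl Fn(&str) -> Result<(), String>,
) -> Option<String> {
    candidates.iter().copied().find(|c| load(c).is_ok()).map(str::to_string)
}

fn main() {
    let order = ["snap-47.json", "snap-46.json", "snap-45.json", "/etc/nftables.conf"];
    // Simulate a corrupted latest snapshot.
    let picked = first_loadable(&order, |p| {
        if p == "snap-47.json" { Err("corrupt".into()) } else { Ok(()) }
    });
    println!("{picked:?}"); // falls back to snap-46.json
}
```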

&lt;p&gt;&lt;strong&gt;5. Transparent nft wrapper&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install it as &lt;code&gt;/usr/local/sbin/nft&lt;/code&gt; and every &lt;code&gt;nft -f&lt;/code&gt; call from any source — scripts, kube-proxy, manual — gets versioned automatically. Nothing changes in your workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it looks in practice
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Apply config with full safety net&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nftguard hot-apply /etc/nftables.conf
&lt;span class="o"&gt;[&lt;/span&gt;nftguard] Syntax OK
&lt;span class="o"&gt;[&lt;/span&gt;nftguard] 142 rules, 8 chains, 2 tables
&lt;span class="o"&gt;[&lt;/span&gt;nftguard] Diff: +3 new, &lt;span class="nt"&gt;-1&lt;/span&gt; removed, 138 unchanged
&lt;span class="o"&gt;[&lt;/span&gt;nftguard] Retention: 97.2% &lt;span class="o"&gt;(&lt;/span&gt;above 60% threshold&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;nftguard] Snapshot &lt;span class="c"&gt;#47 saved&lt;/span&gt;

&lt;span class="c"&gt;# Oh no, something broke&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nftguard rollback
&lt;span class="o"&gt;[&lt;/span&gt;nftguard] Restored snapshot &lt;span class="c"&gt;#46 (2 seconds ago)&lt;/span&gt;

&lt;span class="c"&gt;# Check what happened&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;nftguard compare 46 47
&lt;span class="o"&gt;[&lt;/span&gt;nftguard] 3 rules added, 1 rule removed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;100 snapshots in a circular buffer. Oldest rotate out automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  The technical bits
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pure Rust&lt;/strong&gt;, single binary, ~600KB stripped&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero runtime dependencies&lt;/strong&gt; — no Python, no Node, no databases&lt;/li&gt;
&lt;li&gt;Snapshots are plain JSON with full metadata (timestamp, SHA-256, chain/table info, per-rule fingerprints)&lt;/li&gt;
&lt;li&gt;Runs as a oneshot systemd service before &lt;code&gt;network-pre.target&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Apache-2.0 licensed — use it anywhere, including commercial infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Install
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cargo &lt;span class="nb"&gt;install &lt;/span&gt;nftguard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or from source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/OnCeUponTry/NFTGUARD.git
&lt;span class="nb"&gt;cd &lt;/span&gt;NFTGUARD
cargo build &lt;span class="nt"&gt;--release&lt;/span&gt;
&lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 755 target/release/nftguard /usr/local/sbin/nftguard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/OnCeUponTry/NFTGUARD" rel="noopener noreferrer"&gt;OnCeUponTry/NFTGUARD&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;crates.io&lt;/strong&gt;: &lt;a href="https://crates.io/crates/nftguard" rel="noopener noreferrer"&gt;nftguard&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you manage Linux firewalls, give it a try. I'd love to hear how it works on your setup.&lt;/p&gt;

</description>
      <category>rust</category>
      <category>linux</category>
      <category>security</category>
      <category>devops</category>
    </item>
    <item>
      <title>RAGE-QUANT: 3x Faster LLM Inference on CPU with Pure Rust Quantized GEMV</title>
      <dc:creator>CARLOS ENRIQUE CASTRO LAZARO</dc:creator>
      <pubDate>Fri, 17 Apr 2026 08:10:28 +0000</pubDate>
      <link>https://forem.com/onceupontry/rage-quant-3x-faster-llm-inference-on-cpu-with-pure-rust-quantized-gemv-1hdn</link>
      <guid>https://forem.com/onceupontry/rage-quant-3x-faster-llm-inference-on-cpu-with-pure-rust-quantized-gemv-1hdn</guid>
      <description>&lt;p&gt;&lt;strong&gt;Skip dequantization. Save 57% RAM. Get 3x faster decode. No GPU required.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Every LLM framework (llama.cpp, candle, burn) does this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GGUF quantized weights → dequantize to f32 → f32 GEMV → result
                         ^ 4x DRAM bandwidth wasted
                                             ^ 3.2 GB RAM for dense cache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;RAGE-QUANT does this instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GGUF quantized weights → quantized GEMV → result
         reads 1.06 bytes/element instead of 4 bytes = 3.76x less DRAM traffic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;No dequantization step. No f32 cache. 57% less RAM. 3x faster decode.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Real Benchmarks (not theoretical)
&lt;/h2&gt;

&lt;p&gt;Tested on &lt;strong&gt;Qwen3-0.6B-Q8_0.gguf&lt;/strong&gt; | CPU-only | AMD Ryzen 9 9900X | 12 threads&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What we measured&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Improvement&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Decode latency per token&lt;/td&gt;
&lt;td&gt;42 ms&lt;/td&gt;
&lt;td&gt;14 ms&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;3.0x faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;From naive Rust&lt;/td&gt;
&lt;td&gt;120,000 ms&lt;/td&gt;
&lt;td&gt;466 ms&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;257x faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;From sgemm baseline&lt;/td&gt;
&lt;td&gt;74,758 ms&lt;/td&gt;
&lt;td&gt;466 ms&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;160x faster&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Peak RAM usage&lt;/td&gt;
&lt;td&gt;3.2 GB&lt;/td&gt;
&lt;td&gt;1.38 GB&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;57% less&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Throughput&lt;/td&gt;
&lt;td&gt;~24 tok/s&lt;/td&gt;
&lt;td&gt;67-71 tok/s&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~3x more&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These numbers are real, measured, reproducible. See the &lt;a href="https://github.com/OnCeUponTry/RAGE-QUANT/blob/main/docs/cpu-optimizations.md" rel="noopener noreferrer"&gt;full methodology&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why is it faster?
&lt;/h2&gt;

&lt;p&gt;On modern CPUs, LLM decode (batch=1) is &lt;strong&gt;DRAM bandwidth-limited&lt;/strong&gt;, not compute-limited. By reading 1 byte (quantized) instead of 4 bytes (f32), you move 3.76x less data through the memory bus. The speedup follows directly.&lt;/p&gt;
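&lt;p&gt;The 3.76x figure falls straight out of the Q8_0 block layout, where 32 int8 weights share one f16 scale:&lt;/p&gt;

```rust
// Q8_0 block: 2-byte f16 scale + 32 signed bytes = 34 bytes / 32 weights.
fn q8_0_bytes_per_element() -> f64 {
    34.0 / 32.0 // = 1.0625, the "1.06 bytes/element" figure
}

fn main() {
    // f32 weights cost 4 bytes per element.
    let reduction = 4.0 / q8_0_bytes_per_element();
    println!("{:.2}x less DRAM traffic", reduction); // ~3.76x
}
```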

&lt;p&gt;Additionally: &lt;strong&gt;LLVM cannot auto-vectorize the i8-to-f32 widening path.&lt;/strong&gt; It tries i8→i16→i32→f32, wasting registers. Manual &lt;code&gt;vpmovsxbd&lt;/code&gt; (i8→i32 direct) via &lt;code&gt;_mm256_cvtepi8_epi32&lt;/code&gt; is required. This is why hand-written AVX2 intrinsics beat the compiler here.&lt;/p&gt;
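&lt;p&gt;A minimal illustration of that widening path (a sketch, not rage-quant's kernel; the AVX2 branch runs only where the feature is detected, with a scalar fallback everywhere else):&lt;/p&gt;

```rust
// Sketch of the widening path: on AVX2, eight i8 lanes go straight to
// i32 via vpmovsxbd and then to f32, skipping the i8->i16->i32 ladder.
fn widen8(bytes: &[i8; 8]) -> [f32; 8] {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            #[target_feature(enable = "avx2")]
            unsafe fn kernel(b: &[i8; 8]) -> [f32; 8] {
                use std::arch::x86_64::*;
                let v8 = _mm_loadl_epi64(b.as_ptr() as *const __m128i); // 8 x i8
                let v32 = _mm256_cvtepi8_epi32(v8); // vpmovsxbd: i8 -> i32 direct
                let mut out = [0.0f32; 8];
                _mm256_storeu_ps(out.as_mut_ptr(), _mm256_cvtepi32_ps(v32));
                out
            }
            return unsafe { kernel(bytes) };
        }
    }
    bytes.map(|b| b as f32) // scalar fallback for older CPUs / other arches
}

fn main() {
    println!("{:?}", widen8(&[-3, -2, -1, 0, 1, 2, 3, 4]));
}
```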




&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[dependencies]&lt;/span&gt;
&lt;span class="py"&gt;rage-quant&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"0.1"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;rage_quant&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;dot_q8_0_f32&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;dot_q8_0_f32&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;quantized_weights&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;input_vector&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;num_elements&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// Auto-detects AVX2+FMA at runtime; falls back to scalar on older CPUs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Supported formats: &lt;strong&gt;Q8_0&lt;/strong&gt;, &lt;strong&gt;Q6_K&lt;/strong&gt;, &lt;strong&gt;Q4_K&lt;/strong&gt; (GGUF-native blocks).&lt;/p&gt;
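&lt;p&gt;For reference, here is the operation those kernels implement, in scalar form. The scale is held as f32 for brevity (GGUF stores it as f16), and the names are illustrative, not the crate's API:&lt;/p&gt;

```rust
// Scalar reference of the Q8_0 dot product the SIMD kernels speed up.
struct BlockQ8_0 {
    scale: f32,   // per-block dequantization scale (f16 on disk)
    qs: [i8; 32], // 32 quantized weights
}

fn dot_q8_0_scalar(blocks: &[BlockQ8_0], x: &[f32]) -> f32 {
    let mut acc = 0.0f32;
    for (i, b) in blocks.iter().enumerate() {
        let xs = &x[i * 32..(i + 1) * 32];
        // Accumulate in the quantized domain; apply the scale once per
        // block instead of dequantizing every weight to f32 up front.
        let mut block_acc = 0.0f32;
        for (q, xv) in b.qs.iter().zip(xs) {
            block_acc += *q as f32 * xv;
        }
        acc += b.scale * block_acc;
    }
    acc
}

fn main() {
    let block = BlockQ8_0 { scale: 0.5, qs: [2; 32] };
    let x = [1.0f32; 32];
    println!("{}", dot_q8_0_scalar(&[block], &x)); // 0.5 * (32 * 2) = 32
}
```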




&lt;h2&gt;
  
  
  Why not just use llama.cpp?
&lt;/h2&gt;

&lt;p&gt;llama.cpp is excellent, but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;It is C/C++&lt;/strong&gt; — integrating into a Rust project requires unsafe FFI bindings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It is monolithic&lt;/strong&gt; — you cannot extract just the quantized dot product without pulling the entire engine&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;rage-quant is a standalone Rust crate&lt;/strong&gt; — &lt;code&gt;cargo add rage-quant&lt;/code&gt; and you have the kernels&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  CPU Optimization Findings (T1-T9)
&lt;/h2&gt;

&lt;p&gt;This crate embodies 9 validated CPU inference optimizations discovered during development:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ID&lt;/th&gt;
&lt;th&gt;What was optimized&lt;/th&gt;
&lt;th&gt;Measured result&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;T1&lt;/td&gt;
&lt;td&gt;GEMV on quantized data (skip f32)&lt;/td&gt;
&lt;td&gt;decode 42ms → 18ms = &lt;strong&gt;2.3x&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;T2&lt;/td&gt;
&lt;td&gt;Eliminate dense f32 weight caches&lt;/td&gt;
&lt;td&gt;RSS 3.2GB → 1.38GB = &lt;strong&gt;-57% RAM&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;T3&lt;/td&gt;
&lt;td&gt;AVX2 widening i8→f32 intrinsics&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;+18.8%&lt;/strong&gt; on top of T1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;T4&lt;/td&gt;
&lt;td&gt;Memory-bound diagnosis&lt;/td&gt;
&lt;td&gt;Proved DRAM is the bottleneck&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;T7&lt;/td&gt;
&lt;td&gt;GEMV vs sgemm for m=1 decode&lt;/td&gt;
&lt;td&gt;sgemm 180ms vs GEMV 18ms = &lt;strong&gt;10x&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;T8&lt;/td&gt;
&lt;td&gt;QKV fusion (decode-only path)&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;1.8x&lt;/strong&gt; per-layer QKV compute&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;T9&lt;/td&gt;
&lt;td&gt;Column-tiling for GEMM prefill&lt;/td&gt;
&lt;td&gt;5091ms → 3057ms = &lt;strong&gt;1.67x&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Hardware Requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Minimum&lt;/strong&gt;: Any x86_64 CPU (scalar fallback works everywhere)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recommended&lt;/strong&gt;: AVX2+FMA support (Intel Haswell 2013+ / AMD Zen 2017+)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tested on&lt;/strong&gt;: AMD Ryzen 9 9900X (Zen 5), DDR5, 12 threads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ARM NEON and AVX-512 support are planned.&lt;/p&gt;




&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/OnCeUponTry/RAGE-QUANT" rel="noopener noreferrer"&gt;github.com/OnCeUponTry/RAGE-QUANT&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HuggingFace&lt;/strong&gt;: &lt;a href="https://huggingface.co/TheRagestBoy/rage-quant" rel="noopener noreferrer"&gt;hf.co/TheRagestBoy/rage-quant&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Crates.io&lt;/strong&gt;: &lt;a href="https://crates.io/crates/rage-quant" rel="noopener noreferrer"&gt;crates.io/crates/rage-quant&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  License
&lt;/h2&gt;

&lt;p&gt;Dual-licensed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AGPL-3.0&lt;/strong&gt; — free for open-source, personal, and academic use&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commercial&lt;/strong&gt; — for proprietary/closed-source use (contact: &lt;a href="mailto:the@angriestboy.com"&gt;the@angriestboy.com&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Published from RAGE-QUANT v0.1.0 — pure Rust, zero dependencies, 3x faster.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>llm</category>
      <category>machinelearning</category>
      <category>performance</category>
      <category>rust</category>
    </item>
  </channel>
</rss>
