<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Roger Wang</title>
    <description>The latest articles on Forem by Roger Wang (@pigslybear).</description>
    <link>https://forem.com/pigslybear</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3864727%2F756a48b0-5acf-4104-8067-0533c32b1289.png</url>
      <title>Forem: Roger Wang</title>
      <link>https://forem.com/pigslybear</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/pigslybear"/>
    <language>en</language>
    <item>
      <title>The more AI does, the clearer the system becomes—instead of more chaotic.</title>
      <dc:creator>Roger Wang</dc:creator>
      <pubDate>Thu, 16 Apr 2026 14:24:34 +0000</pubDate>
      <link>https://forem.com/pigslybear/the-more-ai-does-the-clearer-the-system-becomes-instead-of-more-chaotic-5bfg</link>
      <guid>https://forem.com/pigslybear/the-more-ai-does-the-clearer-the-system-becomes-instead-of-more-chaotic-5bfg</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F793hmimj1ds1df0zaqlv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F793hmimj1ds1df0zaqlv.png" alt=" " width="800" height="240"&gt;&lt;/a&gt;&lt;br&gt;
Have you noticed that many AI tools look incredibly powerful, yet once you try to use them in a real project, they still feel hard to trust?&lt;/p&gt;

&lt;p&gt;Today, they can help you generate a solution draft. Tomorrow, they can write a document for you.&lt;br&gt;
But the real problem is not whether they &lt;em&gt;can&lt;/em&gt; do it. It is this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Why did the output turn out this way?&lt;/li&gt;
&lt;li&gt;What was it actually based on?&lt;/li&gt;
&lt;li&gt;Why does the whole process start to drift once requirements change or team members rotate?&lt;/li&gt;
&lt;li&gt;Why does the project move fast, yet not necessarily in the right direction, while risk keeps piling up?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is exactly the problem many teams are starting to face:&lt;br&gt;
AI has accelerated output, but governance has not kept up.&lt;/p&gt;

&lt;p&gt;That gap is what &lt;strong&gt;AxiomFlow&lt;/strong&gt; is designed to solve.&lt;/p&gt;

&lt;p&gt;It is not another attempt to repackage AI as a “better chatbot.”&lt;br&gt;
Instead, it puts AI back into the place it actually needs to occupy inside teams, workflows, and governance systems.&lt;/p&gt;

&lt;p&gt;According to the project description, AxiomFlow is positioned as a governance model for AI-assisted software delivery. Its core goal is to ensure that even under AI-accelerated execution, delivery remains &lt;strong&gt;aligned, bounded, and traceable&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That may sound abstract, but it is actually extremely practical.&lt;/p&gt;

&lt;p&gt;Because in real projects, what people fear most is never that AI is not smart enough.&lt;br&gt;
What they fear is realizing too late that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It solved the wrong problem&lt;/li&gt;
&lt;li&gt;It crossed boundaries it should never have crossed&lt;/li&gt;
&lt;li&gt;It quietly turned a one-time judgment into a long-term structure&lt;/li&gt;
&lt;li&gt;It left behind outputs, but not the reasoning behind the decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What makes AxiomFlow powerful is that it does not only talk about &lt;strong&gt;how to do things&lt;/strong&gt;.&lt;br&gt;
It starts by separating different layers of project thinking.&lt;/p&gt;

&lt;p&gt;It uses document roles to structure reasoning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;REQ&lt;/strong&gt; defines what problem should be solved&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SPEC&lt;/strong&gt; explains how it will be done&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ADR&lt;/strong&gt; explains why this architectural direction was chosen&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CONTRACT&lt;/strong&gt; clearly defines which boundaries must not be crossed&lt;/li&gt;
&lt;/ul&gt;
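
&lt;p&gt;As a rough illustration of how these four roles could sit side by side in a repository (the file names below are hypothetical examples, not taken from the AxiomFlow project itself):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;docs/
  REQ-001-user-auth.md        # what problem should be solved
  SPEC-001-user-auth.md       # how it will be done
  ADR-001-session-tokens.md   # why this architectural direction was chosen
  CONTRACT-auth-api.md        # boundaries that must not be crossed
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The point is not the naming scheme but the separation: each file answers exactly one of the four questions, so a change to "how" never silently rewrites "why" or "what."&lt;/p&gt;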

&lt;p&gt;The value of this design is significant.&lt;/p&gt;

&lt;p&gt;Because once a team mixes together &lt;strong&gt;the problem, the implementation, the rationale, and the boundaries&lt;/strong&gt;, AI will only amplify the confusion.&lt;br&gt;
But if these four layers are clearly separated, AI has a chance to become a stable execution amplifier rather than a high-speed source of chaos.&lt;/p&gt;

&lt;p&gt;From a product perspective, I would say the real value of AxiomFlow is not just the method itself.&lt;br&gt;
It is that it addresses a market gap that very few people are tackling directly:&lt;/p&gt;

&lt;p&gt;Most AI products are focused on generating faster.&lt;br&gt;
AxiomFlow is focused on what happens &lt;strong&gt;after generation&lt;/strong&gt;—how a team can still stay in control of the whole system.&lt;/p&gt;

&lt;p&gt;That is also what makes it fundamentally different from ordinary AI tools.&lt;/p&gt;

&lt;p&gt;Most tools are strong at &lt;strong&gt;giving immediate answers&lt;/strong&gt;.&lt;br&gt;
AxiomFlow is strong at &lt;strong&gt;making those answers usable inside a long-term system&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Most tools give you outputs.&lt;br&gt;
AxiomFlow cares more about whether those outputs can be explained, verified, and carried forward.&lt;/p&gt;

&lt;p&gt;There is a line in the README that captures this very well:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Turn AI agents into governable builders.”&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is almost the entire product thesis in one sentence.&lt;/p&gt;

&lt;p&gt;It does not treat AI as a black-box agent that operates freely.&lt;br&gt;
It turns AI into something that can actually be governed.&lt;/p&gt;

&lt;p&gt;What does that mean?&lt;/p&gt;

&lt;p&gt;It means the AI systems that truly succeed in the future may not be the ones that feel the most human or speak the most fluently.&lt;br&gt;
They may be the ones that can be trusted, handed over, reviewed, and evolved inside real organizations.&lt;/p&gt;

&lt;p&gt;If you are only looking for a tool to help you write a few more paragraphs, AxiomFlow may not be for you.&lt;/p&gt;

&lt;p&gt;But if you are already thinking about questions like these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How does AI enter real software delivery workflows?&lt;/li&gt;
&lt;li&gt;How can teams avoid losing control as AI accelerates execution?&lt;/li&gt;
&lt;li&gt;How can documents, decisions, architecture, and boundaries form a positive feedback loop?&lt;/li&gt;
&lt;li&gt;How does a project move from simply “working” to being governable and evolvable?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then AxiomFlow is a direction worth paying attention to.&lt;/p&gt;

&lt;p&gt;In one sentence:&lt;/p&gt;

&lt;p&gt;It is not about helping AI do more.&lt;br&gt;
It is about helping teams build a new capability:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The more AI does, the clearer the system becomes, not the more chaotic.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Project link:&lt;br&gt;
&lt;code&gt;https://github.com/pigsly/AxiomFlow&lt;/code&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>softwareengineering</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Two powerful tools, but missing a layer in between.</title>
      <dc:creator>Roger Wang</dc:creator>
      <pubDate>Sat, 11 Apr 2026 04:44:34 +0000</pubDate>
      <link>https://forem.com/pigslybear/two-powerful-tools-but-missing-a-layer-in-between-ap7</link>
      <guid>https://forem.com/pigslybear/two-powerful-tools-but-missing-a-layer-in-between-ap7</guid>
      <description>&lt;p&gt;We already have great tools.&lt;/p&gt;

&lt;p&gt;OpenAI → helps you think&lt;br&gt;
Logseq → helps you store knowledge&lt;/p&gt;

&lt;p&gt;Both are powerful.&lt;/p&gt;

&lt;p&gt;But together, they still leave a gap.&lt;/p&gt;

&lt;p&gt;AI generates reasoning.&lt;br&gt;
Notes capture results.&lt;/p&gt;

&lt;p&gt;👉 But the reasoning itself disappears.&lt;/p&gt;

&lt;p&gt;You get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;outputs from AI&lt;/li&gt;
&lt;li&gt;knowledge in your notes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But not the process that created them.&lt;/p&gt;

&lt;p&gt;And that process is the most valuable part.&lt;/p&gt;

&lt;p&gt;Because thinking is not the answer.&lt;/p&gt;

&lt;p&gt;Thinking is the path.&lt;/p&gt;

&lt;p&gt;So I started asking:&lt;/p&gt;

&lt;p&gt;What if reasoning itself could be stored?&lt;/p&gt;

&lt;p&gt;Not just the result —&lt;br&gt;
but the full thinking flow.&lt;/p&gt;

&lt;p&gt;That question led to this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/pigsly/ClawMind" rel="noopener noreferrer"&gt;https://github.com/pigsly/ClawMind&lt;/a&gt;&lt;/p&gt;

</description>
      <category>openai</category>
      <category>logseq</category>
      <category>writing</category>
      <category>productivity</category>
    </item>
    <item>
      <title>OpenAI gives you answers. Logseq stores your knowledge. ClawMind builds how you think.</title>
      <dc:creator>Roger Wang</dc:creator>
      <pubDate>Fri, 10 Apr 2026 23:47:28 +0000</pubDate>
      <link>https://forem.com/pigslybear/openai-gives-you-answers-logseq-stores-your-knowledge-clawmind-builds-how-you-think-47hh</link>
      <guid>https://forem.com/pigslybear/openai-gives-you-answers-logseq-stores-your-knowledge-clawmind-builds-how-you-think-47hh</guid>
      <description>&lt;p&gt;Most of us use AI like this:&lt;/p&gt;

&lt;p&gt;Open OpenAI&lt;br&gt;
Ask a question&lt;br&gt;
Get an answer&lt;/p&gt;

&lt;p&gt;And that’s it.&lt;/p&gt;

&lt;p&gt;It feels powerful.&lt;/p&gt;

&lt;p&gt;But there’s a hidden problem:&lt;/p&gt;

&lt;p&gt;👉 Your thinking doesn’t stay.&lt;/p&gt;

&lt;p&gt;A few days later,&lt;br&gt;
you face a similar problem —&lt;/p&gt;

&lt;p&gt;and start from zero again.&lt;/p&gt;

&lt;p&gt;Like you’ve never thought about it before.&lt;/p&gt;

&lt;p&gt;So we try to fix it with tools like Logseq.&lt;/p&gt;

&lt;p&gt;We write things down.&lt;br&gt;
We connect ideas.&lt;/p&gt;

&lt;p&gt;But something is still missing.&lt;/p&gt;

&lt;p&gt;AI helps you think.&lt;br&gt;
Notes help you remember.&lt;/p&gt;

&lt;p&gt;👉 But they don’t connect.&lt;/p&gt;

&lt;p&gt;You end up with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;answers&lt;/li&gt;
&lt;li&gt;notes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…but no thinking system.&lt;/p&gt;

&lt;p&gt;That’s the gap I wanted to solve.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/pigsly/ClawMind" rel="noopener noreferrer"&gt;https://github.com/pigsly/ClawMind&lt;/a&gt;&lt;/p&gt;

</description>
      <category>logseq</category>
      <category>codexcli</category>
      <category>openai</category>
    </item>
  </channel>
</rss>
