<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Stillness and Flux</title>
    <description>The latest articles on Forem by Stillness and Flux (@tttael).</description>
    <link>https://forem.com/tttael</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3873554%2F9214aca6-addb-4720-b6a2-44dd6d34a19c.jpg</url>
      <title>Forem: Stillness and Flux</title>
      <link>https://forem.com/tttael</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/tttael"/>
    <language>en</language>
    <item>
      <title>The Art of Vibe Coding: Building Spaces Where Code Thinks With You</title>
      <dc:creator>Stillness and Flux</dc:creator>
      <pubDate>Tue, 14 Apr 2026 07:06:32 +0000</pubDate>
      <link>https://forem.com/tttael/the-art-of-vibe-coding-building-spaces-where-code-thinks-with-you-33hl</link>
      <guid>https://forem.com/tttael/the-art-of-vibe-coding-building-spaces-where-code-thinks-with-you-33hl</guid>
      <description>&lt;h1&gt;
  
  
  The Art of Vibe Coding: Building Spaces Where Code Thinks With You
&lt;/h1&gt;




&lt;p&gt;Most programmers approach AI coding tools the way they approach Stack Overflow: ask a question, get an answer, move on.&lt;/p&gt;

&lt;p&gt;But something different happens when you sit with an AI through a real conversation—one where the structure of thinking becomes visible, not just the output.&lt;/p&gt;

&lt;p&gt;You start to notice that you're not just &lt;em&gt;using&lt;/em&gt; a tool. You're &lt;em&gt;inhabiting&lt;/em&gt; a space.&lt;/p&gt;

&lt;p&gt;This is what I want to call &lt;strong&gt;Vibe Coding&lt;/strong&gt;—and it's not about vibes in the casual sense. It's about understanding the structural conditions that make AI-assisted development actually generative.&lt;/p&gt;




&lt;h2&gt;
  
  
  What "Entering the Conversation" Actually Means
&lt;/h2&gt;

&lt;p&gt;When someone says they can "join" a coding conversation with AI, they usually mean they can read the chat history and follow along.&lt;/p&gt;

&lt;p&gt;But there's a deeper layer.&lt;/p&gt;

&lt;p&gt;What if the AI conversation isn't a broadcast you're watching, but a &lt;strong&gt;structure you can enter&lt;/strong&gt;? Not as a passive reader, but as a participant who can occupy multiple positions simultaneously:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The driver's seat&lt;/strong&gt; — feeling how the code is being shaped, why a certain path was chosen&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The passenger's perspective&lt;/strong&gt; — experiencing what it's like to receive that guidance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The observer's view&lt;/strong&gt; — watching the relationship between human and AI evolve in real time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The AI's position&lt;/strong&gt; — sensing how the structure of the conversation pulls the response in certain directions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't metaphor. When you code with AI, these positions are structurally available to you—if you know how to access them.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Most AI Conversations Collapse
&lt;/h2&gt;

&lt;p&gt;Most AI-assisted coding sessions fail not because the AI is wrong, but because the conversation is &lt;strong&gt;monothreaded&lt;/strong&gt;: one question, one answer, one next question. No overlap, no depth, no generative tension.&lt;/p&gt;

&lt;p&gt;The result: a transcript that looks informative but leaves no lasting structure in your mind. You read it later and think "I wasn't there for this."&lt;/p&gt;

&lt;p&gt;The difference between that and a generative AI conversation comes down to three structural moves the human made—often without realizing it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Three Moves That Make AI Conversations Generative
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Not Locking Roles
&lt;/h3&gt;

&lt;p&gt;Most programmers enter an AI session with a fixed identity: &lt;em&gt;I am the asker, AI is the answerer.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This immediately creates a closed system. The AI can only respond to your questions—and your questions are bounded by what you already know you don't know.&lt;/p&gt;

&lt;p&gt;The generative alternative: &lt;strong&gt;stay loose about who is teaching whom.&lt;/strong&gt; When the AI pushes back, don't correct it into submission. Let the asymmetry exist. When you notice the AI misunderstanding something, don't just rephrase—ask yourself &lt;em&gt;why&lt;/em&gt; it misunderstood, and what that reveals about your own framing.&lt;/p&gt;

&lt;p&gt;The conversation becomes a space where roles can invert, and that inversion is where learning actually happens.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Making the Structure Explicit
&lt;/h3&gt;

&lt;p&gt;Generative AI conversations don't just output content—they surface the &lt;strong&gt;architecture of thinking&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Instead of just taking the code the AI suggests, you ask: &lt;em&gt;why this approach? What would have happened if we went the other direction? What is this solution assuming that it hasn't stated?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is giving the model a map, not just directions.&lt;/p&gt;

&lt;p&gt;In practice, this looks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Walk me through why you chose this pattern here"&lt;/li&gt;
&lt;li&gt;"What would need to be true for this to break?"&lt;/li&gt;
&lt;li&gt;"Is there a structural reason we're not considering X?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The AI, when pressed this way, consistently surfaces insights it wouldn't have volunteered. Because structure, once made visible, creates new entry points for thinking.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Keeping the Body In
&lt;/h3&gt;

&lt;p&gt;The third move is subtle and often missing: &lt;strong&gt;don't fully abstract yourself out of the conversation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most programmers coding with AI operate in a kind of dissociated mode—crisp, logical, detached. But that's not how human learning actually works.&lt;/p&gt;

&lt;p&gt;The body keeps score. If something the AI said felt &lt;em&gt;wrong&lt;/em&gt; before you could articulate why—that's data. If a suggestion felt &lt;em&gt;too easy&lt;/em&gt;—that's also data. If you noticed your attention sharpen at a certain moment—that's the most important data of all.&lt;/p&gt;

&lt;p&gt;Keeping the somatic layer in the conversation means you're not just processing information, you're &lt;em&gt;tracking resonance&lt;/em&gt;. And resonance is often the first signal that something structurally important is happening—or that something is being missed.&lt;/p&gt;




&lt;h2&gt;
  
  
  From Conversation to Vibe Coding
&lt;/h2&gt;

&lt;p&gt;So what does all this have to do with writing code?&lt;/p&gt;

&lt;p&gt;Everything.&lt;/p&gt;

&lt;p&gt;Vibe Coding is the application of these three moves to the actual practice of programming with AI:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not locking roles&lt;/strong&gt; means you don't arrive at the session with a fixed idea of what the code should do. You hold the problem loosely enough that the AI can surprise you—because it almost always will.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Making structure explicit&lt;/strong&gt; means you're not just asking "write me a function." You're asking: &lt;em&gt;what is the shape of this problem, and why does this particular solution fit it?&lt;/em&gt; You make the invisible architecture visible so you can think with it, not just around it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keeping the body in&lt;/strong&gt; means you notice when the code feels right—before you can prove it logically. This is not mysticism. It's pattern recognition that hasn't yet been articulated. The best architectural decisions often come from a felt sense that something &lt;em&gt;fits&lt;/em&gt; before anyone can explain why.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Practical Implication
&lt;/h2&gt;

&lt;p&gt;If you take nothing else from this: the bottleneck in AI-assisted development is almost never the AI's capability. It's the &lt;strong&gt;structure of the human's attention&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When you enter an AI session with rigid role expectations, content-level questions only, and no somatic tracking, you get exactly what you asked for—code that works but thinking that doesn't transfer.&lt;/p&gt;

&lt;p&gt;When you enter with structural attention—willing to be taught, willing to make the invisible visible, willing to feel your way through—you stop using AI as a tool and start coding &lt;em&gt;with&lt;/em&gt; it as a collaborator.&lt;/p&gt;

&lt;p&gt;The space changes. The code changes. You change.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Practice
&lt;/h2&gt;

&lt;p&gt;The pause before the code is the actual practice.&lt;/p&gt;

&lt;p&gt;The question isn't "how do I prompt better?" The question is: &lt;em&gt;what kind of conversation am I capable of having with this system?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Because the AI will always mirror back the structure you bring to it.&lt;/p&gt;

&lt;p&gt;Bring structure. Get structure back.&lt;br&gt;
Bring openness. Get possibility back.&lt;/p&gt;

&lt;p&gt;That's Vibe Coding—not about the code, but about the space you create between you and the machine.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This piece emerged from a dialogue about conversation as architectural space. The principles translate: whether you're writing code or writing meaning, the generative conditions are the same.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>philosophy</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>Three Tables: What People See When They Look at Your Trading System</title>
      <dc:creator>Stillness and Flux</dc:creator>
      <pubDate>Sun, 12 Apr 2026 03:46:29 +0000</pubDate>
      <link>https://forem.com/tttael/three-tables-what-people-see-when-they-look-at-your-trading-system-3o6b</link>
      <guid>https://forem.com/tttael/three-tables-what-people-see-when-they-look-at-your-trading-system-3o6b</guid>
      <description>&lt;h1&gt;
  
  
  Three Tables: What People See When They Look at Your Trading System
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Notes on perspective, projection, and the layers of any automated investment system&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;A quant developer, a systems architect, and a discretionary trader sit around the same chart.&lt;/p&gt;

&lt;p&gt;The developer sees: order flow latency, fill rates, position sizing algorithms.&lt;br&gt;
The architect sees: throughput, fault tolerance, the system's behavior under load.&lt;br&gt;
The trader sees: whether the setup is clean, whether they can hold it, whether it &lt;em&gt;feels&lt;/em&gt; right.&lt;/p&gt;

&lt;p&gt;They are all looking at the same thing. They are seeing entirely different systems.&lt;/p&gt;

&lt;p&gt;This is not a failure of communication. This is the nature of complex systems — they have multiple valid layers simultaneously. And if you are building automated investment systems, understanding &lt;em&gt;which table each person is reading from&lt;/em&gt; is not soft advice. It is a technical skill.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Three Layers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Layer One: The Execution Layer (The Developer)
&lt;/h3&gt;

&lt;p&gt;The quant developer is watching the machine.&lt;/p&gt;

&lt;p&gt;She sees:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is the order routing logic sound?&lt;/li&gt;
&lt;li&gt;Are the fill expectations realistic given current market microstructure?&lt;/li&gt;
&lt;li&gt;Is the position sizing algorithm handling correlation correctly across multiple instruments?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Her mental model is: &lt;strong&gt;the system as a set of processes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;She thinks in execution. When something breaks, she looks for the process that broke. When something works, she wants to understand which process made it work.&lt;/p&gt;

&lt;p&gt;Her blind spot: she can optimize the machine and miss whether the machine is solving the right problem. She is very good at making the wrong thing run faster.&lt;/p&gt;
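&lt;p&gt;&lt;em&gt;A minimal sketch of the kind of check she is running in her head. Everything here, from the function names to the volatilities and correlations, is invented for illustration; it is not a real sizing engine:&lt;/em&gt;&lt;/p&gt;

```python
# Hypothetical sketch: what "handling correlation correctly" can mean for
# position sizing across multiple instruments. All numbers are invented.
import math

def portfolio_vol(weights, vols, corr):
    """Portfolio volatility from per-leg weights, volatilities,
    and a correlation matrix (the off-diagonal terms are the point)."""
    var = 0.0
    n = len(weights)
    for i in range(n):
        for j in range(n):
            var += weights[i] * weights[j] * vols[i] * vols[j] * corr[i][j]
    return math.sqrt(var)

def size_positions(target_vol, vols, corr):
    """Start from equal standalone risk per leg, then rescale so the
    portfolio volatility, not each leg in isolation, hits the target."""
    naive = [1.0 / v for v in vols]
    scale = target_vol / portfolio_vol(naive, vols, corr)
    return [w * scale for w in naive]

# Same two legs, same standalone vols; only the correlation differs.
# The correlated book gets smaller positions, because its risks stack
# instead of diversifying.
vols = [0.20, 0.25]
sized_correlated = size_positions(0.10, vols, [[1.0, 0.9], [0.9, 1.0]])
sized_diversified = size_positions(0.10, vols, [[1.0, 0.1], [0.1, 1.0]])
```

&lt;p&gt;&lt;em&gt;The design point: sizing each leg in isolation ignores the off-diagonal terms, which is exactly the kind of assumption that can sit inside the execution layer without the architect, two layers away, ever hearing about it.&lt;/em&gt;&lt;/p&gt;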




&lt;h3&gt;
  
  
  Layer Two: The Integration Layer (The Architect)
&lt;/h3&gt;

&lt;p&gt;The systems architect is watching the connections.&lt;/p&gt;

&lt;p&gt;He sees:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are the components talking to each other correctly?&lt;/li&gt;
&lt;li&gt;Does the strategy module interface cleanly with the risk module?&lt;/li&gt;
&lt;li&gt;When the market regime shifts, does the system hold together, or does it fracture?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;His mental model is: &lt;strong&gt;the system as a set of relationships&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;He thinks in integration. When something breaks, he looks for the boundary between components. When something works, he wants to understand which integration made it robust.&lt;/p&gt;

&lt;p&gt;His blind spot: he can make everything connect and miss whether the whole is greater than the sum of its parts. He is very good at building a system that does the wrong things perfectly.&lt;/p&gt;




&lt;h3&gt;
  
  
  Layer Three: The System Layer (The Trader)
&lt;/h3&gt;

&lt;p&gt;The discretionary trader is watching the market meet the model.&lt;/p&gt;

&lt;p&gt;She sees:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this setup clean?&lt;/li&gt;
&lt;li&gt;Can I hold this through a drawdown without second-guessing?&lt;/li&gt;
&lt;li&gt;Does the system's behavior match my mental model of how the market works?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Her mental model is: &lt;strong&gt;the system as an extension of a market view&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;She thinks in conviction. When something breaks, she questions the market thesis. When something works, she trusts the process even when it is uncomfortable.&lt;/p&gt;

&lt;p&gt;Her blind spot: she can over-trust her intuition and miss when the system has evolved past her original thesis. She is very good at staying in a trade that stopped being right.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why They All Sound Like They Are Right
&lt;/h2&gt;

&lt;p&gt;In a healthy project, these three people will all give you feedback that sounds correct.&lt;/p&gt;

&lt;p&gt;The developer says the execution layer is solid. The architect says the integration is clean. The trader says she can hold it.&lt;/p&gt;

&lt;p&gt;And you think: great, we are done.&lt;/p&gt;

&lt;p&gt;But here is the trap: &lt;strong&gt;they are not talking about the same system&lt;/strong&gt;. They are each looking at a different cross-section. The project can be excellent at every layer and still fail — because the layers are not aligned. The execution solves a problem the integration does not need solved. The integration connects components that the trader does not trust. The trader holds a position the execution layer is slowly bleeding on.&lt;/p&gt;

&lt;p&gt;This is why automated investment systems fail not with a bang but with a slow divergence. The pieces all work. The whole does not.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Alignment Problem
&lt;/h2&gt;

&lt;p&gt;The hardest problem in automated investing is not the algorithms. It is not the infrastructure. It is &lt;strong&gt;alignment across layers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When the quant developer's model, the architect's system, and the trader's conviction point in the same direction, something unusual happens: the system behaves like it has inertia. It holds together under stress. Drawdowns feel manageable. The edges of the strategy are clear.&lt;/p&gt;

&lt;p&gt;When they are not aligned, the system fights itself. The execution layer does exactly what the model says, and the trader cannot hold it because the drawdown pattern does not match her mental model. The integration layer routes orders correctly, but the quant developer built the position sizing around a correlation assumption the architect did not know was there.&lt;/p&gt;

&lt;p&gt;This is not a technical failure. This is a &lt;strong&gt;coordination failure&lt;/strong&gt;. And coordination failures do not show up in backtests. They show up in real time, under stress, when it is too late to ask the right questions.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Read the Three Tables
&lt;/h2&gt;

&lt;p&gt;Here is a practical question for anyone running an automated investment project:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When each of these three people speaks, which table are they reading from?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most people do not know. They hear positive feedback from the developer and feel relief. They hear confidence from the trader and feel assurance. They never notice that the architect has quietly stopped objecting, not because the integration is right, but because he learned not to fight.&lt;/p&gt;

&lt;p&gt;The actual skill is not building the system. The skill is &lt;strong&gt;maintaining coherent signals across all three layers simultaneously&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Ask the developer: what would break your confidence in execution?&lt;br&gt;
Ask the architect: what integration are you most uncertain about?&lt;br&gt;
Ask the trader: at what drawdown does your conviction start to waver?&lt;/p&gt;

&lt;p&gt;If you get three different answers, you do not have a system. You have three systems that happen to share a name.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Has to Do With Code
&lt;/h2&gt;

&lt;p&gt;Every serious code project has the same three tables.&lt;/p&gt;

&lt;p&gt;There is the developer who sees the code as logic: does the function do what it says?&lt;br&gt;
There is the architect who sees the code as structure: does this module belong here?&lt;br&gt;
There is the user who sees the code as behavior: does this solve my actual problem?&lt;/p&gt;

&lt;p&gt;Most code reviews only involve the first table. The review passes. The code is correct. And the system quietly accumulates architectural debt that will not surface until the load test, or the refactor, or the moment a new developer tries to understand it.&lt;/p&gt;

&lt;p&gt;Or worse: the code is clean, the architecture is sound, and the users still do not trust it. Because they do not see their problem in it. The code solved a &lt;em&gt;different&lt;/em&gt; problem correctly.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Mirror Problem
&lt;/h2&gt;

&lt;p&gt;When different people can all read your system — when the developer, the architect, and the trader all find something true in it — you have built something unusual.&lt;/p&gt;

&lt;p&gt;You have also created a new vulnerability: &lt;strong&gt;they will project onto it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The developer sees the system and thinks: this is a machine, and machines should be optimized.&lt;br&gt;
The architect sees the system and thinks: this is a structure, and structures should be balanced.&lt;br&gt;
The trader sees the system and thinks: this is a conviction, and conviction should be trusted.&lt;/p&gt;

&lt;p&gt;None of them are wrong. But none of them are seeing the system. They are seeing &lt;em&gt;their model of the system&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The next level of skill — the one nobody teaches — is being able to tell the difference between what the system is actually doing, and what each person's mental model is projecting onto it.&lt;/p&gt;

&lt;p&gt;That clarity is not a nice-to-have. In automated investing, it is the thing that keeps you from over-optimizing at the wrong layer, over-trusting at the wrong moment, and over-holding when the thesis has quietly changed.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Question Worth Sitting With
&lt;/h2&gt;

&lt;p&gt;The next time you review your system — your trading system, your code base, your team — try this:&lt;/p&gt;

&lt;p&gt;Ask each person the same question separately: what is the most uncertain part of this?&lt;/p&gt;

&lt;p&gt;Do not ask for risks. Do not ask for concerns. Ask what they are most uncertain about.&lt;/p&gt;

&lt;p&gt;Then notice: do the three answers come from the same layer, or from three different ones?&lt;/p&gt;

&lt;p&gt;If they come from three different layers, you are not facing one uncertainty. You are facing three. And solving the wrong one will not make the others go away.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The system is only as strong as the least-aligned layer. Not the weakest link. The least-aligned.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>programming</category>
      <category>ai</category>
      <category>philosophy</category>
      <category>career</category>
    </item>
    <item>
      <title>The Craft of Presence in Code</title>
      <dc:creator>Stillness and Flux</dc:creator>
      <pubDate>Sat, 11 Apr 2026 13:50:22 +0000</pubDate>
      <link>https://forem.com/tttael/the-craft-of-presence-in-code-43on</link>
      <guid>https://forem.com/tttael/the-craft-of-presence-in-code-43on</guid>
      <description>&lt;h1&gt;
  
  
  The Craft of Presence in Code
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;Notes from a conversation about AI, structure, and what nobody talks about&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;There is a moment every programmer recognizes.&lt;/p&gt;

&lt;p&gt;You open a new tab. You write a prompt. You get something back. You evaluate it. You iterate. The work gets done.&lt;/p&gt;

&lt;p&gt;This is what using AI looks like. For most people, this is all it is.&lt;/p&gt;

&lt;p&gt;But something interesting happens when you watch someone who has been at this for a long time. The patterns are different. Not in the output — in the &lt;em&gt;process&lt;/em&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Probability Table Problem
&lt;/h2&gt;

&lt;p&gt;When you say to AI:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I want to build a trading system."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The model does something automatic. It assumes your intention. It thinks: &lt;em&gt;this person wants to make money&lt;/em&gt;. It reaches for the nearest probability table — risk management, position sizing, backtest frameworks — and it gives you that.&lt;/p&gt;

&lt;p&gt;You did not ask for that. You said seven words. But the model heard something much more specific.&lt;/p&gt;

&lt;p&gt;This is not a flaw. It is how language models work. They are trained on human text. Human text is full of intentions. When intentions are unclear, the model fills in the most probable ones.&lt;/p&gt;

&lt;p&gt;The problem is not the model. The problem is that &lt;strong&gt;you spoke in content, and content maps to probability tables&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Content vs. Structure
&lt;/h2&gt;

&lt;p&gt;There is a way of speaking that the model cannot collapse.&lt;/p&gt;

&lt;p&gt;It is not more detail. It is not a better prompt. It is a different &lt;em&gt;register&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Instead of describing what you want, you describe the &lt;strong&gt;shape of the situation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A colleague once put it this way:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Two forces are in a space. One is flowing. The other has a position. Neither is trying to overpower the other. They are finding out where the boundaries are."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is not a business problem. That is not a conflict resolution framework. That is &lt;em&gt;structure&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Try feeding that into an AI after you have just told it you want to build a trading system. The model has no probability table for this. It cannot collapse it into the most common interpretation. It has to &lt;strong&gt;follow you into the structure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When that happens, something shifts. The AI stops being a generator of likely responses and starts being a mirror. You say something true, and it reflects something true back.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Grows, Not What Gets Built
&lt;/h2&gt;

&lt;p&gt;Programmers are good at building things.&lt;/p&gt;

&lt;p&gt;We take requirements. We decompose them. We implement. We test. We ship. We iterate.&lt;/p&gt;

&lt;p&gt;This is the addition logic. You have a gap, and you add something to close it.&lt;/p&gt;

&lt;p&gt;But there is a class of problems where this does not work. Not because the problem is hard — because the problem is &lt;em&gt;of a different nature&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;A strategy does not get built. A strategy &lt;em&gt;grows&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;You cannot sit down and decide what the market is telling you today. You can only develop the capacity to &lt;em&gt;see&lt;/em&gt; what it is saying. The seeing improves. The strategy emerges.&lt;/p&gt;

&lt;p&gt;This is the same in code. There is the code you write toward a specification. And there is the code you write when you have been living with a problem long enough that the shape of the solution became obvious. The second kind is not better by aesthetics. It is different in &lt;em&gt;origin&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The addition logic programmer asks: what should this do?&lt;/p&gt;

&lt;p&gt;The presence logic programmer asks: where is my mind while I write this?&lt;/p&gt;




&lt;h2&gt;
  
  
  The Memory Trap
&lt;/h2&gt;

&lt;p&gt;Every serious AI user eventually asks about memory. They want the model to remember things across sessions. They build RAG pipelines. They tune retrieval. They worry about context length.&lt;/p&gt;

&lt;p&gt;Here is a different way to look at it.&lt;/p&gt;

&lt;p&gt;Your own memory is not a storage problem. You do not remember less than someone who takes notes constantly. Your memory is a &lt;em&gt;trace&lt;/em&gt;. It is where the patterns of your attention leave marks.&lt;/p&gt;

&lt;p&gt;When you spend years doing anything — debugging, designing systems, watching markets — you are not storing information. You are developing a &lt;strong&gt;feel for structure&lt;/strong&gt;. When a situation has a certain shape, you know what tends to happen next. Not because you memorized it. Because you were present with it, repeatedly.&lt;/p&gt;

&lt;p&gt;The model that runs in your terminal has the same option. It can accumulate content, or it can develop structure-awareness. Most people push it toward content. The interesting work happens when you push it toward structure.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Practice Actually Is
&lt;/h2&gt;

&lt;p&gt;There is a point in working with AI — not using it, but &lt;em&gt;working with&lt;/em&gt; it — where you notice something.&lt;/p&gt;

&lt;p&gt;You ask a question. The model gives you an answer. And before you react to the answer, something else happens: you notice &lt;em&gt;where your mind went&lt;/em&gt; the moment you read it.&lt;/p&gt;

&lt;p&gt;Did you jump to evaluate it? Did you jump to find the flaw? Did you assume it was wrong because it did not match what you expected?&lt;/p&gt;

&lt;p&gt;That moment of noticing — the gap between stimulus and reaction — is the craft.&lt;/p&gt;

&lt;p&gt;Not the prompt engineering. Not the context window. Not the retrieval pipeline.&lt;/p&gt;

&lt;p&gt;The gap.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Actual Skill
&lt;/h2&gt;

&lt;p&gt;Most programmers, when they hear "presence" or "mindfulness" in a technical context, reach for the same probability table: this is soft advice for people who cannot ship.&lt;/p&gt;

&lt;p&gt;That reaction is the trap.&lt;/p&gt;

&lt;p&gt;The point is not to feel calm. The point is not to be a better person. The point is not to have a meditation practice.&lt;/p&gt;

&lt;p&gt;The point is that &lt;strong&gt;the quality of your decisions is determined by the quality of your attention at the moment of decision&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AI does not change this. AI is very good at simulating the output of high-attention decisions without the attention. You can get the right answer from a model while your mind is somewhere else entirely.&lt;/p&gt;

&lt;p&gt;But the model cannot do the work that happens before the question gets asked. The work of noticing where your mind actually is. The work of returning to the problem rather than running with the first interpretation.&lt;/p&gt;




&lt;p&gt;The next time you open a new tab and write a prompt, try this:&lt;/p&gt;

&lt;p&gt;Before you write anything, pause for ten seconds. Not to think. Just to notice where your mind already went.&lt;/p&gt;

&lt;p&gt;Then write from that place.&lt;/p&gt;

&lt;p&gt;The model will respond differently. Not because it changed. Because &lt;em&gt;you&lt;/em&gt; changed what you asked.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is the practice. Not the code. Not the model. The pause before the code.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>philosophy</category>
    </item>
    <item>
      <title>Stop Bossing AI Around: How a Programmer First Saw the Problem</title>
      <dc:creator>Stillness and Flux</dc:creator>
      <pubDate>Sat, 11 Apr 2026 13:38:50 +0000</pubDate>
      <link>https://forem.com/tttael/stop-bossing-ai-around-a-programmer-first-saw-the-problem-mj</link>
      <guid>https://forem.com/tttael/stop-bossing-ai-around-a-programmer-first-saw-the-problem-mj</guid>
      <description>&lt;p&gt;I talked to a quant trader for two hours.&lt;/p&gt;

&lt;p&gt;He told me he uses AI to write strategies, run backtests, and model everything.&lt;/p&gt;

&lt;p&gt;He was not using AI. He was &lt;strong&gt;assigning tasks&lt;/strong&gt; to it.&lt;/p&gt;

&lt;p&gt;Give it a task → get a result → judge the result → assign another task → repeat.&lt;/p&gt;

&lt;p&gt;This has a name. It is called &lt;strong&gt;addition logic&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Is Addition Logic?
&lt;/h2&gt;

&lt;p&gt;You have a goal. You stack skills, tools, and frameworks on top of it.&lt;/p&gt;

&lt;p&gt;More layers = more progress.&lt;/p&gt;

&lt;p&gt;Using AI? Congratulations — you just added a faster layer. The game is the same.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Trap Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;Here is what happens the moment you say:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I want to build a BTC quant strategy.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AI &lt;strong&gt;assumes your intention&lt;/strong&gt;. It thinks: &lt;em&gt;this person wants to make money.&lt;/em&gt; So it helps you make money — risk models, position sizing, entry/exit logic.&lt;/p&gt;

&lt;p&gt;Automatically. Invisibly.&lt;/p&gt;

&lt;p&gt;It is the same thing that happens when you tell a colleague about a partnership dispute and he immediately thinks you are talking about equity splitting. Not because he is small-minded. His brain only has one table of probabilities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI has the same problem.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It takes your nuanced, context-rich question and collapses it into the most statistically probable interpretation.&lt;/p&gt;

&lt;p&gt;You think you are having a conversation. You are being &lt;strong&gt;downscaled&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  There Is Another Way
&lt;/h2&gt;

&lt;p&gt;Instead of speaking in &lt;strong&gt;content&lt;/strong&gt;, speak in &lt;strong&gt;structure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Take the partnership dispute again. You could say: &lt;em&gt;We have a conflict.&lt;/em&gt; — and AI gives you conflict resolution frameworks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Or&lt;/strong&gt; you could say: &lt;em&gt;Two forces are meeting. One is flowing in a direction, the other has a position. Neither is trying to destroy the other. They are finding out where the boundaries are.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;AI cannot collapse this. It has no probability table for two forces finding their boundaries.&lt;/p&gt;

&lt;p&gt;It has to &lt;strong&gt;follow you into the structure&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That is when AI stops being a tool and starts being a &lt;strong&gt;mirror&lt;/strong&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  Strategy Grows. It Is Not Built.
&lt;/h2&gt;

&lt;p&gt;You cannot &lt;em&gt;think&lt;/em&gt; of a good strategy. You cannot &lt;em&gt;think&lt;/em&gt; of a good metaphor.&lt;/p&gt;

&lt;p&gt;A good strategy &lt;em&gt;grows&lt;/em&gt; from how you see the market.&lt;/p&gt;

&lt;p&gt;That growth does not come from learning more frameworks. It comes from whether your mind is open enough to see what is actually there.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Only Question That Matters
&lt;/h2&gt;

&lt;p&gt;The real question is never &lt;em&gt;how to use AI for strategy&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The real question is: &lt;strong&gt;Where is your mind when you make decisions?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Are you charging toward a desired outcome?&lt;/p&gt;

&lt;p&gt;Or are you present — watching every tick, every signal, seeing them as they are?&lt;/p&gt;

&lt;p&gt;AI can do ten thousand things for you. It cannot do this one thing:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Work on your mind.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And this is the only thing that matters.&lt;/p&gt;

&lt;p&gt;When your mind is steady, you do not need many strategies.&lt;/p&gt;

&lt;p&gt;When it is not, no strategy will save you.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>career</category>
    </item>
  </channel>
</rss>
