<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Charlie Cheng</title>
    <description>The latest articles on Forem by Charlie Cheng (@charlie_cheng_a6a98432cb3).</description>
    <link>https://forem.com/charlie_cheng_a6a98432cb3</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3717414%2F035ab8c8-6f22-47a6-a2ad-b0ac6e732ab4.png</url>
      <title>Forem: Charlie Cheng</title>
      <link>https://forem.com/charlie_cheng_a6a98432cb3</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/charlie_cheng_a6a98432cb3"/>
    <language>en</language>
    <item>
      <title>From messy LLM chats to reliable specs: why I built Abstraction AI</title>
      <dc:creator>Charlie Cheng</dc:creator>
      <pubDate>Sun, 18 Jan 2026 05:49:04 +0000</pubDate>
      <link>https://forem.com/charlie_cheng_a6a98432cb3/from-messy-llm-chats-to-reliable-specs-why-i-built-abstraction-ai-1l3o</link>
      <guid>https://forem.com/charlie_cheng_a6a98432cb3/from-messy-llm-chats-to-reliable-specs-why-i-built-abstraction-ai-1l3o</guid>
      <description>&lt;p&gt;When I started using AI coding tools seriously last year, I fell into a pattern that might sound familiar:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open ChatGPT or another LLM.&lt;/li&gt;
&lt;li&gt;Talk for 50+ turns about a new idea.&lt;/li&gt;
&lt;li&gt;Paste the whole chat into a coding agent and say: "Build the system based on all of this."&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;On good days, it felt magical.&lt;br&gt;
On bad days, it felt like I had given a very smart intern a 20-page brainstorm doc and hoped they would “just figure it out”.&lt;/p&gt;

&lt;p&gt;Over time, the failure modes became obvious.&lt;/p&gt;

&lt;h2&gt;The problem with throwing raw chats at coding agents&lt;/h2&gt;

&lt;p&gt;Two issues showed up again and again:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The context is noisy and self-contradictory.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In a long conversation, I will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;change my mind&lt;/li&gt;
&lt;li&gt;backtrack on decisions&lt;/li&gt;
&lt;li&gt;explore dead ends&lt;/li&gt;
&lt;li&gt;forget to clearly mark the final choice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A coding agent that reads everything has to guess which part is the “truth”.&lt;br&gt;&lt;br&gt;
Sometimes it gets it right. Sometimes it confidently builds the wrong version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Most of us do not naturally produce complete specs.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even as developers, we often skip over:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;non-functional requirements&lt;/li&gt;
&lt;li&gt;edge cases and failure modes&lt;/li&gt;
&lt;li&gt;data contracts and schema details&lt;/li&gt;
&lt;li&gt;clear boundaries between components&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When non-technical founders work with AI coding tools, this gap is even wider.&lt;/p&gt;

&lt;p&gt;So we end up in an awkward place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The coding agent is powerful enough to write a lot of code.&lt;/li&gt;
&lt;li&gt;But the “instructions” we give it are closer to a brainstorm than a spec.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Treating AI coding more like training&lt;/h2&gt;

&lt;p&gt;I come from an AI/ML background, so I started to think in familiar terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;spec&lt;/strong&gt; is the objective and the loss function.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;coding agent&lt;/strong&gt; is the optimizer.&lt;/li&gt;
&lt;li&gt;Logs, tests, and failures are feedback.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the objective is fuzzy, the loss landscape is noisy.&lt;br&gt;&lt;br&gt;
If the objective is clear and structured, optimization can be much more stable.&lt;/p&gt;
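
&lt;p&gt;To make the analogy concrete, here is a toy sketch. All names in it are my own illustration for this post, not anything from a real training stack or from Abstraction AI: a spec as an explicit checklist, and a "loss" that measures how much of it is still unmet.&lt;/p&gt;

```python
# Hypothetical sketch of the spec-as-objective analogy.
# "Spec items" are explicit requirements; the loss is the
# fraction of them that no check currently satisfies.

def spec_loss(spec_items, passing_checks):
    """Return the fraction of spec items not yet satisfied."""
    unmet = [item for item in spec_items if item not in passing_checks]
    return len(unmet) / len(spec_items)

spec = ["store notes", "search notes", "export to JSON"]
done = {"store notes", "search notes"}
print(spec_loss(spec, done))  # one of three items still unmet
```

&lt;p&gt;Driving that number to zero is the coding agent's job. The point of the sketch is that you can only measure progress at all if the spec items are written down explicitly.&lt;/p&gt;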

&lt;p&gt;The missing piece was obvious in hindsight:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I needed a deliberate step that turns “vibes” into a spec.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That is why I built &lt;strong&gt;Abstraction AI&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;What &lt;a href="https://abstractai.ai-builders.space/" rel="noopener noreferrer"&gt;Abstraction AI&lt;/a&gt; does&lt;/h2&gt;

&lt;p&gt;Abstraction AI sits between your conversations and your coding agents.&lt;/p&gt;

&lt;p&gt;You give it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;long LLM chats,&lt;/li&gt;
&lt;li&gt;meeting transcripts,&lt;/li&gt;
&lt;li&gt;or your own unstructured project notes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It generates a spec package that typically includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a concise product brief and key user stories&lt;/li&gt;
&lt;li&gt;system architecture and the main components&lt;/li&gt;
&lt;li&gt;data model and API sketches&lt;/li&gt;
&lt;li&gt;constraints and non-functional requirements&lt;/li&gt;
&lt;li&gt;an implementation plan broken into steps&lt;/li&gt;
&lt;li&gt;ready-to-use prompts you can paste into your coding agent&lt;/li&gt;
&lt;/ul&gt;
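
&lt;p&gt;As a rough illustration of what "spec package" means here, this is the kind of shape I have in mind. The field names below are guesses I am using for this post, not Abstraction AI's actual schema.&lt;/p&gt;

```python
# A hypothetical, illustrative shape for a spec package.
# Field names are my own invention, not a real API.
from dataclasses import dataclass, field

@dataclass
class SpecPackage:
    product_brief: str
    user_stories: list
    architecture: str
    data_model: str
    constraints: list
    implementation_steps: list = field(default_factory=list)
    agent_prompts: list = field(default_factory=list)

pkg = SpecPackage(
    product_brief="A note-taking app with full-text search",
    user_stories=["As a user, I can search my notes"],
    architecture="Single web service plus SQLite",
    data_model="Note(id, title, body, created_at)",
    constraints=["p95 search latency under 200 ms"],
)
print(len(pkg.implementation_steps))  # 0: plan not yet filled in
```

&lt;p&gt;The value is less in any particular field and more in the fact that the structure is the same every time, so both humans and agents know where to look.&lt;/p&gt;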

&lt;p&gt;The design goal is to be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Newbie-friendly.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The language is as clear as possible, with a terminology section so non-technical collaborators can follow along.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Agent-friendly.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
The structure is consistent enough that coding agents can follow it like a checklist instead of guessing inside a giant chat log.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;A small example of the workflow&lt;/h2&gt;

&lt;p&gt;A typical flow for me now looks like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Brainstorm with ChatGPT or another LLM about the idea until I feel there is enough raw material.&lt;/li&gt;
&lt;li&gt;Paste that raw conversation into Abstraction AI.&lt;/li&gt;
&lt;li&gt;Review the generated spec:

&lt;ul&gt;
&lt;li&gt;edit language,&lt;/li&gt;
&lt;li&gt;remove things I do not actually want,&lt;/li&gt;
&lt;li&gt;add constraints I forgot to mention.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Once I am happy with the spec, I hand it to a coding agent (Claude Code, Cursor, etc.) together with the implementation plan.&lt;/li&gt;
&lt;li&gt;Let the agent iterate, but always with the spec as the single source of truth.&lt;/li&gt;
&lt;/ol&gt;
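
&lt;p&gt;The five steps above can be sketched as a small pipeline. Every function here is a placeholder for a manual or tool-assisted step, not a real API from any of the tools mentioned.&lt;/p&gt;

```python
# Minimal sketch of the workflow: chat -> draft spec -> human
# review -> coding agent. The callables are toy stand-ins.

def build_from_chat(raw_chat, generate_spec, review, coding_agent):
    spec = generate_spec(raw_chat)   # step 2: raw chat to draft spec
    spec = review(spec)              # step 3: human edits the draft
    return coding_agent(spec)        # steps 4-5: agent implements it

result = build_from_chat(
    "50 turns of brainstorming...",
    generate_spec=lambda chat: {"goal": "note app", "constraints": []},
    review=lambda spec: {**spec, "constraints": ["no external deps"]},
    coding_agent=lambda spec: "code satisfying " + spec["goal"],
)
print(result)  # code satisfying note app
```

&lt;p&gt;The human review step in the middle is the whole point: the agent only ever sees the spec I have already signed off on, not the raw brainstorm.&lt;/p&gt;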

&lt;p&gt;The difference is subtle but important:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I am no longer asking the coding agent to discover what I want.&lt;/li&gt;
&lt;li&gt;I am asking it to satisfy a spec that I have already understood and agreed with.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Why I think this matters&lt;/h2&gt;

&lt;p&gt;There is a lot of discussion about RLHF, RLAIF, and sophisticated training pipelines for models.&lt;br&gt;&lt;br&gt;
But once those models land in our editors and terminals, we often fall back to a very ad hoc workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;long prompt,&lt;/li&gt;
&lt;li&gt;hope for the best.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Abstraction AI is my attempt to borrow a tiny part of the discipline from training and bring it into everyday development:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;make objectives explicit,&lt;/li&gt;
&lt;li&gt;make evaluation concrete,&lt;/li&gt;
&lt;li&gt;then let the optimizer (the coding agent) do what it does best.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are curious or have a similar workflow, the tool is live here:&lt;/p&gt;

&lt;p&gt;👉 &lt;a href="https://abstractai.ai-builders.space/" rel="noopener noreferrer"&gt;https://abstractai.ai-builders.space/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I would love to hear how you currently bridge the gap between “messy context” and “spec”, and what you would want from a tool like this in your own stack.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>tooling</category>
      <category>startup</category>
    </item>
  </channel>
</rss>
