<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: elias hourany</title>
    <description>The latest articles on Forem by elias hourany (@elias_hourany_5735ea9eac2).</description>
    <link>https://forem.com/elias_hourany_5735ea9eac2</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3758813%2F13a1275f-8050-4b50-b1bc-c08af52fe2c1.png</url>
      <title>Forem: elias hourany</title>
      <link>https://forem.com/elias_hourany_5735ea9eac2</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/elias_hourany_5735ea9eac2"/>
    <language>en</language>
    <item>
      <title>I Replaced 25 Lines of OpenAI Boilerplate with 6 — Here's the Library</title>
      <dc:creator>elias hourany</dc:creator>
      <pubDate>Sun, 15 Feb 2026 14:42:46 +0000</pubDate>
      <link>https://forem.com/elias_hourany_5735ea9eac2/i-replaced-25-lines-of-openai-boilerplate-with-6-heres-the-library-4ngd</link>
      <guid>https://forem.com/elias_hourany_5735ea9eac2/i-replaced-25-lines-of-openai-boilerplate-with-6-heres-the-library-4ngd</guid>
      <description>&lt;p&gt;Every time I called an LLM from TypeScript, I wrote the same code. Construct the client. Define a JSON schema by hand. Call &lt;code&gt;chat.completions.create&lt;/code&gt;. Parse the response. Cast away the &lt;code&gt;any&lt;/code&gt;. Handle the error. Repeat.&lt;/p&gt;

&lt;p&gt;After the tenth time copy-pasting that pattern across projects, I built a library to make it disappear. It's called &lt;a href="https://github.com/eliashourany/ThinkLang" rel="noopener noreferrer"&gt;ThinkLang&lt;/a&gt;, and this is what it does.&lt;/p&gt;

&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Here's what structured output looks like with the raw OpenAI SDK. You want a sentiment analysis result with a label, score, and explanation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;OpenAI&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gpt-4o&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Analyze the sentiment of this review: &lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;review&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
  &lt;span class="na"&gt;response_format&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;json_schema&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;json_schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Sentiment&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;object&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;string&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;enum&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;positive&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;negative&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;neutral&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="na"&gt;score&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;number&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
          &lt;span class="na"&gt;explanation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;string&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;label&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;score&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;explanation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// result is `any` — hope for the best&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;25 lines. The schema is untyped. The result is &lt;code&gt;any&lt;/code&gt;. And you're locked to OpenAI.&lt;/p&gt;

&lt;h2&gt;The Solution&lt;/h2&gt;

&lt;p&gt;Here's the same thing with ThinkLang:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;think&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;zodSchema&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;thinklang&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;Sentiment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;object&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;enum&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;positive&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;negative&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;neutral&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
  &lt;span class="na"&gt;score&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;number&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;explanation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;think&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;infer&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;Sentiment&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Analyze the sentiment of this review&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nf"&gt;zodSchema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Sentiment&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;review&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// fully typed, autocomplete works&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same result. &lt;code&gt;result&lt;/code&gt; is fully typed. Works with Anthropic, OpenAI, Gemini, or Ollama — swap providers by changing one environment variable.&lt;/p&gt;
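
&lt;p&gt;To make that swap concrete: the runtime presumably picks a provider based on which environment variable is set. Here's a minimal sketch of that detection logic -- the function name and the precedence order are my assumptions, not ThinkLang's documented behavior:&lt;/p&gt;

```typescript
// Hypothetical sketch of env-based provider detection.
// The precedence order is an assumption, not ThinkLang's documented behavior.
type Provider = "anthropic" | "openai" | "gemini" | "ollama";

function detectProvider(env: Record<string, string | undefined>): Provider | null {
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  if (env.OPENAI_API_KEY) return "openai";
  if (env.GEMINI_API_KEY) return "gemini";
  if (env.OLLAMA_BASE_URL) return "ollama";
  return null; // no provider configured
}

console.log(detectProvider({ OPENAI_API_KEY: "sk-..." })); // "openai"
```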

&lt;h2&gt;Getting Started&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;thinklang
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Set any provider's API key in your environment (&lt;code&gt;ANTHROPIC_API_KEY&lt;/code&gt;, &lt;code&gt;OPENAI_API_KEY&lt;/code&gt;, &lt;code&gt;GEMINI_API_KEY&lt;/code&gt;, or &lt;code&gt;OLLAMA_BASE_URL&lt;/code&gt;) and you're done. No &lt;code&gt;init()&lt;/code&gt; call needed — the runtime auto-detects your provider.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;think&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;thinklang&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;greeting&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;think&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Say hello to the world in a creative way&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;jsonSchema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;string&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's a working program. Five lines.&lt;/p&gt;

&lt;h2&gt;What Else It Does&lt;/h2&gt;

&lt;h3&gt;Agents with Tools&lt;/h3&gt;

&lt;p&gt;Define tools with Zod schemas, then let the LLM call them in a loop until it finds the answer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;defineTool&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;thinklang&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;zod&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;searchDocs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;defineTool&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;searchDocs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Search documentation&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;object&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="na"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;query&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;docsIndex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;query&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;How do I configure authentication?&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;searchDocs&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;maxTurns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent calls &lt;code&gt;searchDocs&lt;/code&gt; as many times as needed (up to &lt;code&gt;maxTurns&lt;/code&gt;), then returns a final answer.&lt;/p&gt;
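
&lt;p&gt;The loop underneath an agent call generally looks like the sketch below. Everything here (the tool-call shape, the stubbed model) is illustrative, not ThinkLang's internals:&lt;/p&gt;

```typescript
// Illustrative agent loop with a stubbed model -- not ThinkLang internals.
// Each turn the "model" either requests a tool call or returns a final answer.
type ToolCall = { tool: string; input: string };
type ModelStep = { toolCall?: ToolCall; answer?: string };

function runAgent(
  model: (history: string[]) => ModelStep,
  tools: Record<string, (input: string) => string>,
  maxTurns: number
): string {
  const history: string[] = [];
  for (let turn = 0; turn < maxTurns; turn++) {
    const step = model(history);
    if (step.answer !== undefined) return step.answer;
    if (step.toolCall) {
      const result = tools[step.toolCall.tool](step.toolCall.input);
      history.push(`${step.toolCall.tool} -> ${result}`);
    }
  }
  return "max turns reached";
}

// Stubbed model: search once, then answer from the tool result.
const answer = runAgent(
  (history) =>
    history.length === 0
      ? { toolCall: { tool: "searchDocs", input: "authentication" } }
      : { answer: `Based on docs: ${history[0]}` },
  { searchDocs: (q) => `3 results for "${q}"` },
  5
);
```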

&lt;h3&gt;Guards&lt;/h3&gt;

&lt;p&gt;Validate AI output with constraints and auto-retry on failure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;summary&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;think&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Summarize this article&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;jsonSchema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;string&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;article&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;guards&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;length&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;constraint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;rangeEnd&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
  &lt;span class="na"&gt;retryCount&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the output is too short or too long, ThinkLang automatically retries up to 3 times.&lt;/p&gt;
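
&lt;p&gt;A length guard with retry can be sketched in a few lines. This is my reading of the behavior, not ThinkLang's actual guard implementation:&lt;/p&gt;

```typescript
// Sketch of a length guard with retry -- illustrative, not ThinkLang's code.
// Regenerates until the output lands inside [min, max] or retries run out.
function withLengthGuard(
  generate: () => string,
  min: number,
  max: number,
  retryCount: number
): string {
  let last = "";
  for (let attempt = 0; attempt <= retryCount; attempt++) {
    last = generate();
    if (last.length >= min && last.length <= max) return last;
  }
  throw new Error(`guard "length" still failing after ${retryCount} retries: got ${last.length} chars`);
}

// Stubbed generator that only satisfies the constraint on the third call.
let calls = 0;
const outputs = ["too short", "x".repeat(300), "a".repeat(120)];
const summary = withLengthGuard(() => outputs[calls++], 50, 200, 3);
```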

&lt;h3&gt;Batch Processing&lt;/h3&gt;

&lt;p&gt;Process thousands of items with concurrency control and cost budgets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;mapThink&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;zodSchema&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;thinklang&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;mapThink&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;reviews&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;promptTemplate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;`Classify the sentiment: "&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nf"&gt;zodSchema&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Sentiment&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="na"&gt;maxConcurrency&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;costBudget&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;2.00&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// stop if cost exceeds $2&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There's also &lt;code&gt;Dataset&lt;/code&gt; for lazy, chainable pipelines (think &lt;code&gt;Array.map().filter()&lt;/code&gt; but each step goes through an LLM), &lt;code&gt;reduceThink&lt;/code&gt; for tree-reduction, and streaming via async generators.&lt;/p&gt;
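
&lt;p&gt;Tree-reduction, as I understand it, combines items pairwise level by level until one remains. Here's a sketch with a plain string combiner standing in for the LLM summarize call:&lt;/p&gt;

```typescript
// Tree-reduction sketch: combine items pairwise until one remains.
// The combiner is an ordinary function standing in for an LLM call.
function treeReduce(items: string[], combine: (a: string, b: string) => string): string {
  let level = items;
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      // An unpaired last item carries over to the next level unchanged.
      next.push(i + 1 < level.length ? combine(level[i], level[i + 1]) : level[i]);
    }
    level = next;
  }
  return level[0];
}

const merged = treeReduce(["a", "b", "c", "d"], (x, y) => `(${x}+${y})`);
// merged === "((a+b)+(c+d))"
```

A chain reduce would feed an ever-growing accumulator through every call; the tree keeps each combine roughly the same size, which matters when each step is a prompt with a context limit.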

&lt;h3&gt;Multi-Provider&lt;/h3&gt;

&lt;p&gt;No lock-in. Switch providers without changing code:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Provider&lt;/th&gt;
&lt;th&gt;Env Var&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Anthropic&lt;/td&gt;
&lt;td&gt;&lt;code&gt;ANTHROPIC_API_KEY&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenAI&lt;/td&gt;
&lt;td&gt;&lt;code&gt;OPENAI_API_KEY&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;&lt;code&gt;GEMINI_API_KEY&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ollama&lt;/td&gt;
&lt;td&gt;&lt;code&gt;OLLAMA_BASE_URL&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You can also register custom providers with &lt;code&gt;registerProvider()&lt;/code&gt; if you're running your own inference.&lt;/p&gt;
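
&lt;p&gt;A pluggable provider registry might look something like this. The interface below is purely an illustration -- it is not ThinkLang's actual &lt;code&gt;registerProvider()&lt;/code&gt; contract:&lt;/p&gt;

```typescript
// Hypothetical shape of a pluggable provider registry.
// The interface is an illustration, not ThinkLang's actual contract.
interface LLMProvider {
  name: string;
  complete(prompt: string, jsonSchema: object): Promise<string>;
}

const registry = new Map<string, LLMProvider>();

function registerProvider(p: LLMProvider): void {
  registry.set(p.name, p);
}

// Register a stub pointing at your own inference backend.
registerProvider({
  name: "my-inference",
  complete: async (prompt) => JSON.stringify({ echo: prompt }),
});
```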

&lt;h3&gt;Built-in Cost Tracking&lt;/h3&gt;

&lt;p&gt;Every call is tracked automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;globalCostTracker&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;thinklang&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// ... make some calls ...&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;summary&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;globalCostTracker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getSummary&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Total cost: $&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;totalCostUsd&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Total tokens: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;totalTokens&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
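
&lt;p&gt;A tracker of that shape is easy to picture: record token usage per call, then aggregate. The prices in this sketch are made-up illustrations, not real provider rates:&lt;/p&gt;

```typescript
// Sketch of a per-call cost tracker; prices are made-up illustrations,
// not real provider rates, and this is not ThinkLang's implementation.
type Usage = { model: string; inputTokens: number; outputTokens: number };

const PRICE_PER_1M = { in: 3.0, out: 15.0 }; // hypothetical $/1M tokens

class CostTracker {
  private calls: Usage[] = [];

  record(u: Usage): void {
    this.calls.push(u);
  }

  getSummary() {
    const totalTokens = this.calls.reduce((s, c) => s + c.inputTokens + c.outputTokens, 0);
    const totalCostUsd = this.calls.reduce(
      (s, c) => s + (c.inputTokens * PRICE_PER_1M.in + c.outputTokens * PRICE_PER_1M.out) / 1e6,
      0
    );
    return { callCount: this.calls.length, totalTokens, totalCostUsd };
  }
}

const tracker = new CostTracker();
tracker.record({ model: "gpt-4o", inputTokens: 1000, outputTokens: 200 });
tracker.record({ model: "gpt-4o", inputTokens: 500, outputTokens: 100 });
const s = tracker.getSummary();
```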



&lt;h2&gt;Oh, and It's Also a Language&lt;/h2&gt;

&lt;p&gt;Here's the twist I haven't mentioned yet. ThinkLang is also a &lt;em&gt;programming language&lt;/em&gt;. You can write &lt;code&gt;.tl&lt;/code&gt; files where AI primitives are first-class keywords:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Sentiment {
  @description("positive, negative, or neutral")
  label: string
  score: float
  explanation: string
}

let result = think&amp;lt;Sentiment&amp;gt;("Analyze the sentiment of this review")
  with context: review

print result
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Install the CLI globally and run the file:&lt;/p&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; thinklang
thinklang run analyze.tl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The compiler validates types, catches errors before you hit the API, and generates optimized TypeScript. It comes with a VS Code extension (syntax highlighting, snippets, full LSP), a built-in test framework with snapshot replay, and a REPL.&lt;/p&gt;

&lt;p&gt;The language is there for teams that want deeper integration. But most people will use the library — and that's by design.&lt;/p&gt;

&lt;h2&gt;Try It&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;thinklang
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/eliashourany/ThinkLang" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; — Star if you find it useful&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://thinklang.dev" rel="noopener noreferrer"&gt;Documentation&lt;/a&gt; — Full guides, API reference, 32 examples&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.npmjs.com/package/thinklang" rel="noopener noreferrer"&gt;npm&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ThinkLang is MIT licensed and open source. I'd love to hear what you think — drop a comment, open an issue, or just try &lt;code&gt;npm install thinklang&lt;/code&gt; and see how it feels.&lt;/p&gt;

&lt;p&gt;If you've been writing the same LLM boilerplate I was, I think you'll like it.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>ai</category>
      <category>opensource</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Introducing ThinkLang: A Programming Language Where AI Is a First-Class Citizen</title>
      <dc:creator>elias hourany</dc:creator>
      <pubDate>Sun, 08 Feb 2026 10:43:55 +0000</pubDate>
      <link>https://forem.com/elias_hourany_5735ea9eac2/introducing-thinklang-a-programming-language-where-ai-is-a-first-class-citizen-2897</link>
      <guid>https://forem.com/elias_hourany_5735ea9eac2/introducing-thinklang-a-programming-language-where-ai-is-a-first-class-citizen-2897</guid>
      <description>&lt;p&gt;What if calling an AI model was as natural as declaring a variable?&lt;/p&gt;

&lt;p&gt;Not a library import. Not an API wrapper. Not an SDK call buried in try/catch boilerplate. A &lt;strong&gt;keyword&lt;/strong&gt; -- built into the language itself.&lt;/p&gt;

&lt;p&gt;That's the idea behind &lt;strong&gt;ThinkLang&lt;/strong&gt;, an open source programming language I've been building where &lt;code&gt;think&lt;/code&gt;, &lt;code&gt;infer&lt;/code&gt;, and &lt;code&gt;reason&lt;/code&gt; are first-class language primitives. It transpiles to TypeScript and calls an LLM at runtime, but the experience of writing it feels nothing like wiring up API calls.&lt;/p&gt;

&lt;p&gt;Today I'm open-sourcing it. Here's why I built it, what it looks like, and where it's going.&lt;/p&gt;




&lt;h2&gt;The Problem&lt;/h2&gt;

&lt;p&gt;Every AI application I've built follows the same pattern: write a prompt, call an API, parse JSON, validate the shape, handle errors, retry on failure, track costs, and hope the response matches what I expected. The actual &lt;em&gt;intent&lt;/em&gt; -- "analyze the sentiment of this text" -- gets buried under plumbing.&lt;/p&gt;

&lt;p&gt;Libraries help, but they don't change the fundamental experience. You're still writing code &lt;em&gt;about&lt;/em&gt; calling an AI, rather than code that &lt;em&gt;thinks&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;I wanted a language where the AI call disappears into the syntax. Where the compiler enforces type safety on AI outputs. Where uncertainty is a type, not an afterthought.&lt;/p&gt;

&lt;h2&gt;The Simplest Example&lt;/h2&gt;

&lt;p&gt;Here's a complete ThinkLang program:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let greeting = think&amp;lt;string&amp;gt;("Say hello to the world in a creative way")
print greeting
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. &lt;code&gt;think&lt;/code&gt; is a keyword. The generic parameter &lt;code&gt;&amp;lt;string&amp;gt;&lt;/code&gt; tells the compiler (and the AI) what type to return. The prompt is the argument. No imports, no configuration, no SDK initialization.&lt;/p&gt;

&lt;p&gt;But ThinkLang is not a toy. It's designed for building real AI applications with the same rigor you'd expect from any typed language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Type-Safe AI Outputs
&lt;/h2&gt;

&lt;p&gt;The real power shows up when you define structured types:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Sentiment {
  @description("positive, negative, or neutral")
  label: string
  @description("Intensity from 1-10")
  intensity: int
}

let review = "This product is absolutely amazing! Best purchase I've ever made."

let sentiment = think&amp;lt;Sentiment&amp;gt;("Analyze the sentiment of this review")
  with context: review

print sentiment
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;type&lt;/code&gt; declaration compiles to a JSON schema that constrains the AI's output. The &lt;code&gt;@description&lt;/code&gt; annotations guide the model without polluting your prompt. And the &lt;code&gt;with context:&lt;/code&gt; clause passes data to the AI while keeping the prompt clean.&lt;/p&gt;

&lt;p&gt;If the model returns something that doesn't match the schema, ThinkLang throws a &lt;code&gt;SchemaViolation&lt;/code&gt; -- not a mysterious runtime error three function calls later.&lt;/p&gt;
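&lt;p&gt;For illustration, the &lt;code&gt;type&lt;/code&gt;-to-schema lowering described above can be sketched in TypeScript. The field names come from the &lt;code&gt;Sentiment&lt;/code&gt; example; the function name and exact schema shape are assumptions, not the compiler's real output:&lt;/p&gt;

```typescript
// Hypothetical sketch: how a ThinkLang `type` declaration with
// @description annotations might be lowered to a JSON schema that
// constrains the model's output (the real compiler may differ).
function toJsonSchema(name: string, fields: object) {
  return {
    title: name,
    type: "object",
    properties: fields,
    required: Object.keys(fields),
    additionalProperties: false, // reject fields outside the type
  };
}

// The Sentiment type from the article, expressed as field specs.
const sentimentSchema = toJsonSchema("Sentiment", {
  label: { type: "string", description: "positive, negative, or neutral" },
  intensity: { type: "integer", description: "Intensity from 1-10" },
});

console.log(JSON.stringify(sentimentSchema, null, 2));
```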

&lt;h2&gt;
  
  
  Making Uncertainty Explicit
&lt;/h2&gt;

&lt;p&gt;AI outputs are inherently uncertain. Most languages pretend otherwise. ThinkLang has a &lt;code&gt;Confident&amp;lt;T&amp;gt;&lt;/code&gt; wrapper that makes uncertainty a first-class concept:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let sentiment = think&amp;lt;Confident&amp;lt;Sentiment&amp;gt;&amp;gt;("Analyze the sentiment of this review")
  with context: review
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A &lt;code&gt;Confident&amp;lt;T&amp;gt;&lt;/code&gt; value carries the data, a confidence score, and the model's reasoning. You can't just use it as if it were a plain value -- you have to explicitly handle the uncertainty:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Extract the value, or throw if confidence &amp;lt; 0.8
let result = sentiment.unwrap(0.8)

// Use a fallback if confidence is low
let safe = sentiment.or(defaultSentiment)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This forces you to make a conscious decision about how much you trust the AI's output. It's a small syntactic cost that prevents an entire category of bugs.&lt;/p&gt;
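&lt;p&gt;As a rough sketch of what the wrapper could look like in the generated TypeScript -- the method names &lt;code&gt;unwrap&lt;/code&gt; and &lt;code&gt;or&lt;/code&gt; come from above, but the class name, field layout, and default threshold are assumptions:&lt;/p&gt;

```typescript
// Hypothetical runtime shape of a Confident value. A real Confident
// wrapper would be generic over the payload type; this sketch uses a
// plain string payload for simplicity.
class ConfidentValue {
  readonly value: string;
  readonly confidence: number; // 0.0 to 1.0
  readonly reasoning: string;

  constructor(value: string, confidence: number, reasoning: string) {
    this.value = value;
    this.confidence = confidence;
    this.reasoning = reasoning;
  }

  // Return the value, or throw if confidence is below the threshold.
  unwrap(threshold: number): string {
    if (threshold > this.confidence) {
      throw new Error("ConfidenceTooLow: " + this.confidence);
    }
    return this.value;
  }

  // Return the value if confident enough, else the fallback.
  or(fallback: string, threshold = 0.8): string {
    return this.confidence >= threshold ? this.value : fallback;
  }
}

const sentiment = new ConfidentValue("positive", 0.92, "clear praise in text");
```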

&lt;h2&gt;
  
  
  Three Ways to Think
&lt;/h2&gt;

&lt;p&gt;ThinkLang provides three AI primitives, each for a different use case:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;think&lt;/code&gt;&lt;/strong&gt; -- structured generation. Give it a prompt and a type, get back validated data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let analysis = think&amp;lt;Sentiment&amp;gt;("Analyze the sentiment of this review")
  with context: review
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;code&gt;infer&lt;/code&gt;&lt;/strong&gt; -- lightweight classification and transformation. No type definition needed for quick operations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let priority = infer&amp;lt;string&amp;gt;("urgent: server is down!", "Classify as low, medium, high, or critical")
let language = infer&amp;lt;string&amp;gt;("Bonjour le monde", "Detect the language")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;code&gt;reason&lt;/code&gt;&lt;/strong&gt; -- multi-step reasoning with explicit goals. For complex tasks that benefit from chain-of-thought:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type InvestmentAnalysis {
  recommendation: string
  riskLevel: string
  expectedReturn: string
  reasoning: string
}

let analysis = reason&amp;lt;InvestmentAnalysis&amp;gt; {
  goal: "Analyze this investment portfolio and provide recommendations"
  steps:
    1. "Evaluate the current asset allocation"
    2. "Assess market conditions impact on each asset class"
    3. "Identify risks and opportunities"
    4. "Formulate a recommendation"
  with context: {
    portfolio,
    marketConditions,
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;reason&lt;/code&gt; block makes the chain-of-thought explicit in the code. Each step is visible, reviewable, and debuggable -- not hidden inside a system prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Guards: Declarative Output Validation
&lt;/h2&gt;

&lt;p&gt;Sometimes schema validation isn't enough. You need to constrain the content, not just the shape. ThinkLang has guards:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let summary = think&amp;lt;string&amp;gt;("Summarize this article")
  with context: article
  guard {
    length: 50..200
    contains_none: ["TODO", "placeholder"]
  }
  on_fail: retry(3) then fallback("Summary unavailable")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Guards are declarative constraints on the AI's output. If the output fails validation, ThinkLang automatically retries with exponential backoff. If all retries fail, the fallback kicks in. No manual retry loops. No scattered error handling.&lt;/p&gt;
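&lt;p&gt;The retry-then-fallback behavior can be sketched in TypeScript roughly like this. The names are illustrative, and the real runtime's backoff sleeps are elided to keep the sketch synchronous:&lt;/p&gt;

```typescript
// Hypothetical sketch of the loop behind
// `on_fail: retry(3) then fallback(...)`: attempt, validate against
// the guard, retry on failure, then fall back.
function withGuard(
  generate: () => string,
  guard: (output: string) => boolean,
  retries: number,
  fallback: string,
): string {
  // Initial attempt plus `retries` retries.
  for (let attempt = 0; retries >= attempt; attempt++) {
    const out = generate();
    if (guard(out)) return out;
    // A real runtime would sleep with exponential backoff here.
  }
  return fallback;
}

// A guard matching the example: length 50..200, no banned substrings.
function summaryGuard(s: string): boolean {
  const okLength = s.length >= 50 && 200 >= s.length;
  const banned = ["TODO", "placeholder"];
  return okLength && banned.every((b) => !s.includes(b));
}
```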

&lt;h2&gt;
  
  
  Pattern Matching on AI Data
&lt;/h2&gt;

&lt;p&gt;ThinkLang has native pattern matching that works naturally with AI-generated structured data:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let sentiment = think&amp;lt;Confident&amp;lt;Sentiment&amp;gt;&amp;gt;("Analyze the sentiment")
  with context: review

let response = match sentiment {
  { confidence: &amp;gt;= 0.9 } =&amp;gt; "High confidence result"
  { confidence: &amp;gt;= 0.5 } =&amp;gt; "Moderate confidence result"
  _ =&amp;gt; "Low confidence -- manual review needed"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This makes branching on AI outputs clean and readable, replacing chains of if/else statements that check confidence thresholds and field values.&lt;/p&gt;

&lt;h2&gt;
  
  
  Error Handling That Knows About AI
&lt;/h2&gt;

&lt;p&gt;ThinkLang has a typed error hierarchy designed for AI failure modes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;try {
  let result = think&amp;lt;Summary&amp;gt;("Summarize this text in detail")
    with context: text
  print result
} catch SchemaViolation (e) {
  print "Schema error occurred"
} catch ConfidenceTooLow (e) {
  print "Confidence too low for reliable result"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;SchemaViolation&lt;/code&gt;, &lt;code&gt;ConfidenceTooLow&lt;/code&gt;, &lt;code&gt;GuardFailed&lt;/code&gt; -- these are specific, catchable error types, not generic exceptions. You can handle each failure mode differently because the language understands what can go wrong with AI calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context Privacy
&lt;/h2&gt;

&lt;p&gt;When you're passing data to an AI, sometimes you need to include context for your code but exclude sensitive fields from the LLM call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let recommendation = think&amp;lt;Recommendation&amp;gt;("Suggest products based on user interests")
  with context: {
    profile,
    sensitiveData,
  }
  without context: sensitiveData
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;without context:&lt;/code&gt; clause strips fields before they reach the model. Privacy controls are part of the language, not an afterthought in a middleware layer.&lt;/p&gt;
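&lt;p&gt;Conceptually, the stripping step might look like this minimal sketch (function and variable names are assumptions, not the runtime's real API):&lt;/p&gt;

```typescript
// Hypothetical sketch of how `without context:` could drop excluded
// fields before the context object is serialized into the prompt.
function buildVisibleContext(context: object, excluded: string[]): object {
  const visible: any = {};
  for (const [key, value] of Object.entries(context)) {
    if (!excluded.includes(key)) visible[key] = value; // keep only allowed fields
  }
  return visible;
}

const ctx = buildVisibleContext(
  { profile: { interests: ["sci-fi"] }, sensitiveData: "ssn-123-45-6789" },
  ["sensitiveData"],
);
// `ctx` now holds only `profile`; the excluded field never reaches the model.
```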

&lt;h2&gt;
  
  
  Built-In Cost Tracking
&lt;/h2&gt;

&lt;p&gt;Every AI call in ThinkLang is automatically tracked. No instrumentation required:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;thinklang run analyze.tl &lt;span class="nt"&gt;--show-cost&lt;/span&gt;

&lt;span class="c"&gt;# Output includes:&lt;/span&gt;
&lt;span class="c"&gt;# Total cost: $0.0234&lt;/span&gt;
&lt;span class="c"&gt;# Breakdown: 3 think calls ($0.0180), 2 infer calls ($0.0054)&lt;/span&gt;
&lt;span class="c"&gt;# Model: claude-opus-4-6 | Tokens: 1,240 in / 890 out&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also run &lt;code&gt;thinklang cost-report&lt;/code&gt; for aggregate summaries across runs. Cost awareness is built into the development workflow, not discovered in a billing dashboard weeks later.&lt;/p&gt;
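&lt;p&gt;Under the hood, metering like this amounts to summing per-call token counts against per-token rates. A minimal sketch, with made-up placeholder rates rather than real pricing:&lt;/p&gt;

```typescript
// Hypothetical sketch of the per-call metering behind --show-cost.
// The per-token rates below are placeholders, NOT real Anthropic pricing.
const RATE_IN = 0.000005;  // dollars per input token (placeholder)
const RATE_OUT = 0.000015; // dollars per output token (placeholder)

interface CallRecord {
  op: string;        // "think" | "infer" | "reason"
  tokensIn: number;
  tokensOut: number;
}

function totalCost(calls: CallRecord[]): number {
  return calls.reduce(
    (sum, c) => sum + c.tokensIn * RATE_IN + c.tokensOut * RATE_OUT,
    0,
  );
}

// Two calls totalling 1,240 input and 890 output tokens.
const cost = totalCost([
  { op: "think", tokensIn: 800, tokensOut: 600 },
  { op: "infer", tokensIn: 440, tokensOut: 290 },
]);
```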

&lt;h2&gt;
  
  
  Testing AI Code
&lt;/h2&gt;

&lt;p&gt;Testing non-deterministic AI outputs is hard. ThinkLang's built-in testing framework addresses this with two key features:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Semantic assertions&lt;/strong&gt; -- test &lt;em&gt;meaning&lt;/em&gt;, not exact strings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;test "sentiment analysis" {
  let result = think&amp;lt;Sentiment&amp;gt;("Analyze sentiment")
    with context: "I love this product!"
  assert result.label == "positive"
  assert.semantic(result, "correctly identifies positive sentiment")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Deterministic replay&lt;/strong&gt; -- record AI responses once, replay them forever:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;thinklang &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;--update-snapshots&lt;/span&gt;   &lt;span class="c"&gt;# Record live responses&lt;/span&gt;
&lt;span class="nv"&gt;$ &lt;/span&gt;thinklang &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;--replay&lt;/span&gt;             &lt;span class="c"&gt;# Replay from snapshots (no API calls, no cost)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Snapshot replay means your CI pipeline can run AI tests without an API key, without network access, and without cost. Development iteration becomes fast and free.&lt;/p&gt;
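&lt;p&gt;The record/replay mechanic can be sketched as a response store keyed by the prompt and target type. The keying scheme here is an assumption, not ThinkLang's actual snapshot format:&lt;/p&gt;

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch of snapshot record/replay: responses are stored
// under a key derived from the target type and prompt, so replay mode
// can answer without any network call.
const snapshots = new Map();

function snapshotKey(typeName: string, prompt: string): string {
  return createHash("sha256").update(typeName + "\n" + prompt).digest("hex");
}

// Record mode (--update-snapshots): store the live response.
function record(typeName: string, prompt: string, response: string): void {
  snapshots.set(snapshotKey(typeName, prompt), response);
}

// Replay mode (--replay): serve from the snapshot or fail loudly.
function replay(typeName: string, prompt: string): string {
  const hit = snapshots.get(snapshotKey(typeName, prompt));
  if (hit === undefined) {
    throw new Error("No snapshot recorded; run with --update-snapshots");
  }
  return hit;
}
```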

&lt;h2&gt;
  
  
  The Tooling
&lt;/h2&gt;

&lt;p&gt;ThinkLang ships with a complete development environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CLI&lt;/strong&gt; with &lt;code&gt;run&lt;/code&gt;, &lt;code&gt;compile&lt;/code&gt;, &lt;code&gt;repl&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;, and &lt;code&gt;cost-report&lt;/code&gt; commands&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VS Code extension&lt;/strong&gt; with syntax highlighting, 11 code snippets, and a full LSP server providing diagnostics, hover information, completions, go-to-definition, and signature help&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Response caching&lt;/strong&gt; that automatically deduplicates identical AI calls at zero cost&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Module system&lt;/strong&gt; with imports for reusing types and functions across files&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How It Works Under the Hood
&lt;/h2&gt;

&lt;p&gt;ThinkLang follows a traditional compiler pipeline: &lt;strong&gt;parse -&amp;gt; resolve imports -&amp;gt; type check -&amp;gt; code generate&lt;/strong&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A PEG grammar parses &lt;code&gt;.tl&lt;/code&gt; files into an AST&lt;/li&gt;
&lt;li&gt;The module resolver handles imports (with circular dependency detection)&lt;/li&gt;
&lt;li&gt;The type checker validates scope and types&lt;/li&gt;
&lt;li&gt;The code generator emits TypeScript that imports from the ThinkLang runtime&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The runtime uses the Anthropic SDK with JSON schema mode for structured outputs, automatic retries, confidence extraction, and cost tracking. Types are compiled to JSON schemas that constrain the model's response format.&lt;/p&gt;
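&lt;p&gt;The four stages can be sketched structurally like this -- stand-in bodies only, with the real work noted in comments (only the stage order is taken from the description above):&lt;/p&gt;

```typescript
// Structural sketch of the pipeline: parse -> resolve imports ->
// type check -> code generate. The real ThinkLang compiler is far
// more involved; these bodies are placeholders.
interface Ast {
  statements: string[];
}

function parse(source: string): Ast {
  // Real: PEG grammar producing a full AST from a .tl file.
  return { statements: source.split("\n").filter((l) => l.trim() !== "") };
}

function resolveImports(ast: Ast): Ast {
  // Real: module resolution with circular-dependency detection.
  return ast;
}

function typeCheck(ast: Ast): Ast {
  // Real: scope and type validation over the AST.
  return ast;
}

function codegen(ast: Ast): string {
  // Real: emits TypeScript that imports from the ThinkLang runtime.
  return "// generated from " + ast.statements.length + " statement(s)";
}

function compile(source: string): string {
  return codegen(typeCheck(resolveImports(parse(source))));
}
```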

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;thinklang

&lt;span class="c"&gt;# Set your API key&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sk-ant-...

&lt;span class="c"&gt;# Run your first program&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'let greeting = think&amp;lt;string&amp;gt;("Say hello")'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; hello.tl
npx thinklang run hello.tl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;17 example programs in the &lt;code&gt;examples/&lt;/code&gt; directory cover every feature, from basic think calls to multi-step reasoning with guards and pattern matching.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Open Source
&lt;/h2&gt;

&lt;p&gt;ThinkLang is MIT licensed and open source because the idea of AI-native programming languages is bigger than one project. I want to see what happens when the community pushes on these concepts -- when people find use cases I haven't imagined, syntax improvements I haven't considered, and patterns that only emerge at scale.&lt;/p&gt;

&lt;p&gt;The language is at version 0.1.1. The grammar, runtime, and tooling are functional and tested (13 test suites), but there's plenty of room to grow: more model providers, richer type system features, optimization passes, and whatever else the community decides matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/eliashourany/ThinkLang" rel="noopener noreferrer"&gt;github.com/eliashourany/ThinkLang&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: &lt;a href="https://thinklang.dev" rel="noopener noreferrer"&gt;thinklang.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;VS Code Extension&lt;/strong&gt;: Search "ThinkLang" in the marketplace&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you've ever felt that calling an AI should be simpler -- that it should feel like part of the language, not something bolted on -- give ThinkLang a try. Star the repo, file an issue, or open a PR. I'd love to hear what you think.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;ThinkLang is built by &lt;a href="https://github.com/eliashourany" rel="noopener noreferrer"&gt;Elias Hourany&lt;/a&gt;. If you found this interesting, follow me for updates on the project.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>llm</category>
      <category>typescript</category>
    </item>
    <item>
      <title>I Built a Programming Language Where think Is a Keyword</title>
      <dc:creator>elias hourany</dc:creator>
      <pubDate>Sat, 07 Feb 2026 18:25:41 +0000</pubDate>
      <link>https://forem.com/elias_hourany_5735ea9eac2/i-built-a-programming-language-where-think-is-a-keyword-1p2i</link>
      <guid>https://forem.com/elias_hourany_5735ea9eac2/i-built-a-programming-language-where-think-is-a-keyword-1p2i</guid>
      <description>&lt;p&gt;We write if, for, and return every day without a second thought. These are the primitives of computation — the building blocks we use to tell machines what to do.&lt;/p&gt;

&lt;p&gt;But we're in a new era now. AI isn't a service you call occasionally — it's becoming a fundamental computing primitive. So why are we still using it like this?&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const response = await anthropic.messages.create({
  model: "claude-opus-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Analyze the sentiment of: " + text }],
});
const result = JSON.parse(response.content[0].text);
// hope it's the right shape...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;API boilerplate. Manual prompt construction. Untyped JSON parsing. Praying the response matches what you expected.&lt;/p&gt;

&lt;p&gt;What if instead, you could just write:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let sentiment = think("Analyze the sentiment of this review")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;That's why I built ThinkLang — an open-source, AI-native programming language where &lt;code&gt;think&lt;/code&gt; is a keyword.&lt;/p&gt;

&lt;h2&gt;What Is ThinkLang?&lt;/h2&gt;

&lt;p&gt;ThinkLang is a transpiler. You write &lt;code&gt;.tl&lt;/code&gt; files, and the compiler turns them into TypeScript that calls an LLM runtime. The language has its own parser (PEG grammar), type checker, code generator, LSP server, VS Code extension, testing framework, and CLI.&lt;/p&gt;

&lt;p&gt;The compilation pipeline: &lt;strong&gt;parse → resolve imports → type check → code generate → execute&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The core idea is simple: AI should be a language-level primitive, not a library call.&lt;/p&gt;

&lt;h2&gt;The Basics: &lt;code&gt;think&lt;/code&gt;, &lt;code&gt;infer&lt;/code&gt;, &lt;code&gt;reason&lt;/code&gt;&lt;/h2&gt;

&lt;p&gt;ThinkLang has three AI primitives built into the language:&lt;/p&gt;

&lt;h3&gt;&lt;code&gt;think(prompt)&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;The primary primitive. Give it a type and a prompt, and it returns structured, type-safe output.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type MovieReview {
  title: string
  rating: int
  pros: string[]
  cons: string[]
  verdict: string
}

let review = think&amp;lt;MovieReview&amp;gt;("Review the movie Inception")

print(review.title)    // type-safe access
print(review.rating)   // guaranteed to be an int
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The compiler turns your type declaration into a JSON schema that constrains the LLM's output. The AI cannot return anything that violates your type. No parsing. No validation. No hoping.&lt;/p&gt;

&lt;h3&gt;&lt;code&gt;infer(value)&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;Lightweight inference on existing values — when you already have data and want the AI to derive something from it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Category {
  label: string
  confidence: float
}

let data = "The patient presents with a persistent cough and fever"
let category = infer&amp;lt;Category&amp;gt;(data)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;&lt;code&gt;reason {}&lt;/code&gt;&lt;/h3&gt;

&lt;p&gt;Multi-step reasoning with explicit goals and steps:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Analysis {
  findings: string[]
  recommendation: string
  risk_level: string
}

let analysis = reason&amp;lt;Analysis&amp;gt; {
  goal: "Evaluate this investment opportunity"
  steps:
    1. "Analyze the financial fundamentals"
    2. "Assess market conditions and competition"
    3. "Evaluate risk factors"
    4. "Form a final recommendation"
  with context: { portfolio, market_data }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This compiles into a structured chain-of-thought prompt. The AI follows your steps explicitly rather than reasoning however it wants.&lt;/p&gt;

&lt;h2&gt;Confidence as a Language Concept&lt;/h2&gt;

&lt;p&gt;Here's something most AI wrappers get wrong: they treat every AI response as equally certain. But AI outputs have varying levels of confidence, and your code should reflect that.&lt;/p&gt;

&lt;p&gt;ThinkLang has &lt;code&gt;Confident&amp;lt;T&amp;gt;&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Diagnosis {
  condition: string
  severity: string
}

let result = think&amp;lt;Confident&amp;lt;Diagnosis&amp;gt;&amp;gt;("Diagnose based on these symptoms")

print(result.value)       // the actual diagnosis
print(result.confidence)  // 0.0 to 1.0
print(result.reasoning)   // why the AI is this confident
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;And the &lt;code&gt;uncertain&lt;/code&gt; modifier forces you to handle uncertainty explicitly:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;uncertain let diagnosis = think&amp;lt;Confident&amp;lt;Diagnosis&amp;gt;&amp;gt;("Diagnose this")

// This won't compile:
// print(diagnosis.value)

// You must explicitly unwrap:
let safe = diagnosis.expect(0.8)         // throws if confidence &amp;lt; 0.8
let fallback = diagnosis.or(default_val) // use fallback if low confidence
let raw = diagnosis.unwrap()             // explicit "I accept the risk"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The compiler enforces this. You can't silently ignore uncertainty — you have to make a conscious decision about how to handle it.&lt;/p&gt;

&lt;h2&gt;Output Guards&lt;/h2&gt;

&lt;p&gt;What if the AI returns something that's technically the right type but semantically wrong? A summary that's too long. A response that contains placeholder text. A rating that's out of range.&lt;/p&gt;

&lt;p&gt;Guards are declarative validation rules with automatic retry:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let summary = think&amp;lt;string&amp;gt;("Summarize this article")
  guard {
    length: 50..200
    contains_none: ["TODO", "placeholder", "as an AI"]
  }
  on_fail: retry(3) then fallback("Could not generate summary")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the output fails validation, ThinkLang automatically retries up to 3 times. If all retries fail, it falls back to your default. No manual retry loops. No callback hell.&lt;/p&gt;

&lt;p&gt;You can also guard numeric fields:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let review = think("Review this product")
  guard {
    rating: 1..5
    length: 100..500
  }
  on_fail: retry(2)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;Pattern Matching on AI Outputs&lt;/h2&gt;

&lt;p&gt;ThinkLang has structural pattern matching that works beautifully with AI-generated data:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Sentiment {
  label: string
  intensity: int
}

let sentiment = think&amp;lt;Sentiment&amp;gt;("Analyze: 'This is the best day ever!'")

match sentiment {
  { label: "positive", intensity: &amp;gt;= 8 } =&amp;gt; print("Extremely positive!")
  { label: "positive" } =&amp;gt; print("Positive")
  { label: "negative", intensity: &amp;gt;= 8 } =&amp;gt; print("Extremely negative")
  { label: "negative" } =&amp;gt; print("Negative")
  _ =&amp;gt; print("Neutral")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Pattern matching + typed AI outputs = clean, readable branching on AI results.&lt;/p&gt;

&lt;h2&gt;Context Management&lt;/h2&gt;

&lt;p&gt;Control exactly what data the AI sees:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let user_profile = { name: "Alice", preferences: ["sci-fi", "thriller"] }
let secret_key = "sk-abc123"

let recommendation = think("Recommend a movie for this user")
  with context: user_profile
  without context: secret_key
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;code&gt;with context&lt;/code&gt; scopes data into the prompt. &lt;code&gt;without context&lt;/code&gt; explicitly excludes sensitive data. No accidental leaking of API keys into prompts.&lt;/p&gt;

&lt;h2&gt;Pipeline Operator&lt;/h2&gt;

&lt;p&gt;Chain AI operations with &lt;code&gt;|&amp;gt;&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;let result = "raw text input"&lt;br&gt;
    |&amp;gt; think("Summarize this")&lt;br&gt;
    |&amp;gt; think("Translate to French")&lt;br&gt;
    |&amp;gt; think("Rate the translation quality")&lt;/p&gt;

&lt;p&gt;Readable, composable, functional-style AI pipelines.&lt;/p&gt;
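&lt;p&gt;Conceptually, a &lt;code&gt;|&amp;gt;&lt;/code&gt; chain is just left-to-right function composition. A TypeScript sketch with stand-in stages (the tagging functions below are placeholders for real model calls):&lt;/p&gt;

```typescript
// Hypothetical sketch of how a pipeline chain could desugar: each
// stage receives the previous stage's result as its input.
type Stage = (input: string) => string;

function runPipeline(input: string, stages: Stage[]): string {
  return stages.reduce((acc, stage) => stage(acc), input);
}

// Stand-in stages that only tag their input; a real runtime would
// invoke think() for each step.
const result = runPipeline("raw text input", [
  (s) => "summary(" + s + ")",  // think("Summarize this")
  (s) => "french(" + s + ")",   // think("Translate to French")
  (s) => "rating(" + s + ")",   // think("Rate the translation quality")
]);
```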

&lt;h2&gt;Built-in Testing Framework&lt;/h2&gt;

&lt;p&gt;This is one of my favorite features. ThinkLang has a built-in test framework that understands AI:&lt;/p&gt;

&lt;p&gt;test "sentiment analysis" {&lt;br&gt;
    let result = think("Analyze: 'I love this product'")&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;assert result.label == "positive"
assert result.intensity &amp;gt; 5
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;/p&gt;

&lt;p&gt;test "summary quality" {&lt;br&gt;
    let summary = think("Summarize the theory of relativity")&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;assert.semantic(summary, "explains relationship between space and time")
assert.semantic(summary, "mentions Einstein")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;}&lt;/p&gt;

&lt;p&gt;&lt;code&gt;assert.semantic()&lt;/code&gt; is an AI-powered assertion. It uses the LLM to evaluate whether the output meets qualitative criteria. No brittle string matching.&lt;/p&gt;
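&lt;p&gt;A sketch of the idea behind an LLM-judged assertion -- the keyword-matching &lt;code&gt;stubJudge&lt;/code&gt; below is a placeholder so the sketch runs without an API key; a real judge would call the model:&lt;/p&gt;

```typescript
// Hypothetical sketch of a semantic assertion: a judge decides
// whether the output satisfies a qualitative criterion.
type Judge = (output: string, criterion: string) => boolean;

function assertSemantic(output: string, criterion: string, judge: Judge): void {
  if (!judge(output, criterion)) {
    throw new Error('Semantic assertion failed: "' + criterion + '"');
  }
}

// Stub judge: passes if any word of the criterion appears in the
// output. A real implementation would ask the LLM for a verdict.
const stubJudge: Judge = (output, criterion) =>
  criterion
    .toLowerCase()
    .split(" ")
    .some((word) => output.toLowerCase().includes(word));

assertSemantic(
  "Einstein showed that space and time form one fabric",
  "mentions Einstein",
  stubJudge,
);
```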

&lt;h2&gt;Snapshot Replay&lt;/h2&gt;

&lt;p&gt;AI tests are non-deterministic and cost money. ThinkLang solves both problems:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Record AI responses to snapshot files
thinklang test --update-snapshots

# Replay from snapshots — zero API calls, deterministic results
thinklang test --replay
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Record once, replay forever. Your CI pipeline runs deterministic AI tests without an API key.&lt;/p&gt;

&lt;h2&gt;Cost Tracking&lt;/h2&gt;

&lt;p&gt;Every AI call is automatically metered:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;thinklang run my-program.tl --show-cost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cost Summary:
  Total calls: 5
  Input tokens: 2,847
  Output tokens: 1,203
  Estimated cost: $0.0234

By operation:
  think: 3 calls, $0.0156
  reason: 1 call, $0.0062
  assert.semantic: 1 call, $0.0016
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can also run &lt;code&gt;thinklang cost-report&lt;/code&gt; to see aggregated costs across runs. No surprises on your API bill.&lt;/p&gt;

&lt;h2&gt;Full Developer Ecosystem&lt;/h2&gt;

&lt;p&gt;ThinkLang isn't just a language — it's a complete toolchain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CLI with &lt;code&gt;run&lt;/code&gt;, &lt;code&gt;compile&lt;/code&gt;, &lt;code&gt;repl&lt;/code&gt;, &lt;code&gt;test&lt;/code&gt;, and &lt;code&gt;cost-report&lt;/code&gt; commands&lt;/li&gt;
&lt;li&gt;VS Code extension with syntax highlighting and 11 code snippets&lt;/li&gt;
&lt;li&gt;Language Server (LSP) providing real-time diagnostics, hover tooltips, code completion, go-to-definition, document symbols, and signature help&lt;/li&gt;
&lt;li&gt;Module system with import/export across &lt;code&gt;.tl&lt;/code&gt; files&lt;/li&gt;
&lt;li&gt;Response caching — identical prompts skip the API call automatically&lt;/li&gt;
&lt;li&gt;17 example programs covering every feature&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;How I Built It&lt;/h2&gt;

&lt;p&gt;I built ThinkLang as a solo developer, and I have to be honest — Claude Code was an incredible partner throughout the process. It helped me move fast, iterate on the parser grammar, debug the type checker, and ship a project of this scope. Building an entire language ecosystem (parser, checker, compiler, runtime, LSP, testing framework, VS Code extension, docs) solo would have taken significantly longer without it. AI-assisted development is real, and this project is proof of it.&lt;/p&gt;

&lt;p&gt;The technical stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;PEG grammar (Peggy) for parsing&lt;/li&gt;
&lt;li&gt;TypeScript for the compiler, runtime, and tooling&lt;/li&gt;
&lt;li&gt;Zod for runtime validation&lt;/li&gt;
&lt;li&gt;Anthropic SDK for the AI runtime&lt;/li&gt;
&lt;li&gt;vscode-languageserver for the LSP&lt;/li&gt;
&lt;li&gt;VitePress for the documentation site&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;ThinkLang is at v0.1.1 and I have big plans:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model-Agnostic Support&lt;/strong&gt; — Right now ThinkLang uses Anthropic's Claude. The next milestone is supporting any provider: OpenAI, Gemini, Mistral, local models via Ollama, or any OpenAI-compatible endpoint. Same language, any brain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic-Native Coding&lt;/strong&gt; — First-class language primitives for building AI agents. Think tool use, planning loops, and multi-agent coordination as language keywords, not library patterns.&lt;/p&gt;

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm install -g thinklang
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Create &lt;code&gt;hello.tl&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type Greeting {
  message: string
  emoji: string
}

let greeting = think&amp;lt;Greeting&amp;gt;("Say hello to a developer trying ThinkLang for the first time")
print(greeting.message)
print(greeting.emoji)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;thinklang run hello.tl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;Links&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/eliashourany/ThinkLang" rel="noopener noreferrer"&gt;https://github.com/eliashourany/ThinkLang&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docs: &lt;a href="https://thinklang.dev" rel="noopener noreferrer"&gt;https://thinklang.dev&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;npm: &lt;a href="https://www.npmjs.com/package/thinklang" rel="noopener noreferrer"&gt;https://www.npmjs.com/package/thinklang&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The project is MIT licensed. I'd love stars, feedback, issues, or contributions. And if you're thinking about what AI-native development looks like — let's talk.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>opensource</category>
      <category>typescript</category>
    </item>
  </channel>
</rss>
