<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Paton Wong</title>
    <description>The latest articles on Forem by Paton Wong (@patonw).</description>
    <link>https://forem.com/patonw</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3864318%2F068dcd88-97a9-456a-9716-5df582e834c6.png</url>
      <title>Forem: Paton Wong</title>
      <link>https://forem.com/patonw</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/patonw"/>
    <language>en</language>
    <item>
      <title>Structured Generation: teaching AI agents to color inside the lines</title>
      <dc:creator>Paton Wong</dc:creator>
      <pubDate>Wed, 08 Apr 2026 16:23:08 +0000</pubDate>
      <link>https://forem.com/patonw/structured-generation-taming-ai-agents-with-aerie-workflows-gf6</link>
      <guid>https://forem.com/patonw/structured-generation-taming-ai-agents-with-aerie-workflows-gf6</guid>
      <description>&lt;p&gt;In the previous article, we explored generating free-form text in a workflow, as well as dividing responsibility for different parts of a task among agents. This time, let's look into generating machine-readable structured data.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
 Skip to the action if you're already familiar with structured data and schemas.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Motivation
&lt;/h2&gt;

&lt;p&gt;Why would we want data to be structured? First, it is easier to filter, transform and combine documents with automated tools when we know ahead of time the shape of responses and what properties they can contain.&lt;/p&gt;

&lt;p&gt;For instance, if we had to sort and organize thousands of profiles in unstructured text:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"John was born twenty five years ago and programs Python"&lt;/p&gt;

&lt;p&gt;"Alice is a cryptography expert born in 1998"&lt;/p&gt;

&lt;p&gt;etc.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;With traditional text-based tools, there is an effectively unbounded number of permutations, phrasings, exceptions and edge cases to consider. Instead, by using a language model to transform the text into structured data, we can use simple operations to fill in missing data and categorize each entry:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"John"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"age"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"occupation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"software engineer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"skills"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"python"&lt;/span&gt;&lt;span class="p"&gt;]},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Alice"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"dob"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1998"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"occupation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"cryptographer"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Second, many external applications and services require structured inputs. If our agents can construct structured data, they will be able to interact with these systems, bridging natural language and programmatic logic. In the parlance of AI agents, these external interfaces are referred to as "tools" or "functions".&lt;/p&gt;

&lt;p&gt;Generative language models are exceptionally good at translating between unstructured and structured data. Even many small models can extract structured data from paragraphs reliably. Medium-sized models with long context windows can often handle larger documents while following specific instructions about what to find.&lt;/p&gt;
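&lt;p&gt;Once the data is structured, ordinary code can finish the job. As a sketch (the record shapes and values are hypothetical, echoing the example above), Python can fill in the missing ages and index the profiles:&lt;/p&gt;

```python
import json
from datetime import date

# Hypothetical records, as a model might emit them (shapes vary per source).
records = json.loads("""
[
  {"name": "John", "age": 25, "occupation": "software engineer", "skills": ["python"]},
  {"name": "Alice", "dob": "1998", "occupation": "cryptographer"}
]
""")

def normalize(rec):
    # Fill in a missing age from a birth year, and default skills to [].
    if "age" not in rec and "dob" in rec:
        rec["age"] = date.today().year - int(rec["dob"])
    rec.setdefault("skills", [])
    return rec

profiles = [normalize(r) for r in records]
by_name = {p["name"]: p for p in profiles}
```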

&lt;h2&gt;
  
  
  JSON Schema
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/JSON" rel="noopener noreferrer"&gt;JSON&lt;/a&gt; (JavaScript Object Notation) is the de-facto standard for structured data across modern services and applications. Not only can programs easily parse JSON, but since it is a self-describing format, even an untrained human user can glean meaning from a JSON document without needing a deep understanding of its syntax. Most modern LLMs can generate JSON reliably when creating examples for a user or for invoking remote tools.&lt;/p&gt;

&lt;p&gt;To instruct language models on the specific structure desired, we can use &lt;a href="https://json-schema.org/overview/what-is-jsonschema" rel="noopener noreferrer"&gt;JSON Schema&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Schemas are themselves written as JSON documents. They dictate which fields are required in the target documents, along with type restrictions and more, providing a way to describe a JSON document with strict precision, maximum flexibility, or anywhere in between.&lt;/p&gt;
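&lt;p&gt;As a small illustration (the field names are chosen for this tutorial, not copied from any official example), a schema requiring a username and e-mail might look like:&lt;/p&gt;

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "User Profile",
  "type": "object",
  "required": ["username", "email"],
  "properties": {
    "username": { "type": "string" },
    "email": { "type": "string", "format": "email" },
    "age": { "type": "integer", "minimum": 0 },
    "interests": { "type": "array", "items": { "type": "string" } }
  }
}
```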

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
 While you can write schemas from scratch, it may be quicker to use either a language model or a specialized schema editing tool (e.g. &lt;a href="https://json.ophir.dev" rel="noopener noreferrer"&gt;JSONJoy&lt;/a&gt;) to generate one. By leveraging LLMs you don't need to know the rules for building schemas.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For instance, you can describe the desired structure to a language model, providing examples and counter-examples, and it will generate a schema: "Generate a JSON schema for a user containing a name, login, department and an optional role."&lt;/p&gt;

&lt;p&gt;For this tutorial, however, we'll use one of the canonical examples: &lt;a href="https://json-schema.org/learn/json-schema-examples#user-profile" rel="noopener noreferrer"&gt;User Profile&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating Data
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frekgo9kwl4ubkbt4sysu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frekgo9kwl4ubkbt4sysu.png" alt="rename workflow" width="169" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start by creating a new workflow from the command palette.&lt;/p&gt;

&lt;p&gt;Use the rename button to replace the automatic name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3gg2a5r5rcj5xsclhoq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3gg2a5r5rcj5xsclhoq.png" alt="schema contents" width="649" height="499"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Remove the Chat node using either the node context menu or Delete key.&lt;/p&gt;

&lt;p&gt;Replace it with a &lt;em&gt;LLM › Structured&lt;/em&gt; node. Conversation history is not needed this time, but make sure to connect the Agent.&lt;/p&gt;

&lt;p&gt;Use a &lt;em&gt;JSON › Parse JSON&lt;/em&gt; node to provide the schema to the Structured node. Copy the schema contents from &lt;a href="https://json-schema.org/learn/json-schema-examples#user-profile" rel="noopener noreferrer"&gt;User Profile&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This will force the &lt;em&gt;Structured&lt;/em&gt; node to generate data in the specified format. If the model fails to produce JSON or does not follow the schema, we can set the node to retry a number of times.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e7qxc7lem2eb3cnqfe3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7e7qxc7lem2eb3cnqfe3.png" alt="generated" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the prompt to describe a user or character, and tell the model to follow the schema.&lt;/p&gt;

&lt;p&gt;Attach a &lt;em&gt;Preview&lt;/em&gt; node to the &lt;code&gt;data&lt;/code&gt; pin of the &lt;em&gt;Structured&lt;/em&gt; node.&lt;/p&gt;

&lt;p&gt;Depending on the model and temperature this may work the first time or it may fail.&lt;/p&gt;

&lt;p&gt;You can try switching models, adjusting the temperature, or experimenting with the &lt;code&gt;retry&lt;/code&gt; and &lt;code&gt;extract&lt;/code&gt; options.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
 The &lt;code&gt;retry&lt;/code&gt; and &lt;code&gt;extract&lt;/code&gt; options on the &lt;em&gt;Structured&lt;/em&gt; node also provide mechanisms for coping with different failure modes of weaker models. Often when retrying the model will understand its mistake and correct it. Other times, the model will get stuck explaining or apologizing while also producing correct structured data. For the latter case, the &lt;code&gt;extract&lt;/code&gt; option will attempt to find structured data embedded within the response.&lt;/p&gt;

&lt;p&gt;Together, they can prevent most common errors. Sometimes, however, you will still want to handle failure recovery within the workflow. Refer to the documentation for details.&lt;/p&gt;
&lt;/blockquote&gt;
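&lt;p&gt;To make the mechanism concrete, here is a toy retry-and-extract loop in Python. This is only an illustration of the idea, not Aerie's implementation; the stub model and key names are invented:&lt;/p&gt;

```python
import json
import re

def extract_json(text):
    # "extract"-style fallback: pull the first {...} block out of a chatty reply.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    return json.loads(match.group(0)) if match else None

def generate_structured(model, required_keys, retries=3):
    # "retry"-style loop: ask again until the reply parses and has the keys.
    for _ in range(retries):
        data = extract_json(model())
        if data is not None and required_keys.issubset(data):
            return data
    raise ValueError("no valid structured output after retries")

# Stub model: apologizes first, then embeds valid JSON inside prose.
replies = iter([
    "Sorry, let me try that again.",
    'Sure! Here is the profile: {"username": "jdoe", "email": "jdoe@example.com"}',
])
profile = generate_structured(lambda: next(replies), {"username", "email"})
```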

&lt;h2&gt;
  
  
  Templating
&lt;/h2&gt;

&lt;p&gt;Now that we have a JSON document with a known structure, there are many things we can do with it. Some examples are request routing, database updates, and content filtering. However, for this tutorial, we will only use it to generate unstructured text via a template. At a larger scale, this pattern could also be used to generate reports from longer documents or collections of items.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
 This pattern of generating structured data and then formatting it immediately is not strictly necessary: LLMs can mostly follow formatting instructions directly, though they often surround replies with unwanted verbiage. Here, the formatting step is just a stand-in for more useful transformations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9604g6ikpttzfvvuj8ja.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9604g6ikpttzfvvuj8ja.png" alt="templating" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add a &lt;em&gt;Value › Template&lt;/em&gt; node to the workflow.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
 The &lt;em&gt;Template&lt;/em&gt; node uses &lt;a href="https://docs.rs/minijinja/latest/minijinja/syntax/index.html" rel="noopener noreferrer"&gt;jinja-like syntax&lt;/a&gt; which supports conditionals, filters, iteration and more.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The node takes a template string which may contain variables. On execution, the node substitutes the variables with concrete values provided by a JSON object via the &lt;code&gt;variables&lt;/code&gt; input. Variables can be simple strings, arrays or dictionaries.&lt;/p&gt;
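&lt;p&gt;For instance, the &lt;em&gt;Structured&lt;/em&gt; node might supply a variables object like this (values are hypothetical; the fields follow the User Profile schema):&lt;/p&gt;

```json
{
  "username": "jdoe",
  "email": "jdoe@example.com",
  "interests": ["chess", "cryptography"]
}
```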

&lt;p&gt;Attach the &lt;code&gt;variables&lt;/code&gt; input to the &lt;code&gt;data&lt;/code&gt; output of the &lt;em&gt;Structured&lt;/em&gt; node and use this template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jinja"&gt;&lt;code&gt;&lt;span class="c"&gt;## Profile ##&lt;/span&gt;

name: &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;username&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;
e-mail: &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;email&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;
Interests:
  &lt;span class="cp"&gt;{%&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nv"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;interests&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="cp"&gt;%}&lt;/span&gt;
    - &lt;span class="cp"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;item&lt;/span&gt; &lt;span class="cp"&gt;}}&lt;/span&gt;
  &lt;span class="cp"&gt;{%&lt;/span&gt; &lt;span class="k"&gt;endfor&lt;/span&gt; &lt;span class="cp"&gt;%}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
 If the provided context is not a key-value map (e.g. a text value, message, etc.) it will be exposed to the template as the variable &lt;code&gt;value&lt;/code&gt;. This is handy when wrapping a simple value or a list-valued input, without resorting to a &lt;em&gt;Transform JSON&lt;/em&gt; node to wrap the item in a JSON object.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In addition to generating data directly, we can use the &lt;em&gt;Structured&lt;/em&gt; node to extract structured data from existing text as we'll see in upcoming articles.&lt;/p&gt;

&lt;p&gt;Beyond simple transformations and templating, we could also use structured data to control the flow of execution with conditional branching, iteration or workflow routing which will be covered later.&lt;/p&gt;

&lt;p&gt;Before delving into that, however, we will first cover how to work with external tools to create proper AI agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: Transformations
&lt;/h2&gt;

&lt;p&gt;As mentioned in the main article, structured data can be merged and transformed into new structures.&lt;/p&gt;

&lt;p&gt;Examples of things you could do include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;exclude or combine fields&lt;/li&gt;
&lt;li&gt;merge multiple objects&lt;/li&gt;
&lt;li&gt;group elements of a list by field values&lt;/li&gt;
&lt;li&gt;exclude list entries based on value&lt;/li&gt;
&lt;li&gt;remove duplicate entries from a list&lt;/li&gt;
&lt;li&gt;convert a list of entries into a lookup table by name&lt;/li&gt;
&lt;/ul&gt;
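&lt;p&gt;A few of the operations above, sketched in plain Python on hypothetical records:&lt;/p&gt;

```python
# Hypothetical records; the field names are only for illustration.
users = [
    {"username": "jdoe", "dept": "eng"},
    {"username": "asmith", "dept": "crypto"},
    {"username": "jdoe", "dept": "eng"},  # duplicate entry
]

# Remove duplicate entries from a list (dicts aren't hashable, so use tuples).
unique = [dict(t) for t in {tuple(sorted(u.items())) for u in users}]

# Group elements of a list by a field value.
by_dept = {}
for u in unique:
    by_dept.setdefault(u["dept"], []).append(u)

# Convert a list of entries into a lookup table by name.
lookup = {u["username"]: u for u in unique}
```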

&lt;p&gt;One popular utility for doing this is the command-line utility &lt;a href="https://jqlang.org/manual/" rel="noopener noreferrer"&gt;jq&lt;/a&gt;. The &lt;em&gt;JSON&lt;/em&gt; sub-menu contains nodes that can be used together to provide analogous functionality.&lt;/p&gt;

&lt;p&gt;For instance, to replicate how &lt;em&gt;Template&lt;/em&gt; automatically wraps single values, you can use &lt;em&gt;JSON › Transform JSON&lt;/em&gt; with a simple filter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ value: . }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vakh7purjic06f8g1pu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vakh7purjic06f8g1pu.png" alt="transform" width="800" height="539"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also combine data from multiple branches of the workflow using &lt;em&gt;JSON › Gather JSON&lt;/em&gt;. This node takes multiple inputs and combines them into a single JSON array. The inputs can be existing JSON values, texts, numbers or more. By itself, a heterogeneous list of assorted data can be useful, but confusing to debug. Instead we will transform it into an object with descriptive keys.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
 The &lt;em&gt;JSON › Transform JSON&lt;/em&gt; node uses an optimized implementation called &lt;a href="https://gedenkt.at/jaq/manual/" rel="noopener noreferrer"&gt;jaq&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;code&gt;jq&lt;/code&gt; syntax can be difficult to comprehend at first. Fortunately, many LLMs are capable of generating filters from a prompt and/or examples.&lt;/p&gt;

&lt;p&gt;With the prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Write a jq filter that takes a list of user entries and creates an object keyed by the username field, removing the username field in the process.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Some models might produce this filter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;reduce .[] as $u ({}; .[$u.username] = ($u | del(.username)))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While others might produce:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[ .[] 
  | { key: ( .username ), value: ( . | del(.username) ) } 
]  | from_entries
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
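&lt;p&gt;Both filters aim at the same result. For comparison, here is the equivalent transformation in plain Python (sample data hypothetical):&lt;/p&gt;

```python
# Key a list of user entries by username, dropping the username field.
users = [
    {"username": "jdoe", "email": "jdoe@example.com"},
    {"username": "asmith", "email": "asmith@example.com"},
]

keyed = {
    u["username"]: {k: v for k, v in u.items() if k != "username"}
    for u in users
}
```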



&lt;p&gt;Depending on the complexity of the ask, you may need to iterate with the LLM to fix any problems encountered.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>opensource</category>
      <category>automation</category>
    </item>
    <item>
      <title>Agentic workflows with Aerie</title>
      <dc:creator>Paton Wong</dc:creator>
      <pubDate>Tue, 07 Apr 2026 08:06:39 +0000</pubDate>
      <link>https://forem.com/patonw/agentic-workflows-with-aerie-1724</link>
      <guid>https://forem.com/patonw/agentic-workflows-with-aerie-1724</guid>
      <description>&lt;h2&gt;
  
  
  Introducing Aerie
&lt;/h2&gt;

&lt;p&gt;This is an introduction to a new open-source tool for creating and running AI-powered workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why use workflows?
&lt;/h3&gt;

&lt;p&gt;You hire a brilliant intern and give them unrestricted access to your company's systems with high-level instructions to complete a complex task. The first time, they do a reasonably good job without breaking anything important. Should it be a surprise when a minor misunderstanding of the next task cascades into complete disaster? Yet this is commonly how we manage AI agents. For all their impressive capabilities, language models do not learn from experience, no matter how carefully you "engineer" a prompt &lt;sup id="fnref1"&gt;1&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Software agents are systems that make decisions and operate independently from&lt;br&gt;
human supervision on behalf of users. AI agents replace deterministic program&lt;br&gt;
logic with language models. These models, however, are inherently&lt;br&gt;
probabilistic. The more autonomy we give them, the greater the opportunity for&lt;br&gt;
surprises.&lt;/p&gt;

&lt;p&gt;No matter how well-tuned prompts are during development, there are&lt;br&gt;
uncountably many ways for things to go wrong in the wild. The more detailed you&lt;br&gt;
make the prompt to account for pitfalls, the less attention the model can pay&lt;br&gt;
to the core task. Furthermore, failure-retry loops can balloon the context,&lt;br&gt;
confusing the model even further.&lt;/p&gt;

&lt;p&gt;AI powered workflows provide a more reliable alternative to purely&lt;br&gt;
agent-driven systems. A workflow breaks a task down into discrete, well-defined&lt;br&gt;
steps. AI plays a specific but limited role in some of those steps, allowing it&lt;br&gt;
to concentrate on what it excels at without extraneous distractions.&lt;/p&gt;
&lt;h3&gt;
  
  
  What is Aerie?
&lt;/h3&gt;

&lt;p&gt;Aerie&lt;sup id="fnref2"&gt;2&lt;/sup&gt; is a graphical tool for building agentic workflows. Programming expertise&lt;br&gt;
is helpful, but not necessary. In this instance, "graphical" is an overloaded term:&lt;br&gt;
aside from the user interface of the visual editor, workflows are structured as&lt;br&gt;
node graphs. Each node represents an agent, a data transformation, a decision, etc.&lt;br&gt;
Outputs of a node can be connected to inputs of other nodes.&lt;br&gt;
Data flows predictably from one node to the next.&lt;/p&gt;
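&lt;p&gt;The dataflow idea can be sketched in a few lines of Python. This is a toy evaluator, not Aerie's implementation; the node and pin names here are invented:&lt;/p&gt;

```python
# Toy dataflow evaluation over a node graph (illustrative only).
def run_graph(nodes, wires, order, inputs):
    # nodes: name -> function taking a dict of input pins, returning output pins
    # wires: (src_node, src_pin, dst_node, dst_pin) tuples
    # order: topological order of node names; inputs: pins for the first node
    values = {order[0]: nodes[order[0]](inputs)}
    for name in order[1:]:
        pins = {dp: values[sn][sp] for sn, sp, dn, dp in wires if dn == name}
        values[name] = nodes[name](pins)
    return values[order[-1]]

# A two-step pipeline: uppercase the text, then wrap it in a sentence.
nodes = {
    "start": lambda p: {"text": p["text"].upper()},
    "finish": lambda p: {"out": f"Result: {p['text']}"},
}
wires = [("start", "text", "finish", "text")]
result = run_graph(nodes, wires, ["start", "finish"], {"text": "hi"})
```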

&lt;p&gt;With this visual approach it's easier to build, debug, explain and iterate on&lt;br&gt;
workflows -- making Aerie well-suited to prototyping and collaboration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mazkcpuc7310vluq69q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mazkcpuc7310vluq69q.png" alt="graph legend" width="800" height="625"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;Aerie can be run from &lt;a href="https://github.com/patonw/aerie" rel="noopener noreferrer"&gt;source&lt;/a&gt; or a binary AppImage available on the &lt;a href="https://github.com/patonw/aerie/releases" rel="noopener noreferrer"&gt;releases page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The AppImage can be run directly under Linux without installation. However, you&lt;br&gt;
will usually need to set the correct permissions after downloading the file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +rx aerie-x86_64.AppImage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It can also be run under Windows with &lt;a href="https://learn.microsoft.com/en-us/windows/wsl/tutorials/gui-apps" rel="noopener noreferrer"&gt;WSL&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Building and running from source is recommended, however. The development stack&lt;br&gt;
provides a uniform and predictable environment for the application. On the&lt;br&gt;
other hand, it requires far more disk space and time for the initial start. For&lt;br&gt;
instructions on building the source, see the &lt;a href="https://patonw.github.io/aerie/dev_start.html" rel="noopener noreferrer"&gt;Development Guide&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;You can also install from source using the &lt;a href="https://nixos.org/download/" rel="noopener noreferrer"&gt;nix tool&lt;/a&gt;: &lt;a href="https://patonw.github.io/aerie/user_start.html#installation" rel="noopener noreferrer"&gt;Installation&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting started
&lt;/h2&gt;

&lt;p&gt;To get a small taste of the potential of this approach, we will start by&lt;br&gt;
building a trivial workflow. It will pass a user's prompt and conversation&lt;br&gt;
history to an LLM and then rewrite its response as a haiku. Almost every modern&lt;br&gt;
language model can handle this in a single step, but we'll use two agents for&lt;br&gt;
didactic purposes.&lt;/p&gt;

&lt;p&gt;In later articles we'll explore topics like data extraction, tool use and&lt;br&gt;
iteration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqrkge6fc99oyxyg0l9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqrkge6fc99oyxyg0l9r.png" alt="create button" width="265" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the Create button on the command palette to create a new workflow with&lt;br&gt;
default nodes. Rather than an empty document, it will contain a basic chat&lt;br&gt;
agent which you can choose to integrate into your workflow or discard. We'll do&lt;br&gt;
the former this time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m849dvf9lpfy1gp936y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0m849dvf9lpfy1gp936y.png" alt="finish node" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, disconnect the &lt;code&gt;conversation&lt;/code&gt; pin on the Finish node. You can&lt;br&gt;
do this by right-clicking on the wire itself or the pin on either source or&lt;br&gt;
destination node.&lt;/p&gt;

&lt;p&gt;Normally, this pin would send the workflow's completed conversation to the&lt;br&gt;
chat session, viewable in the Chat tab. For now, though, we'll&lt;br&gt;
be working only in the workflow editor.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
Pins on the right side of a node are output pins while pins on the left side&lt;br&gt;
are inputs. Information flows in only one direction along a wire from the&lt;br&gt;
output pin of a source node to an input pin of the destination node.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Normal Agent
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g7nlx6exnexbx6vu2vc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g7nlx6exnexbx6vu2vc.png" alt="agent wires" width="764" height="700"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we'll use the existing agent to generate a normal response.&lt;br&gt;
Disconnect the &lt;code&gt;temperature&lt;/code&gt; and &lt;code&gt;input&lt;/code&gt; wires between the &lt;em&gt;Start&lt;/em&gt; and &lt;em&gt;Agent&lt;/em&gt; nodes.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Start&lt;/em&gt; node is the entry point into the workflow, gathering settings and&lt;br&gt;
inputs from the execution environment and exposing them to the other nodes in&lt;br&gt;
the workflow. These values are only available from the &lt;em&gt;Start&lt;/em&gt; node.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F911fflvdse09ivdhsxu3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F911fflvdse09ivdhsxu3.png" alt="agent settings" width="438" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Agent&lt;/em&gt; nodes define parameters for invoking LLMs via Chat and Structured&lt;br&gt;
nodes. An &lt;em&gt;Agent&lt;/em&gt; node does not generate content by itself. Rather, it holds&lt;br&gt;
the settings that distinguish one agent from another, and it can be reused by&lt;br&gt;
content-generating nodes in different stages of the workflow.&lt;/p&gt;

&lt;p&gt;Set the LLM model using the format &lt;code&gt;{provider}/{model}&lt;/code&gt;. Examples:&lt;br&gt;
&lt;code&gt;ollama/devstral:latest&lt;/code&gt; or &lt;a href="https://openrouter.ai/openrouter/free" rel="noopener noreferrer"&gt;&lt;code&gt;openrouter/openrouter/free&lt;/code&gt;&lt;/a&gt;. Most providers will have a list or database of models they provide (e.g. &lt;a href="https://openrouter.ai/models" rel="noopener noreferrer"&gt;https://openrouter.ai/models&lt;/a&gt; &amp;amp; &lt;a href="https://docs.mistral.ai/getting-started/models" rel="noopener noreferrer"&gt;https://docs.mistral.ai/getting-started/models&lt;/a&gt;).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
Local providers like Ollama don't require authentication, but services like&lt;br&gt;
OpenRouter, Anthropic, etc. usually require an API key. See API Keys for details.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Set the temperature low (~0.25).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
The temperature can be set between 0.0 and 1.0. It controls how words&lt;br&gt;
are selected from a range of possibilities during generation. It is loosely&lt;br&gt;
correlated with creativity. Higher temperatures mean more improbable outputs,&lt;br&gt;
while lower temperatures tend to produce drier, more generic responses.&lt;/p&gt;
&lt;/blockquote&gt;
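&lt;p&gt;Providers differ in the details, but the usual mechanism behind temperature is scaling the model's logits before the softmax that picks the next token. A rough sketch:&lt;/p&gt;

```python
import math
import random

def sample(logits, temperature=1.0):
    # Divide logits by the temperature before softmax: low values sharpen
    # the distribution (predictable picks), high values flatten it.
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=weights)[0]
```

At a very low temperature the most likely token wins almost every time; at a high temperature the choice approaches uniform.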

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd72ahsbw96wthj4xy7fe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd72ahsbw96wthj4xy7fe.png" alt="chat node" width="385" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that the agent is configured let's take a look at the &lt;em&gt;Chat&lt;/em&gt; node. This is the node&lt;br&gt;
that actually interacts with the language model provider to generate content.&lt;/p&gt;

&lt;p&gt;It takes configuration values from an &lt;em&gt;Agent&lt;/em&gt; node and optionally a&lt;br&gt;
conversation history -- an ongoing list of user prompts and agent responses.&lt;br&gt;
In this instance, the conversation is supplied by the &lt;em&gt;Start&lt;/em&gt; node, since this&lt;br&gt;
is the first &lt;em&gt;Chat&lt;/em&gt; in our workflow.&lt;/p&gt;

&lt;p&gt;Finally, it takes a prompt, which you can supply from a text value like the&lt;br&gt;
&lt;code&gt;input&lt;/code&gt; pin of the &lt;em&gt;Start&lt;/em&gt; node as we saw earlier with the default workflow. In&lt;br&gt;
this instance, however, leave the pin unwired and type the text prompt directly&lt;br&gt;
into the node.&lt;/p&gt;
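&lt;p&gt;Conceptually, a conversation history is just an ordered list of role-tagged messages that each &lt;em&gt;Chat&lt;/em&gt; node extends. A hypothetical sketch of that shape (field names are illustrative, not the editor's actual wire format):&lt;/p&gt;

```python
# Illustrative only: the editor's actual wire format may differ.
conversation = [
    {"role": "user", "content": "Tell me about eagles."},
    {"role": "assistant", "content": "Eagles are large birds of prey..."},
]

def chat_turn(history, prompt, response):
    """A Chat node appends one user prompt and one agent response,
    leaving the earlier history untouched."""
    return history + [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response},
    ]
```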

&lt;h3&gt;
  
  
  Saving
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r9i3c7132mzeyebgqml.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4r9i3c7132mzeyebgqml.png" alt="autosave" width="680" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we continue, it's a good idea to enable &lt;code&gt;autosave&lt;/code&gt; in the Settings tab.&lt;br&gt;
This will write any changes you make to disk automatically. Otherwise, you&lt;br&gt;
will need to manually click the Save button in the command palette for each&lt;br&gt;
changed workflow.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;warning&lt;/strong&gt;&lt;br&gt;
If there are unsaved changes to workflows other than the one displayed, they&lt;br&gt;
may be lost. The app will not warn about discarding unsaved changes when&lt;br&gt;
exiting.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Previews
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh41umd4rbji5sn88t3tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh41umd4rbji5sn88t3tf.png" alt="create preview" width="729" height="693"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So far we've modified existing nodes, but now let's create a new node for&lt;br&gt;
examining wire values during a run. The &lt;em&gt;Preview&lt;/em&gt; node will show intermediate&lt;br&gt;
values when the workflow is run from the editor but has no effect otherwise.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
The &lt;em&gt;Preview&lt;/em&gt; node can accept any wire value and will change its display&lt;br&gt;
format according to the type.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Right-click on the canvas in the area where you want the new node to appear. A&lt;br&gt;
context menu appears listing the nodes that can be added to this graph. Select the&lt;br&gt;
Preview item to create a new node.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running Workflows
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgrctsfle8d0qpcqkyp5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgrctsfle8d0qpcqkyp5i.png" alt="running workflow" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Connect the &lt;code&gt;response&lt;/code&gt; pin of the &lt;em&gt;Chat&lt;/em&gt; node to the &lt;em&gt;Preview&lt;/em&gt;'s input and &lt;strong&gt;Run&lt;/strong&gt; the&lt;br&gt;
workflow using the button in the command palette.&lt;/p&gt;

&lt;p&gt;As the workflow runs, nodes that have finished will be marked with a green&lt;br&gt;
check.&lt;/p&gt;

&lt;p&gt;Nodes that are actively running will have a spinning circle in the corner.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oy78906rk6mwts5j851.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oy78906rk6mwts5j851.png" alt="finished workflow" width="800" height="628"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the workflow has run, the &lt;em&gt;Preview&lt;/em&gt; node will show a standard response to&lt;br&gt;
our prompt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Poetic Agent
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtv5ija67nqch37grino.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtv5ija67nqch37grino.png" alt="agent two" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have a first agent generating normal (boring) responses, it's time&lt;br&gt;
to create a second agent to generate poetry. It has a distinct purpose and&lt;br&gt;
personality from the previous agent, so we'll configure it with different&lt;br&gt;
settings.&lt;/p&gt;

&lt;p&gt;Create a second agent from the context menu &lt;em&gt;LLM › Agent&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Connect it to the first agent. It will take configuration values from the first&lt;br&gt;
agent unless you override them.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
Different models will have differing proficiencies at various tasks. Some&lt;br&gt;
will focus more on generating program code while others will be better at&lt;br&gt;
writing long-form text. It can be beneficial to experiment with different&lt;br&gt;
combinations in a workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Override the temperature and set it higher (&amp;gt;0.75).&lt;/p&gt;

&lt;p&gt;You can also override the system message (currently blank) to add personality&lt;br&gt;
or specific instructions for the current task. Instructions can range from&lt;br&gt;
formatting requirements to strategies for executing a task to admonitions&lt;br&gt;
about avoiding particular pitfalls.&lt;/p&gt;

&lt;p&gt;We won't provide any instructions this time. However, let's give the agent a&lt;br&gt;
role to play, to impart some flavor to the generated result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9x5ymvyuyjh1zcj3esf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi9x5ymvyuyjh1zcj3esf.png" alt="chat two" width="800" height="491"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a second &lt;em&gt;LLM › Chat&lt;/em&gt; node and connect it to the new agent.&lt;/p&gt;

&lt;p&gt;Since we are asking it to act on prior responses, you will need to connect its&lt;br&gt;
conversation input to the previous &lt;em&gt;Chat&lt;/em&gt; node, &lt;strong&gt;NOT&lt;/strong&gt; the &lt;em&gt;Start&lt;/em&gt; node.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
Connecting it to the &lt;em&gt;Start&lt;/em&gt; node would create a parallel conversation that&lt;br&gt;
omits the previous agent's response.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Finally, connect it to another Preview node so we can compare the results&lt;br&gt;
side-by-side.&lt;/p&gt;

&lt;h2&gt;
  
  
  Incremental Execution
&lt;/h2&gt;

&lt;p&gt;Notice that the new nodes do not yet have status indicators, in contrast to&lt;br&gt;
the old nodes. This shows which nodes will be executed during an incremental&lt;br&gt;
run. Nodes already marked with a green check will be skipped, saving time and&lt;br&gt;
avoiding extra API fees. This allows you to quickly try variations on node&lt;br&gt;
parameters or different combinations of nodes without redundant work.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
Selected nodes and the node under the cursor are also re-executed during an&lt;br&gt;
incremental run.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You can trigger an incremental run with the shortcut &lt;code&gt;Ctrl+R&lt;/code&gt; (see shortcuts&lt;br&gt;
with the &lt;code&gt;?&lt;/code&gt; key).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
The Run button in the command palette will trigger a full re-run of every&lt;br&gt;
node in the workflow.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frielwltmj8sy7s24hynf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frielwltmj8sy7s24hynf.png" alt="run two" width="780" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that the "normal" &lt;em&gt;Chat&lt;/em&gt; node does not rerun incrementally (assuming you&lt;br&gt;
haven't changed, selected or hovered over it).&lt;/p&gt;

&lt;p&gt;Try changing the second prompt (e.g. haiku → sonnet) and notice the status&lt;br&gt;
indicator disappears.&lt;/p&gt;

&lt;p&gt;Another incremental run should only re-execute that node.&lt;/p&gt;

&lt;p&gt;If you change the second Agent node, one of two things will happen, depending&lt;br&gt;
on whether the &lt;code&gt;cascade&lt;/code&gt; setting is enabled. When &lt;code&gt;cascade&lt;/code&gt; is enabled, a&lt;br&gt;
status reset will propagate from the node to all of its descendants.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;💡 &lt;strong&gt;tip&lt;/strong&gt;&lt;br&gt;
Without &lt;code&gt;cascade&lt;/code&gt; only the Agent node's status is cleared. To have the Chat&lt;br&gt;
node re-run incrementally, you will need to hover over or select it.&lt;/p&gt;
&lt;/blockquote&gt;
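&lt;p&gt;Conceptually, &lt;code&gt;cascade&lt;/code&gt; behaves like a reachability reset over the downstream graph. A rough sketch of that idea (not the editor's actual implementation):&lt;/p&gt;

```python
from collections import deque

def cascade_reset(edges, changed, finished):
    """Clear the finished status on the changed node and on every node
    reachable downstream of it, so an incremental run re-executes
    exactly that subgraph. edges maps each node to its downstream
    nodes; finished is the set of nodes with a green check."""
    pending = deque([changed])
    seen = set()
    while pending:
        node = pending.popleft()
        if node in seen:
            continue
        seen.add(node)
        finished.discard(node)
        pending.extend(edges.get(node, []))
    return finished

edges = {"agent2": ["chat2"], "chat2": ["preview2"]}
finished = {"agent1", "chat1", "agent2", "chat2", "preview2"}
cascade_reset(edges, "agent2", finished)
# finished is left holding only the unaffected first-agent nodes
```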

&lt;h2&gt;
  
  
  Chat Sessions
&lt;/h2&gt;

&lt;p&gt;We've been using the Workflow tab exclusively so far. If you go to the Chat&lt;br&gt;
tab, notice that none of the messages appear. That's because the workflow&lt;br&gt;
hasn't added anything to the session. The fix is simple: connect the last Chat&lt;br&gt;
node to the Finish node.&lt;/p&gt;

&lt;p&gt;Why didn't we do this from the beginning? Try another incremental run. You&lt;br&gt;
should get an error about unrelated histories. This is because the&lt;br&gt;
incremental state has an old copy of the conversation.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ℹ️ &lt;strong&gt;note&lt;/strong&gt;&lt;br&gt;
Internally, replacing the stale copy with the current conversation would&lt;br&gt;
invalidate the entire workflow state. Rewinding and using the stale&lt;br&gt;
conversation is not permitted either, since workflows are not allowed to make&lt;br&gt;
destructive changes to the session: they can only append content, and&lt;br&gt;
committing the stale history would ignore newer messages and overwrite them,&lt;br&gt;
removing existing history from the session.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This restriction only applies to workflows. From the Session tab you can perform&lt;br&gt;
various changes to the conversation history.&lt;/p&gt;
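&lt;p&gt;The "unrelated histories" check amounts to a prefix test: a workflow may commit its result only if the session as it stood is a prefix of the workflow's conversation, making the commit strictly additive. A simplified sketch of that invariant (not the app's actual code):&lt;/p&gt;

```python
def can_commit(session, workflow_history):
    """A workflow may only append to the session: the commit is
    allowed only when the existing session messages are a prefix of
    the workflow's conversation. Anything else would rewrite or drop
    messages, which workflows are not permitted to do."""
    return workflow_history[:len(session)] == session

session = ["user: hi", "agent: hello"]
fresh = session + ["user: a haiku", "agent: five, seven, five..."]
stale = ["user: a haiku", "agent: five, seven, five..."]

can_commit(session, fresh)   # True: strictly additive
can_commit(session, stale)   # False: unrelated histories
```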

&lt;blockquote&gt;
&lt;p&gt;📢 &lt;strong&gt;important&lt;/strong&gt;&lt;br&gt;
Why aren't my chats saved? By default, no active session is set, so chats are&lt;br&gt;
discarded when the app exits. See &lt;em&gt;Set active session&lt;/em&gt; under Troubleshooting&lt;br&gt;
below.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;While what we've seen here isn't particularly groundbreaking or useful, now&lt;br&gt;
you should be comfortable with using the editor to build workflows. Next, we'll&lt;br&gt;
explore generating and manipulating structured data, before moving on to tools,&lt;br&gt;
subgraphs and iteration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Workflow gets stuck on Chat node
&lt;/h3&gt;

&lt;h4&gt;
  
  
  API Keys
&lt;/h4&gt;

&lt;p&gt;API keys specific for each provider must be defined in the &lt;a href="https://github.com/0xPlaygrounds/rig/blob/main/skills/rig/references/providers.md" rel="noopener noreferrer"&gt;environment&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Unfortunately, &lt;a href="https://rig.rs/" rel="noopener noreferrer"&gt;rig&lt;/a&gt;, the underlying library used to connect to AI&lt;br&gt;
providers, usually halts the execution thread instead of raising a&lt;br&gt;
recoverable error.&lt;/p&gt;

&lt;p&gt;Changes to the environment will not take effect until the application restarts.&lt;/p&gt;
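&lt;p&gt;Because a missing key can stall a run rather than produce an error, it can help to verify the environment before launching the app. A minimal preflight sketch (the variable name below is a common provider convention, not a guarantee; check your provider's documentation for the exact name):&lt;/p&gt;

```python
import os

def missing_keys(required, env=os.environ):
    """Return the names of any required API-key variables that are
    unset or empty in the given environment mapping."""
    return [name for name in required if not env.get(name)]

# Example preflight; adjust the list to the providers you actually use.
missing = missing_keys(["OPENROUTER_API_KEY"])
if missing:
    print("Set these variables before launching:", ", ".join(missing))
```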

&lt;blockquote&gt;
&lt;p&gt;⚠️ &lt;strong&gt;warning&lt;/strong&gt;&lt;br&gt;
While it is common practice to use system- or account-wide environment&lt;br&gt;
variables, doing so raises security concerns, since every process you run can&lt;br&gt;
read them. One alternative is to use&lt;br&gt;
&lt;a href="https://direnv.net/" rel="noopener noreferrer"&gt;direnv&lt;/a&gt; to limit their scope by directory. However, this still&lt;br&gt;
requires the API keys to be stored as plain text.&lt;/p&gt;

&lt;p&gt;A more secure option is to use a password manager/vault application with&lt;br&gt;
console integration, like &lt;a href="https://bitwarden.com" rel="noopener noreferrer"&gt;Bitwarden&lt;/a&gt;,  &lt;a href="https://www.hashicorp.com/en/products/vault" rel="noopener noreferrer"&gt;vault&lt;/a&gt;,  &lt;a href="https://www.passwordstore.org/" rel="noopener noreferrer"&gt;pass&lt;/a&gt;, etc. Some will allow you to launch  applications with environment variables pulled from secure storage.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Enable streaming
&lt;/h4&gt;

&lt;p&gt;In some cases, a provider may be actively generating a response, but the&lt;br&gt;
response itself is large and takes minutes to complete. Most providers support&lt;br&gt;
streaming individual tokens, allowing you to see the response as it is&lt;br&gt;
generated rather than waiting for it to finish.&lt;/p&gt;

&lt;h4&gt;
  
  
  Change providers/models
&lt;/h4&gt;

&lt;p&gt;Some providers have high latency or unreliable connections. If one does not&lt;br&gt;
respond in a reasonable amount of time, try another.&lt;/p&gt;

&lt;p&gt;Be aware that some providers (&lt;a href="https://openrouter.ai/" rel="noopener noreferrer"&gt;openrouter&lt;/a&gt; for instance) proxy to other providers. Different models may run on different providers.&lt;/p&gt;

&lt;p&gt;Even on a single provider, models may be allocated different hardware resources&lt;br&gt;
to handle different requirements or due to popularity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Workflow gets stuck elsewhere
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Check console logs
&lt;/h4&gt;

&lt;p&gt;This application is still under active development. Most errors will trigger an&lt;br&gt;
error dialog, but some may cause the run to fail silently. The console may&lt;br&gt;
provide warnings or other indications of what has failed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Can't edit node
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Workflow is running or frozen
&lt;/h4&gt;

&lt;p&gt;The workflow can't be edited while it is running. Wait for it to complete or&lt;br&gt;
use the &lt;strong&gt;Stop&lt;/strong&gt; button to interrupt it.&lt;/p&gt;

&lt;p&gt;The editor can be frozen/unfrozen manually or while examining edit history.&lt;br&gt;
This prevents unintended changes when browsing through the Undo stack.&lt;/p&gt;

&lt;p&gt;To unfreeze the workflow, toggle the button on the control palette.&lt;/p&gt;

&lt;h4&gt;
  
  
  (Dis)connect input pins
&lt;/h4&gt;

&lt;p&gt;Some fields can take values from controls on the node as well as input wires.&lt;/p&gt;

&lt;p&gt;The controls will not be visible unless the wire is disconnected.&lt;/p&gt;

&lt;h4&gt;
  
  
  Toggle optional controls
&lt;/h4&gt;

&lt;p&gt;Some node fields are optional. For example, fields that override a&lt;br&gt;
previous value need to be enabled before they can be edited.&lt;/p&gt;

&lt;h3&gt;
  
  
  Chat history disappears on restarting app
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Set active session
&lt;/h4&gt;

&lt;p&gt;By default no session is active. When no session is active (denoted by an&lt;br&gt;
empty value in the session selection) chats are discarded when the app exits.&lt;br&gt;
To save an ongoing chat, rename the session. The active session is reloaded&lt;br&gt;
the next time you start the app.&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Fine-tuning models is a different matter, with steep data and resource requirements. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;A nest of a bird of prey perched high on a cliff or tree top. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>automation</category>
    </item>
  </channel>
</rss>
