<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Bogdan Pistol</title>
    <description>The latest articles on Forem by Bogdan Pistol (@bogdanpi).</description>
    <link>https://forem.com/bogdanpi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3542703%2Fdbd56346-3dff-4965-88b5-a72eae02b9a9.jpeg</url>
      <title>Forem: Bogdan Pistol</title>
      <link>https://forem.com/bogdanpi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/bogdanpi"/>
    <language>en</language>
    <item>
      <title>Cut Your LLM Costs by ~30% With Prompt Optimization (What Actually Works in Production)</title>
      <dc:creator>Bogdan Pistol</dc:creator>
      <pubDate>Sat, 13 Dec 2025 17:07:39 +0000</pubDate>
      <link>https://forem.com/bogdanpi/cut-your-llm-costs-by-30-with-prompt-optimization-what-actually-works-in-production-2fn1</link>
      <guid>https://forem.com/bogdanpi/cut-your-llm-costs-by-30-with-prompt-optimization-what-actually-works-in-production-2fn1</guid>
      <description>&lt;p&gt;If you’ve shipped an LLM-powered feature to production, you’ve probably experienced this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The feature works&lt;/li&gt;
&lt;li&gt;Users like it&lt;/li&gt;
&lt;li&gt;Traffic grows&lt;/li&gt;
&lt;li&gt;Your AI bill quietly explodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem is not usually one big mistake. It’s death by a thousand small inefficiencies: bloated prompts, wrong model choices, missing caching, and zero visibility into what’s actually driving cost.&lt;/p&gt;

&lt;p&gt;Over the past few months, while working on LLM-heavy applications, we consistently saw &lt;strong&gt;20–40% of spend wasted&lt;/strong&gt;. Not because models are bad, but because prompts and execution paths aren’t treated as production assets.&lt;/p&gt;

&lt;p&gt;This post breaks down where LLM costs really leak and the practical prompt optimization techniques that actually reduce spend in real systems.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why LLM Costs Are Hard to Control
&lt;/h2&gt;

&lt;p&gt;LLM pricing is simple on paper: you pay per token.&lt;br&gt;&lt;br&gt;
In practice, token usage is wildly unpredictable.&lt;/p&gt;

&lt;p&gt;The cost of a single request depends on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prompt length&lt;/li&gt;
&lt;li&gt;Context size&lt;/li&gt;
&lt;li&gt;Model choice&lt;/li&gt;
&lt;li&gt;Tool calls and retries&lt;/li&gt;
&lt;li&gt;Hidden system prompts&lt;/li&gt;
&lt;li&gt;User behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once an app scales, these variables compound fast. Without instrumentation, most teams only notice the issue when the invoice lands.&lt;/p&gt;
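&lt;p&gt;To make the arithmetic concrete, here is a minimal sketch of per-request cost under per-token pricing. The prices are placeholder values, not real rates:&lt;/p&gt;

```python
# Sketch: cost of one request under per-token pricing.
# PRICE_IN / PRICE_OUT are hypothetical placeholder rates.
PRICE_IN = 2.50 / 1_000_000    # dollars per input token (example value)
PRICE_OUT = 10.00 / 1_000_000  # dollars per output token (example value)

def request_cost(input_tokens, output_tokens):
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

# A 2,000-token prompt with a 500-token reply:
print(round(request_cost(2_000, 500), 4))  # prints 0.01
```

&lt;p&gt;A cent per request looks harmless until you multiply it by requests per day, retries, and hidden system prompts.&lt;/p&gt;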




&lt;h2&gt;
  
  
  The “Hidden 30%”: Where the Money Goes
&lt;/h2&gt;

&lt;p&gt;Here are the most common cost leaks we see in production systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Overly Verbose Prompts
&lt;/h3&gt;

&lt;p&gt;Prompts tend to grow organically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extra instructions&lt;/li&gt;
&lt;li&gt;Repeated constraints&lt;/li&gt;
&lt;li&gt;Historical context that is no longer relevant&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every additional token is paid for on every single request.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What works&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ruthlessly trim instructions&lt;/li&gt;
&lt;li&gt;Remove redundant phrasing&lt;/li&gt;
&lt;li&gt;Prefer clear structure over verbosity&lt;/li&gt;
&lt;li&gt;Measure tokens per prompt version&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Even small reductions add up at scale.&lt;/p&gt;
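&lt;p&gt;A low-effort way to act on that last point is to track approximate token counts per prompt version. The heuristic below (roughly four characters per token for English text) is only for quick comparisons; use a real tokenizer such as tiktoken for exact counts:&lt;/p&gt;

```python
# Rough token estimate: ~4 characters per token for English text.
# For exact counts, use a proper tokenizer (e.g. tiktoken).
def estimate_tokens(text):
    return max(1, len(text) // 4)

verbose = "Please make absolutely sure that you always respond helpfully. " * 10
trimmed = "Respond helpfully and concisely."
print(estimate_tokens(verbose), estimate_tokens(trimmed))
```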




&lt;h3&gt;
  
  
  Bloated Context Windows
&lt;/h3&gt;

&lt;p&gt;A common anti-pattern:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Just pass everything to the model and let it decide.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Large context windows are expensive and often unnecessary.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What works&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Retrieve only relevant context (RAG with filtering)&lt;/li&gt;
&lt;li&gt;Cap historical messages&lt;/li&gt;
&lt;li&gt;Summarize long threads before reinjecting&lt;/li&gt;
&lt;li&gt;Avoid stuffing entire databases into the prompt&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Context relevance matters more than context size.&lt;/p&gt;
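&lt;p&gt;A minimal sketch of capping history, assuming OpenAI-style message dicts. In a real system the summary would come from a cheap model; here it is a placeholder:&lt;/p&gt;

```python
# Keep the system prompt, summarize older turns, keep recent turns.
MAX_RECENT = 6

def trim_history(messages):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    if len(rest) > MAX_RECENT:
        older = rest[: len(rest) - MAX_RECENT]
        # Placeholder: replace with a cheap-model summary of `older`.
        summary = {"role": "system",
                   "content": f"Summary of {len(older)} earlier messages."}
        return system + [summary] + rest[-MAX_RECENT:]
    return system + rest
```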




&lt;h3&gt;
  
  
  Using Frontier Models for Simple Tasks
&lt;/h3&gt;

&lt;p&gt;Many systems default to a single top-tier model for everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Classification&lt;/li&gt;
&lt;li&gt;Extraction&lt;/li&gt;
&lt;li&gt;Formatting&lt;/li&gt;
&lt;li&gt;Simple reasoning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is convenient, but expensive.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What works&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Route tasks by complexity&lt;/li&gt;
&lt;li&gt;Use cheaper models for deterministic or shallow tasks&lt;/li&gt;
&lt;li&gt;Reserve frontier models for reasoning-heavy paths
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;route&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;task&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;SIMPLE_TASKS&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;call_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cheap-model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;call_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;frontier-model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This alone can cut costs dramatically.&lt;/p&gt;




&lt;h3&gt;
  
  
  No Caching or Reuse
&lt;/h3&gt;

&lt;p&gt;LLMs are often called for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Repeated prompts&lt;/li&gt;
&lt;li&gt;Identical user actions&lt;/li&gt;
&lt;li&gt;Static system instructions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without caching, you pay repeatedly for the same output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What works&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cache deterministic responses&lt;/li&gt;
&lt;li&gt;Cache embeddings&lt;/li&gt;
&lt;li&gt;Reuse prompt fragments instead of regenerating them&lt;/li&gt;
&lt;li&gt;Detect duplicate calls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Caching is one of the highest ROI optimizations.&lt;/p&gt;
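&lt;p&gt;A minimal response cache keyed by a hash of model and prompt. &lt;code&gt;call_llm&lt;/code&gt; is a stand-in for your own provider wrapper, and this is only safe for deterministic calls (temperature 0, no time-sensitive content):&lt;/p&gt;

```python
import hashlib

# In-memory cache; swap for Redis or similar in production.
_cache = {}

def cached_call(model, prompt, call_llm):
    key = hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_llm(model, prompt)
    return _cache[key]
```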




&lt;h3&gt;
  
  
  No Guardrails or Visibility
&lt;/h3&gt;

&lt;p&gt;Most teams cannot answer:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which prompt is the most expensive?&lt;/li&gt;
&lt;li&gt;Which model drives the most spend?&lt;/li&gt;
&lt;li&gt;Where do retries or failures inflate cost?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without observability, optimization is guesswork.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What works&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Track cost per request&lt;/li&gt;
&lt;li&gt;Track tokens per prompt&lt;/li&gt;
&lt;li&gt;Alert on abnormal usage&lt;/li&gt;
&lt;li&gt;Compare prompt versions over time&lt;/li&gt;
&lt;/ul&gt;
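&lt;p&gt;A sketch of the first two points: aggregate spend per prompt so the most expensive one is always one query away. Prices and prompt IDs are placeholders you would wire to your provider’s reported usage:&lt;/p&gt;

```python
from collections import defaultdict

spend_by_prompt = defaultdict(float)

def record(prompt_id, input_tokens, output_tokens, price_in, price_out):
    # Call after each request with usage reported by the provider.
    spend_by_prompt[prompt_id] += input_tokens * price_in + output_tokens * price_out

def most_expensive():
    return max(spend_by_prompt, key=spend_by_prompt.get)
```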

&lt;p&gt;At some point, most teams realize the issue isn’t knowing what to optimize —&lt;br&gt;
it’s not having a single place to see prompts, executions, tokens, models, and cost together.&lt;/p&gt;

&lt;p&gt;Once prompts are scattered across code, config files, and notebooks, even simple questions like&lt;br&gt;
“Which prompt is costing us the most this week?” become hard to answer.&lt;/p&gt;




&lt;h2&gt;
  
  
  Prompt Optimization Is a Production Discipline
&lt;/h2&gt;

&lt;p&gt;The key mindset shift is this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Prompts are not static strings.&lt;br&gt;&lt;br&gt;
They are production assets.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Version them&lt;/li&gt;
&lt;li&gt;Measure them&lt;/li&gt;
&lt;li&gt;Optimize them&lt;/li&gt;
&lt;li&gt;Roll back when needed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Treat prompt changes like code changes. The cost savings compound as usage grows.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where This Breaks Down at Scale
&lt;/h2&gt;

&lt;p&gt;Most of the techniques above are straightforward in isolation.&lt;/p&gt;

&lt;p&gt;What becomes difficult at scale is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tracking prompt versions over time&lt;/li&gt;
&lt;li&gt;Correlating prompts with executions and cost&lt;/li&gt;
&lt;li&gt;Comparing models and optimizations objectively&lt;/li&gt;
&lt;li&gt;Enforcing guardrails across teams&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where teams usually move from ad-hoc scripts and dashboards&lt;br&gt;
to a dedicated LLM observability and cost-control layer.&lt;/p&gt;

&lt;p&gt;For context, this is exactly the problem space we’re working on with &lt;strong&gt;Dakora&lt;/strong&gt; —&lt;br&gt;
giving teams a unified view of prompts, executions, models, and cost, so prompt optimization stops being guesswork and becomes measurable.&lt;/p&gt;

&lt;p&gt;The goal isn’t more dashboards.&lt;br&gt;
It’s making cost, performance, and prompt changes visible in the same place.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Simple Cost-Reduction Checklist
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Measure tokens per request&lt;/li&gt;
&lt;li&gt;Trim prompt verbosity&lt;/li&gt;
&lt;li&gt;Reduce unnecessary context&lt;/li&gt;
&lt;li&gt;Route tasks to cheaper models&lt;/li&gt;
&lt;li&gt;Add caching for repeated calls&lt;/li&gt;
&lt;li&gt;Track cost by prompt and model&lt;/li&gt;
&lt;li&gt;Set basic spend alerts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Most teams that apply these systematically recover &lt;strong&gt;~30% of wasted spend&lt;/strong&gt; without changing product behavior.&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;LLMs are powerful, but they are not free.&lt;br&gt;&lt;br&gt;
As usage scales, prompt quality becomes a cost lever, not just a UX concern.&lt;/p&gt;

&lt;p&gt;If you’re dealing with LLM cost surprises in production,&lt;br&gt;
I’m happy to compare notes or walk through how other teams are approaching this.&lt;/p&gt;

&lt;p&gt;You can explore what we’re building at &lt;a href="https://dakora.io" rel="noopener noreferrer"&gt;https://dakora.io&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
Feedback is very welcome.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published on dakora.io&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>devops</category>
      <category>costoptimization</category>
    </item>
    <item>
      <title>Building Intelligent Research Agents with OpenAI's Agents Framework</title>
      <dc:creator>Bogdan Pistol</dc:creator>
      <pubDate>Mon, 06 Oct 2025 10:28:01 +0000</pubDate>
      <link>https://forem.com/bogdanpi/building-intelligent-research-agents-with-openais-agents-framework-2j1</link>
      <guid>https://forem.com/bogdanpi/building-intelligent-research-agents-with-openais-agents-framework-2j1</guid>
      <description>&lt;p&gt;Multi-agent systems are transforming how we build AI applications. Instead of relying on a single large language model to handle every task, we can now orchestrate specialized agents that work together, each focused on what it does best. In this tutorial, we'll build a practical research assistant using OpenAI's Agents Framework—and see how clean prompt management makes agent development faster and more reliable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why OpenAI's Agents Framework?
&lt;/h2&gt;

&lt;p&gt;Released in March 2025 as a production-ready evolution of OpenAI's experimental Swarm project, the Agents Framework stands out in a crowded field of agent frameworks for several key reasons:&lt;/p&gt;

&lt;h3&gt;
  
  
  Lightweight by Design
&lt;/h3&gt;

&lt;p&gt;Unlike heavyweight frameworks that require mastering complex graph structures or conversation patterns, OpenAI's Agents SDK introduces just four core primitives:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agents&lt;/strong&gt; - LLMs equipped with instructions and tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Handoffs&lt;/strong&gt; - Enable agents to delegate tasks to specialists&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Guardrails&lt;/strong&gt; - Validate inputs and outputs for safety&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sessions&lt;/strong&gt; - Automatically maintain conversation history&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This minimalist approach means you can build sophisticated multi-agent systems without wrestling with unnecessary abstraction layers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Production-Ready from Day One
&lt;/h3&gt;

&lt;p&gt;While frameworks like LangGraph excel at complex stateful workflows and AutoGen shines in multi-role conversations, OpenAI's Agents SDK focuses on production readiness. It includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built-in tracing for debugging and monitoring&lt;/li&gt;
&lt;li&gt;Automatic schema generation for function tools&lt;/li&gt;
&lt;li&gt;Seamless integration with GPT-4o and other OpenAI models&lt;/li&gt;
&lt;li&gt;Human-in-the-loop (HITL) approval for critical decisions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Provider-Agnostic with 100+ LLM Support
&lt;/h3&gt;

&lt;p&gt;Despite being an OpenAI product, the framework is compatible with over 100 different language models, giving you flexibility as the AI landscape evolves.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We're Building
&lt;/h2&gt;

&lt;p&gt;Our research assistant demonstrates real multi-agent coordination:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Research Planner&lt;/strong&gt; breaks topics into subtopics and questions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analyst&lt;/strong&gt; agents dive deep into each subtopic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Synthesizer&lt;/strong&gt; combines findings into coherent insights&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This pattern applies to countless use cases: competitive analysis, market research, technical documentation, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup: Getting Started
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;p&gt;You'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.11.5+ or 3.12+ (earlier 3.11 versions have typing compatibility issues)&lt;/li&gt;
&lt;li&gt;An OpenAI API key (&lt;a href="https://platform.openai.com/api-keys" rel="noopener noreferrer"&gt;get one here&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Installation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone the repository&lt;/span&gt;
git clone https://github.com/bogdan-pistol/dakora.git
&lt;span class="nb"&gt;cd &lt;/span&gt;dakora/examples/openai-agents

&lt;span class="c"&gt;# Create virtual environment&lt;/span&gt;
python3 &lt;span class="nt"&gt;-m&lt;/span&gt; venv .venv
&lt;span class="nb"&gt;source&lt;/span&gt; .venv/bin/activate  &lt;span class="c"&gt;# On Windows: .venv\Scripts\activate&lt;/span&gt;

&lt;span class="c"&gt;# Install dependencies&lt;/span&gt;
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Configure Your API Key
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Copy the example environment file&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env

&lt;span class="c"&gt;# Edit .env and add your OpenAI API key&lt;/span&gt;
&lt;span class="c"&gt;# OPENAI_API_KEY=sk-proj-...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vafti9p71ymif7646tc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8vafti9p71ymif7646tc.png" alt=".env file"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Where to get your API key:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Visit &lt;a href="https://platform.openai.com" rel="noopener noreferrer"&gt;OpenAI Platform&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Navigate to API Keys in your account settings&lt;/li&gt;
&lt;li&gt;Create a new secret key&lt;/li&gt;
&lt;li&gt;Copy and paste it into your &lt;code&gt;.env&lt;/code&gt; file&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Running Your First Research
&lt;/h2&gt;

&lt;p&gt;Let's see it in action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python research_assistant.py &lt;span class="s2"&gt;"AI agent frameworks in 2025"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Example Output
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🔬 Starting research on: AI agent frameworks in 2025

📋 Planning research strategy...

📊 Research Plan:
Strategy: This research plan will employ a literature review, case studies, and expert
interviews to gather data on the evolution, integration, and ethical considerations of
AI agent frameworks, aiming to provide a comprehensive understanding of their state in 2025.
Subtopics: 3

🔍 Analyzing subtopic 1/3: Evolution of AI Agent Frameworks
✓ Analysis complete

🔍 Analyzing subtopic 2/3: Integration and Interoperability of AI Agents
✓ Analysis complete

🔍 Analyzing subtopic 3/3: Ethical and Regulatory Considerations
✓ Analysis complete

🔄 Synthesizing findings...
✓ Synthesis complete

================================================================================

📄 RESEARCH REPORT

================================================================================
**Executive Summary**

The evolution of AI agent frameworks in 2025 is marked by significant advancements
in technology, fostering sophisticated autonomous systems capable of handling complex
tasks across industries. These systems integrate machine learning models like GPT and
BERT, allowing for enhanced natural language processing and multi-agent collaboration...

[Full report output...]

================================================================================

💾 Full report saved to: research_output_20251006_103427.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Understanding the Architecture
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Multi-Agent Workflow
&lt;/h3&gt;

&lt;p&gt;Our system follows a clear pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User Query → Research Planner → Multiple Analysts (parallel) → Synthesizer → Report
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each agent has a specific role, making the system easier to debug, test, and extend.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Code Components
&lt;/h3&gt;

&lt;p&gt;Let's walk through the main building blocks:&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Agent Creation with Managed Prompts
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;dakora.vault&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Vault&lt;/span&gt;

&lt;span class="n"&gt;vault&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Vault&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dakora.yaml&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;create_research_planner&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vault&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;planner_system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Research Planner&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; Instead of hardcoding prompts in Python, we store them as versioned YAML files. This separation of concerns means prompt engineers can iterate on instructions without touching code.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Dynamic Prompt Rendering
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;analyst_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vault&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyst_system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;analyst_instructions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;analyst_prompt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;subtopic&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;subtopic&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;questions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;subtopic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;questions&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[]),&lt;/span&gt;
    &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Part of broader research on: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;analysis_depth&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;standard&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;source_types&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;subtopic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sources&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[])&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;analyst&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyst-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;analyst_instructions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; Each analyst gets customized instructions based on its specific subtopic. The same prompt template adapts to different contexts, reducing duplication.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Agent Execution with OpenAI's Runner
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Runner&lt;/span&gt;

&lt;span class="n"&gt;plan_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_sync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;planner&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Create a research plan for: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;topic&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Process the output
&lt;/span&gt;&lt;span class="n"&gt;plan_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;plan_result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;final_output&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; The Runner handles session management, conversation history, and tool execution automatically. You focus on orchestration, not plumbing.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Parallel Analysis
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;findings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;subtopic&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;plan_data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subtopics&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[]),&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;analyst&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyst-&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;analyst_instructions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;analysis_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Runner&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_sync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;analyst&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyze: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;subtopic&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;findings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;subtopic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;subtopic&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analysis&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;analysis_result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;final_output&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why this matters:&lt;/strong&gt; Each subtopic gets dedicated analysis. While this example processes sequentially, the pattern easily extends to parallel execution for faster results.&lt;/p&gt;
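&lt;p&gt;The fan-out above can be sketched with &lt;code&gt;asyncio.gather&lt;/code&gt;. Here &lt;code&gt;analyze&lt;/code&gt; is a stand-in for an async agent call; the Agents SDK exposes &lt;code&gt;Runner.run&lt;/code&gt; as the async counterpart of &lt;code&gt;run_sync&lt;/code&gt;:&lt;/p&gt;

```python
import asyncio

async def analyze(subtopic):
    # Placeholder for an async agent call, e.g. awaiting Runner.run(...).
    await asyncio.sleep(0)
    return {"subtopic": subtopic, "analysis": f"findings on {subtopic}"}

async def analyze_all(subtopics):
    # Run one analysis task per subtopic concurrently.
    return await asyncio.gather(*(analyze(s) for s in subtopics))

findings = asyncio.run(analyze_all(["Evolution", "Integration", "Ethics"]))
```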

&lt;h2&gt;
  
  
  Introducing Dakora: Clean Prompt Management
&lt;/h2&gt;

&lt;p&gt;As our research assistant grew, a challenge emerged: managing increasingly complex agent instructions. This is where Dakora proved invaluable.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Prompt Management Problem
&lt;/h3&gt;

&lt;p&gt;Multi-agent systems often mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dozens of prompt variations&lt;/li&gt;
&lt;li&gt;Dynamic content injection&lt;/li&gt;
&lt;li&gt;Version tracking needs&lt;/li&gt;
&lt;li&gt;Type safety requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hardcoding prompts becomes unmaintainable. Template strings help, but lack structure. We needed something purpose-built.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Dakora?
&lt;/h3&gt;

&lt;p&gt;Dakora is a lightweight Python library designed specifically for prompt template management. Here's what made it the right choice for this project:&lt;/p&gt;

&lt;h4&gt;
  
  
  Type-Safe Inputs
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# prompts/analyst_system.yaml&lt;/span&gt;
&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;analyst_system&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;System prompt for deep analysis&lt;/span&gt;

&lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;subtopic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;questions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;array&amp;lt;string&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;analysis_depth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;standard&lt;/span&gt;
  &lt;span class="na"&gt;source_types&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;array&amp;lt;string&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Input validation happens before the LLM call, catching errors early and saving API costs.&lt;/p&gt;
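&lt;p&gt;To make the idea concrete, here is a toy validate-before-render check mirroring the &lt;code&gt;inputs&lt;/code&gt; block above. This is an illustration of the pattern, not Dakora's actual implementation.&lt;/p&gt;

```python
# Toy validator mirroring the YAML inputs block above
# (illustrative only, not Dakora's implementation).
SPEC = {
    "subtopic":       {"type": str,  "required": True},
    "questions":      {"type": list, "required": True},
    "analysis_depth": {"type": str,  "required": False, "default": "standard"},
}

def validate(values):
    out = {}
    for name, rule in SPEC.items():
        if name not in values:
            if rule["required"]:
                raise ValueError("missing required input: " + name)
            out[name] = rule.get("default")  # fill in the declared default
        elif not isinstance(values[name], rule["type"]):
            raise TypeError("wrong type for input: " + name)
        else:
            out[name] = values[name]
    return out
```

&lt;p&gt;A bad call fails here, before any tokens are spent.&lt;/p&gt;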

&lt;h4&gt;
  
  
  Hot Reload During Development
&lt;/h4&gt;

&lt;p&gt;Edit a prompt file, re-run your script—changes apply immediately. No restart required.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Edit analyst prompt to add new analysis section&lt;/span&gt;
vim prompts/analyst_system.yaml

&lt;span class="c"&gt;# Changes apply immediately&lt;/span&gt;
python research_assistant.py &lt;span class="s2"&gt;"your topic"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Version Control Integration
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;analyst_system&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.1.0&lt;/span&gt;  &lt;span class="c1"&gt;# Bumped from 1.0.0&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Added economic impact analysis section&lt;/span&gt;

&lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;You are a research analyst specializing in deep, structured analysis.&lt;/span&gt;

  &lt;span class="s"&gt;Analysis Framework:&lt;/span&gt;
  &lt;span class="s"&gt;1. Current State&lt;/span&gt;
  &lt;span class="s"&gt;2. Key Developments&lt;/span&gt;
  &lt;span class="s"&gt;3. Economic Impact  # New section&lt;/span&gt;
  &lt;span class="s"&gt;4. Challenges &amp;amp; Opportunities&lt;/span&gt;
  &lt;span class="s"&gt;5. Future Outlook&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Track prompt evolution alongside code changes. Roll back if needed. It's all in git.&lt;/p&gt;

&lt;h4&gt;
  
  
  Visual Prompt Editor
&lt;/h4&gt;

&lt;p&gt;Dakora includes an interactive web playground for testing templates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dakora playground
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ok2wtknsnnf2qsgtop3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ok2wtknsnnf2qsgtop3.png" alt="Dakora Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This browser-based editor lets you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test prompts with different inputs&lt;/li&gt;
&lt;li&gt;See rendered output instantly&lt;/li&gt;
&lt;li&gt;Validate type safety&lt;/li&gt;
&lt;li&gt;Share templates with your team&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Try it online at &lt;a href="https://playground.dakora.io" rel="noopener noreferrer"&gt;playground.dakora.io&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Dakora Fits In
&lt;/h3&gt;

&lt;p&gt;In our research assistant, Dakora handles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt; - All prompts live in &lt;code&gt;prompts/&lt;/code&gt; as YAML files&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validation&lt;/strong&gt; - Type-safe inputs catch errors before API calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rendering&lt;/strong&gt; - Jinja2 templates with custom filters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Versioning&lt;/strong&gt; - Semantic versioning built-in&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The OpenAI Agents Framework handles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Execution&lt;/strong&gt; - Agent coordination and tool calling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State&lt;/strong&gt; - Conversation history and session management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration&lt;/strong&gt; - OpenAI API communication&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This separation means cleaner code, faster iteration, and easier collaboration between developers and prompt engineers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Exploring the Prompts
&lt;/h2&gt;

&lt;p&gt;Let's look at how prompts are structured:&lt;/p&gt;

&lt;h3&gt;
  
  
  Research Planner Prompt
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;planner_system&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Creates research strategies and identifies subtopics&lt;/span&gt;

&lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;You are a research planning specialist. Your task is to analyze research&lt;/span&gt;
  &lt;span class="s"&gt;topics and create comprehensive research strategies.&lt;/span&gt;

  &lt;span class="s"&gt;Research Topic: {{topic}}&lt;/span&gt;
  &lt;span class="s"&gt;Number of Subtopics: {{num_subtopics}}&lt;/span&gt;
  &lt;span class="s"&gt;{% if focus_areas|length &amp;gt; 0 %}&lt;/span&gt;
  &lt;span class="s"&gt;Focus Areas: {{focus_areas|yaml}}&lt;/span&gt;
  &lt;span class="s"&gt;{% endif %}&lt;/span&gt;

  &lt;span class="s"&gt;IMPORTANT: You must respond with ONLY valid JSON, no additional text.&lt;/span&gt;

  &lt;span class="s"&gt;Format your response as this exact JSON structure:&lt;/span&gt;
  &lt;span class="s"&gt;{&lt;/span&gt;
    &lt;span class="s"&gt;"subtopics": [&lt;/span&gt;
      &lt;span class="s"&gt;{&lt;/span&gt;
        &lt;span class="s"&gt;"title": "Subtopic title",&lt;/span&gt;
        &lt;span class="s"&gt;"questions": ["Q1", "Q2", "Q3"],&lt;/span&gt;
        &lt;span class="s"&gt;"sources": ["Source type 1", "Source type 2"],&lt;/span&gt;
        &lt;span class="s"&gt;"priority": 1&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
    &lt;span class="s"&gt;],&lt;/span&gt;
    &lt;span class="s"&gt;"research_strategy": "Brief overview of the research approach"&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;

&lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;topic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;num_subtopics&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;number&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
  &lt;span class="na"&gt;focus_areas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;array&amp;lt;string&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdoqb03a1dl2rz0w1ko5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdoqb03a1dl2rz0w1ko5w.png" alt="Planner prompt"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Analyst Prompt with Conditional Logic
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;analyst_system&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deep analysis framework with conditional depth&lt;/span&gt;

&lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;You are a research analyst specializing in deep, structured analysis.&lt;/span&gt;

  &lt;span class="s"&gt;Assignment:&lt;/span&gt;
  &lt;span class="s"&gt;Subtopic: {{subtopic}}&lt;/span&gt;
  &lt;span class="s"&gt;Research Questions: {{questions|yaml}}&lt;/span&gt;

  &lt;span class="s"&gt;Analysis Depth: {{analysis_depth}}&lt;/span&gt;
  &lt;span class="s"&gt;{% if analysis_depth == "comprehensive" %}&lt;/span&gt;
  &lt;span class="s"&gt;Provide detailed analysis with specific examples, data points, and multiple viewpoints.&lt;/span&gt;
  &lt;span class="s"&gt;{% elif analysis_depth == "standard" %}&lt;/span&gt;
  &lt;span class="s"&gt;Provide balanced analysis covering main points with supporting examples.&lt;/span&gt;
  &lt;span class="s"&gt;{% else %}&lt;/span&gt;
  &lt;span class="s"&gt;Provide concise analysis focusing on the most critical insights.&lt;/span&gt;
  &lt;span class="s"&gt;{% endif %}&lt;/span&gt;

&lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;subtopic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;questions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;array&amp;lt;string&amp;gt;&lt;/span&gt;
    &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;analysis_depth&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;standard&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;{% if %}&lt;/code&gt; conditionals let one template handle multiple depth levels—no need for separate prompts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;

&lt;p&gt;Here's how everything is organized:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openai-agents/
├── research_assistant.py       # Main orchestration script
├── prompts/                    # Dakora template directory
│   ├── coordinator_system.yaml # Workflow orchestration
│   ├── planner_system.yaml     # Research strategy
│   ├── analyst_system.yaml     # Deep analysis
│   ├── synthesizer_system.yaml # Finding synthesis
│   └── report_template.yaml    # Output formatting
├── dakora.yaml                 # Dakora configuration
├── requirements.txt            # Python dependencies
├── .env                        # API keys (not in git)
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m3otuhbu4o9q2j6xhds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9m3otuhbu4o9q2j6xhds.png" alt="File Structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;dakora.yaml&lt;/code&gt; config file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;registry&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;local&lt;/span&gt;
&lt;span class="na"&gt;prompt_dir&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;./prompts&lt;/span&gt;
&lt;span class="na"&gt;logging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Extending the System
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Add a New Agent Type
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a prompt template:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# prompts/fact_checker.yaml&lt;/span&gt;
&lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;fact_checker&lt;/span&gt;
&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1.0.0&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Validates claims against sources&lt;/span&gt;

&lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;You are a fact-checking specialist. Verify the following claims:&lt;/span&gt;

  &lt;span class="s"&gt;{{claims}}&lt;/span&gt;

  &lt;span class="s"&gt;For each claim, provide:&lt;/span&gt;
  &lt;span class="s"&gt;- Verification status (verified/unverified/false)&lt;/span&gt;
  &lt;span class="s"&gt;- Supporting evidence&lt;/span&gt;
  &lt;span class="s"&gt;- Confidence level&lt;/span&gt;

&lt;span class="na"&gt;inputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;claims&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
    &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Load it in your code:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;fact_checker_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vault&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;fact_checker&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;fact_checker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Fact Checker&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;fact_checker_prompt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;claims&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;findings_text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Implement Agent Handoffs
&lt;/h3&gt;

&lt;p&gt;Use the framework's handoff mechanism:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Handoff&lt;/span&gt;

&lt;span class="n"&gt;fact_checker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Fact Checker&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;vault&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;fact_checker&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;analyst&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyst&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;analyst_instructions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;handoffs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;Handoff&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;fact_checker&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the analyst encounters uncertain claims, it can hand off to the fact checker automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Command Line Interface
&lt;/h2&gt;

&lt;p&gt;Dakora includes a CLI for prompt management:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List all templates&lt;/span&gt;
dakora list

&lt;span class="c"&gt;# View a specific prompt&lt;/span&gt;
dakora get analyst_system

&lt;span class="c"&gt;# Bump version&lt;/span&gt;
dakora bump analyst_system &lt;span class="nt"&gt;--minor&lt;/span&gt;

&lt;span class="c"&gt;# Watch for changes (hot reload)&lt;/span&gt;
dakora watch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7rta7h1r6bg4kq006qt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw7rta7h1r6bg4kq006qt.png" alt="Dakora CLI"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;h3&gt;
  
  
  OpenAI Agents Framework Strengths
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Simplicity&lt;/strong&gt; - Four primitives cover most agent patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Production-Ready&lt;/strong&gt; - Built-in tracing, guardrails, and HITL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt; - Works with 100+ LLMs, not just OpenAI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Growing Ecosystem&lt;/strong&gt; - TypeScript support, voice agents, and more&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  When to Use This Framework
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Great fit for:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Production applications requiring reliability&lt;/li&gt;
&lt;li&gt;Projects already using OpenAI models&lt;/li&gt;
&lt;li&gt;Teams valuing simplicity over complex workflows&lt;/li&gt;
&lt;li&gt;Applications needing human oversight (HITL)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Consider alternatives if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need complex graph-based workflows (try LangGraph)&lt;/li&gt;
&lt;li&gt;Multi-role conversations are central (try AutoGen)&lt;/li&gt;
&lt;li&gt;You're building on non-OpenAI infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Dakora for Prompt Management
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Use Dakora when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managing 10+ prompts across multiple agents&lt;/li&gt;
&lt;li&gt;Collaborating between developers and prompt engineers&lt;/li&gt;
&lt;li&gt;Version tracking and rollback are important&lt;/li&gt;
&lt;li&gt;Type safety prevents costly API errors&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Benefits we experienced:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;3x faster prompt iteration&lt;/li&gt;
&lt;li&gt;Zero hardcoded strings in agent code&lt;/li&gt;
&lt;li&gt;Easy A/B testing of prompt variations&lt;/li&gt;
&lt;li&gt;Clear audit trail of prompt changes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Code &amp;amp; Documentation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;This Example&lt;/strong&gt;: &lt;a href="https://github.com/bogdan-pistol/dakora/tree/main/examples/openai-agents" rel="noopener noreferrer"&gt;github.com/bogdan-pistol/dakora/examples/openai-agents&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dakora Repository&lt;/strong&gt;: &lt;a href="https://github.com/bogdan-pistol/dakora" rel="noopener noreferrer"&gt;github.com/bogdan-pistol/dakora&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI Agents SDK&lt;/strong&gt;: &lt;a href="https://github.com/openai/openai-agents-python" rel="noopener noreferrer"&gt;github.com/openai/openai-agents-python&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dakora Website&lt;/strong&gt;: &lt;a href="https://dakora.io" rel="noopener noreferrer"&gt;dakora.io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interactive Playground&lt;/strong&gt;: &lt;a href="https://playground.dakora.io" rel="noopener noreferrer"&gt;playground.dakora.io&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Official Documentation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI Agents Guide&lt;/strong&gt;: &lt;a href="https://openai.github.io/openai-agents-python/" rel="noopener noreferrer"&gt;openai.github.io/openai-agents-python&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Community
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dakora Discord&lt;/strong&gt;: &lt;a href="https://discord.gg/QSRRcFjzE8" rel="noopener noreferrer"&gt;Join the community&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI Forum&lt;/strong&gt;: &lt;a href="https://community.openai.com" rel="noopener noreferrer"&gt;community.openai.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Try building your own multi-agent system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the example and run it locally&lt;/li&gt;
&lt;li&gt;Modify the analyst prompt to add new analysis sections&lt;/li&gt;
&lt;li&gt;Add a fourth agent type (summarizer, fact-checker, etc.)&lt;/li&gt;
&lt;li&gt;Experiment with different models (GPT-4o, GPT-4o-mini, Claude, etc.)&lt;/li&gt;
&lt;li&gt;Integrate with your own data sources or APIs&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The combination of OpenAI's Agents Framework and Dakora's prompt management creates a powerful foundation for building reliable, maintainable AI agents. Start simple, iterate fast, and scale as you learn.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built something cool with OpenAI Agents and Dakora? Share it in the &lt;a href="https://discord.gg/QSRRcFjzE8" rel="noopener noreferrer"&gt;Dakora Discord&lt;/a&gt;—we'd love to see what you create.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>openai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Stop Hardcoding Prompts: A Practical Workflow for AI Teams</title>
      <dc:creator>Bogdan Pistol</dc:creator>
      <pubDate>Thu, 02 Oct 2025 14:32:10 +0000</pubDate>
      <link>https://forem.com/bogdanpi/stop-hardcoding-prompts-a-practical-workflow-for-ai-teams-56l5</link>
      <guid>https://forem.com/bogdanpi/stop-hardcoding-prompts-a-practical-workflow-for-ai-teams-56l5</guid>
      <description>&lt;p&gt;If you’ve built &lt;em&gt;anything&lt;/em&gt; with OpenAI or other LLMs, chances are your prompts live as strings in your codebase:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;responses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-5&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Write a short bedtime story about a unicorn.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;output_text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works fine. Everything is in one place, and you don’t need to think twice about it.&lt;/p&gt;

&lt;p&gt;But this example is a single one-line prompt. Production-ready apps often use dozens of prompts, sometimes hundreds, and some stretch over hundreds of lines with parameters and variations.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem: Prompt Hell
&lt;/h2&gt;

&lt;p&gt;Inlining prompts directly into your code doesn’t scale:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Files become filled with long blocks of text.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Every tweak means hunting down the right string, editing it, then rebuilding and redeploying.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Larger prompts with multiple parameters quickly get unreadable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy-pasting across repos or services introduces errors.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You end up with what many call prompt hell — messy, time-consuming, and error-prone.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Naive Fix (and Its Limits)
&lt;/h2&gt;

&lt;p&gt;You might try moving prompts into YAML, JSON, or .env files. That’s better than inline, but it brings its own pain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Manually editing YAML is brittle (one wrong indent breaks everything).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Validation is poor — missing parameters or wrong types only show up at runtime.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Collaboration gets awkward when multiple people are changing raw text files.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is duct tape, not a workflow.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Better Way: Externalize and Manage Prompts
&lt;/h2&gt;

&lt;p&gt;Prompts deserve the same treatment as code and config: versioned, editable, and testable.&lt;/p&gt;

&lt;p&gt;That’s why tools like Dakora exist — lightweight, open source, and built for small teams who don’t want to fight with prompt sprawl.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://dakora.io/" rel="noopener noreferrer"&gt;Dakora&lt;/a&gt; you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep prompts in a central vault.&lt;/li&gt;
&lt;li&gt;Edit them in a clean web UI instead of raw YAML.&lt;/li&gt;
&lt;li&gt;Sync changes into your app instantly.&lt;/li&gt;
&lt;li&gt;Back everything with local files under Git for transparency and history.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Quickstart
&lt;/h2&gt;

&lt;p&gt;Getting started takes just three commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip install dakora
dakora init
dakora playground
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This opens a simple UI playground where you can manage and edit prompts in real time. No redeploys, no fiddly JSON.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why It Matters
&lt;/h2&gt;

&lt;p&gt;LLM apps evolve fast. If you’re shipping features, testing variations, or working with a team, you can’t afford to redeploy every time you tweak wording.&lt;/p&gt;

&lt;p&gt;Prompt management isn’t just a “nice to have.” It’s the difference between hacking on the weekend and running a reliable product.&lt;/p&gt;




&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;Dakora is open source and ready to use today:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;⭐ Star it on &lt;a href="https://github.com/bogdan-pistol/dakora" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;📖 Read the &lt;a href="https://dakora.io/" rel="noopener noreferrer"&gt;docs&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stop hardcoding prompts. Your team — and your future self — will thank you.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@missmii?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Mii Luthman&lt;/a&gt; on &lt;a href="https://unsplash.com/photos/grayscale-photo-of-rope-in-close-up-photography-zh2uzoV3LYk?utm_content=creditCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>opensource</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
