<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Alexis Roberson</title>
    <description>The latest articles on Forem by Alexis Roberson (@alexiskroberson).</description>
    <link>https://forem.com/alexiskroberson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1079944%2Facef8c6b-834d-4a0f-a3d8-7697136826f8.png</url>
      <title>Forem: Alexis Roberson</title>
      <link>https://forem.com/alexiskroberson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/alexiskroberson"/>
    <language>en</language>
    <item>
      <title>OpenTelemetry for LLM Applications: A Practical Guide with LaunchDarkly and Langfuse</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Wed, 04 Mar 2026 03:11:34 +0000</pubDate>
      <link>https://forem.com/alexiskroberson/opentelemetry-for-llm-applications-a-practical-guide-with-launchdarkly-and-langfuse-1a3a</link>
      <guid>https://forem.com/alexiskroberson/opentelemetry-for-llm-applications-a-practical-guide-with-launchdarkly-and-langfuse-1a3a</guid>
      <description>&lt;p&gt;Originally published in the LaunchDarkly &lt;a href="https://launchdarkly.com/docs/tutorials/otel-llm-practical-guide-with-langfuse" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LLM applications have a telemetry problem. Unlike traditional software where you can trace a bug to a specific line of code or a failed API call, LLM failures are a bit more nuanced. A response that's slightly off, a prompt that worked yesterday but not today, or a model swap can quietly degrade your user experience. OpenTelemetry gives you a structured way to pull back the curtain by capturing token usage, model metadata, latency, and agent responses so you truly know what's happening inside your application.&lt;/p&gt;

&lt;p&gt;This tutorial walks you through instrumenting a real LLM application with OTel spans, capturing the right attributes, and fanning out those traces simultaneously to Langfuse and LaunchDarkly's Guarded Releases. Both are LLM observability tools, but they give you different lenses on the same trace data. Langfuse is purpose-built for prompt debugging and cost analysis — surfacing prompt content, completions, and per-agent token usage. &lt;/p&gt;

&lt;p&gt;LaunchDarkly connects that same trace data to the specific model variant that was active during a request, giving you flag-correlated observability with automated rollback if a variant starts degrading your users' experience. One OTel collector, two complementary views, no custom integrations required.&lt;/p&gt;
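&lt;p&gt;The fan-out lives entirely in the collector configuration: one OTLP receiver feeding a single traces pipeline with multiple exporters. The sketch below shows the shape of such a config; the exact endpoints and auth headers are illustrative, so check the Langfuse and LaunchDarkly docs for the values your region and plan require.&lt;/p&gt;

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  # Debug exporter prints spans to the collector logs (handy for verification)
  debug: {}
  # Langfuse ingests OTLP/HTTP with Basic auth built from your key pair
  otlphttp/langfuse:
    endpoint: https://cloud.langfuse.com/api/public/otel   # or us.cloud..., per region
    headers:
      Authorization: Basic ${env:LANGFUSE_AUTH_HEADER}
  # LaunchDarkly observability ingest; endpoint and headers per the LD docs
  otlphttp/launchdarkly:
    endpoint: https://your-launchdarkly-otlp-endpoint

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlphttp/langfuse, otlphttp/launchdarkly]
```

&lt;p&gt;Every span the backend emits hits all three exporters, which is why the same trace shows up in both tools with no application-side changes.&lt;/p&gt;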

&lt;p&gt;Guarded Releases is LaunchDarkly's observability solution, encompassing application performance thresholds, release auto-remediation, and release monitoring, along with error monitoring and session replay.&lt;/p&gt;

&lt;h2&gt;The WorkLunch App&lt;/h2&gt;

&lt;p&gt;To see the full process of instrumenting an LLM application, I added a new feature to an app called &lt;a href="https://github.com/arober39/worklunch/tree/otel-launchdarkly-langfuse?tab=readme-ov-file" rel="noopener noreferrer"&gt;WorkLunch&lt;/a&gt;, where users can create and join office communities and swap lunches based on preference. Now they can also improve the description field of a lunch post to make it more appealing to potential swappers and receive recommendations for compatible swaps.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73xm3pukeujieovry9kj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73xm3pukeujieovry9kj.png" alt=" " width="794" height="689"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the initial description you might write, "Grilled cheese sandwich", then click the AI Suggest button. The app replaces it with, "Golden, buttery grilled cheese with perfectly melted cheese sandwiched between crispy white bread. This comfort food classic is grilled to perfection with a satisfying crunch on the outside and gooey, cheesy goodness on the inside. Simple, delicious, and guaranteed to hit the spot!"&lt;/p&gt;

&lt;p&gt;Now, which lunch post are you more likely to click on?&lt;/p&gt;

&lt;p&gt;This subtle addition takes the app from a fun, simple lunch-swap experience to a viable LLM application that requires the same visibility and observability as traditional systems. OpenTelemetry lets you extract the necessary data (token count, model name, agent responses, and so on) to properly debug system failures.&lt;/p&gt;

&lt;h2&gt;Multi-Agent Architecture&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuubrkizshre7gy94sfpi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuubrkizshre7gy94sfpi.png" alt=" " width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The WorkLunch backend uses three agents to rewrite the lunch post description and find good lunch swaps.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The orchestrator coordinates the other two agents. It receives the user's request and the model type, calls the description agent first, then passes the generated description into the match agent. It acts as the parent span that ties the whole chain together.&lt;/li&gt;
&lt;li&gt;The description agent takes the user's sparse lunch post input and calls Claude to generate an appealing 2-3 sentence description.&lt;/li&gt;
&lt;li&gt;The match agent takes the user's lunch post (including the description just generated) plus a list of other active posts in the community, and uses AI to suggest 2-3 posts that would make good swaps.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These features are controlled by two &lt;a href="https://launchdarkly.com/docs/guides/flags" rel="noopener noreferrer"&gt;feature flags&lt;/a&gt;, one for enabling the AI suggest feature and the other to control which model version the app uses. Every layer gets its own OTel span, creating a trace tree that shows the full request lifecycle.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;Before you start, you'll need the following installed locally and accounts set up with the services below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.docker.com/products/docker-desktop/" rel="noopener noreferrer"&gt;Docker Desktop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodejs.org/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt; / npm&lt;/li&gt;
&lt;li&gt;An &lt;a href="https://console.anthropic.com/" rel="noopener noreferrer"&gt;Anthropic&lt;/a&gt; account (or OpenAI)&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://app.launchdarkly.com/" rel="noopener noreferrer"&gt;LaunchDarkly&lt;/a&gt; account (sdk key and access token)&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://cloud.langfuse.com/" rel="noopener noreferrer"&gt;Langfuse&lt;/a&gt; account (public and secret key)&lt;/li&gt;
&lt;li&gt;A &lt;a href="https://supabase.com/" rel="noopener noreferrer"&gt;Supabase&lt;/a&gt; account (database configuration)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Environment variables&lt;/h3&gt;

&lt;p&gt;Once you're all set up, clone the &lt;a href="https://github.com/arober39/worklunch/tree/otel-launchdarkly-langfuse?tab=readme-ov-file" rel="noopener noreferrer"&gt;WorkLunch repo&lt;/a&gt;. Copy the example &lt;code&gt;.env&lt;/code&gt; file and fill in your values:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your &lt;code&gt;.env&lt;/code&gt; should contain:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# .env&lt;/span&gt;

&lt;span class="c"&gt;# Supabase (required for the app)&lt;/span&gt;
&lt;span class="nv"&gt;EXPO_PUBLIC_SUPABASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://your-project.supabase.co
&lt;span class="nv"&gt;EXPO_PUBLIC_SUPABASE_ANON_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-anon-key

&lt;span class="c"&gt;# LaunchDarkly client-side (required for feature flags in the frontend)&lt;/span&gt;
&lt;span class="nv"&gt;EXPO_PUBLIC_LAUNCHDARKLY_SDK_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mob-your-mobile-key
&lt;span class="nv"&gt;EXPO_PUBLIC_LAUNCHDARKLY_CLIENT_SIDE_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-client-side-id

&lt;span class="c"&gt;# AI Backend URL (where docker compose runs the Python backend)&lt;/span&gt;
&lt;span class="nv"&gt;EXPO_PUBLIC_AI_BACKEND_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:8000

&lt;span class="c"&gt;# --- Docker Compose vars (used by the backend + otel-collector) ---&lt;/span&gt;

&lt;span class="c"&gt;# Anthropic API key for Claude&lt;/span&gt;
&lt;span class="nv"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sk-ant-your-key-here

&lt;span class="c"&gt;# LaunchDarkly server-side SDK key (starts with sdk-, NOT mob-)&lt;/span&gt;
&lt;span class="nv"&gt;LD_SDK_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;sdk-your-key-here

&lt;span class="c"&gt;# Langfuse auth — Base64 of "public_key:secret_key" (keep on one line)&lt;/span&gt;
&lt;span class="nv"&gt;LANGFUSE_AUTH_HEADER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-base64-encoded-string
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Supabase setup&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a new Supabase project and grab your &lt;strong&gt;Project URL&lt;/strong&gt; and &lt;strong&gt;Anon key&lt;/strong&gt; from &lt;strong&gt;Dashboard → Settings → API&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Run the migration files in &lt;code&gt;supabase/migrations/&lt;/code&gt; to create the database schema. Execute them in order in the &lt;strong&gt;Supabase Dashboard → SQL Editor&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;supabase/migrations/

20240101000000_initial_schema.sql        ← tables: profiles, spaces, posts, proposals, messages, trades
20240101000001_rls_policies.sql          ← row-level security policies
20240101000002_storage_setup.sql         ← storage bucket for post photos
20240101000003_disable_email_confirmation.sql  ← simplifies local dev auth
20240205000000_fix_space_memberships_rls_recursion.sql
20240205000001_spaces_delete_policy.sql
20240206000000_space_creator_as_admin.sql
20240206100000_delete_space_rpc.sql
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Be sure to run each file in order as later migrations depend on tables and policies from earlier ones.&lt;/p&gt;
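&lt;p&gt;The timestamp prefixes mean plain lexicographic order is also chronological order, so any tool that processes the files sorted by name (a shell glob, &lt;code&gt;sorted()&lt;/code&gt; in a script) applies them correctly. A quick sanity check:&lt;/p&gt;

```python
# Migration filenames sort into apply order because the timestamp prefix
# is fixed-width and zero-padded
migrations = [
    "20240206100000_delete_space_rpc.sql",
    "20240101000001_rls_policies.sql",
    "20240205000000_fix_space_memberships_rls_recursion.sql",
    "20240101000000_initial_schema.sql",
]
apply_order = sorted(migrations)
print(apply_order[0])   # → 20240101000000_initial_schema.sql
print(apply_order[-1])  # → 20240206100000_delete_space_rpc.sql
```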

&lt;h3&gt;LaunchDarkly setup&lt;/h3&gt;

&lt;p&gt;Create two feature flags in your new LaunchDarkly WorkLunch project:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;ai-suggest-enabled&lt;/code&gt; — Boolean flag, client-side. Gates visibility of the AI Suggest button in the frontend. Set it to &lt;code&gt;true&lt;/code&gt; for users you want to test with.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;llm-model-variant&lt;/code&gt; — String flag, server-side. Controls which Claude model the backend uses. Set the default value to &lt;code&gt;claude-sonnet-4-20250514&lt;/code&gt;. Add a variation for &lt;code&gt;claude-haiku-4-5-20251001&lt;/code&gt; if you want to experiment with a faster/cheaper model.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1vuci2rsfch96eknobc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1vuci2rsfch96eknobc.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny2cmem7myiz2jsrs382.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fny2cmem7myiz2jsrs382.png" alt=" " width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Langfuse setup&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create a new project in Langfuse (note whether your project URL starts with &lt;code&gt;us.cloud.langfuse.com&lt;/code&gt; or &lt;code&gt;cloud.langfuse.com&lt;/code&gt; — this determines your region)&lt;/li&gt;
&lt;li&gt;Go to &lt;strong&gt;Project Settings → API Keys&lt;/strong&gt; and create a new key pair&lt;/li&gt;
&lt;li&gt;Generate your Base64 auth header and add it to your &lt;code&gt;.env&lt;/code&gt; file:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"pk-lf-your-public-key:sk-lf-your-secret-key"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
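&lt;p&gt;If you'd rather not shell out, Python's standard library produces the same value (the keys below are placeholders for your real pair):&lt;/p&gt;

```python
import base64

public_key = "pk-lf-your-public-key"  # placeholder
secret_key = "sk-lf-your-secret-key"  # placeholder

# Same result as `echo -n "pk:sk" | base64`: encode "public:secret" as Base64
auth_header = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
print(auth_header)
```

&lt;p&gt;Note the &lt;code&gt;-n&lt;/code&gt; in the shell version matters for the same reason this snippet doesn't append a newline: a trailing newline inside the encoded string will break authentication.&lt;/p&gt;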



&lt;h3&gt;Quick Start&lt;/h3&gt;

&lt;p&gt;Once your &lt;code&gt;.env&lt;/code&gt; is configured:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install frontend dependencies&lt;/span&gt;
npm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Start the OTel Collector + Python backend&lt;/span&gt;
docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;

&lt;span class="c"&gt;# In a separate terminal, start the Expo dev server&lt;/span&gt;
npm run web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify traces are flowing by checking the collector logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose logs &lt;span class="nt"&gt;-f&lt;/span&gt; otel-collector
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should see spans with &lt;code&gt;gen_ai.*&lt;/code&gt; attributes and &lt;code&gt;feature_flag&lt;/code&gt; events printed by the debug exporter.&lt;/p&gt;

&lt;p&gt;Now, let's take a look at how each agent is instrumented to send spans to LaunchDarkly.&lt;/p&gt;

&lt;h2&gt;Step 1: Instrument your LLM application&lt;/h2&gt;

&lt;h3&gt;Initialize the Tracer and Application&lt;/h3&gt;

&lt;p&gt;The FastAPI app sets up OTel, LaunchDarkly, CORS, and auto-instrumentation in a single lifespan handler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# backend/app/main.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;contextlib&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;asynccontextmanager&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ldclient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastAPI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi.middleware.cors&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;CORSMiddleware&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.exporter.otlp.proto.grpc.trace_exporter&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OTLPSpanExporter&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.instrumentation.fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastAPIInstrumentor&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.sdk.trace&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TracerProvider&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry.sdk.trace.export&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BatchSpanProcessor&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.config&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;settings&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.routers.suggest&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;router&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;suggest_router&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_otel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Configure OpenTelemetry with OTLP gRPC exporter.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;TracerProvider&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_span_processor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nc"&gt;BatchSpanProcessor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="nc"&gt;OTLPSpanExporter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;OTEL_EXPORTER_ENDPOINT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;insecure&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_tracer_provider&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;setup_launchdarkly&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Initialize LaunchDarkly server SDK.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ldclient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LD_SDK_KEY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;ldclient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@asynccontextmanager&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lifespan&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nf"&gt;setup_otel&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="nf"&gt;setup_launchdarkly&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;yield&lt;/span&gt;
    &lt;span class="n"&gt;ldclient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;provider&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer_provider&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;hasattr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shutdown&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;shutdown&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastAPI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WorkLunch AI Backend&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lifespan&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;lifespan&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# CORS
&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_middleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;CORSMiddleware&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;allow_origins&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;allow_credentials&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;allow_methods&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;allow_headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Instrument FastAPI with OTel
&lt;/span&gt;&lt;span class="n"&gt;FastAPIInstrumentor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;instrument_app&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Routes
&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;include_router&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;suggest_router&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prefix&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/api/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@app.get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/health&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;health&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;The Route: Flag evaluation + feature flag span event&lt;/h3&gt;

&lt;p&gt;The FastAPI route is where the LaunchDarkly flag gets evaluated. The &lt;code&gt;feature_flag&lt;/code&gt; span event on this span is what LaunchDarkly's observability layer looks for when correlating traces with flag evaluations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# backend/app/routers/suggest.py
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ldclient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastapi&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;APIRouter&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;orchestrator&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SuggestRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SuggestResponse&lt;/span&gt;

&lt;span class="n"&gt;router&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;APIRouter&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;tracer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;worklunch.routers.suggest&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;DEFAULT_MODEL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;claude-sonnet-4-20250514&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;


&lt;span class="nd"&gt;@router.post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/suggest&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response_model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;SuggestResponse&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;suggest&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;SuggestRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;SuggestResponse&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;suggest.endpoint&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Evaluate the model variant flag
&lt;/span&gt;        &lt;span class="n"&gt;ld_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ldclient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ldclient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;worklunch-backend&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;service&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ld_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;variation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm-model-variant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;DEFAULT_MODEL&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Emit the feature_flag span event — this is what LD correlates with
&lt;/span&gt;        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;feature_flag&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;feature_flag.key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llm-model-variant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;feature_flag.provider.name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;LaunchDarkly&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;feature_flag.variant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.request.model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# The flag-controlled model flows into the orchestrator
&lt;/span&gt;        &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;matched_posts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;orchestrator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;SuggestResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;suggested_description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;matched_posts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;matched_posts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
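This feature_flag event pattern repeats wherever a flag gates a model choice, so it can be worth factoring into a small helper. A minimal sketch (the helper name and the stub span are illustrative, not part of the app above; the variant value is a placeholder model id):

```python
class _StubSpan:
    """Minimal stand-in for an OTel span, used only to demo the helper."""

    def __init__(self):
        self.events = []

    def add_event(self, name, attributes=None):
        self.events.append((name, attributes or {}))


def record_flag_evaluation(span, flag_key: str, variant) -> None:
    """Attach the feature_flag span event that LaunchDarkly correlates with."""
    span.add_event(
        "feature_flag",
        {
            "feature_flag.key": flag_key,
            "feature_flag.provider.name": "LaunchDarkly",
            "feature_flag.variant": str(variant),
        },
    )


span = _StubSpan()
record_flag_evaluation(span, "llm-model-variant", "claude-sonnet-4-5")
print(span.events[0][0])  # → feature_flag
```

In the real application the span would come from `tracer.start_as_current_span(...)`, which exposes the same `add_event` method.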



&lt;h3&gt;
  
  
  The Orchestrator: Parent span for the agent chain
&lt;/h3&gt;

&lt;p&gt;The orchestrator creates a parent span and calls each sub-agent sequentially. Because the sub-agent spans are created while the orchestrator span is active, OTel automatically nests them as children.&lt;br&gt;
&lt;/p&gt;
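The nesting mechanism is worth seeing in isolation. This stdlib-only toy (not the OTel SDK, just an illustration of its context-variable approach) shows why spans opened while a parent is active automatically become its children:

```python
import contextvars
from contextlib import contextmanager

# The currently active span lives in a context variable, mirroring OTel's context API.
_current_span = contextvars.ContextVar("current_span", default=None)


class ToySpan:
    def __init__(self, name, parent):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)


@contextmanager
def start_as_current_span(name):
    # A new span adopts whichever span is active at creation time as its parent.
    span = ToySpan(name, _current_span.get())
    token = _current_span.set(span)
    try:
        yield span
    finally:
        _current_span.reset(token)


with start_as_current_span("orchestrator.run") as root:
    with start_as_current_span("description_agent.generate"):
        pass
    with start_as_current_span("match_agent.find_matches"):
        pass

print([child.name for child in root.children])
# → ['description_agent.generate', 'match_agent.find_matches']
```

This is why the orchestrator below never passes its span to the sub-agents explicitly: the parent/child relationship rides along in the ambient context.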

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# backend/app/agents/orchestrator.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.agents.description_agent&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;generate_description&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.agents.match_agent&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;find_matches&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MatchedPost&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;SuggestRequest&lt;/span&gt;

&lt;span class="n"&gt;tracer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;worklunch.orchestrator&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;SuggestRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;tuple&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;MatchedPost&lt;/span&gt;&lt;span class="p"&gt;]]:&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;orchestrator.run&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;orchestrator.model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;orchestrator.title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;orchestrator.active_posts_count&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;active_posts&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

        &lt;span class="c1"&gt;# Step 1: Generate description
&lt;/span&gt;        &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;generate_description&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Step 2: Find matches using the generated description
&lt;/span&gt;        &lt;span class="n"&gt;matched_posts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;find_matches&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;dietary_preferences&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dietary_preferences&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;active_posts&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;active_posts&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;orchestrator.matches_found&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matched_posts&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;matched_posts&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Description Agent: LLM call with GenAI semantic conventions
&lt;/h3&gt;

&lt;p&gt;This is where the &lt;a href="https://opentelemetry.io/docs/specs/semconv/gen-ai/" rel="noopener noreferrer"&gt;OTel GenAI Semantic Conventions&lt;/a&gt; come in. They define a standard attribute schema for LLM spans: &lt;code&gt;gen_ai.system&lt;/code&gt;, &lt;code&gt;gen_ai.request.model&lt;/code&gt;, &lt;code&gt;gen_ai.usage.*&lt;/code&gt;, plus prompt and completion content recorded as span events.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# backend/app/agents/description_agent.py
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;anthropic&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.config&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;settings&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SuggestRequest&lt;/span&gt;

&lt;span class="n"&gt;tracer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;worklunch.agents.description&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_description&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;SuggestRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;anthropic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Anthropic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;system_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful assistant that writes appealing, concise lunch descriptions &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;for a lunch-swapping app. Given a title and optional details, write a 2-3 sentence &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;description that makes the lunch sound appetizing and highlights what makes it special. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Mention any dietary info naturally if provided. Keep it friendly and casual.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;user_content_parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Lunch title: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;user_content_parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Current description: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;user_content_parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Category: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dietary_preferences&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;user_content_parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Dietary preferences: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;dietary_preferences&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;allergies&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;user_content_parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Allergies to note: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;allergies&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;user_content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_content_parts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_content&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;

    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;description_agent.generate&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# GenAI semantic conventions — provider and request attributes
&lt;/span&gt;        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anthropic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.request.model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.request.max_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.request.temperature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Log prompt as a span event (keeps large payloads out of the attribute index)
&lt;/span&gt;        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.content.prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;)},&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;system&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;

        &lt;span class="c1"&gt;# Response attributes — model identity, finish reason, token usage
&lt;/span&gt;        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.response.model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.response.finish_reasons&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stop_reason&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;end_turn&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.usage.input_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;usage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;input_tokens&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.usage.output_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;usage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;output_tokens&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Log completion as a span event
&lt;/span&gt;        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.content.completion&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.completion&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
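Once `gen_ai.usage.input_tokens` and `gen_ai.usage.output_tokens` land on the span, cost estimation becomes simple arithmetic for downstream tooling. A sketch, with a placeholder model id and placeholder per-million-token prices rather than real Anthropic pricing:

```python
# Placeholder USD prices per million tokens; real pricing varies by model and over time.
PRICES_PER_MTOK = {
    "example-model": {"input": 3.00, "output": 15.00},
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost from the gen_ai.usage.* values recorded on a span."""
    prices = PRICES_PER_MTOK[model]
    return (input_tokens * prices["input"] + output_tokens * prices["output"]) / 1_000_000


print(f"${estimate_cost('example-model', 412, 186):.6f}")  # → $0.004026
```

Because the attributes follow the convention, the same aggregation works across every agent span regardless of which model the flag served.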



&lt;h3&gt;
  
  
  The Match Agent: Structured JSON output from an LLM
&lt;/h3&gt;

&lt;p&gt;The match agent follows the same GenAI span pattern but tunes the request differently (a lower temperature for more deterministic output, a higher token budget for JSON) and adds post-processing to parse structured JSON out of the LLM response.&lt;br&gt;
&lt;/p&gt;
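That parsing step deserves some defensiveness, since models occasionally wrap the array in a markdown fence or surrounding prose. A sketch of the idea (`parse_matches` is illustrative, not the app's exact implementation):

```python
import json
import re


def parse_matches(raw: str) -> list:
    """Pull the first JSON array out of an LLM reply, tolerating fences and prose."""
    text = raw.strip()
    # Strip a leading/trailing markdown code fence if the model added one.
    text = re.sub(r"^```(?:json)?\s*|\s*```$", "", text)
    # Fall back to the first [...] region when prose surrounds the array.
    if not text.startswith("["):
        match = re.search(r"\[.*\]", text, re.DOTALL)
        text = match.group(0) if match else "[]"
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        return []


reply = 'Sure! [{"post_id": 7, "title": "Pad Thai", "reason": "Complementary flavors."}]'
print(parse_matches(reply)[0]["post_id"])  # → 7
```

Returning an empty list on failure keeps a malformed completion from crashing the request; the span attributes and events still record what the model actually said.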

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# backend/app/agents/match_agent.py
&lt;/span&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;

&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;anthropic&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;opentelemetry&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;

&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.config&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;settings&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;app.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ActivePost&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MatchedPost&lt;/span&gt;

&lt;span class="n"&gt;tracer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;trace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_tracer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;worklunch.agents.match&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;find_matches&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;dietary_preferences&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;active_posts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;ActivePost&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;MatchedPost&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;active_posts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

    &lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;anthropic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Anthropic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;settings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;ANTHROPIC_API_KEY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;system_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a lunch-matching assistant. Given a user&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s lunch post and a list of &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;active posts from other users, suggest 2-3 posts that would make good swaps. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Consider complementary flavors, dietary compatibility, and variety. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Respond with valid JSON only — an array of objects with keys: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="s"&gt;post_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reason&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;. Keep reasons to one short sentence.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;posts_text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;- ID: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;, Title: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;, Description: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Category: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;, By: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;p&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;user_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;p&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;active_posts&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;user_parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;My lunch: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Description: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;user_parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Category: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;dietary_preferences&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;user_parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;My dietary preferences: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;dietary_preferences&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;user_parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Available posts to match with:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;posts_text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;user_content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_parts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_content&lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;

    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;tracer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start_as_current_span&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;match_agent.find&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anthropic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.request.model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.request.max_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.request.temperature&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.content.prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;)},&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;system&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;raw&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;

        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.response.model&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.response.finish_reasons&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;stop_reason&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;end_turn&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.usage.input_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;usage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;input_tokens&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_attribute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.usage.output_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;usage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;output_tokens&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="n"&gt;span&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.content.completion&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gen_ai.completion&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Parse the JSON response
&lt;/span&gt;    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;cleaned&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;raw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;cleaned&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startswith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;```

&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
            &lt;span class="n"&gt;cleaned&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cleaned&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="n"&gt;cleaned&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cleaned&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rsplit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;

```&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;matches_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cleaned&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;MatchedPost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;m&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;matches_data&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
    &lt;span class="nf"&gt;except &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;JSONDecodeError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;KeyError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;TypeError&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each of these agents, Langfuse receives the full trace including prompt/completion content for debugging. LaunchDarkly receives the same trace and correlates the &lt;code&gt;feature_flag&lt;/code&gt; event with the HTTP span for experimentation metrics.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Configure the OTel collector
&lt;/h2&gt;

&lt;p&gt;This is where the fan-out happens. The collector receives traces over OTLP and exports them to both backends simultaneously. The key is that multiple pipelines can share the same receiver: you configure one &lt;code&gt;receivers&lt;/code&gt; block and reference it from several entries in &lt;code&gt;pipelines&lt;/code&gt; — no duplicated ingestion, no changes to application code.&lt;/p&gt;

&lt;h3&gt;
  
  
  otel-collector-config.yaml
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# otel-collector-config.yaml&lt;/span&gt;
&lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;protocols&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;grpc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0:4317&lt;/span&gt;
      &lt;span class="na"&gt;http&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0.0.0.0:4318&lt;/span&gt;

&lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;5s&lt;/span&gt;
    &lt;span class="na"&gt;send_batch_size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;512&lt;/span&gt;

  &lt;span class="c1"&gt;# Stamp traces with the LD project identifier so the endpoint&lt;/span&gt;
  &lt;span class="c1"&gt;# knows which project they belong to&lt;/span&gt;
  &lt;span class="na"&gt;resource/launchdarkly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;attributes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;launchdarkly.project_id&lt;/span&gt;
        &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${env:LD_SDK_KEY}"&lt;/span&gt;
        &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;upsert&lt;/span&gt;

&lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Langfuse — LLM-specific traces with full prompt content&lt;/span&gt;
  &lt;span class="na"&gt;otlphttp/langfuse&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://us.cloud.langfuse.com/api/public/otel&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Authorization&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Basic&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;${env:LANGFUSE_AUTH_HEADER}"&lt;/span&gt;

  &lt;span class="c1"&gt;# LaunchDarkly — flag-correlated observability&lt;/span&gt;
  &lt;span class="c1"&gt;# No auth header needed; identification is via the&lt;/span&gt;
  &lt;span class="c1"&gt;# launchdarkly.project_id resource attribute&lt;/span&gt;
  &lt;span class="na"&gt;otlphttp/launchdarkly&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://otel.observability.app.launchdarkly.com&lt;/span&gt;

  &lt;span class="c1"&gt;# Debug exporter for local development&lt;/span&gt;
  &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;verbosity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;detailed&lt;/span&gt;

&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;pipelines&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Pipeline 1: Full LLM traces to Langfuse (includes prompt content)&lt;/span&gt;
    &lt;span class="na"&gt;traces/llm-observability&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlphttp/langfuse&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;# Pipeline 2: Flag-correlated traces to LaunchDarkly&lt;/span&gt;
    &lt;span class="na"&gt;traces/feature-flags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;resource/launchdarkly&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlphttp/launchdarkly&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;# Pipeline 3: Debug output for development&lt;/span&gt;
    &lt;span class="na"&gt;traces/debug&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;receivers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;otlp&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;processors&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;batch&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
      &lt;span class="na"&gt;exporters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;debug&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
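&lt;p&gt;The &lt;code&gt;LANGFUSE_AUTH_HEADER&lt;/code&gt; referenced in the exporter config is a standard HTTP Basic credential: the base64 encoding of &lt;code&gt;public_key:secret_key&lt;/code&gt;. A minimal sketch for generating it (the key values below are placeholders — use your real Langfuse keys):&lt;/p&gt;

```python
import base64
import os


def langfuse_basic_auth(public_key: str, secret_key: str) -> str:
    """Build the base64 value for a Basic auth header (public:secret)."""
    raw = f"{public_key}:{secret_key}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")


# Placeholder keys -- read your real ones from the environment.
header = langfuse_basic_auth(
    os.environ.get("LANGFUSE_PUBLIC_KEY", "pk-lf-example"),
    os.environ.get("LANGFUSE_SECRET_KEY", "sk-lf-example"),
)
print(header)
```

&lt;p&gt;Export the printed value as &lt;code&gt;LANGFUSE_AUTH_HEADER&lt;/code&gt; in the environment the collector runs in before starting it.&lt;/p&gt;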



&lt;p&gt;Now if we rerun our application, we should see LaunchDarkly Traces capturing the OTel spans.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;
npm run web &lt;span class="c"&gt;# in a separate terminal&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfdg92zaubvk6tuvbc9b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgfdg92zaubvk6tuvbc9b.png" alt=" " width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How LaunchDarkly processes OTel traces
&lt;/h3&gt;

&lt;p&gt;LaunchDarkly receives traces for logging and converts OTel span data into &lt;strong&gt;events&lt;/strong&gt; for use with Experimentation and Guarded Rollouts. The process works like this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Your application emits a span that covers an HTTP request (or LLM call). This span carries standard HTTP attributes: &lt;code&gt;http.response.status_code&lt;/code&gt;, &lt;code&gt;http.route&lt;/code&gt;, latency derived from span duration.&lt;/li&gt;
&lt;li&gt;On that same span (or a parent span in the same trace), you've emitted a &lt;code&gt;feature_flag&lt;/code&gt; span event with &lt;code&gt;feature_flag.key&lt;/code&gt; and &lt;code&gt;feature_flag.variant&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;LaunchDarkly's collector ingests the trace and looks for HTTP spans that &lt;strong&gt;overlap&lt;/strong&gt; with spans containing at least one &lt;code&gt;feature_flag&lt;/code&gt; event. When it finds a match, it produces a metric event associating the flag variant with the observed latency and error rate (5xx status codes).&lt;/li&gt;
&lt;li&gt;Those metric events flow into Experimentation, where they become the outcome metrics for your flag-controlled A/B test — for example, comparing &lt;code&gt;claude-sonnet-4-20250514&lt;/code&gt; vs &lt;code&gt;claude-haiku-4-5-20251001&lt;/code&gt; on p95 latency and error rate without writing a single line of custom metric instrumentation.&lt;/li&gt;
&lt;/ol&gt;
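&lt;p&gt;The overlap matching in step 3 can be sketched in plain Python. This is an illustration of the idea only — the real matching logic is internal to LaunchDarkly's collector:&lt;/p&gt;

```python
from dataclasses import dataclass, field


@dataclass
class Span:
    name: str
    start: float  # epoch seconds
    end: float
    events: list = field(default_factory=list)       # span event names
    attributes: dict = field(default_factory=dict)


def flag_correlated_http_spans(spans):
    """Return HTTP spans that overlap in time with a span carrying at
    least one feature_flag event -- the shape of the correlation
    described above, not LaunchDarkly's actual implementation."""
    flag_spans = [s for s in spans if "feature_flag" in s.events]
    matched = []
    for s in spans:
        if "http.response.status_code" not in s.attributes:
            continue  # only HTTP spans produce metric events
        if any(f.start <= s.end and s.start <= f.end for f in flag_spans):
            matched.append(s)
    return matched


http_span = Span("POST /suggest", 0.0, 1.2,
                 attributes={"http.response.status_code": 200})
flag_span = Span("match_agent.find", 0.1, 1.0, events=["feature_flag"])
print([s.name for s in flag_correlated_http_spans([http_span, flag_span])])
# → ['POST /suggest']
```

&lt;p&gt;Note that the flag event doesn't have to live on the HTTP span itself; time overlap within the trace is enough.&lt;/p&gt;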

&lt;p&gt;Every span in the agent chain is nested under a single trace. The collector fans out that trace to both backends simultaneously. Langfuse gets the full LLM details for prompt debugging and cost analysis. LaunchDarkly gets the flag-correlated signal it needs for automated rollout decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key attributes from gen_ai trace spans
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feev4o26mesmcpozrljfq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feev4o26mesmcpozrljfq.png" alt=" " width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Trigger a guarded rollout
&lt;/h2&gt;

&lt;p&gt;With traces flowing into LaunchDarkly and span events carrying your flag evaluations, you can now configure a Guarded Rollout that automatically rolls back the AI Suggest feature if token costs spike or response truncation increases as you increase the percentage of users who see it.&lt;/p&gt;

&lt;p&gt;In the LaunchDarkly UI, navigate to your flag (&lt;code&gt;ai-suggest-enabled&lt;/code&gt;), click Edit under the Default rule, and select Guarded Rollout.&lt;/p&gt;

&lt;p&gt;You'll need to create two custom metrics to attach to the guarded rollout. The first is the AI tokens total metric, which measures cost per request as a gate for releasing the feature to a wider audience and alerts if average tokens per request exceed your baseline by more than 25%. The second is the AI completion truncated metric, which catches response truncation before users notice degraded output quality and halts the rollout if the truncation rate climbs above your control baseline.&lt;/p&gt;

&lt;p&gt;For &lt;code&gt;ai.tokens.total&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event kind: Custom&lt;/li&gt;
&lt;li&gt;Event key: &lt;code&gt;ai.tokens.total&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;What do you want to measure?: Value / Size (Numeric) — you're passing the actual token count as the magnitude&lt;/li&gt;
&lt;li&gt;Metric name: AI tokens total&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7od4b6skjlcos2a99a39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7od4b6skjlcos2a99a39.png" alt=" " width="800" height="983"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For &lt;code&gt;ai.completion.truncated&lt;/code&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event kind: Custom&lt;/li&gt;
&lt;li&gt;Event key: &lt;code&gt;ai.completion.truncated&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;What do you want to measure?: Occurrence (Binary) — you're tracking whether truncation happened at least once, not how many times&lt;/li&gt;
&lt;li&gt;Metric name: AI completion truncated&lt;/li&gt;
&lt;/ul&gt;
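&lt;p&gt;On the application side, these two event keys are emitted per request. A hedged sketch — &lt;code&gt;track&lt;/code&gt; here is any callable shaped like &lt;code&gt;track(event_key, metric_value=None)&lt;/code&gt;; with the LaunchDarkly Python SDK you'd wrap the client's track call (which also takes the evaluation context) to match:&lt;/p&gt;

```python
def emit_ai_metrics(track, input_tokens: int, output_tokens: int,
                    stop_reason: str) -> None:
    """Emit the two custom events the guarded rollout watches."""
    # Numeric metric: total tokens for this request (a cost proxy).
    track("ai.tokens.total", metric_value=input_tokens + output_tokens)
    # Binary metric: fires only when the model hit the max_tokens cap.
    if stop_reason == "max_tokens":
        track("ai.completion.truncated")


# Tiny demonstration with a recording stub in place of the SDK client.
events = []


def fake_track(key, metric_value=None):
    events.append((key, metric_value))


emit_ai_metrics(fake_track, input_tokens=812, output_tokens=512,
                stop_reason="max_tokens")
print(events)
# → [('ai.tokens.total', 1324), ('ai.completion.truncated', None)]
```

&lt;p&gt;Because the truncated metric is an occurrence metric, the event carries no value — its presence is the signal.&lt;/p&gt;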

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uw8wp28y1u00qehqvkl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3uw8wp28y1u00qehqvkl.png" alt=" " width="627" height="920"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the two newly created metrics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2v80v23curi9epw5t41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff2v80v23curi9epw5t41.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set the threshold to 25 percent for 1 week.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsm7ixlv2fawremunu9ul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsm7ixlv2fawremunu9ul.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click Save.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzdm5a0qkcnvm4ssy5m0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftzdm5a0qkcnvm4ssy5m0.png" alt=" " width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;LaunchDarkly will now monitor both metrics against the &lt;code&gt;ai-suggest-enabled&lt;/code&gt; flag and trigger an automatic rollback if either threshold is breached.&lt;/p&gt;

&lt;h2&gt;
  
  
  What You've Built
&lt;/h2&gt;

&lt;p&gt;At this point you have a fully instrumented LLM application where every layer of the stack tells a story. The FastAPI route evaluates a LaunchDarkly flag and stamps the result onto the trace. The orchestrator creates a parent span that ties the entire agent chain together. Each agent makes a Claude API call and records exactly what was sent, what came back, and how many tokens it cost. The OTel Collector fans all of that out to two backends simultaneously without a single line of application code changing between them.&lt;/p&gt;

&lt;p&gt;Langfuse gives you the LLM-specific view: prompt content, completions, token usage, and latency per agent so you can debug why a description came out wrong or whether the match agent is consistently burning more tokens than expected. LaunchDarkly gives you the experimentation view: which model variant was active during a given request, how latency and error rates compare between claude-sonnet-4-20250514 and claude-haiku, and the automated safety net to roll back if a new variant starts degrading your users' experience. Both tools are consuming the same trace. Neither required a custom integration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;LLM applications fail in ways that traditional monitoring wasn't designed to catch. OpenTelemetry gives you a standard schema for capturing those failure signals, and the collector architecture gives you the flexibility to route that signal wherever it's most useful.&lt;/p&gt;

&lt;p&gt;If you're building anything with LLMs in production, start here. Instrument at the agent level, follow the GenAI semantic conventions, and build your observability pipeline before you need it.&lt;/p&gt;

&lt;p&gt;The full source code for the WorkLunch app is available &lt;a href="https://github.com/arober39/worklunch/tree/otel-launchdarkly-langfuse?tab=readme-ov-file" rel="noopener noreferrer"&gt;here&lt;/a&gt;. Clone it, swap in your API keys, and you'll have a working multi-agent trace pipeline running locally in under ten minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://launchdarkly.com/docs/home/releases/guarded-rollouts" rel="noopener noreferrer"&gt;Guarded Releases&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://app.launchdarkly.com/signup" rel="noopener noreferrer"&gt;LaunchDarkly 14 Day Free Trial&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>opentelemetry</category>
      <category>observability</category>
      <category>llm</category>
      <category>multiagent</category>
    </item>
    <item>
      <title>Detection to Resolution: Real world debugging with rage clicks and session replay</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Wed, 04 Mar 2026 03:00:54 +0000</pubDate>
      <link>https://forem.com/alexiskroberson/detection-to-resolution-real-world-debugging-with-rage-clicks-and-session-replay-bfp</link>
      <guid>https://forem.com/alexiskroberson/detection-to-resolution-real-world-debugging-with-rage-clicks-and-session-replay-bfp</guid>
      <description>&lt;p&gt;Originally published in the LaunchDarkly &lt;a href="https://launchdarkly.com/docs/tutorials/detection-to-resolution-rage-clicks" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Part 3 of 3: Rage Click Detection with LaunchDarkly&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://dev.to/tutorials/detecting-user-frustration-session-replay"&gt;Part 1&lt;/a&gt;, we set up rage click detection using LaunchDarkly's &lt;a href="https://dev.to/home/observability/session-replay"&gt;Session Replay&lt;/a&gt;. In &lt;a href="https://dev.to/tutorials/connecting-rage-clicks-to-guarded-releases"&gt;Part 2&lt;/a&gt;, we connected those frustration signals to &lt;a href="https://dev.to/home/releases/guarded-rollouts"&gt;guarded releases&lt;/a&gt; for automated rollback protection.&lt;/p&gt;

&lt;p&gt;Now it's time to put it all together. In this final installment, we'll walk through real-world debugging scenarios using our WorkLunch app—a cross-platform application built with React Native and Expo where coworkers can swap lunches.&lt;/p&gt;

&lt;p&gt;These scenarios demonstrate how the integrated system of feature flags, session replay, and guarded releases can transform the way you diagnose and fix production issues.&lt;/p&gt;

&lt;p&gt;All code for this blog post can be found &lt;a href="https://github.com/arober39/worklunch" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Debugging Workflow: An Observability Loop
&lt;/h2&gt;

&lt;p&gt;Before diving into scenarios, let's understand the complete workflow we've built:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa036rz0sh8pmk6sijlcs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa036rz0sh8pmk6sijlcs.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This workflow enables you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Detect&lt;/strong&gt; frustration signals automatically as they happen.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alert&lt;/strong&gt; when thresholds are breached during rollouts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Investigate&lt;/strong&gt; with full session context, not just error logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fix&lt;/strong&gt; with confidence knowing the exact user experience.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify&lt;/strong&gt; the fix works by monitoring the new rollout.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9il9f6juf2jc8l89f8l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp9il9f6juf2jc8l89f8l.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a guarded release detects a spike in rage clicks, it automatically correlates the frustration with specific flag variations:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmpoqobxpen0wvvf1ejb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmpoqobxpen0wvvf1ejb.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The alert tells you exactly which feature caused the problem: "Rage clicks increased 10x for users on &lt;code&gt;join-community-redesign: true&lt;/code&gt;." It’s important to note that rage clicks aren’t the only frustration signal worth watching.&lt;/p&gt;
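&lt;p&gt;To make concrete why rage clicks are straightforward to measure, here is a minimal detector sketch in TypeScript. The thresholds (four clicks within one second, inside a 30-pixel radius) are assumptions for illustration, not LaunchDarkly's actual implementation, and the function assumes clicks are sorted by timestamp.&lt;/p&gt;

```typescript
// Illustrative sketch (assumed thresholds, not LaunchDarkly's implementation):
// a burst counts as a rage click when minClicks consecutive clicks land within
// radiusPx of the burst's first click and within windowMs of it.
interface Click { t: number; x: number; y: number }

function isRageClick(clicks: Click[], windowMs = 1000, radiusPx = 30, minClicks = 4): boolean {
  // Slide a window of minClicks consecutive clicks across the session.
  for (let i = 0; clicks.length - i >= minClicks; i++) {
    const burst = clicks.slice(i, i + minClicks);
    const first = burst[0];
    // All clicks in the burst must fall inside the time window...
    const inWindow = windowMs >= burst[minClicks - 1].t - first.t;
    // ...and inside a small radius around the first click.
    const inRadius = burst.every(function (c) {
      const dx = c.x - first.x;
      const dy = c.y - first.y;
      return radiusPx * radiusPx >= dx * dx + dy * dy;
    });
    if (inWindow) {
      if (inRadius) {
        return true;
      }
    }
  }
  return false;
}
```

&lt;p&gt;Four rapid clicks on the same button trip the detector; the same four clicks spread over seconds or across the page do not.&lt;/p&gt;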

&lt;p&gt;When items are out of place or confusing, or navigation is inconsistent between mobile and web, users may rage scroll instead. Unlike rage clicks, which can be measured and tracked directly, rage scrolls are more nuanced, so diagnosing them relies heavily on session replay and drops in traffic.&lt;/p&gt;

&lt;p&gt;Just as LLM observability tools monitor model outputs for hallucinations or drift, rage click detection monitors the human layer, catching UX failures that no server-side metric would reveal.&lt;/p&gt;

&lt;p&gt;Using feature flags, session replay, and guarded releases, let’s take a closer look at both scenarios by debugging them in the real-world WorkLunch app.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario 1: Form Validation Frustration
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Setup
&lt;/h3&gt;

&lt;p&gt;Your team ships a new "Create Community" button redesign and places the feature behind a feature flag called (&lt;code&gt;inline-form-validation&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xg4tceirj5ym6z7cwxc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xg4tceirj5ym6z7cwxc.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the feature flag is toggled on, you notice users on this form are rage clicking the Create Community button. Your metrics show community creation conversions dropped 15 percent since enabling the flag, but there are no errors in the logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23e1ehz1d4giphs0gvj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23e1ehz1d4giphs0gvj8.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thankfully, you connected your feature flag with a Guarded Release and can use it to debug the issue.&lt;/p&gt;

&lt;h3&gt;
  
  
  Investigation Steps
&lt;/h3&gt;

&lt;p&gt;Step 1: Search for sessions with rage clicks&lt;/p&gt;

&lt;p&gt;Since you attached a Guarded Rollout to your feature flag using a metric that detects rage clicks, you can navigate to LaunchDarkly’s Sessions page and filter for those specific sessions.&lt;/p&gt;

&lt;p&gt;In the LaunchDarkly Sessions tab search bar, apply this filter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;has_rage_clicks=true featureflag.key=inline-form-validation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query filters for sessions with rage clicks where the &lt;code&gt;inline-form-validation&lt;/code&gt; flag is enabled.&lt;/p&gt;

&lt;p&gt;Step 2: Watch for user behavior patterns&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/ZQBY8rR-2ck"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;In the session replay, you see:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User fills out the form completely.&lt;/li&gt;
&lt;li&gt;User clicks Create Community.&lt;/li&gt;
&lt;li&gt;User sees a loading spinner and nothing else.&lt;/li&gt;
&lt;li&gt;User clicks button repeatedly.&lt;/li&gt;
&lt;li&gt;User abandons the form altogether.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 3: Identify the UX failure&lt;br&gt;
The community was being created successfully: the API returned a 200 and the new community existed. The new feature was intended simply to change the button color (baby blue → purple), but it &lt;strong&gt;broke the code that shows the success card&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Root Cause
&lt;/h3&gt;

&lt;p&gt;When the flag is off, the code does the right thing after a successful create:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Success → Set &lt;code&gt;createdSpace&lt;/code&gt; → Show verification card (name, join code, "Back to community list")&lt;/li&gt;
&lt;li&gt;User taps Back → Return to community list&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When the flag is on, the success path was missing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Success → Nothing. The &lt;code&gt;if (!useInlineValidation)&lt;/code&gt; block runs only for the old flow, so when the flag is on, &lt;code&gt;setCreatedSpace&lt;/code&gt; is never called.&lt;/li&gt;
&lt;li&gt;The user stays on the form with no confirmation, no join code, and no way to know the create worked.
&lt;/li&gt;
&lt;/ol&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Only the control (flag OFF) got success UI; the treatment (flag ON) did not&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;useInlineValidation&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;setCreatedSpace&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;space&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;join_code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;space&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;join_code&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  The Fix
&lt;/h3&gt;

&lt;p&gt;To fix the issue, you can either roll back to the previous working code or fix the issue and toggle the flag back on.&lt;/p&gt;

&lt;p&gt;Option A – Roll back the flag: Set &lt;code&gt;inline-form-validation&lt;/code&gt; to off in LaunchDarkly. Users are no longer on the new feature; they get the existing, working code path and see the verification card after Create Community.&lt;/p&gt;

&lt;p&gt;Option B – Fix the new feature so it no longer breaks the success card: add the success handling to the flag-on path, so that &lt;code&gt;setCreatedSpace&lt;/code&gt; is also called after a successful create when the flag is on and the success card is shown again.&lt;/p&gt;
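&lt;p&gt;A minimal sketch of Option B, using hypothetical type and function names that mirror the snippet above: the success state is set on every successful create, regardless of the flag's value.&lt;/p&gt;

```typescript
// Hypothetical sketch of the corrected success handler. The names (Space,
// setCreatedSpace, useInlineValidation) mirror the article's snippet but this
// is not the WorkLunch source verbatim.
interface Space { name: string; join_code: string }

function handleCreateSuccess(
  space: Space,
  useInlineValidation: boolean,
  setCreatedSpace: (s: Space) => void
): void {
  // The bug: setCreatedSpace only ran when useInlineValidation was false.
  // The fix: set the success state unconditionally so both the control and
  // the treatment show the verification card; flag-specific UI differences
  // can branch elsewhere.
  setCreatedSpace({ name: space.name, join_code: space.join_code });
}
```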
&lt;h3&gt;
  
  
  Resolution
&lt;/h3&gt;

&lt;p&gt;With the root cause identified from Session Replay, you update the code and resolve the rollout:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Roll back &lt;code&gt;inline-form-validation&lt;/code&gt; to &lt;code&gt;false&lt;/code&gt; so users get the verification card and can return to the list (instant, no deployment needed).&lt;/li&gt;
&lt;li&gt;Fix the flag-on success path so it shows the verification card (or redirects) after create.&lt;/li&gt;
&lt;li&gt;Deploy and re-enable the flag with the guarded rollout (rage click metrics) still attached, then monitor to confirm rage clicks return to baseline.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Time to resolution: 30 minutes to an hour (compared to potentially days of user complaints and support tickets).&lt;/p&gt;
&lt;h2&gt;
  
  
  Scenario 2: The Infinite Scroll Frustration
&lt;/h2&gt;
&lt;h3&gt;
  
  
  The Setup
&lt;/h3&gt;

&lt;p&gt;Your team ships a feature flag (&lt;code&gt;new-filter-location&lt;/code&gt;) that moves the existing Category and Sort controls on the "Lunches for Swap" feed from the top of the list to the bottom.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhthuxn2ijx1ptv5ec8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhthuxn2ijx1ptv5ec8d.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everything works fine until you notice a drop in traffic.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexwn1idv49yxy1g8i3wm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fexwn1idv49yxy1g8i3wm.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since you see no uptick in rage clicks for your Guarded Release, you know there must be another signal of user frustration you’re missing. So you decide to investigate further by placing yourself in the user's shoes using session replay.&lt;/p&gt;
&lt;h3&gt;
  
  
  Investigation Steps
&lt;/h3&gt;

&lt;p&gt;Step 1: Filter by flag variation&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;feature_flag.key=new-filter-location feature_flag.result.value=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This query will filter for sessions where the feature flag &lt;code&gt;new-filter-location&lt;/code&gt; is set to true.&lt;/p&gt;

&lt;p&gt;Step 2: Watch for user behavior patterns&lt;br&gt;

  &lt;iframe src="https://www.youtube.com/embed/KdtErT995bE"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;In session replay, you see:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Users scroll down the feed, then back up, then down again.&lt;/li&gt;
&lt;li&gt;Minimal item taps.&lt;/li&gt;
&lt;li&gt;Users constantly move through the list without making selections.&lt;/li&gt;
&lt;li&gt;Users navigate to a different page.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The pattern suggests they're looking for something rather than browsing items.&lt;/p&gt;
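&lt;p&gt;If you wanted to quantify this "rage scroll" pattern yourself, one possible heuristic (an assumption for illustration, not a built-in LaunchDarkly metric) is to count scroll-direction reversals across sampled scroll positions:&lt;/p&gt;

```typescript
// Illustrative heuristic: a session "looks like rage scrolling" when the
// scroll direction reverses at least minReversals times across the sampled
// positions. The threshold of 3 is an assumed starting point to tune.
function looksLikeRageScroll(scrollYs: number[], minReversals = 3): boolean {
  let reversals = 0;
  let prevDir = 0; // 0 = no movement seen yet, 1 = down, -1 = up
  for (let i = 1; i !== scrollYs.length; i++) {
    const delta = scrollYs[i] - scrollYs[i - 1];
    if (delta === 0) continue;
    const dir = delta > 0 ? 1 : -1;
    if (prevDir !== 0) {
      if (dir !== prevDir) {
        reversals++;
      }
    }
    prevDir = dir;
  }
  return reversals >= minReversals;
}
```

&lt;p&gt;Pair a signal like this with "no item taps" and session replay to separate users who are hunting for a missing control from users who are simply browsing.&lt;/p&gt;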

&lt;p&gt;Step 3: Identify the UX failure&lt;br&gt;
Replay shows the issue: “Category” and “Sort” are not at the top where users expect them. With the flag on, the controls moved to the bottom of the feed, below every post.&lt;/p&gt;
&lt;h3&gt;
  
  
  The Root Cause
&lt;/h3&gt;

&lt;p&gt;The feature flag moves the existing filter bar from the top to the bottom of the list (&lt;code&gt;ListFooterComponent&lt;/code&gt; when the flag is on). Same controls, different position.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Flag ON: same filter bar rendered as ListFooterComponent — breaks user expectation&lt;/span&gt;
&lt;span class="nx"&gt;ListFooterComponent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;useFiltersAtBottom&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;View&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filterBar&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/View&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Fix
&lt;/h3&gt;

&lt;p&gt;Step 1 – Roll back the flag: Set &lt;code&gt;new-filter-location&lt;/code&gt; to &lt;code&gt;false&lt;/code&gt;. Filters return to the top (original position); rage scrolls drop. &lt;/p&gt;

&lt;p&gt;Step 2 – Fix the experiment: Don't move the filters to the bottom. Keep the filter bar at the top regardless of the flag, or remove the flag and leave filters in their original place.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Fix: always show filter bar at top (original place)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="cm"&gt;/* Filters always above the list */&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;View&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filterBar&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;TouchableOpacity&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filterButton&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nx"&gt;onPress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;openCategoryFilter&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Text&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filterButtonText&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;categoryLabel&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/Text&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/TouchableOpacity&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;TouchableOpacity&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filterButton&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nx"&gt;onPress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;openSortMenu&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Text&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;filterButtonText&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Sort&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;sortOrder&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;newest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Newest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Oldest&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/Text&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/TouchableOpacity&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/View&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;FlatList&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="nx"&gt;ListFooterComponent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Resolution
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Roll back &lt;code&gt;new-filter-location&lt;/code&gt; to &lt;code&gt;false&lt;/code&gt; so filters are at the top again.&lt;/li&gt;
&lt;li&gt;Fix the code to keep the filter bar at the top.&lt;/li&gt;
&lt;li&gt;Deploy and re-enable the flag with monitoring.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Building your debugging playbook
&lt;/h2&gt;

&lt;p&gt;Based on these scenarios, here's a systematic approach to rage click debugging:&lt;/p&gt;

&lt;p&gt;Step 1: Triage with Filters&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What to look for&lt;/th&gt;
&lt;th&gt;Search query&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;All frustrated users&lt;/td&gt;
&lt;td&gt;&lt;code&gt;has_rage_clicks=true&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Specific page issues&lt;/td&gt;
&lt;td&gt;&lt;code&gt;has_rage_clicks=true visited-url="*/spaces/create*"&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Browser-specific&lt;/td&gt;
&lt;td&gt;&lt;code&gt;has_rage_clicks=true browser_name=Chrome&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;High frustration&lt;/td&gt;
&lt;td&gt;&lt;code&gt;has_rage_clicks=true active_length&amp;gt;180s&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Step 2: Identify the Pattern&lt;br&gt;
When reviewing session replays, look for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visual feedback gaps (Did the UI acknowledge the click?)&lt;/li&gt;
&lt;li&gt;Loading states (Is there a spinner? Does it ever resolve?)&lt;/li&gt;
&lt;li&gt;Error visibility (If there's an error, can the user see it?)&lt;/li&gt;
&lt;li&gt;Touch target issues (On mobile, are elements too small or overlapping?)&lt;/li&gt;
&lt;li&gt;Timing problems (Does the click happen before the element is ready?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 3: Correlate with Technical Data&lt;br&gt;
Session replay shows you the user's experience. Pair it with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network tab (API response codes and payloads.)&lt;/li&gt;
&lt;li&gt;Console errors (JavaScript exceptions.)&lt;/li&gt;
&lt;li&gt;Feature flag state (Which variation was the user seeing?)&lt;/li&gt;
&lt;li&gt;Timing (When in the session did frustration peak?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Step 4: Fix and Verify&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Roll back using your feature flag (instant, no deployment needed).&lt;/li&gt;
&lt;li&gt;Fix the root cause in code.&lt;/li&gt;
&lt;li&gt;Re-deploy with the guarded rollout active.&lt;/li&gt;
&lt;li&gt;Monitor rage click metrics to confirm the fix worked.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Whether you're shipping traditional features or AI-powered experiences, this playbook reflects a DevOps-for-AI mindset: continuous monitoring, fast rollback, and data-driven debugging that closes the gap between deployment and user impact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bringing it all together
&lt;/h2&gt;

&lt;p&gt;The combination of rage click detection, session replay, and guarded releases creates something powerful: &lt;strong&gt;observability that starts with the human experience&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Traditional monitoring asks: "Is the system healthy?"&lt;/p&gt;

&lt;p&gt;This approach asks: "Are users successful?"&lt;/p&gt;

&lt;p&gt;When you can detect frustration in real-time, watch exactly what users experienced, and roll back problematic features instantly, you fundamentally change how fast you can ship with confidence.&lt;/p&gt;

&lt;p&gt;This workflow fits into the broader AI deployment lifecycle, treating user frustration signals as first-class deployment metrics, just like error rates or latency. By embedding rage click detection into your AI lifecycle management strategy, every release becomes a feedback loop that improves both the product and the process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8liywrd7u5w2ptibv7e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8liywrd7u5w2ptibv7e.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The next time your users are frustrated, you'll know exactly what went wrong, which users were affected, and why. And the best part is you'll fix it before most users even notice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://dev.to/home/observability/session-replay"&gt;Session replay documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/home/releases/guarded-rollouts"&gt;Guarded releases documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/sdk/observability"&gt;Observability SDK reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/tutorials/detecting-user-frustration-session-replay"&gt;Part 1: Detecting user frustration&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/tutorials/connecting-rage-clicks-to-guarded-releases"&gt;Part 2: Connecting to guarded releases&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>monitoring</category>
      <category>observability</category>
      <category>ux</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Day 7 | 🎄✨The Rockefeller tree in NYC: SLOs that actually drive decisions</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Wed, 17 Dec 2025 04:47:02 +0000</pubDate>
      <link>https://forem.com/launchdarkly/day-7-the-rockefeller-tree-in-nyc-slos-that-actually-drive-decisions-1l58</link>
      <guid>https://forem.com/launchdarkly/day-7-the-rockefeller-tree-in-nyc-slos-that-actually-drive-decisions-1l58</guid>
      <description>&lt;p&gt;Originally published in the LaunchDarkly &lt;a href="https://launchdarkly.com/docs/tutorials/o11y-that-drives-decisions" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq0zownrqg56knpwe0k8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq0zownrqg56knpwe0k8.png" alt=" " width="601" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Most Service Level Objectives, or SLOs, sit in dashboards gathering dust. Since SLOs are measurable performance targets, they're extremely important in understanding the quality of a service or system. You define them and measure them, but when your conditions are not met, there’s no follow-up.&lt;/p&gt;

&lt;p&gt;The biggest drawback is that SLOs are created to add value, but if they’re never reinforced, they can't drive decisions, influence roadmaps, or help during incidents.&lt;/p&gt;

&lt;p&gt;When it comes to defining SLOs, many folks often start at the top of the funnel by picking general metrics to measure, but in order to create SLOs that work, it’s important to understand how the roots impact the leaves.&lt;/p&gt;

&lt;p&gt;In this post, we'll cover the pitfalls that lead to out-of-sync SLOs, with a few tips and tricks to ensure what you measure produces business value. You'll also see an example of how to set SLOs in real time for a flag evaluation feature that you can implement in your own planning process.&lt;/p&gt;

&lt;p&gt;But first, we'll explore a tree metaphor to recap key observability components and how their influence expands from the roots all the way to the leaves. What if the popular Rockefeller tree in NYC represented the relationship between telemetry data and SLOs?&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Observability Tree
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi31gdfj7opky992d5918.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi31gdfj7opky992d5918.png" alt=" " width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SLOs would essentially be the leaves on the Response branch shown in the image above: visible, measurable targets that everyone can see. The leaves would not be possible without the support of the trunk.&lt;/p&gt;

&lt;p&gt;The trunk is your telemetry data: the traces, logs, and events you collect from your system. The data you collect acts as the foundation and support for the branches and leaves.&lt;/p&gt;

&lt;p&gt;The roots represent the things you cannot see but that are still vital to the overall health of your system. This is the hard part: understanding system behavior, debugging unknown unknowns, and making data-driven decisions.&lt;/p&gt;

&lt;p&gt;Most teams skip the roots entirely. They define SLOs using only the trunk (logs, traces, events), measuring things that can already be measured. However, the business outcomes and user behaviors are buried in what would be defined as the roots. Sometimes you have to dig through the soil to ensure your SLOs don't end up technically accurate yet strategically ineffective.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Makes an SLO Decision-Worthy
&lt;/h2&gt;

&lt;p&gt;So what makes a good SLO? The goal of an SLO is to bridge engineering and business needs to support a high quality user experience. A good SLO depends on three things: business clarity (asking the right questions), measurability (can these components actually be measured?), and actionable targets (the game plan for when things go wrong).&lt;/p&gt;

&lt;p&gt;First, you need business clarity, which forms the roots of the observability tree above. This means articulating why something matters in concrete terms like dollars, users, and retention, and avoiding vague statements like "uptime is important." For instance, if I were measuring the impact of downtime on a checkout feature, I could establish the SLO scope with “each minute of checkout downtime costs us $12,000 in lost revenue based on our average transaction volume.” It is essential to be able to explain the business impact in one clear sentence.&lt;/p&gt;

&lt;p&gt;Second, you need measurability. This is like the trunk of the tree. Your SLO must connect to your golden signals: latency, traffic, errors, and saturation. This is where a lot of aspirational SLOs fall apart. Upper management might want to measure user happiness, but how can engineering translate this into actual metrics? If expressing the impact as a concrete metric is difficult, it’s usually a sign the problem definition needs a bit more shaping before you define the SLO.&lt;/p&gt;

&lt;p&gt;Third, you need actionable targets, which represent the leaves on the observability tree. This is where most SLOs fail even when they get the first two right. There's a number, maybe even a threshold, but no clear action plan. What happens when you miss it? Who gets paged? What gets paused? Decision-worthy SLOs specify exactly what happens at different levels of degradation, and more importantly, they give everyone the confidence to make decisions based on those levels.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building production-resilient SLOs: LaunchDarkly’s flag evaluation example
&lt;/h2&gt;

&lt;p&gt;We can apply these same principles of building a production-worthy SLO using &lt;a href="https://launchdarkly.com/docs/home/releases/flag-evaluations" rel="noopener noreferrer"&gt;LaunchDarkly’s flag evaluation feature&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The flag evaluation feature in the monitoring tab extends observability: it tracks how often each flag variation is served to different contexts over time, and highlights flag changes that might affect evaluation patterns.&lt;/p&gt;

&lt;p&gt;Now, let’s build an SLO.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Start with the business question
&lt;/h3&gt;

&lt;p&gt;What would be impacted if the flag evaluations monitoring feature broke? Customers use these charts to understand rollout progress, debug targeting issues, and verify that their flags are working as expected. If evaluation data is delayed or missing, they can't trust what they're seeing. They might roll back a working feature thinking it's broken, or fail to catch a real problem because the charts show stale data. This undermines confidence in the platform and increases support load.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Translate to user experience terms
&lt;/h3&gt;

&lt;p&gt;What does "working well" look like? When a customer makes a flag change and checks the monitoring tab, they see updated evaluation counts within a couple of minutes. The charts load quickly (under 3 seconds). The data is accurate, meaning evaluation counts match what's actually happening in their application. If there's a delay, we tell them explicitly rather than showing stale data as if it's current.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Connect to telemetry
&lt;/h3&gt;

&lt;p&gt;We track several golden signals for this feature. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data pipeline latency&lt;/strong&gt;: time from evaluation event to appearing in charts. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chart load time&lt;/strong&gt;: how long it takes to render the monitoring page. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data accuracy&lt;/strong&gt;: comparing our recorded evaluations against a known sample. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error rate&lt;/strong&gt;: failed queries or chart rendering errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For the sake of this example, we'll set arbitrary numbers for these signals. Let’s say you have a median pipeline latency of 45 seconds, p95 at 2 minutes, and p99 at 5 minutes. Chart load time averages 1.2 seconds, data accuracy is 99.7 percent (some evaluations drop due to sampling), and the error rate is 0.3 percent.&lt;/p&gt;

&lt;p&gt;Using this data, we can set the target.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Set the target
&lt;/h3&gt;

&lt;p&gt;Based on that data, here's our SLO: 98 percent of flag evaluation events will appear in monitoring charts within 3 minutes, with chart load times under 3 seconds at p95.&lt;/p&gt;

&lt;p&gt;Why these numbers? Customer research shows they expect "near real-time" monitoring, which they define as 2-3 minutes. Anything longer feels like stale data. Three seconds for chart loading is the threshold where users perceive delay and start questioning if something's broken. &lt;/p&gt;

&lt;p&gt;We chose 98 percent instead of 99.9 percent because some evaluation events get sampled out intentionally for cost reasons, and occasional data pipeline delays from third-party dependencies are acceptable.&lt;/p&gt;
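&lt;p&gt;As a quick sketch, attainment against a target like this is just a ratio of "good events" to total events. The latencies below are invented sample data, not real pipeline measurements:&lt;/p&gt;

```python
# Minimal sketch: measure attainment against the "within 3 minutes" target.
# The latencies below are invented sample data, in seconds.
latencies = [30, 45, 50, 70, 95, 110, 140, 170, 200, 400]

TARGET_SECONDS = 180   # events should appear within 3 minutes
SLO_TARGET = 0.98      # 98 percent of events

within = sum(1 for s in latencies if s <= TARGET_SECONDS)
attainment = within / len(latencies)

print(f"attainment: {attainment:.1%}")  # attainment: 90.0%
print("SLO met" if attainment >= SLO_TARGET else "SLO missed")  # SLO missed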

&lt;p&gt;Now that we have our targets, we can use those thresholds to set conditional responses based on alerts or indicators.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Define operational responses
&lt;/h3&gt;

&lt;p&gt;Responses for Green, Yellow, or Red indicators in production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If Green &lt;code&gt;(&amp;gt;98%, &amp;lt;3 min, &amp;lt;3 sec load)&lt;/code&gt;, continue normal operations.&lt;/li&gt;
&lt;li&gt;If Yellow &lt;code&gt;(95-98%, or 3-5 min, or 3-5 sec load)&lt;/code&gt;, alert on-call, investigate within 4 hours.&lt;/li&gt;
&lt;li&gt;If Red &lt;code&gt;(&amp;lt;95%, or &amp;gt;5 min, or &amp;gt;5 sec load)&lt;/code&gt;, page immediately, update status page if widespread.&lt;/li&gt;
&lt;/ul&gt;
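&lt;p&gt;The three-tier response above boils down to a simple classifier. The function and thresholds below mirror the hypothetical numbers from this example; they aren't part of any LaunchDarkly API:&lt;/p&gt;

```python
# Sketch: map the example thresholds to an operational status.
def slo_status(attainment_pct, pipeline_min, load_sec):
    if attainment_pct >= 98 and pipeline_min < 3 and load_sec < 3:
        return "green"    # continue normal operations
    if attainment_pct < 95 or pipeline_min > 5 or load_sec > 5:
        return "red"      # page immediately
    return "yellow"       # alert on-call, investigate within 4 hours

print(slo_status(99.1, 2.0, 1.2))   # green
print(slo_status(96.5, 4.0, 2.1))   # yellow
print(slo_status(93.0, 6.0, 1.0))   # red
```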

&lt;h3&gt;
  
  
  Step 6: Drive decisions
&lt;/h3&gt;

&lt;p&gt;Now the SLO becomes your decision-making framework. When engineering proposes adding a new feature like "evaluations by SDK" breakdown, the first question is: "Will this keep us within our 3-second chart load SLO?" If the answer is no, we either optimize the implementation or push back on the feature.&lt;/p&gt;

&lt;p&gt;Infrastructure changes get evaluated the same way. Before migrating the data pipeline to a new system, we load test against both our latency and accuracy targets. If the migration risks our SLO, we either fix the architecture or delay the migration. Another way I've seen SLOs used is in planning future work: if a team knows it's in the yellow this month, it may avoid picking up other risky work.&lt;/p&gt;

&lt;p&gt;The SLO transforms from a monitoring target into a decision filter, helping to determine what gets shipped and what doesn’t.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bringing it all together
&lt;/h2&gt;

&lt;p&gt;Great SLOs aren't just leaves you pluck and add to dashboards. They're connected to everything below them from the trunk of solid telemetry to the roots of understanding what actually matters to your business and users. If you skip those foundational layers, your SLOs become technically accurate but strategically useless.&lt;/p&gt;

&lt;p&gt;Start with the roots. Ask what would be impacted if this feature were to break. Work your way up through user experience and technical measurement. Build SLOs that bridge engineering and business with clear thresholds and clear consequences. And finally, make them specific enough to drive real decisions.&lt;/p&gt;

</description>
      <category>observability</category>
      <category>slos</category>
      <category>evaluations</category>
      <category>flags</category>
    </item>
    <item>
      <title>Day 6 | 💸 The famous green character that stole your cloud budget: the cardinality problem</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Tue, 16 Dec 2025 01:17:54 +0000</pubDate>
      <link>https://forem.com/launchdarkly/day-6-the-famous-green-character-that-stole-your-cloud-budget-the-cardinality-problem-420k</link>
      <guid>https://forem.com/launchdarkly/day-6-the-famous-green-character-that-stole-your-cloud-budget-the-cardinality-problem-420k</guid>
      <description>&lt;p&gt;Originally published in the LaunchDarkly &lt;a href="https://launchdarkly.com/docs/tutorials/cloud-budget-observability-holiday" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomjjhax47q8ccjhl7t52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomjjhax47q8ccjhl7t52.png" alt=" " width="601" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Every December, engineering teams unwrap the same unwanted gift: their annual observability bill. And every year, it's bigger than the last.&lt;/p&gt;

&lt;p&gt;You know the pattern. Services multiply. Traffic grows. Someone discovers OpenTelemetry and suddenly every microservice is emitting 50 spans per request instead of 5. Then January rolls around and your observability platform sends an invoice that's 30% higher than last quarter.&lt;/p&gt;

&lt;p&gt;Your VP of Engineering wants to know why.&lt;/p&gt;

&lt;p&gt;You could blame it on the famous green character who hates Christmas, or you could join other teams who are getting serious about cost-efficient observability. That is, collecting telemetry data based on &lt;em&gt;value,&lt;/em&gt; not &lt;em&gt;volume.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why "collect everything" no longer works
&lt;/h2&gt;

&lt;p&gt;The old playbook was simple: instrument everything, store it all, figure out what you need later. Storage was cheap enough. Queries were fast enough. No need to overthink it.&lt;/p&gt;

&lt;p&gt;Then, three things happened:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenTelemetry went mainstream. Teams migrated from vendor agents to OTel and began adding spans for everything. This added more visibility, but with 10x the data.&lt;/li&gt;
&lt;li&gt;AI observability tools arrived. Platforms started using LLMs to analyze traces and suggest root causes. Powerful, but also expensive to run against terabytes of unfiltered trace data.&lt;/li&gt;
&lt;li&gt;CFOs started asking questions. &lt;em&gt;"Our traffic grew 15% but observability costs grew 40%. Explain."&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stopping instrumentation isn't an option, because you still want to make informed decisions. But the biggest culprit hiding in your telemetry stack is cardinality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cardinality will eat your budget
&lt;/h2&gt;

&lt;p&gt;Cardinality is the observability villain. It sneaks in quietly, one innocent-looking label at a time, and before you know it, it's stolen your entire cloud budget. What is cardinality? It's just the number of unique time series your metrics generate, but it's also the main driver of observability costs that nobody sees coming.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fag89tm3jc40l11b4pjbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fag89tm3jc40l11b4pjbl.png" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Low cardinality: &lt;code&gt;http_requests_total&lt;/code&gt; tracked by &lt;code&gt;method&lt;/code&gt; and &lt;code&gt;status_code&lt;/code&gt;. Maybe 20 unique combinations. Fairly manageable.&lt;/p&gt;

&lt;p&gt;High cardinality: Same counter, but now you've added &lt;code&gt;user_id&lt;/code&gt;, &lt;code&gt;request_id&lt;/code&gt;, and &lt;code&gt;session_token&lt;/code&gt; as labels. By simply adding these labels, you’ve just created millions of unique time series. Each one needs storage, indexing, and query compute. This will compound your bill faster than you can say deck the halls, except you wouldn’t be able to deck the halls, you’d be paying off your usage bill.&lt;/p&gt;
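&lt;p&gt;A back-of-the-envelope sketch shows why: the unique-series count is the product of each label's unique values. The label counts here are hypothetical:&lt;/p&gt;

```python
# Each label's number of unique values multiplies the series count.
# These label cardinalities are hypothetical.
def series_count(label_cardinalities):
    total = 1
    for n in label_cardinalities.values():
        total *= n
    return total

labels = {"method": 5, "status_code": 10}
print(series_count(labels))  # 50 -- manageable

labels["user_id"] = 1_000_000  # one "innocent" label later...
print(series_count(labels))   # 50000000 unique time series
```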

&lt;h2&gt;
  
  
  Stopping the Green character: set cardinality budgets
&lt;/h2&gt;

&lt;p&gt;Most teams don't set limits on how many time series a service can create, even though they should. But you can.&lt;/p&gt;

&lt;p&gt;Start by auditing what you're currently generating. Look for metrics with &amp;gt;100K unique time series, or labels that include UUIDs, request IDs, or email addresses. These are your problem children.&lt;/p&gt;

&lt;p&gt;Then set budgets. Give each service a limit, like 50K time series max. Assign team quotas so the checkout team knows they get 200K total across all their services. Create attribute allowlists that define exactly which labels are allowed in production. Yes, this feels restrictive at first. Your developers will complain. They'll argue that they need that &lt;code&gt;user_id&lt;/code&gt; label for debugging. And sometimes they're right. But forcing that conversation up front means they have to justify the cost, not just add labels reflexively.&lt;/p&gt;

&lt;p&gt;Finally, enforce budgets through linters that flag high-cardinality attributes in code review, CI checks that fail if estimates get too high, and dashboards that alert when cardinality spikes. This isn't about being restrictive. It's about being intentional. If you're adding a label, you should know why and what it'll cost.&lt;/p&gt;
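&lt;p&gt;An allowlist check can be as simple as a set difference. This is a hypothetical sketch of the kind of check a linter or CI step might run; the metric names, labels, and allowlist are all invented:&lt;/p&gt;

```python
# Hypothetical sketch of an allowlist check a linter or CI step might run.
ALLOWED_LABELS = {"method", "status_code", "service", "region"}

METRIC_LABELS = {
    "http_requests_total": {"method", "status_code"},
    "checkout_latency_ms": {"service", "user_id"},  # user_id is not allowed
}

def violations(metric_labels, allowed):
    # Report every metric that uses a label outside the allowlist.
    return {name: sorted(labels - allowed)
            for name, labels in metric_labels.items()
            if labels - allowed}

bad = violations(METRIC_LABELS, ALLOWED_LABELS)
print(bad)  # {'checkout_latency_ms': ['user_id']}
# In CI, a non-empty result would fail the build.
```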

&lt;p&gt;Cardinality budgets solve the metrics problem, but what about traces? That's where sampling comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sampling: without the guilt
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27n1ylb4nl8o6thm50co.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27n1ylb4nl8o6thm50co.png" alt=" " width="800" height="830"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not all sampling strategies are created equal, and picking the right one depends on what you're trying to protect.&lt;/p&gt;

&lt;p&gt;Head-based sampling is pretty strict. You decide whether to keep a trace at the very start of a request, before you know if it'll be interesting. A fast checkout gets dropped. A slow checkout that times out also gets dropped, because the decision happened too early. Not great.&lt;/p&gt;

&lt;p&gt;Tail-based sampling is smarter. Wait until the trace completes, then decide based on what actually happened. Keep errors, high latency, or specific user cohorts. Sample down the boring stuff. This costs more (you have to buffer complete traces) but you keep what matters.&lt;/p&gt;

&lt;p&gt;Probabilistic sampling is the middle ground. Keep 10% of everything, regardless of content. Predictable cost reduction, but you'll still lose some critical events. Works fine for stable services where trends matter more than individual traces.&lt;/p&gt;

&lt;p&gt;Now rule-based sampling is where things get interesting, and honestly where most teams should be spending their energy. The idea is dead simple: different traffic deserves different sampling rates. You keep 100% of traces during feature rollouts because you actually care about every request when you're validating a new flow. &lt;/p&gt;

&lt;p&gt;If you're using LaunchDarkly for &lt;a href="https://launchdarkly.com/docs/home/releases/progressive-rollouts" rel="noopener noreferrer"&gt;progressive rollouts&lt;/a&gt;, you can tie sampling rates directly to flag evaluations. 100% sampling for users in the new variant, 10% for the control group. Your main API endpoints can run at 50% since they're stable and high-volume. Internal health checks that just verify the service is alive need maybe 5%, or even less. I've seen teams go down to 1% for health checks and never miss it. &lt;/p&gt;

&lt;p&gt;The key is that you're making these decisions based on the actual value of the signal, not just applying a blanket rate across everything. Adjust based on context: feature flags, experiments, specific endpoints, user cohorts, whatever makes sense for your system.&lt;/p&gt;
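&lt;p&gt;In practice, rule-based sampling boils down to an ordered rule table where the first matching rule decides the rate. This is a hedged sketch, not a LaunchDarkly or OpenTelemetry API; the trace attributes and rates are illustrative:&lt;/p&gt;

```python
import random

# Sketch of rule-based sampling: first matching rule wins.
# Attribute names and rates below are illustrative assumptions.
RULES = [
    (lambda t: t.get("error", False), 1.00),                           # keep all errors
    (lambda t: t.get("flag_variation") == "new-checkout-flow", 1.00),  # active rollout
    (lambda t: t.get("endpoint") == "/healthz", 0.01),                 # health checks
    (lambda t: True, 0.10),                                            # default rate
]

def sample_rate(trace_attrs):
    for matches, rate in RULES:
        if matches(trace_attrs):
            return rate

def keep(trace_attrs):
    return random.random() < sample_rate(trace_attrs)

print(sample_rate({"flag_variation": "new-checkout-flow"}))  # 1.0
print(sample_rate({"endpoint": "/healthz"}))                 # 0.01
```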

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc07v622ux2ey9khxs3za.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc07v622ux2ey9khxs3za.png" alt=" " width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sampling isn't about compromising visibility. It's about amplifying signals. The noisy 90% of traces you're storing never get looked at anyway.&lt;/p&gt;

&lt;p&gt;Once you've decided what to keep, you still need to decide how long to keep it and at what resolution.&lt;/p&gt;

&lt;h2&gt;
  
  
  Downsample vs. Discard: know when to do which
&lt;/h2&gt;

&lt;p&gt;Not all data reduction is the same, and mixing up downsampling with discarding is how teams accidentally delete data they actually need.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Downsample&lt;/strong&gt; when you need historical context but not full precision. SLO burn rates don't need second-by-second granularity, so you can downsample to 1-minute intervals and still catch every trend. A common practice is to keep high-res data for a week, then downsample to hourly for long-term retention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Discard&lt;/strong&gt; when the data is redundant or has served its purpose. For instance, debug spans from a canary that passed three days ago can be deleted. Or if you captured an error in both a trace and a log, you can pick one source of truth and drop the duplicate.&lt;/p&gt;

&lt;p&gt;The rule of thumb: if you'll never query it, don't store it. If you might need it for trends in six months, downsample it. If you need it immediately when something breaks, keep it at full resolution with an aggressive retention policy.&lt;/p&gt;
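&lt;p&gt;Downsampling itself is just bucketed aggregation. Here's a minimal sketch that averages per-second points into 1-minute intervals; the data points are invented:&lt;/p&gt;

```python
from collections import defaultdict

# Minimal sketch: average per-second points into fixed 1-minute buckets.
# The (timestamp_seconds, value) points are invented sample data.
def downsample(points, interval=60):
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts // interval * interval].append(value)
    return [(start, sum(vs) / len(vs)) for start, vs in sorted(buckets.items())]

points = [(0, 10), (15, 14), (59, 12), (60, 30), (90, 34), (125, 20)]
print(downsample(points))  # [(0, 12.0), (60, 32.0), (120, 20.0)]
```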

&lt;h2&gt;
  
  
  What this actually looks like
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w1x7kksz9086r6zu0c1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w1x7kksz9086r6zu0c1.png" alt=" " width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Cost-efficient observability isn't about cutting capabilities. It's about cutting waste.&lt;/p&gt;

&lt;p&gt;Start by auditing your cardinality. Find the metrics generating hundreds of thousands of time series because someone added &lt;code&gt;user_id&lt;/code&gt; as a label. Then set budgets, like 50K per service and 200K per team, and enforce them through linters and CI checks. Create ways to encourage developers to justify high-cardinality labels before they ship, not after the bill arrives.&lt;/p&gt;

&lt;p&gt;Then you’ll be ready to tackle sampling. Drop the blanket 10% probabilistic rate and switch to rule-based sampling tied to actual value. Keep 100% of traces during feature rollouts. Sample stable endpoints at 10%. Go as low as 1% for health checks. If you're running feature flags, tie sampling to flag evaluations so you capture what matters and discard what doesn't.&lt;/p&gt;

&lt;p&gt;Finally, clean up retention, downsample SLO metrics to 1-minute intervals, discard debug spans from canaries that passed days ago and delete duplicate error data.&lt;/p&gt;

&lt;p&gt;This not only leads to lower bills, but also cleaner dashboards, faster queries, fewer noisy alerts, and teams that spend less time swimming through telemetry and more time fixing actual problems.&lt;/p&gt;

&lt;p&gt;Observability ROI isn't measured in data volume. It's measured in how fast you detect and resolve issues.&lt;/p&gt;

&lt;p&gt;The teams figuring this out in 2025 aren't collecting everything. They're collecting what matters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fim0qxzs86ogz0hc21wuw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fim0qxzs86ogz0hc21wuw.png" alt=" " width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>observability</category>
      <category>recap</category>
      <category>2025</category>
      <category>holiday</category>
    </item>
    <item>
      <title>Day 3 | 🔔 Jingle All the Way to Zero-Config Observability</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Wed, 10 Dec 2025 19:10:26 +0000</pubDate>
      <link>https://forem.com/launchdarkly/day-3-jingle-all-the-way-to-zero-config-observability-m0p</link>
      <guid>https://forem.com/launchdarkly/day-3-jingle-all-the-way-to-zero-config-observability-m0p</guid>
      <description>&lt;p&gt;Originally published in the LaunchDarkly &lt;a href="https://launchdarkly.com/docs/tutorials/zero-config-observability-holiday" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F291b1p70d0qf85hqz353.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F291b1p70d0qf85hqz353.png" alt=" " width="601" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For years, auto-instrumentation promised effortless observability but kept falling short. You'd still end up manually adding spans to business logic, hunting down missing metadata, or trying to piece together how a feature rollout was affecting customers.&lt;/p&gt;

&lt;p&gt;That finally shifted in 2025. With OTel auto-instrumentation maturing and LaunchDarkly adding built-in OTel support to server-side SDKs, teams started getting feature flag context baked into their traces without writing instrumentation code. The zero-config promise actually started delivering.&lt;/p&gt;

&lt;p&gt;Auto-instrumentation has always had a blind spot: it shows you what happened, but not why. You'd see a latency spike, but had no idea which feature flag was active, which users hit it, or what experiment was running.&lt;/p&gt;

&lt;p&gt;Without that context, you're doing detective work. Digging through logs, matching up timestamps, guessing at what caused what. Manual instrumentation helped, but you paid for it in engineering time, inconsistent coverage, and mounting technical debt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Auto-instrumentation that actually knows about your features
&lt;/h2&gt;

&lt;p&gt;The game changed when OTel auto-instrumentation actually got good. Instead of just capturing basic HTTP calls, it now handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Framework-level request tracing.&lt;/li&gt;
&lt;li&gt;Automatic context propagation across services.&lt;/li&gt;
&lt;li&gt;Runtime metadata and environment details.&lt;/li&gt;
&lt;li&gt;Errors and exceptions without manual try-catch blocks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LaunchDarkly takes this further by injecting flag evaluation data straight into OTel spans. Every time you evaluate a flag, you automatically get the flag key, user context, which variation was served, and the targeting rule that fired. That data feeds into your existing OTel pipeline, so your traces finally show which features were active and who was affected, not just database queries and API calls.&lt;/p&gt;

&lt;p&gt;So how do you actually set this up?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/sdk/features/opentelemetry-server-side" rel="noopener noreferrer"&gt;To get started with Otel trace hooks and feature flag data&lt;/a&gt;, simply add the hooks to your LaunchDarkly client config.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ldclient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;ldclient&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Config&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;ldotel.tracing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Hook&lt;/span&gt;

&lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;YOUR_SDK_KEY&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hooks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;Hook&lt;/span&gt;&lt;span class="p"&gt;()])&lt;/span&gt;
&lt;span class="n"&gt;ldclient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ldclient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This flows into your existing OpenTelemetry pipeline, enriching every trace with feature-aware context.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;TracingHook&lt;/strong&gt; automatically decorates your OpenTelemetry spans with flag evaluation events. When your application evaluates flags during a request, those evaluations become part of the trace along with the full context about what was evaluated and for whom.&lt;/p&gt;

&lt;p&gt;You can also configure your OpenTelemetry collector or exporter to point to LaunchDarkly's OTLP endpoint, and you're done. &lt;/p&gt;

&lt;p&gt;For HTTP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;https://otel.observability.app.launchdarkly.com:4318 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For gRPC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;https://otel.observability.app.launchdarkly.com:4317
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This feature is also available in .NET, Go, Java, Node.js, and Ruby.&lt;/p&gt;

&lt;p&gt;Auto-instrumentation handles the rest: HTTP spans, database calls, framework-level tracing, error capture, and now, feature flag context.&lt;/p&gt;

&lt;h2&gt;
  
  
  What auto-instrumentation unlocks
&lt;/h2&gt;

&lt;p&gt;When you ship a new feature variant, you immediately see how it performs per cohort. If there's a latency spike in the "new-checkout-flow" variation, you'll know within minutes before it affects user experience.&lt;/p&gt;

&lt;p&gt;That same visibility matters during incidents. When an outage hits, filter traces by flag evaluation to see which features were active when errors occurred. The trace shows you whether it was the new recommendation engine, the optimized query path, or something else entirely.&lt;/p&gt;

&lt;p&gt;This is especially powerful for experimentation. LaunchDarkly processes your OTel traces into metrics automatically, so when you run an A/B test, you get latency, error rate, and throughput calculated per variation without extra config. The same telemetry powering your dashboards powers your experiments.&lt;/p&gt;

&lt;p&gt;The best part of this setup is that it scales without additional work. As teams ship more features behind flags, the telemetry gets more valuable without getting more expensive to maintain. New services inherit feature-aware tracing just by initializing the SDK.&lt;/p&gt;

&lt;h2&gt;
  
  
  When to add custom spans
&lt;/h2&gt;

&lt;p&gt;Zero-config doesn't mean never-config. You'll still want custom spans for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Business logic milestones&lt;/strong&gt;. If you need to measure time-to-recommendation or search-to-purchase, custom spans make that explicit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ML pipeline stages&lt;/strong&gt;. Feature extraction, model inference, and post-processing often warrant their own spans for detailed performance analysis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-service boundaries&lt;/strong&gt;. Queue producers, stream processors, and async workers may need manual context propagation and span creation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Experiment-specific KPIs&lt;/strong&gt;. If your A/B test measures "items added to cart" or "video completion rate," you'll instrument those as custom metrics.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The important part is you're writing these spans to capture business value, not to patch holes in your instrumentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Delivering real value
&lt;/h2&gt;

&lt;p&gt;Combining mature auto-instrumentation with feature-aware enrichment changes how teams approach observability. It's no longer a separate investment that competes with feature development. It's a byproduct of how you ship features.&lt;/p&gt;

&lt;p&gt;When you evaluate a flag, you get telemetry. When you roll out a feature, you get performance data segmented by variation. When you run an experiment, you get metrics derived from production traces. The instrumentation you would have written manually is now embedded in the tools you already use.&lt;/p&gt;

&lt;p&gt;Observability stops being something you retrofit after launch and becomes something you inherit by default. Which means teams spend less time debugging instrumentation gaps and more time acting on insights.&lt;/p&gt;

&lt;p&gt;That's the promise of zero-config, finally delivered.&lt;/p&gt;

&lt;p&gt;Ready to try it? Explore LaunchDarkly's &lt;a href="https://launchdarkly.com/docs/sdk/features/opentelemetry-server-side" rel="noopener noreferrer"&gt;OpenTelemetry integration documentation&lt;/a&gt; or &lt;a href="http://app.launchdarkly.com/signup" rel="noopener noreferrer"&gt;sign up for a free trial account&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Enable Observability or Experimentation in your LaunchDarkly dashboard and start seeing feature-aware telemetry from your existing traces.&lt;/p&gt;

</description>
      <category>zeroconfig</category>
      <category>observability</category>
      <category>instrumentation</category>
      <category>python</category>
    </item>
    <item>
      <title>Day 2 | 🎅 He knows if you have been bad or good... But what if he gets it wrong?</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Tue, 09 Dec 2025 20:21:47 +0000</pubDate>
      <link>https://forem.com/launchdarkly/day-2-he-knows-if-you-have-been-bad-or-good-but-what-if-he-gets-it-wrong-17k6</link>
      <guid>https://forem.com/launchdarkly/day-2-he-knows-if-you-have-been-bad-or-good-but-what-if-he-gets-it-wrong-17k6</guid>
      <description>&lt;p&gt;Originally published in the LaunchDarkly &lt;a href="https://launchdarkly.com/docs/tutorials/day-two-holiday-campaign_2025" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsivhq28jj3txxoit0m8z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsivhq28jj3txxoit0m8z.png" alt=" " width="601" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;"He knows if you've been bad or good..."&lt;/p&gt;

&lt;p&gt;As kids, we accepted the magic. As engineers in 2025, we need to understand the mechanism. So let's imagine Santa's "naughty or nice" system as a modern AI architecture running at scale. What would it take to make it observable when things go wrong?&lt;/p&gt;

&lt;h2&gt;
  
  
  The architecture: Santa's distributed AI system
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feomikg5wgdirdkj7g90s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feomikg5wgdirdkj7g90s.png" alt=" " width="800" height="549"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Santa's operation would need three layers. The input layer handles behavioral data from 2 billion children on a point system. "Shared toys with siblings" gets +10 points, "Threw tantrum at store" loses 5.&lt;/p&gt;
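&lt;p&gt;The input layer's point system is simple enough to sketch; the event names and point values here are made up for illustration:&lt;/p&gt;

```python
# Hypothetical point values for behavioral events; names are illustrative.
POINTS = {
    "shared_toys_with_siblings": 10,
    "threw_tantrum_at_store": -5,
}

def score(events):
    # Unknown events contribute 0 rather than raising.
    return sum(POINTS.get(event, 0) for event in events)

print(score(["shared_toys_with_siblings", "threw_tantrum_at_store"]))  # 5
```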

&lt;p&gt;The processing layer runs multiple AI agents working together. A Data Agent collects and organizes behavioral events. A Context Agent retrieves relevant history: letters to Santa, past behavior, family situation. A Judgment Agent analyzes everything and makes the Nice/Naughty determination. And a Gift Agent recommends appropriate presents based on the decision.&lt;/p&gt;

&lt;p&gt;The integration layer connects to MCP servers for Toy Inventory, Gift Preferences, Delivery Routes, and Budget Tracking.&lt;/p&gt;

&lt;p&gt;It's elegant. It scales. And when it breaks, it's a nightmare to debug.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: A good child on the Naughty List
&lt;/h2&gt;

&lt;p&gt;It's Christmas Eve at 11:47 PM.&lt;/p&gt;

&lt;p&gt;A parent calls, furious. Emma, age 7, has been a model child all year. She should be getting the bicycle she asked for. Instead, the system says: &lt;strong&gt;Naughty List - No Gift&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You pull up the logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Emma's judgment: 421 NICE points vs 189 NAUGHTY points
Gift Agent tries to check bicycle inventory → TIMEOUT
Gift Agent retries → TIMEOUT  
Gift Agent retries again → TIMEOUT
Gift Agent checks inventory again → Count changed
Gift Agent reasoning: "Inventory uncertain, cannot fulfill request"
Gift Agent defaults to: NAUGHTY LIST
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Emma wasn't naughty. The Toy Inventory MCP was overloaded from Christmas Eve traffic. But the agent's reasoning chain interpreted three timeouts as "this child's request cannot be fulfilled" and failed to the worst possible default.&lt;/p&gt;

&lt;p&gt;With traditional APIs, you'd find the bug on line 47, fix it, and deploy. With AI agents, it's not that simple. The agent decided to interpret timeouts that way. You didn't code that logic. The LLM's 70 billion parameters did.&lt;/p&gt;
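&lt;p&gt;For contrast, here's how hand-written code would guard that exact failure mode: retry with backoff, then escalate to a review queue rather than render a verdict. The function and status names are hypothetical, not a real agent API:&lt;/p&gt;

```python
import time

class InventoryTimeout(Exception):
    """Raised when the Toy Inventory MCP does not respond in time."""

def decide_gift(check_inventory, wish, max_retries=3, backoff=0.0):
    for attempt in range(max_retries):
        try:
            count = check_inventory(wish)
            return "GIFT" if count else "SUBSTITUTE_GIFT"
        except InventoryTimeout:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    # The crucial line: exhausted retries escalate to a human queue.
    # An infrastructure failure never becomes a behavioral verdict.
    return "NEEDS_REVIEW"

def flaky(_wish):
    raise InventoryTimeout()

print(decide_gift(flaky, "bicycle"))  # NEEDS_REVIEW
```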

&lt;p&gt;&lt;strong&gt;This is the core challenge of AI observability: You're debugging decisions, not code.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why AI Systems are hard to debug
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Black box reasoning and reproducibility go hand in hand&lt;/strong&gt;. With traditional debugging, you step through the code and find the exact line that caused the problem. With AI agents, you only see inputs and outputs. The agent received three timeouts and decided to default to NAUGHTY_LIST. Why? Neural network reasoning you can't inspect.&lt;/p&gt;

&lt;p&gt;And even if you could inspect it, you couldn't reliably reproduce it. Run Emma's case in test four times and you might get:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Run 1: NICE LIST, gift = bicycle ✓
Run 2: NICE LIST, gift = video game ✓
Run 3: NICE LIST, gift = art supplies ✓
Run 4: NAUGHTY LIST, no gift ✗
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Temperature settings and sampling introduce randomness. Same input, different results every time. Traditional logs show you what happened. AI observability needs to show you why, and in a way you can actually verify.&lt;/p&gt;
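&lt;p&gt;You can see the mechanism with a toy sampler: temperature rescales the probabilities before sampling, so zero temperature is deterministic while higher temperatures spread the outcomes. The logits below are made-up numbers, not a real model:&lt;/p&gt;

```python
import math
import random

def sample(logits, temperature, rng):
    # temperature == 0 means greedy decoding: always pick the top logit.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise, softmax-with-temperature, then sample.
    weights = [math.exp(logit / temperature) for logit in logits]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.5, 0.5]  # toy scores for NICE, NICE_ALT, NAUGHTY
rng = random.Random(42)

print({sample(logits, 0.0, rng) for _ in range(4)})   # {0}: deterministic
print({sample(logits, 1.5, rng) for _ in range(20)})  # typically a mix
```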

&lt;p&gt;Then there's the question of quality. Consider this child:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Refused to eat vegetables (10 times) but helped put away dishes&lt;/li&gt;
&lt;li&gt;Yelled at siblings (3 times) but defended a classmate from a bully&lt;/li&gt;
&lt;li&gt;Skipped homework (5 times) but cared for a sick puppy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Is this child naughty or nice? The answer depends on context, values, and interpretation. Your agent returns NICE (312 points), gift = books about empathy. A traditional API would return 200 OK and call it success. For an AI agent, you need to ask: Did it judge correctly?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Costs can spiral out of control&lt;/strong&gt;. Mrs. Claus (Santa's CFO) sees the API bill jump from $5,000 in Week 1 to $890,000 on December 24th. What happened? One kid didn't write a letter. They wrote a 15,000-word philosophical essay. Instead of flagging it, the agent processed every last word, burning through 53,500 tokens for a single child. At scale, this bankrupts the workshop.&lt;/p&gt;
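&lt;p&gt;A simple guardrail would cap input size before the agent ever sees it. This sketch uses a rough four-characters-per-token heuristic and a hypothetical flagging convention, not a real tokenizer:&lt;/p&gt;

```python
MAX_TOKENS = 2000  # hypothetical per-letter budget

def estimate_tokens(text):
    # Rough 4-characters-per-token heuristic; use a real tokenizer in practice.
    return len(text) // 4

def guard_letter(letter):
    # Flag and truncate oversized input instead of processing every word.
    if estimate_tokens(letter) > MAX_TOKENS:
        return "FLAGGED", letter[: MAX_TOKENS * 4]
    return "OK", letter

status, _ = guard_letter("ho " * 50_000)  # a 150,000-character "essay"
print(status)  # FLAGGED
```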

&lt;p&gt;&lt;strong&gt;And failures cascade in unexpected ways&lt;/strong&gt;. The Gift Agent doesn't just fail when it hits a timeout. It reasons through failure. It interpreted three timeouts as "system is unreliable," then saw the inventory count change and concluded "inventory is volatile, cannot guarantee fulfillment." Each interpretation fed into the next, creating a chain of reasoning that led to: "Better to disappoint than make a promise I can't keep. Default to NAUGHTY_LIST."&lt;/p&gt;

&lt;p&gt;With traditional code, you debug line by line. With AI agents, you need to debug the entire reasoning chain. Not just what APIs were called, but why the agent called them and how it interpreted each result.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Santa Actually Needs
&lt;/h2&gt;

&lt;p&gt;The answer isn't to throw out traditional observability, but to build on top of it. Think of it as three layers.&lt;/p&gt;

&lt;p&gt;This is exactly what we've built at LaunchDarkly. Our platform combines &lt;a href="https://launchdarkly.com/blog/llm-observability-in-ai-configs/" rel="noopener noreferrer"&gt;AI observability&lt;/a&gt;, &lt;a href="https://launchdarkly.com/docs/eu-docs/home/ai-configs/online-evaluations" rel="noopener noreferrer"&gt;online evaluations&lt;/a&gt;, and &lt;a href="https://launchdarkly.com/docs/eu-docs/home/flags/new" rel="noopener noreferrer"&gt;feature management&lt;/a&gt; to help you understand, measure, and control AI agent behavior in production. Let's walk through how each layer works.&lt;/p&gt;

&lt;p&gt;Start with the fundamentals. You still need distributed tracing across your agent network, latency breakdowns showing where time is spent, token usage per request, cost attribution by agent, and tool call success rates for your MCP servers. When the Toy Inventory MCP goes down, you need to see it immediately. When costs spike, you need alerts. This isn't optional. It's table stakes for running any production system.&lt;/p&gt;

&lt;p&gt;For Santa's workshop, this means tracing requests across Data Agent → Context Agent → Judgment Agent → Gift Agent, monitoring MCP server health, tracking token consumption per child evaluation, and alerting when costs spike unexpectedly. It's important to note that LaunchDarkly's AI observability captures all of this out of the box, providing full visibility into your agents' infrastructure performance and resource consumption.&lt;/p&gt;

&lt;p&gt;Then add semantic observability. This is where AI diverges from traditional systems. You need to capture the reasoning, not just the results. For every decision, log the complete prompt, retrieved context, tool calls and their results, the agent's reasoning chain, and confidence scores.&lt;/p&gt;
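&lt;p&gt;In practice this often means emitting a structured decision record alongside the trace. The field names below are illustrative, not a prescribed schema:&lt;/p&gt;

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    # Field names are illustrative, not a prescribed schema.
    prompt: str
    retrieved_context: list
    tool_calls: list      # e.g. {"tool": ..., "result": ...}
    reasoning: str
    decision: str
    confidence: float

record = DecisionRecord(
    prompt="Judge Emma, age 7, against this year's behavioral events.",
    retrieved_context=["letter_2025", "behavior_log", "family_situation"],
    tool_calls=[{"tool": "toy_inventory", "result": "TIMEOUT"}] * 3,
    reasoning="Inventory uncertain, cannot fulfill request",
    decision="NAUGHTY_LIST",
    confidence=0.31,
)

# Emit alongside the trace so the decision can be replayed later.
print(json.dumps(asdict(record), indent=2)[:80])
```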

&lt;p&gt;When Emma lands on the Naughty List, you can replay the entire decision. The Gift Agent received three timeouts from the Toy Inventory MCP, interpreted "inventory uncertain" as "cannot fulfill request," and defaulted to NAUGHTY_LIST as the "safe" outcome. Now you understand why it happened. And more importantly, you realize this isn't a bug in your code. It's a reasoning pattern the model developed. Reasoning patterns require different fixes than code bugs.&lt;/p&gt;

&lt;p&gt;LaunchDarkly's trace viewer lets you inspect every step of the agent's decision-making process, from the initial prompt to the final output, including all tool calls and the reasoning behind each step. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F660tl39dhmittwnhd0pb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F660tl39dhmittwnhd0pb.png" alt=" " width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, use &lt;a href="https://launchdarkly.com/docs/eu-docs/home/ai-configs/online-evaluations" rel="noopener noreferrer"&gt;online evals&lt;/a&gt;. Where observability shows what happened, online evals automatically assess quality and take action. Using the LLM-as-a-judge approach, you score every sampled decision. One AI judges another's work:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"accuracy"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"score"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"reasoning"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Timeouts should trigger retry logic, not default to 
      worst-case outcome. System error conflated with behavioral judgment."&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"fairness"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"score"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;0.4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"reasoning"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Similar timeout patterns resulted in NICE determination 
      for other children. Inconsistent failure handling."&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This changes the conversation from vague to specific.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without evals&lt;/strong&gt;: "Let's meet tomorrow to review Emma's case and decide if we should rollback."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With evals&lt;/strong&gt;: "Accuracy dropped below 0.7 for the 'timeout cascade defaults to NAUGHTY' pattern. Automatic rollback triggered. Here are the 23 affected cases."&lt;/p&gt;

&lt;p&gt;LaunchDarkly's online evaluations run continuously in production, automatically scoring your agent's decisions and alerting you when quality degrades. You can define custom evaluation criteria tailored to your use case and set thresholds that trigger automatic actions.&lt;/p&gt;

&lt;p&gt;This is where feature management and experimentation come in. Feature flags paired with guarded rollouts let you control deployments and roll back bad ones. Experimentation lets you A/B test different approaches. With AI agents, you're doing the same thing, but instead of testing button colors or checkout flows, you're testing prompt variations, model versions, and reasoning strategies. When your evals detect accuracy has dropped below threshold, you automatically roll back to the previous agent configuration.&lt;/p&gt;
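&lt;p&gt;The gating logic itself is small. This sketch mirrors the article's 0.7 accuracy threshold; &lt;code&gt;rollback_to&lt;/code&gt; stands in for whatever flag or config mechanism you use, not a specific LaunchDarkly API:&lt;/p&gt;

```python
ACCURACY_THRESHOLD = 0.7  # mirrors the example threshold from the evals above

def maybe_rollback(recent_scores, rollback_to):
    # rollback_to is a stand-in for your flag/config mechanism.
    average = sum(recent_scores) / len(recent_scores)
    if average < ACCURACY_THRESHOLD:
        rollback_to("previous-agent-config")
        return True
    return False

actions = []
rolled_back = maybe_rollback([0.3, 0.4, 0.9], actions.append)
print(rolled_back, actions)  # True ['previous-agent-config']
```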

&lt;p&gt;Use feature flags to control which model version, prompt template, or reasoning strategy your agents use and seamlessly roll back when something goes wrong. Our experimentation platform lets you A/B test different agent configurations and measure which performs better on your custom metrics. Check out our guide on feature flagging AI applications.&lt;/p&gt;

&lt;p&gt;You're not just observing decisions. You're evaluating quality in real-time and taking action.&lt;/p&gt;

&lt;h2&gt;
  
  
  Debugging Emma: all three layers in action
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Traditional observability&lt;/strong&gt; shows the Toy Inventory MCP experienced three timeouts that triggered retry logic. Token usage remained average. From an infrastructure perspective, nothing looked catastrophic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Semantic observability&lt;/strong&gt; reveals where the reasoning went wrong. The Gift Agent interpreted the timeouts as "inventory uncertain" and made the leap to "cannot fulfill requests." Rather than recognizing this as a temporary system issue, it treated the timeouts as a data problem and defaulted to NAUGHTY_LIST.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Online evals&lt;/strong&gt; reveal this isn't just a one-off problem with Emma, but a pattern happening across multiple cases. The accuracy judge flagged this decision at 0.3, well below acceptable thresholds. Querying for similar low-accuracy decisions reveals 23 other cases where timeout cascades resulted in NAUGHTY_LIST defaults.&lt;/p&gt;

&lt;p&gt;Each layer tells part of the story. Together, they give you everything you need to fix it before more parents call.&lt;/p&gt;

&lt;p&gt;With LaunchDarkly, all three layers work together in a single platform. You can trace the infrastructure issue, inspect the reasoning chain, evaluate the decision quality, and automatically roll back to a safer configuration, all within minutes of Emma's case being flagged.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Every AI agent system faces these exact challenges. Customer service agents making support decisions. Code assistants suggesting fixes. Content moderators judging appropriateness. Recommendation engines personalizing experiences. They all struggle with the same problems.&lt;/p&gt;

&lt;p&gt;Traditional observability tools weren't built for this. AI systems make decisions, and decisions need different observability than code.&lt;/p&gt;

&lt;p&gt;Santa's system says "He knows if you've been bad or good." But how he knows matters. Because when Emma gets coal instead of a bicycle due to a timeout cascade at 11:47 PM on Christmas Eve, you need to understand what happened, find similar cases, measure if it's systematic, fix it without breaking other cases, and ensure it doesn't happen again.&lt;/p&gt;

&lt;p&gt;You can't do that with traditional observability alone. AI agents aren't APIs. They're decision-makers. Which means you need to observe them differently.&lt;/p&gt;

&lt;p&gt;LaunchDarkly provides the complete platform for building reliable AI agent systems: observability to understand what's happening, online evaluations to measure quality, and feature management to control and iterate safely. Whether you're building Santa's naughty-or-nice system or a production AI application, you need all three layers working together.&lt;/p&gt;

&lt;p&gt;Ready to make your AI agents more reliable? &lt;a href="https://launchdarkly.com/docs/home/ai-configs/quickstart" rel="noopener noreferrer"&gt;Start with our AI quickstart guide&lt;/a&gt; to see how LaunchDarkly can help you ship AI agents with confidence.&lt;/p&gt;

</description>
      <category>observability</category>
      <category>ai</category>
      <category>agents</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Day 1 | 🎄 Observability under the Tree: What Changed in 2025</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Mon, 08 Dec 2025 20:21:33 +0000</pubDate>
      <link>https://forem.com/launchdarkly/day-1-observability-under-the-tree-what-changed-in-2025-dmb</link>
      <guid>https://forem.com/launchdarkly/day-1-observability-under-the-tree-what-changed-in-2025-dmb</guid>
      <description>&lt;p&gt;Originally published in the LaunchDarkly &lt;a href="https://launchdarkly.com/docs/tutorials/observability-2025-recap-holiday" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4t7yolt2q1qjnfl94wj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4t7yolt2q1qjnfl94wj.png" alt=" " width="601" height="630"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sometimes you need distance to see what really changed, and that's exactly how 2025 felt with regard to observability. While the spotlight was rightfully on AI agents and MCP servers, the observability space was going through its own maturation. If you feel you missed this shift, no need to panic: Santa left some gifts under the tree. Those gifts are a universal standard for instrumentation, practical platform consolidation, and AI-native capabilities that finally deliver on the hype.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gift #1: OpenTelemetry, the foundation nobody wanted to build
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwf4hff7hd3pv48e3278.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwf4hff7hd3pv48e3278.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For years, observability felt like the Wild West. Every vendor had their own way of doing things: their own agents, their own instrumentation libraries. If you wanted to switch tools, you basically had to re-instrument your entire application. It was exhausting, but nobody wanted to be the first to blink.&lt;/p&gt;

&lt;p&gt;OpenTelemetry had been building toward something for years, but 2025, in my opinion, was when it reached a tipping point. After gradual adoption since the early 2020s, it crossed from "early adopters only" to "the default choice." The specs had stabilized, the major players finally committed, and the tooling matured enough that adoption became low-risk rather than bleeding-edge.&lt;/p&gt;

&lt;p&gt;The change was subtle at first. A cloud provider here, a major vendor there, all quietly releasing native OTel support. Then it accelerated. Suddenly the question wasn't "should we adopt OpenTelemetry?" but "why haven't we adopted it yet?" Teams started instrumenting once and sending data anywhere they wanted. The instrumentation lock-in that had defined observability for years started to ease. Now, you still had considerations around data migration and historical context, but the path forward got clearer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1cyc91a0jj41nlnkna9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1cyc91a0jj41nlnkna9.png" alt=" " width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's what made this interesting: standardization didn't just reduce vendor lock-in. It unlocked something bigger. Teams could finally scrutinize their tooling, reduce its scope, and be more selective about what goes in the toolbox.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gift #2: Platform consolidation after the dust settled
&lt;/h2&gt;

&lt;p&gt;With a common data standard gaining real traction, something practical happened: the tooling chaos started to make less sense.&lt;/p&gt;

&lt;p&gt;Think about the typical observability stack from a few years ago. Metrics in one place, logs in another, traces somewhere else. APM here, synthetic monitoring there, and probably two or three other specialized tools for edge cases. Each with its own interface, its own query language, its own billing model, and its own person on the team who understood how it worked.&lt;/p&gt;

&lt;p&gt;2025 was the year teams started asking "why are we doing this to ourselves?"&lt;/p&gt;

&lt;p&gt;Platform consolidation had been promised for years. However, this time felt different. With OTel providing a standard data layer, platforms could actually focus on the analysis and visualization layer instead of fighting about ingestion formats. They could genuinely unify telemetry data in ways that reduced cognitive load.&lt;/p&gt;

&lt;p&gt;It is becoming common practice for teams to cut their tool count from eight or nine down to two or three, and actually improve coverage in the process. The cost story varied: some teams saw real savings from eliminating redundant ingestion and storage, while others mainly reduced operational overhead rather than raw dollar spend. Either way, the consistent theme was relief.&lt;/p&gt;

&lt;p&gt;With tool consolidation came the freedom to innovate in query execution: instead of building complex queries to retrieve specific data, teams could interact with data sets using natural language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gift #3: Natural language query execution
&lt;/h2&gt;

&lt;p&gt;I'm no stranger to learning and mastering query languages like the Elasticsearch Query DSL. Though these languages are optimized for latency and data processing, they come with a continual learning curve.&lt;/p&gt;

&lt;p&gt;2025 brought natural language interfaces that crossed the "actually useful" threshold. They aren't perfect, and complex queries still benefit from knowing the underlying language, but they have proven genuinely helpful for common questions. You could ask questions like:&lt;/p&gt;

&lt;p&gt;"Show me API latency for checkout over the last hour broken down by region."&lt;/p&gt;

&lt;p&gt;And get back the relevant data without memorizing aggregation functions or time window syntax. The true value is removing the barrier to entry, so less-technical team members like product managers and business-focused teams can also understand the kinds of performance issues that were previously the domain of DevOps or SREs. In other words, observability data that had been locked behind specialized knowledge became accessible to more of the people who needed it.&lt;/p&gt;

&lt;p&gt;But accessibility alone doesn't solve the deeper problem, which is knowing what to look for in the first place to understand real business impact.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gift #4: Aligning business KPIs with measurable metrics
&lt;/h2&gt;

&lt;p&gt;Historically, observability was about system health. Is the service up? What's the error rate? How's latency looking? All important questions, but they rarely answered why any of this should matter to other teams. The shift toward business outcomes had been building, but 2025 is when it moved from buzzword to practical framework.&lt;/p&gt;

&lt;p&gt;Tools made it easier to connect technical metrics to business impact. That latency spike? You could now configure dashboards and alerts that showed the correlation with conversion drops and revenue impact, rather than just knowing the service was slow. This error rate? You could track it against transaction volume and calculate lost revenue. This performance degradation? You could filter by customer segment and see who was actually affected, going as far as &lt;a href="https://launchdarkly.com/docs/tutorials/detecting-user-frustration-session-replay" rel="noopener noreferrer"&gt;detecting rage clicks&lt;/a&gt; on a page affecting user experience.&lt;/p&gt;

&lt;p&gt;The setup still required work like defining what business metrics mattered, instrumenting those alongside technical metrics, building the correlation logic. But the platforms made it possible without custom data pipelines and analytics infrastructure.&lt;/p&gt;

&lt;p&gt;This changed prioritization too. Instead of treating every alert as equally urgent, teams could focus on what actually moved the needle. The 99th percentile latency issue that only affected a low-value endpoint? Still worth fixing eventually, but not at 2 AM. The intermittent error affecting checkout? All hands on deck.&lt;/p&gt;

&lt;p&gt;The business-outcome framing made observability feel less like a tax and more like a strategic capability. But it also raised a new question: how can AI be used not just to automate processes, but to make real decisions around data interpretation and root cause analysis, with the goal of self-healing?&lt;/p&gt;

&lt;h2&gt;
  
  
  Gift #5: AI-native observability, when the promises started delivering
&lt;/h2&gt;

&lt;p&gt;Which brings us to AI, the talk of the town that has finally started delivering in meaningful ways in 2025.&lt;/p&gt;

&lt;p&gt;Not the "AI-powered" marketing speak that got slapped on every dashboard, but real AI-native observability that changed how teams work. Anomaly detection that's actually tuned for your environment and doesn't require a data science degree to configure. Root cause analysis that connects the dots across services, traces, logs, and business metrics without requiring manual correlation. Predictive insights that warn you about capacity issues before they become incidents. All built into dashboards like &lt;a href="https://launchdarkly.com/docs/home/observability/vega" rel="noopener noreferrer"&gt;Vega AI&lt;/a&gt; for thorough investigations.&lt;/p&gt;

&lt;p&gt;The capabilities themselves weren't entirely new as statistical anomaly detection has existed for years. What changed was the integration, the usability, and the accuracy. The systems got good enough that you could trust them to surface real issues without drowning you in false positives.&lt;/p&gt;

&lt;p&gt;All the previous observability gifts created the perfect foundation for AI to enter the chat. OTel's standardized data formats create consistent structure, which leads to more reliable pattern recognition. Consolidated platforms provide a unified view, which makes natural language interfaces even more powerful: you can ask not just "what happened" but "why did this happen and what should I do about it?" Add the connections between metrics and business impact, and you have a full observability system primed and ready for AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: What LaunchDarkly left in your stockings this year
&lt;/h2&gt;

&lt;p&gt;While the industry was catching up on the observability fundamentals, LaunchDarkly was actively building toward the same vision, with one crucial difference: observability unified with feature flag context. Here's what landed in 2025 that aligns with these broader trends.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/changelog/introducing-vega-the-launchdarkly-observability-ai-early-access/" rel="noopener noreferrer"&gt;Vega AI: Making AI-Native Observability Real&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Speaking of AI capabilities that actually deliver, LaunchDarkly introduced Vega, an AI-powered debugging companion that goes beyond generic copilots. Vega combines two integrated capabilities: an AI debugging assistant that summarizes errors, identifies root causes, and suggests code fixes with GitHub integration, and a natural language search assistant that lets you query observability data without memorizing syntax. The key difference? Context. Vega understands feature flags alongside telemetry data, connecting what changed in your release with what broke in production, enabling faster triage and smarter fixes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/changelog/c-server-sdk-hooks-and-tracing-support/" rel="noopener noreferrer"&gt;OpenTelemetry Support: Embracing the Standard&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the spirit of OTel's tipping point, LaunchDarkly's C++ Server SDK added native OpenTelemetry tracing support. Teams can now enrich their distributed traces with LaunchDarkly context, sending observability data to power Guardian while maintaining the flexibility to route telemetry wherever they need. It's exactly the kind of integration that becomes possible when standardization reduces the friction.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/changelog/session-replay-for-android-and-ios-beta/" rel="noopener noreferrer"&gt;Session Replay for Mobile: Closing the Visibility Gap&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Web session replay has been table stakes, but mobile remains a black box. LaunchDarkly's new native iOS and Android SDKs for Session Replay now capture UI interactions, gestures, and network events tied directly to feature flag evaluations. When an issue occurs, you can see exactly what your users experienced and which flag variation they encountered, bringing the same level of observability to mobile that web teams have enjoyed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/changelog/regression-debugging-for-guarded-releases/" rel="noopener noreferrer"&gt;Regression Debugging: Automated Error Detection&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Guarded Releases capability now includes regression debugging that automatically surfaces frontend errors tied to new feature flag variations without manual instrumentation. When a rollout triggers a regression, relevant errors appear directly in the monitoring view, with one-click access to deeper analysis in the observability view. It's the kind of integration that makes detecting issues faster and connecting dots easier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/changelog/llm-observability/" rel="noopener noreferrer"&gt;LLM Observability: AI-Specific Telemetry&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As GenAI applications moved to production, teams needed visibility beyond standard metrics. LaunchDarkly's LLM observability tracks not just performance metrics like latency and error rates, but semantic details including prompts, token usage, and responses. It addresses the unique challenge of monitoring AI features where understanding quality matters as much as performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guardrail Metrics and Feature Flag Monitoring: Business Impact by Default&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Two updates reinforced the shift toward business-outcome observability. Guardrail metrics let teams define default trusted metrics that automatically apply to any guarded rollout, ensuring consistent measurement across releases. Feature Flag Monitoring provides a unified view showing errors, web vitals, and the impact of feature changes in one place, with flag change annotations overlaid to see correlation. You ship a feature, you immediately see how it's performing, all with the context of which flags are active.&lt;/p&gt;

&lt;p&gt;Together, these updates represent LaunchDarkly's commitment to the same principles driving industry-wide change: standardized instrumentation, consolidated platforms, natural language accessibility, business-outcome alignment, and AI-native capabilities. The difference is in the integration, where feature flags provide the missing context that helps teams understand not just what happened, but what they changed that caused it to happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  What next?
&lt;/h2&gt;

&lt;p&gt;When I look at these changes together, I see something more significant than individual feature releases or product launches. The observability landscape of 2025 represents a meaningful shift in what's practical and accessible.&lt;/p&gt;

&lt;p&gt;It's more accessible, in that more people on the team can explore data and ask questions without specialized training. It's more focused, because we have better tools for measuring what actually matters for business outcomes, not just what's easy to instrument. It's more intelligent, in that AI handles more of the grunt work of correlation and pattern matching. And it's more practical, because costs are more manageable and the tool sprawl that was slowly crushing teams can hopefully start to recede.&lt;/p&gt;

&lt;p&gt;As we head into 2026, I'm not thinking about what new features will launch. I'm thinking about what becomes possible when these foundations are in place. When observability isn't a specialized skill but a shared capability. When understanding system behavior is directly tied to business strategy. When the friction between having a question and getting an answer continues to decrease.&lt;/p&gt;

&lt;p&gt;That's the real gift 2025 left us. Not just better tools, but a better way of working. One that's still evolving, but noticeably more mature than where we were.&lt;/p&gt;

&lt;p&gt;Join us for Day 2 of the Holiday Campaign where we'll discuss the reality of observability for AI agents.&lt;/p&gt;

</description>
      <category>observability</category>
      <category>holiday</category>
      <category>recap</category>
    </item>
    <item>
      <title>Detecting User Frustration: Understanding rage clicks and session replay</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Wed, 26 Nov 2025 00:02:09 +0000</pubDate>
      <link>https://forem.com/launchdarkly/detecting-user-frustration-understanding-rage-clicks-and-session-replay-aa3</link>
      <guid>https://forem.com/launchdarkly/detecting-user-frustration-understanding-rage-clicks-and-session-replay-aa3</guid>
      <description>&lt;p&gt;Originally published in the LaunchDarkly &lt;a href="https://launchdarkly.com/docs/tutorials/detecting-user-frustration-session-replay" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Part 1 of 3: Rage Click Detection with LaunchDarkly&lt;/p&gt;

&lt;p&gt;The holidays are around the corner, and with them comes the expected uptick in traffic. As traffic increases, the need to preserve the user experience becomes that much more imperative. You ship a new feature, everything seems to be going as planned, and then all of a sudden you see a spike in support tickets. The error logs are unhelpful or show nothing, and the metrics look fine.&lt;/p&gt;

&lt;p&gt;So, what happened?&lt;br&gt;
Wouldn't you like to put yourself in the user's shoes and see exactly what happened in their session? Enter session replay, and more specifically rage clicks, as a barometer for user experience.&lt;/p&gt;

&lt;p&gt;In this three-part series, we'll explore how LaunchDarkly's session replay and observability features help you detect, diagnose, and fix user experience issues in real-time. Part 1 covers the fundamentals: what rage clicks are, how to detect them, and how to get started with session replay.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is a rage click?
&lt;/h2&gt;

&lt;p&gt;A rage click occurs when a user rapidly clicks the same element multiple times in frustration, usually because the element appears clickable but isn't responding as expected. It's one of the strongest behavioral signals that something is wrong with your user experience.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why rage clicks matter:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Silent failures&lt;/strong&gt;: Many bugs don't throw errors or trigger alerts; rage clicks surface failures your monitoring never sees.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real user impact&lt;/strong&gt;: Unlike synthetic monitoring or load tests, rage clicks show you exactly what real users experienced in production conditions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Early warning system&lt;/strong&gt;: Rage click spikes often appear before users start filing support tickets or abandoning your app entirely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Actionable insights&lt;/strong&gt;: Unlike vague complaints like 'the site is slow,' rage clicks point to specific UI elements that need attention.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  The Anatomy of a rage click
&lt;/h2&gt;

&lt;p&gt;Not every series of rapid clicks qualifies as rage clicks. LaunchDarkly uses three criteria to distinguish genuine frustration from normal user behavior:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvoz98ycqnun4zjeo53i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwvoz98ycqnun4zjeo53i.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These defaults work well for most applications, but LaunchDarkly lets you adjust them based on your specific use case. We'll cover customization later in this post.&lt;/p&gt;
&lt;h2&gt;
  
  
  Beyond rage clicks: Other Frustration Signals
&lt;/h2&gt;

&lt;p&gt;While rage clicks are the most obvious indicator of user frustration, LaunchDarkly's session replay automatically captures several other behavioral patterns:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjioa2supsdgds21b226.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjioa2supsdgds21b226.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Together, these signals paint a complete picture of user frustration, which gives you qualitative context that complements your quantitative metrics.&lt;/p&gt;

&lt;p&gt;Let's walk through implementing session replay in a complete application, from installation to viewing your first rage click detection.&lt;/p&gt;
&lt;h2&gt;
  
  
  Getting Started with LaunchDarkly session replay
&lt;/h2&gt;
&lt;h3&gt;
  
  
  HolisticSelf App
&lt;/h3&gt;

&lt;p&gt;This health tracking app, written in JavaScript, demonstrates a LaunchDarkly session replay implementation for detecting user frustration through rage click monitoring.&lt;/p&gt;

&lt;p&gt;Find all the code for this demo &lt;a href="https://github.com/arober39/HolisticSelfApp" rel="noopener noreferrer"&gt;here&lt;/a&gt;, with the LaunchDarkly integration instructions in this &lt;a href="https://github.com/arober39/HolisticSelfApp/blob/main/LAUNCHDARKLY_SETUP.md" rel="noopener noreferrer"&gt;.md file&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Implementation Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;File Structure&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;src/
├── services/
│   └── launchdarkly.js    &lt;span class="c"&gt;# LaunchDarkly initialization service&lt;/span&gt;
├── main.jsx                &lt;span class="c"&gt;# App entry point (initializes LaunchDarkly)&lt;/span&gt;
└── App.jsx                 &lt;span class="c"&gt;# Main React component&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic initialization on app load.&lt;/li&gt;
&lt;li&gt;Anonymous user tracking (no login required).&lt;/li&gt;
&lt;li&gt;Strict privacy mode for health data protection.&lt;/li&gt;
&lt;li&gt;Network recording for API debugging.&lt;/li&gt;
&lt;li&gt;Zero custom rage click code needed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install Dependencies&lt;/strong&gt;&lt;br&gt;
First, install the required LaunchDarkly packages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;launchdarkly-js-client-sdk @launchdarkly/observability @launchdarkly/session-replay
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: The observability plugin requires JavaScript SDK version 3.7.0 or later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Configure Environment Variables&lt;/strong&gt;.&lt;br&gt;
Create a &lt;code&gt;.env&lt;/code&gt; file in your project root. You can get your client-side ID from LaunchDarkly: &lt;strong&gt;Project Settings&lt;/strong&gt; &amp;gt; &lt;strong&gt;Environments&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;VITE_LAUNCHDARKLY_CLIENT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your-client-side-id-here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqa2bf5skf5uucv3xmnz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frqa2bf5skf5uucv3xmnz.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Initialize LaunchDarkly Client&lt;/strong&gt;.&lt;br&gt;
Create &lt;code&gt;src/services/launchdarkly.js&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// LaunchDarkly session replay and Observability Client Initialization&lt;/span&gt;
&lt;span class="c1"&gt;// Based on: https://launchdarkly.com/docs/sdk/observability/javascript&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;initialize&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;launchdarkly-js-client-sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Observability&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@launchdarkly/observability&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;SessionReplay&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LDRecord&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@launchdarkly/session-replay&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;ldClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;isInitialized&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;initializeLaunchDarkly&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;clientSideId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isInitialized&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LaunchDarkly client already initialized&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;ldClient&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Default to anonymous user if none provided&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userContext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`anonymous-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;anonymous&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;manualStart&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;privacySetting&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;default&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// 'none', 'default', or 'strict'&lt;/span&gt;
    &lt;span class="nx"&gt;startSessionReplay&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Initialize LaunchDarkly client with observability and session replay plugins&lt;/span&gt;
    &lt;span class="c1"&gt;// Reference: https://launchdarkly.com/docs/sdk/observability/javascript&lt;/span&gt;
    &lt;span class="nx"&gt;ldClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;clientSideId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;userContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Observability&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
          &lt;span class="na"&gt;manualStart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;manualStart&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}),&lt;/span&gt;
        &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;SessionReplay&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
          &lt;span class="na"&gt;manualStart&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;manualStart&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="na"&gt;privacySetting&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;privacySetting&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Redacts PII based on setting&lt;/span&gt;
        &lt;span class="p"&gt;}),&lt;/span&gt;
      &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="c1"&gt;// Wait for client to be ready&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ldClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;waitForInitialization&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="nx"&gt;isInitialized&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LaunchDarkly client initialized with observability and session replay&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Start plugins if not using manual start&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;manualStart&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;startSessionReplay&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Start session replay recording&lt;/span&gt;
        &lt;span class="c1"&gt;// Note: Rage click detection is automatically handled by LaunchDarkly&lt;/span&gt;
        &lt;span class="c1"&gt;// Configure thresholds in Project Settings &amp;gt; Observability &amp;gt; Session settings&lt;/span&gt;
        &lt;span class="c1"&gt;// Reference: https://launchdarkly.com/docs/home/observability/settings&lt;/span&gt;
        &lt;span class="nx"&gt;LDRecord&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
          &lt;span class="na"&gt;forceNew&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Continue existing session if available&lt;/span&gt;
          &lt;span class="na"&gt;silent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Show console warnings&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Session replay started&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;ldClient&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Failed to initialize LaunchDarkly:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
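&lt;p&gt;A side note on the &lt;code&gt;manualStart&lt;/code&gt; option above: if you'd rather not record every visitor immediately (for example, until a user accepts a consent prompt), you can defer recording and call &lt;code&gt;LDRecord.start()&lt;/code&gt; later. The sketch below reuses the &lt;code&gt;clientSideId&lt;/code&gt; and &lt;code&gt;userContext&lt;/code&gt; from Step 3; &lt;code&gt;onConsentGranted&lt;/code&gt; is a hypothetical hook you would wire into your own consent flow:&lt;/p&gt;

```javascript
// Sketch: defer session replay recording until user consent, using the
// manualStart option and LDRecord.start() shown in Step 3.
// clientSideId and userContext come from your app, as in the service module.
import { initialize } from 'launchdarkly-js-client-sdk';
import SessionReplay, { LDRecord } from '@launchdarkly/session-replay';

const ldClient = initialize(clientSideId, userContext, {
  plugins: [
    new SessionReplay({
      manualStart: true,        // don't record until we explicitly start
      privacySetting: 'strict', // still redact aggressively when we do
    }),
  ],
});

// Hypothetical consent callback from your own consent banner.
function onConsentGranted() {
  // Recording begins only after the user opts in.
  LDRecord.start({ forceNew: false, silent: false });
}
```

&lt;p&gt;Everything else (thresholds, detection, dashboards) works the same; only the moment recording starts moves.&lt;/p&gt;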



&lt;p&gt;&lt;strong&gt;Step 4: Implement the LaunchDarkly Client in Main&lt;/strong&gt;.&lt;br&gt;
Initialize LaunchDarkly in &lt;code&gt;src/main.jsx&lt;/code&gt; so that session replay starts before React renders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;React&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;ReactDOM&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react-dom/client&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./App&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./styles.css&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;initializeLaunchDarkly&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./services/launchdarkly&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Get LaunchDarkly client-side ID from environment variable&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;LAUNCHDARKLY_CLIENT_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;import&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;meta&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;VITE_LAUNCHDARKLY_CLIENT_ID&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Initialize LaunchDarkly before React renders&lt;/span&gt;
&lt;span class="c1"&gt;// This ensures session replay starts as early as possible&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;LAUNCHDARKLY_CLIENT_ID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;initializeLaunchDarkly&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;LAUNCHDARKLY_CLIENT_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// User context (null = anonymous user)&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my-health-app&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;privacySetting&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;strict&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// Recommended for health apps&lt;/span&gt;
      &lt;span class="na"&gt;enableNetworkRecording&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Failed to initialize LaunchDarkly:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;LaunchDarkly client-side ID not configured. Set VITE_LAUNCHDARKLY_CLIENT_ID environment variable.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;ReactDOM&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createRoot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;document&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getElementById&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;root&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;render&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;React&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;StrictMode&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;App&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/React.StrictMode&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have initialized our LaunchDarkly client and configured session replay at the start of the app, we can configure rage click detection in the LaunchDarkly UI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configuring Rage Click Detection
&lt;/h3&gt;

&lt;p&gt;Rage click detection is configured in the LaunchDarkly UI, not in code. This makes it easy to adjust thresholds without redeploying.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access Settings&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to LaunchDarkly.&lt;/li&gt;
&lt;li&gt;Navigate to &lt;strong&gt;Project Settings&lt;/strong&gt; &amp;gt; &lt;strong&gt;Observability&lt;/strong&gt; &amp;gt; &lt;strong&gt;Session settings&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Find the &lt;strong&gt;Rage clicks&lt;/strong&gt; section.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjyllgkqh0cbuy2bnnz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgjyllgkqh0cbuy2bnnz5.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Adjust the rage click settings to reflect the user clicking 5+ times within 2 seconds in the same area:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Minimum clicks&lt;/strong&gt;: Set the number of clicks required to trigger detection to 5.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Click radius&lt;/strong&gt;: Set the pixel radius for click proximity to 8 pixels.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Elapsed time&lt;/strong&gt;: Set the time window for detecting rage clicks to 2 seconds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These settings apply to all sessions automatically and require no code changes.&lt;/p&gt;

&lt;h4&gt;
  
  
  How it works:
&lt;/h4&gt;

&lt;p&gt;Session recording&lt;br&gt;
When the session replay plugin is initialized, LaunchDarkly automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Records all DOM changes.&lt;/li&gt;
&lt;li&gt;Captures every click with precise coordinates and timestamps.&lt;/li&gt;
&lt;li&gt;Tracks scroll events, form inputs, and navigation.&lt;/li&gt;
&lt;li&gt;Sends data to LaunchDarkly servers in the background.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Automatic detection&lt;br&gt;
LaunchDarkly's backend analyzes recorded sessions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identifies rapid click patterns.&lt;/li&gt;
&lt;li&gt;Applies your configured thresholds.&lt;/li&gt;
&lt;li&gt;Marks sessions with the &lt;code&gt;has_rage_clicks=true&lt;/code&gt; attribute.&lt;/li&gt;
&lt;li&gt;Associates rage clicks with specific elements and pages.&lt;/li&gt;
&lt;/ul&gt;
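&lt;p&gt;Conceptually, the detection works like the sketch below. This is an illustration of the idea only, not LaunchDarkly's actual implementation; the function name and click shape are invented for the example:&lt;/p&gt;

```javascript
// Illustration only: flag a rage click when enough clicks land close
// together, both in time and in space.
function hasRageClicks(clicks, { minClicks = 5, radiusPx = 8, windowMs = 2000 } = {}) {
  // clicks: array of { x, y, t } sorted by timestamp t (milliseconds)
  for (let i = 0; i < clicks.length; i++) {
    const anchor = clicks[i];
    let count = 1;
    for (let j = i + 1; j < clicks.length; j++) {
      const click = clicks[j];
      if (click.t - anchor.t > windowMs) break; // outside the time window
      if (Math.hypot(click.x - anchor.x, click.y - anchor.y) <= radiusPx) count++;
    }
    if (count >= minClicks) return true; // thresholds met: rage click
  }
  return false;
}
```

&lt;p&gt;With the settings above (5 clicks, 8 pixels, 2 seconds), five rapid clicks on the same button would be flagged, while the same five clicks spread over ten seconds would not.&lt;/p&gt;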
&lt;h4&gt;
  
  
  Viewing Session Replay Results
&lt;/h4&gt;

&lt;p&gt;To test this integration, do the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a test button that intentionally does nothing.&lt;/li&gt;
&lt;li&gt;Rapidly click the button 5+ times within 2 seconds in the same spot.&lt;/li&gt;
&lt;li&gt;Wait a few minutes for LaunchDarkly to process the session.&lt;/li&gt;
&lt;li&gt;Check the LaunchDarkly dashboard by navigating to &lt;strong&gt;Monitor&lt;/strong&gt; &amp;gt; &lt;strong&gt;Sessions&lt;/strong&gt; and filtering by &lt;code&gt;has_rage_clicks=true&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To start, your app should look something like this:&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/Df6BW-omniQ"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;When you navigate to the LaunchDarkly UI -&amp;gt; &lt;strong&gt;Sessions&lt;/strong&gt;, you should be able to see the complete session replay.&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/yrlwtxNjai8"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;To test the rage click integration, add a custom button by adding the following code to the &lt;code&gt;AilmentsListScreen.jsx&lt;/code&gt; file within the header div.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;h1&amp;gt;&lt;/span&gt;Health Tracker&lt;span class="nt"&gt;&amp;lt;/h1&amp;gt;&lt;/span&gt;
{/* Test button for rage click detection - click rapidly 5+ times within 2 seconds */}
&lt;span class="nt"&gt;&amp;lt;button&lt;/span&gt;
  &lt;span class="na"&gt;style=&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;
    &lt;span class="na"&gt;marginTop:&lt;/span&gt; &lt;span class="err"&gt;'10&lt;/span&gt;&lt;span class="na"&gt;px&lt;/span&gt;&lt;span class="err"&gt;',&lt;/span&gt;
    &lt;span class="na"&gt;padding:&lt;/span&gt; &lt;span class="err"&gt;'8&lt;/span&gt;&lt;span class="na"&gt;px&lt;/span&gt; &lt;span class="err"&gt;16&lt;/span&gt;&lt;span class="na"&gt;px&lt;/span&gt;&lt;span class="err"&gt;',&lt;/span&gt;
    &lt;span class="na"&gt;backgroundColor:&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="na"&gt;#ff4444&lt;/span&gt;&lt;span class="err"&gt;',&lt;/span&gt;
    &lt;span class="na"&gt;color:&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="na"&gt;white&lt;/span&gt;&lt;span class="err"&gt;',&lt;/span&gt;
    &lt;span class="na"&gt;border:&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="na"&gt;none&lt;/span&gt;&lt;span class="err"&gt;',&lt;/span&gt;
    &lt;span class="na"&gt;borderRadius:&lt;/span&gt; &lt;span class="err"&gt;'4&lt;/span&gt;&lt;span class="na"&gt;px&lt;/span&gt;&lt;span class="err"&gt;',&lt;/span&gt;
    &lt;span class="na"&gt;cursor:&lt;/span&gt; &lt;span class="err"&gt;'&lt;/span&gt;&lt;span class="na"&gt;pointer&lt;/span&gt;&lt;span class="err"&gt;',&lt;/span&gt;
    &lt;span class="na"&gt;fontSize:&lt;/span&gt; &lt;span class="err"&gt;'12&lt;/span&gt;&lt;span class="na"&gt;px&lt;/span&gt;&lt;span class="err"&gt;',&lt;/span&gt;
  &lt;span class="err"&gt;}}&lt;/span&gt;
  &lt;span class="na"&gt;onClick=&lt;/span&gt;&lt;span class="s"&gt;{(e)&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt; {
    // Intentionally does nothing - for testing rage clicks
    e.preventDefault();
    console.log('Test button clicked (intentionally non-functional for rage click testing)');
  }}
  title="Test rage click detection: Click rapidly 5+ times within 2 seconds in the same spot"
&amp;gt;
  🧪 Test Rage Click (Click Rapidly)
&lt;span class="nt"&gt;&amp;lt;/button&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that you have your test button, you can try clicking it 5+ times within 2 seconds in the same spot as shown in the session below.&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/jUdAXOj4TFU"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;Finally, you should see your full session replay under the sessions tab.&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/EFb8Iu23e5g"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;It’s important to note that all sessions for a specific app are appended in the session replay tab. So if a user is inactive and comes back to the same tab, the video will be longer. However, if the user starts a new tab or it has been more than 4 hours, a new video will be created for that session.&lt;/p&gt;

&lt;p&gt;After enabling rage click detection, you can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Filter sessions by &lt;code&gt;has_rage_clicks=true&lt;/code&gt; to find frustrated users.&lt;/li&gt;
&lt;li&gt;Replay sessions to see exactly what caused frustration.&lt;/li&gt;
&lt;li&gt;Identify specific UI elements that trigger rage clicks.&lt;/li&gt;
&lt;li&gt;View correlated errors and network requests in the timeline.&lt;/li&gt;
&lt;li&gt;Prioritize fixes based on real user frustration data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Advanced Search Queries
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Basic: All sessions with rage clicks&lt;/span&gt;
&lt;span class="nv"&gt;has_rage_clicks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;

&lt;span class="c"&gt;# Filter by page URL&lt;/span&gt;
&lt;span class="nv"&gt;has_rage_clicks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;AND visited-url contains &lt;span class="s2"&gt;"/checkout"&lt;/span&gt;

&lt;span class="c"&gt;# Filter by specific HTML element clicked&lt;/span&gt;
&lt;span class="nv"&gt;clickSelector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"button.submit-order"&lt;/span&gt;

&lt;span class="c"&gt;# Filter by button text&lt;/span&gt;
clickTextContent contains &lt;span class="s2"&gt;"Place Order"&lt;/span&gt;

&lt;span class="c"&gt;# Rage clicks with long session duration&lt;/span&gt;
&lt;span class="nv"&gt;has_rage_clicks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;AND active_length &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; 120s

&lt;span class="c"&gt;# Combine multiple conditions&lt;/span&gt;
&lt;span class="nv"&gt;has_rage_clicks&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true &lt;/span&gt;AND &lt;span class="nv"&gt;browser&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Chrome"&lt;/span&gt; AND &lt;span class="nv"&gt;device_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Desktop"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Troubleshooting
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Rage clicks not detected&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify rage click detection is enabled in &lt;strong&gt;Project Settings&lt;/strong&gt; &amp;gt; &lt;strong&gt;Observability&lt;/strong&gt; &amp;gt; &lt;strong&gt;Session settings&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Check that your click pattern meets the configured thresholds (default: 5 clicks within 8 pixels in 2 seconds).&lt;/li&gt;
&lt;li&gt;Ensure the LaunchDarkly client initialized successfully (check browser console).&lt;/li&gt;
&lt;li&gt;Wait a few minutes for LaunchDarkly to process sessions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Session replay not working&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify LaunchDarkly account has observability features enabled.&lt;/li&gt;
&lt;li&gt;Check that both plugins are properly initialized.&lt;/li&gt;
&lt;li&gt;Ensure Content Security Policy allows connections to LaunchDarkly.&lt;/li&gt;
&lt;li&gt;Check browser console for initialization errors.&lt;/li&gt;
&lt;li&gt;Verify the client-side ID is correct.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Privacy and Security Considerations
&lt;/h3&gt;

&lt;p&gt;Session replay is powerful, but it comes with important privacy responsibilities. LaunchDarkly provides several layers of protection to ensure you're capturing useful debugging data without exposing sensitive user information.&lt;/p&gt;

&lt;p&gt;Default Privacy Mode: Strict Protection&lt;br&gt;
By default, LaunchDarkly operates in strict privacy mode, which provides the safest option:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;All text inputs are obfuscated&lt;/strong&gt;: Form fields, text areas, and input boxes show as masked characters.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PII regex matching&lt;/strong&gt;: Text matching patterns for emails, phone numbers, social security numbers, addresses, and credit cards are automatically masked.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Images and media preserved&lt;/strong&gt;: Visual elements remain visible for UX debugging.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means you can safely record sessions without worrying about capturing passwords, credit card numbers, or other sensitive data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;Implementing rage click detection with LaunchDarkly session replay is a pretty straightforward process that involves installing the observability SDK with specific plugins. The real power comes from LaunchDarkly's automatic detection and the ability to replay sessions with full context (errors, logs, network requests) to understand exactly what frustrated users.&lt;/p&gt;

&lt;p&gt;By detecting rage clicks, we can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identify broken or confusing UI elements.&lt;/li&gt;
&lt;li&gt;Understand user frustration patterns.&lt;/li&gt;
&lt;li&gt;Prioritize fixes based on real user data.&lt;/li&gt;
&lt;li&gt;Improve user experience proactively.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Detection itself required no additional code changes: LaunchDarkly handled everything automatically, and you adjusted sensitivity through the dashboard.&lt;/p&gt;

&lt;h3&gt;
  
  
  What's Next: From Detection to Action
&lt;/h3&gt;

&lt;p&gt;You now have the foundation for detecting user frustration with LaunchDarkly's session replay. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically capture rage clicks, rage scrolls, and form abandons.&lt;/li&gt;
&lt;li&gt;Search for sessions with specific frustration patterns.&lt;/li&gt;
&lt;li&gt;Watch full session replays with correlated errors and logs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But detection alone isn't enough. The real magic comes from connecting these frustration signals directly to your feature releases, so you can catch issues during progressive rollouts and roll back instantly if something breaks.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;Part 2, we'll explore Guarded Releases&lt;/strong&gt;: how to automatically monitor rage clicks during feature rollouts, set up alerts for frustration spikes, and enable automated rollback when metrics exceed thresholds.&lt;/p&gt;

&lt;p&gt;You'll learn how to create a closed-loop system where user frustration signals trigger immediate action, which prevents small issues from becoming widespread problems.&lt;/p&gt;

&lt;p&gt;Additional Resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://launchdarkly.com/docs/home/observability/session-replay" rel="noopener noreferrer"&gt;LaunchDarkly Session Replay Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://launchdarkly.com/docs/sdk/observability" rel="noopener noreferrer"&gt;Observability SDK Reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://launchdarkly.com/docs/home/observability/settings" rel="noopener noreferrer"&gt;Observability Settings&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>monitoring</category>
      <category>observability</category>
      <category>ux</category>
      <category>javascript</category>
    </item>
    <item>
      <title>From DevOps to Developer Advocacy: Finding My Path in the Age of AI</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Thu, 06 Nov 2025 15:22:14 +0000</pubDate>
      <link>https://forem.com/launchdarkly/from-devops-to-developer-advocacy-finding-my-path-in-the-age-of-ai-1knc</link>
      <guid>https://forem.com/launchdarkly/from-devops-to-developer-advocacy-finding-my-path-in-the-age-of-ai-1knc</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuqfgylodzr33grv4qzt.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyuqfgylodzr33grv4qzt.jpg" alt=" " width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When my college professor challenged me to try computer science for just one semester, I never imagined it would lead me here. I was planning to be an English major. Fast forward through a CS degree, multiple tech internships, teaching at GirlsWhoCode, and a stint as a DevOps engineer—I found myself at a crossroads, seeking something that married my technical skills with my passion for communication and teaching.&lt;/p&gt;

&lt;p&gt;That's when a friend mentioned developer advocacy. I had never heard of it in school. But once I understood the role, I knew I'd found my calling.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pivot: Why DevRel?
&lt;/h2&gt;

&lt;p&gt;After working as a DevOps engineer, I started exploring product management. I was drawn to the big-picture thinking: &lt;em&gt;What should we build? Why should we build it? What's next?&lt;/em&gt; I wanted something more people-facing, something that let me connect with the broader tech community.&lt;/p&gt;

&lt;p&gt;But here's what drew me to developer advocacy over product management: &lt;strong&gt;freedom and creativity&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;In product management, you focus intensely on one product. In developer advocacy, you work across multiple products and solutions. You get to determine what you focus on, what content you create, and how you show developers what's possible. You're not just conceptualizing ideas, but you're building solutions, writing code, creating content, and directly engaging with the community.&lt;/p&gt;

&lt;p&gt;For me, DevRel was the perfect marriage of my computer science background and my original desire to study English. I could code &lt;em&gt;and&lt;/em&gt; communicate. I could build &lt;em&gt;and&lt;/em&gt; teach.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does a Developer Advocate Actually Do?
&lt;/h2&gt;

&lt;p&gt;When people ask me what I do, I break it down into these core areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Technical Writing&lt;/strong&gt;: Creating documentation, tutorials, and guides that help developers succeed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coding&lt;/strong&gt;: Building demos, sample applications, and proof-of-concepts that showcase what's possible.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Teaching/Speaking&lt;/strong&gt;: Breaking down complex concepts and making them digestible, whether on stage, on video, or in writing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community Engagement&lt;/strong&gt;: Building authentic relationships with developers, listening to their feedback, and being their voice internally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning in Public&lt;/strong&gt;: Constantly exploring new technologies and sharing the journey with others.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bridge the gap between community and internal teams&lt;/strong&gt;: Ensuring feedback flows both ways.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every day is different. It's goal-driven, but you set your own metrics. As I like to say, it feels like getting a PhD—you're doing research and providing deliverables against a hypothesis you've developed.&lt;/p&gt;

&lt;h2&gt;
  
  
  For the Introverts: You Don't Have to Be a Social Butterfly
&lt;/h2&gt;

&lt;p&gt;One question I get a lot: "Do you have to be an extrovert to do DevRel?"&lt;/p&gt;

&lt;p&gt;Absolutely not. You don't need to be a social butterfly. What you need is authenticity and the ability to connect meaningfully with people, whether that's one-on-one, in writing, or through video content. Some of the best developer advocates I know are introverts who've found their own channels for community engagement.&lt;/p&gt;

&lt;p&gt;It's funny because people often assume I'm naturally outgoing, which in some cases is accurate, especially if I'm excited about something or an event. But the truth is I'm a mix of introvert and extrovert. &lt;/p&gt;

&lt;p&gt;I have no problem talking to people and putting myself out there when networking. At the same time, I find a lot of rest and ideas in solitude and deep thought, which, in the age of AI, is so necessary for creativity and original thinking.&lt;/p&gt;

&lt;p&gt;If you consider yourself an introvert, I wouldn't automatically assume DevRel isn't for you. Rather, consider whether you enjoy teaching others and building in public (mostly online), and whether you can translate business goals into content strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Takes to Thrive in DevRel
&lt;/h2&gt;

&lt;p&gt;Based on my journey and what I've seen work for others, here's what helps you succeed:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comfort with ambiguity.&lt;/strong&gt; If you're self-driven, you'll thrive. There's not a lot of hand-holding. You chart your own course.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Goal-setting discipline.&lt;/strong&gt; Break down big goals into a roadmap. Set metrics for yourself, no matter how big or small.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical foundation and love for learning.&lt;/strong&gt; A huge part of this role is learning new technologies so you can connect with different tech communities. If you don't genuinely enjoy learning, this will burn you out. Most DevRel folks have spent years working as engineers before transitioning into the field. The reason is simple—empathy. &lt;/p&gt;

&lt;p&gt;The best way to relate to developer problems and to speak from experience is being in the trenches: writing code, debugging, searching for solutions, getting frustrated, resisting the urge to throw your laptop, going for a walk instead, coming back with one piece of insight that changes your perspective, getting a working solution, and getting to close out your 50 tabs spread across 2-3 screens. &lt;/p&gt;

&lt;p&gt;It's a rite of passage and will help you understand how developers learn best.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding your strengths.&lt;/strong&gt; This role centers around your personal brand and building your credibility. Your company's awareness grows by association with your authentic voice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The ability to simplify complexity.&lt;/strong&gt; You have to take the super cool things and build bridges to people who don't understand them yet. This is where teaching ability matters most.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Authentic relationship building.&lt;/strong&gt; People want to know you care and that what they say is valued. Authenticity beats perfection every time.&lt;/p&gt;

&lt;h2&gt;
  
  
  DevRel's Biggest Stage Yet: The AI Revolution
&lt;/h2&gt;

&lt;p&gt;I recently came across a &lt;a href="https://www.youtube.com/watch?v=2vnjJo_zauI" rel="noopener noreferrer"&gt;talk by Angie Jones&lt;/a&gt;, VP of Engineering at Block, where she refocused our attention to the impact of AI education on DevRel strategy and how AI should not be seen as a threat, but an opportunity to lead the charge forward.&lt;/p&gt;

&lt;p&gt;It completely changed how I think about developer advocacy in the age of AI. She outlined how her team navigated a massive pivot when Block shifted focus to Goose, their AI agent. The lessons she shared resonated deeply with my own experience watching AI transform not just software development, but the entire DevRel landscape.&lt;/p&gt;

&lt;p&gt;Angie's talk crystallized several truths about how AI education has reshaped DevRel's goals and focus:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vulnerability builds trust.&lt;/strong&gt; We all have our areas of expertise in which we can show up as the expert, but AI leveled the playing field, where the only way of maintaining authenticity was to admit that we are not AI experts. We are fellow learners who are in the trenches with you.&lt;/p&gt;

&lt;p&gt;You do have to have a level of confidence in what you're talking about to build trust, but trust is maintained through honesty. Saying "I don't know, but I'll find out" is better than what my aunt often referred to as "speaking from the hip" and leading people down the wrong path.&lt;/p&gt;

&lt;p&gt;Which leads to the next point that resonated with me from the talk.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Approachability is a superpower.&lt;/strong&gt; Developers don't want gurus on pedestals. They want fellow travelers who are a few steps ahead, willing to share the messy process of tinkering with new technology. The emphasis is on being &lt;em&gt;a few steps ahead&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;That's what informs strategy. When we touch on the pulse of the developer community and are able to see how fast or slow the heart is beating for certain topics, we can create content that is ready for the impending demand.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build in public.&lt;/strong&gt; I like when Angie mentions this as the concept of showing your work. This is also tied to trust. Because AI can do so many things including write code, it can be easy to just show the final product, but the true value is in the process. &lt;/p&gt;

&lt;p&gt;This can be intimidating at first, but what I've found is in doing so you answer the question of &lt;em&gt;how&lt;/em&gt;, so users can move quickly to building stuff themselves without getting stuck in theory jail.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Focus on what matters.&lt;/strong&gt; AI forced us to focus on what matters. Yes, this was the case before, but now that we can essentially build anything, we have to finetune our skills as teachers. &lt;/p&gt;

&lt;p&gt;Now that AI is here and has delivered on its promises from previous years—where AI is no longer stuck in a chat window, but now has arms and legs to move with agency across the internet, performing tasks—we have to focus on AI education and AI enablement.&lt;/p&gt;

&lt;h2&gt;
  
  
  Finding Your Path Forward
&lt;/h2&gt;

&lt;p&gt;Looking back at my journey from that first computer science class to where I am now, at the risk of being overly sentimental, I realize that developer advocacy found me as much as I found it. It gave me a way to combine everything I love, including building, teaching, learning, and connecting with people.&lt;/p&gt;

&lt;p&gt;If you're considering DevRel, my advice is simple: start where you are. You don't need to have it all figured out. Share what you're learning. Build something small in public. Connect authentically with one developer at a time. The path will reveal itself.&lt;/p&gt;

&lt;p&gt;And if you're already in DevRel, remember that we're in a unique position right now. We get to shape how an entire generation of developers thinks about and uses AI. That's not just a job. It's a responsibility and an opportunity.&lt;/p&gt;

&lt;p&gt;The industry will keep changing. New technologies will emerge. But the core of what we do, which is helping developers succeed, building trust through authenticity, and making complex things accessible, will always matter.&lt;/p&gt;

</description>
      <category>devrel</category>
      <category>devops</category>
      <category>career</category>
      <category>ai</category>
    </item>
    <item>
      <title>A Deeper Look at LaunchDarkly Architecture: More than Feature Flags</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Tue, 28 Oct 2025 21:15:14 +0000</pubDate>
      <link>https://forem.com/launchdarkly/a-deeper-look-at-launchdarkly-architecture-more-than-feature-flags-2gg0</link>
      <guid>https://forem.com/launchdarkly/a-deeper-look-at-launchdarkly-architecture-more-than-feature-flags-2gg0</guid>
      <description>&lt;p&gt;Originally published in the LaunchDarkly &lt;a href="https://launchdarkly.com/docs/tutorials/ld-arch-deep-dive" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;When developers first encounter LaunchDarkly, they often see it as a feature flag management tool. Turns out calling LaunchDarkly a feature flag tool is like calling a Swiss Army knife "a device for opening wine bottles." That's technically true, and still useful, but you're missing about 90% of the picture. &lt;/p&gt;

&lt;p&gt;LaunchDarkly has quietly evolved into a full feature delivery platform that happens to use flags as the foundation for four interconnected pillars: Release Management, Observability &amp;amp; Monitoring, Analytics &amp;amp; Experimentation, and AI Configs.&lt;br&gt;
Understanding how these pillars work together, including the backend infrastructure, reveals why LaunchDarkly has become mission-critical for modern software delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Foundation: Feature Flag Management
&lt;/h2&gt;

&lt;p&gt;At the heart of LaunchDarkly lies its feature flag management system. Think of feature flags as the control switches for your application's behavior. But unlike traditional configuration management, LaunchDarkly's flags are dynamic, real-time, and incredibly sophisticated.&lt;/p&gt;

&lt;p&gt;Feature flag management serves as the foundation layer because it enables everything else. Without the ability to control feature visibility and behavior at runtime, none of the other pillars could function. This foundation includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://launchdarkly.com/docs/home/flags/create" rel="noopener noreferrer"&gt;&lt;strong&gt;Feature Flags&lt;/strong&gt;&lt;/a&gt;: Binary or multi-variant toggles that control feature availability.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://launchdarkly.com/docs/home/ai-configs" rel="noopener noreferrer"&gt;&lt;strong&gt;AI Configs&lt;/strong&gt;&lt;/a&gt;: Dynamic configuration for AI model parameters and behaviors.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://launchdarkly.com/docs/home/flags/target-rules" rel="noopener noreferrer"&gt;&lt;strong&gt;Targeting Rules&lt;/strong&gt;&lt;/a&gt;: Sophisticated logic for determining who sees what features.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://launchdarkly.com/docs/home/flags/contexts#overview" rel="noopener noreferrer"&gt;&lt;strong&gt;Context Management&lt;/strong&gt;&lt;/a&gt;: User, device, and organizational context for personalized experiences.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I spent an embarrassing amount of time in my hammock thinking about why this is the foundation layer. The answer is simple: without runtime control over features, you're back to deploying code every time you want to change something. And if you've ever been on-call during a Friday deployment that went sideways, you know that's its own level of trauma.&lt;/p&gt;
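&lt;p&gt;To make targeting rules concrete, here is a toy sketch of rule-based evaluation. It is illustrative only: the rule shape and function below are invented for this example and are not LaunchDarkly's evaluation engine, which also supports operators, segments, and percentage rollouts.&lt;/p&gt;

```javascript
// Illustration only: a toy flag evaluator that returns a variation
// based on the first matching targeting rule.
function evaluateFlag(flag, context) {
  if (!flag.on) return flag.offVariation; // kill switch: flag is off
  for (const rule of flag.rules) {
    // A rule matches when the context attribute is in the rule's value set.
    if (rule.values.includes(context[rule.attribute])) return rule.variation;
  }
  return flag.fallthrough; // default served when no rule matches
}

const newCheckout = {
  on: true,
  offVariation: false,
  fallthrough: false,
  rules: [
    { attribute: 'country', values: ['US', 'CA'], variation: true },
    { attribute: 'plan', values: ['enterprise'], variation: true },
  ],
};

console.log(evaluateFlag(newCheckout, { country: 'US', plan: 'free' })); // true
console.log(evaluateFlag(newCheckout, { country: 'DE', plan: 'free' })); // false
```

&lt;p&gt;The point is that the decision happens at evaluation time, per context, which is what makes runtime control possible without a deploy.&lt;/p&gt;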

&lt;h2&gt;
  
  
  The Four Pillars (Or how to sleep through deployments)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9vwshwoz2jx69c0ezk6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9vwshwoz2jx69c0ezk6.png" alt="Image of LaunchDarkly Architecture Overview.." width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Release Management (Yellow)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Release Management pillar focuses on safely delivering features to production. This includes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/releases/releasing" rel="noopener noreferrer"&gt;&lt;strong&gt;Releases&lt;/strong&gt;&lt;/a&gt;: Traditional feature rollouts with full control over timing and audience.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/releases/guarded-rollouts" rel="noopener noreferrer"&gt;&lt;strong&gt;Guarded Rollouts&lt;/strong&gt;&lt;/a&gt;: Progressive rollouts combined with real-time monitoring and automatic rollback capabilities. This is the feature that will single-handedly help you get more sleep. When you enable a guarded rollout, LaunchDarkly monitors metrics like error rates, latency, and custom business metrics. If it detects a regression, it can automatically roll back the change before users are impacted. &lt;/p&gt;
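&lt;p&gt;The decision logic behind a guarded rollout can be sketched as a simple regression check. This is a deliberately naive illustration; LaunchDarkly's actual guardrails use statistical analysis rather than a fixed threshold, and the function below is invented for the example:&lt;/p&gt;

```javascript
// Naive sketch: compare the treatment group's error rate against control
// and signal a rollback when the regression exceeds a tolerance.
function shouldRollBack(control, treatment, tolerance = 0.01) {
  const controlRate = control.errors / control.requests;
  const treatmentRate = treatment.errors / treatment.requests;
  return treatmentRate - controlRate > tolerance; // regression beyond tolerance
}

// 0.5% errors on control vs. 4% on treatment: roll back.
console.log(shouldRollBack(
  { errors: 5, requests: 1000 },
  { errors: 40, requests: 1000 }
)); // true
```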

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/releases/progressive-rollouts" rel="noopener noreferrer"&gt;&lt;strong&gt;Progressive Rollouts&lt;/strong&gt;&lt;/a&gt;: Automated gradual rollouts that increase traffic to a new feature over time (e.g., 10% -&amp;gt; 25% -&amp;gt; 50% -&amp;gt; 100%).&lt;/p&gt;
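&lt;p&gt;The reason a progressive rollout can widen from 10% to 100% without users flip-flopping between variations is deterministic bucketing. As a rough sketch (the hash below is invented for illustration and is not LaunchDarkly's algorithm), each context key maps to a stable bucket that is compared against the current percentage:&lt;/p&gt;

```javascript
// Sketch of deterministic percentage bucketing: the same key always
// lands in the same bucket, so widening the rollout only adds users.
function bucket(key) {
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple stable hash
  return (h % 10000) / 10000; // bucket in [0, 1)
}

function inRollout(key, percent) {
  return bucket(key) < percent / 100;
}

// A user included at 10% is still included at 25%, 50%, and 100%.
console.log(inRollout('user-123', 100)); // true: everyone is in at 100%
```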

&lt;p&gt;The key insight here is Release Management isn't about deploying code anymore. It's about deploying business value while your code sits safely in production, waiting for permission to run.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;2. Observability &amp;amp; Monitoring (Blue)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This pillar answers life's most important production question: "Wait, what's happening right now?" This includes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/observability/session-replay" rel="noopener noreferrer"&gt;&lt;strong&gt;Session Replay&lt;/strong&gt;&lt;/a&gt;: Record and replay user sessions to understand exactly what users experienced. For instance, if the user says a button didn’t work, then you can literally watch what they did.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/releases/feature-monitoring" rel="noopener noreferrer"&gt;&lt;strong&gt;Feature Monitoring&lt;/strong&gt;&lt;/a&gt;: Track feature health, performance, and adoption in real-time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/observability/alerts" rel="noopener noreferrer"&gt;&lt;strong&gt;Alerts&lt;/strong&gt;&lt;/a&gt;: Proactive notifications when metrics breach thresholds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/observability" rel="noopener noreferrer"&gt;&lt;strong&gt;Errors, Logs, Traces&lt;/strong&gt;&lt;/a&gt;: The ultimate trio of debugging, all in one place, all correlated with which flags were active when things went sideways.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/observability/dashboards" rel="noopener noreferrer"&gt;&lt;strong&gt;Dashboards&lt;/strong&gt;&lt;/a&gt;: Customizable visualizations of all observability data.&lt;/p&gt;

&lt;p&gt;What makes LaunchDarkly's observability unique is the &lt;strong&gt;feature-level granularity&lt;/strong&gt;. Traditional monitoring says "error rate increased at 2:47pm." LaunchDarkly says "error rate increased at 2:47pm when you toggled the new-payment-processor flag to 30% rollout." One of these lets you fix the problem from your hammock. The other leads you down a git rabbit hole.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;3. Analytics &amp;amp; Experimentation (Green)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Analytics &amp;amp; Experimentation pillar helps teams make data-driven decisions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/experimentation" rel="noopener noreferrer"&gt;&lt;strong&gt;Experimentation&lt;/strong&gt;&lt;/a&gt;: Full-featured A/B testing and multivariate experiments. Run controlled experiments to measure the impact of features on business metrics.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/product-analytics" rel="noopener noreferrer"&gt;&lt;strong&gt;Product Analytics&lt;/strong&gt;&lt;/a&gt;: Warehouse-native analytics that integrates with your data infrastructure (like Snowflake) to provide deep insights into user behavior.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://launchdarkly.com/docs/home/metrics#overview" rel="noopener noreferrer"&gt;&lt;strong&gt;Metrics&lt;/strong&gt;&lt;/a&gt;: Track both engineering metrics (error rates, latency) and business metrics (conversion, revenue, engagement).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Guarded Rollouts&lt;/strong&gt; also appear in this pillar: while primarily a release mechanism, Guarded Rollouts use Experimentation methodology to automatically detect regressions during rollouts.&lt;/p&gt;

&lt;p&gt;The Experimentation pillar transforms feature flags from simple on/off switches into scientific instruments for measuring impact.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. AI Configs (Purple)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The unveiling of AI Configs marks a shift in LaunchDarkly's focus: from simply creating and storing feature flag values to also managing configuration for Large Language Models (LLMs). This opens up some pretty neat opportunities, like customizing, testing, and rolling out new LLMs.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Pillars are Interconnected
&lt;/h2&gt;

&lt;p&gt;The four pillars aren't just sitting next to each other making awkward small talk. They're in a deeply committed relationship with constant communication. Here's how:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Release Management -&amp;gt; Observability&lt;/strong&gt;: When you toggle a flag or start a rollout, observability tools immediately begin tracking the impact. Error rates, traces, and logs are automatically correlated with the flag change.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability -&amp;gt; Analytics&lt;/strong&gt;: The data collected through monitoring feeds directly into Experimentation and analytics. You're not just watching for errors; you're measuring business impact.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analytics -&amp;gt; Release Management&lt;/strong&gt;: Experiment results inform which variations to roll out. Metrics from guarded rollouts trigger automatic decisions (rollback or continue).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Configs -&amp;gt; All Pillars&lt;/strong&gt;: AI configurations add a dynamic layer across the ecosystem:
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;To Release Management&lt;/strong&gt;: Model versions, prompts, and parameters can be toggled like features, enabling safe AI deployments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;To Observability&lt;/strong&gt;: Track model performance, latency, token usage, and output quality in real-time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;To Analytics&lt;/strong&gt;: A/B test different prompts, models, or parameters to optimize AI outcomes and measure business impact.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature Flags Enable Everything&lt;/strong&gt;: Without the foundational flag management system, none of these capabilities would work. Flags are the control point that makes progressive delivery, real-time monitoring, and controlled Experimentation possible.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Infrastructure: Flag Delivery Network (FDN)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9cdqwcsjn4pie8r0byxm.png" rel="noopener noreferrer" class="article-body-image-wrapper"&gt;&lt;img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9cdqwcsjn4pie8r0byxm.png" alt="Image of Flag Delivery Network Infrastructure."&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Understanding the pillars is only half the story. The infrastructure that delivers flags to your applications is equally crucial. LaunchDarkly's Flag Delivery Network is what makes real-time feature control possible at massive scale.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;What is the FDN?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://launchdarkly.com/docs/home/getting-started/architecture" rel="noopener noreferrer"&gt;Flag Delivery Network&lt;/a&gt; is LaunchDarkly's proprietary infrastructure combining:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LaunchDarkly's core infrastructure&lt;/strong&gt;: Central flag management and rule evaluation engine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content Delivery Network (CDN)&lt;/strong&gt;: Over 100 points of presence globally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-region streaming service&lt;/strong&gt;: Real-time flag updates via persistent connections.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it as a specialized CDN, but instead of delivering static assets, it delivers feature flag configurations and streams real-time updates.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;How the FDN Works&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Flag Creation&lt;/strong&gt;: A developer, PM, or operator creates or modifies a flag in the LaunchDarkly dashboard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Global Distribution&lt;/strong&gt;: The flag configuration is immediately pushed to all CDN edge locations worldwide.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SDK Connection&lt;/strong&gt;: Your application's LaunchDarkly SDK connects to the nearest CDN point of presence.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Initial Load&lt;/strong&gt;: The SDK retrieves all flag configurations and stores them in memory.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-time Updates&lt;/strong&gt;: The SDK maintains a streaming connection; when flags change, updates arrive in milliseconds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local Evaluation&lt;/strong&gt;: Flag rules are evaluated locally in your application, which means no round-trip to LaunchDarkly required for each flag check.&lt;/li&gt;
&lt;/ol&gt;
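&lt;p&gt;To make step 6 concrete, here's a minimal sketch of what local, in-memory flag evaluation looks like conceptually. This is illustrative code for the idea, not the real LaunchDarkly SDK API; the flag shape, rule logic, and key names are simplified assumptions.&lt;/p&gt;

```javascript
// Hypothetical in-memory flag store, as an SDK might cache it after the
// initial load from the FDN. The shape here is a simplification.
const flagStore = {
  "new-checkout-flow": {
    on: true,
    // Simplified targeting rule: serve true to these context keys.
    targets: ["internal-tester-1", "internal-tester-2"],
    // Value served when no targeting rule matches.
    fallthrough: false,
  },
};

function evaluateFlag(flagKey, context, fallback) {
  const flag = flagStore[flagKey];
  if (!flag) return fallback;    // unknown flag: use the caller's fallback
  if (!flag.on) return false;    // flag toggled off
  if (flag.targets.includes(context.key)) return true; // targeted context
  return flag.fallthrough;       // default rule
}

console.log(evaluateFlag("new-checkout-flow", { key: "internal-tester-1" }, false)); // true
console.log(evaluateFlag("missing-flag", { key: "anyone" }, false)); // false (fallback)
```

In the real SDKs the rule evaluation is much richer (percentage rollouts, segments, context attributes), but the key point is the same: each flag check is a local lookup, not a network round-trip.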

&lt;h3&gt;
  
  
  &lt;strong&gt;SDK Modes: Polling vs. Streaming&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;LaunchDarkly SDKs can operate in two modes: streaming mode or polling mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streaming Mode&lt;/strong&gt; (recommended for server-side SDKs):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maintains a persistent connection to the FDN.&lt;/li&gt;
&lt;li&gt;Receives flag updates in real-time.&lt;/li&gt;
&lt;li&gt;Ideal for long-running applications like backend services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Polling Mode&lt;/strong&gt; (common for mobile/client-side SDKs):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Periodically checks for flag updates.&lt;/li&gt;
&lt;li&gt;Lower resource usage on mobile devices.&lt;/li&gt;
&lt;li&gt;Configurable polling interval.&lt;/li&gt;
&lt;/ul&gt;
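&lt;p&gt;The difference between the two modes boils down to how the local flag cache gets updated. Here's a hedged sketch (hypothetical helper names, not the actual SDK internals): polling replaces the whole cache on each fetch, while streaming merges individual flag changes as they arrive.&lt;/p&gt;

```javascript
// Illustrative sketch of the two update strategies (hypothetical helpers,
// not the real LaunchDarkly SDK API).
function applyPollResult(fullPayload) {
  // Polling mode: the SDK periodically fetches the full flag payload and
  // replaces the local cache wholesale.
  return Object.assign({}, fullPayload);
}

function applyStreamPatch(cache, patch) {
  // Streaming mode: the SDK receives only the changed flags over the
  // persistent connection and merges them into the existing cache.
  return Object.assign({}, cache, patch);
}

let cache = { "new-checkout-flow": false, "dark-mode": true };
cache = applyStreamPatch(cache, { "new-checkout-flow": true }); // one flag flips
cache = applyPollResult({ "new-checkout-flow": true, "dark-mode": false });
```

Either way, flag evaluations read from the local cache, so the mode choice affects update freshness and resource usage rather than per-check latency.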

&lt;h2&gt;
  
  
  &lt;strong&gt;The Six Layers of Resilience&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;LaunchDarkly's architecture includes multiple layers of failover to ensure flags are always available:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;In-memory cache&lt;/strong&gt;: SDKs store flags locally; if the network fails, your app continues working.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fallback values&lt;/strong&gt;: Every flag evaluation includes a default fallback value.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CDN redundancy&lt;/strong&gt;: 100+ global Points of Presence (POPs) ensure low latency and high availability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-region infrastructure&lt;/strong&gt;: LaunchDarkly's core systems span multiple cloud regions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SDK resilience&lt;/strong&gt;: Automatic retry logic and circuit breakers.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Relay Proxy option&lt;/strong&gt;: Deploy your own local flag cache for maximum control.&lt;/li&gt;
&lt;/ol&gt;
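&lt;p&gt;Layers 1 and 2 can be sketched in a few lines. This is an illustrative model of the failover behavior, using hypothetical helpers rather than the actual SDK internals: streaming updates refresh an in-memory cache, and a flag check falls back to the last known value, then to the caller-supplied default.&lt;/p&gt;

```javascript
// Illustrative failover sketch (hypothetical names, not the real SDK).
const cache = {};

function onFlagUpdate(flagKey, value) {
  // Called when a streaming update arrives. During an outage this simply
  // stops firing, and the cache keeps serving the last known values.
  cache[flagKey] = value;
}

function variation(flagKey, fallback) {
  if (flagKey in cache) {
    return cache[flagKey]; // layer 1: last known value from memory
  }
  return fallback;         // layer 2: caller-supplied default
}

onFlagUpdate("new-checkout-flow", true);
variation("new-checkout-flow", false); // served from cache: true
variation("never-seen-flag", false);   // never cached: fallback, false
```

Because a flag check never depends on the network at call time, the failure mode of a total outage is "flags stop updating," not "flags stop working."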

&lt;p&gt;What this means in practice: your app never depends on LaunchDarkly being reachable. Flags work even during a complete LaunchDarkly outage.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Putting It All Together: A Real-World Scenario&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Let me walk you through how this actually works in practice. Picture this: your team wants to launch a new checkout flow. In the old days (2023), this would involve:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extensive staging testing.&lt;/li&gt;
&lt;li&gt;A carefully planned deployment window.&lt;/li&gt;
&lt;li&gt;Someone's weekend.&lt;/li&gt;
&lt;li&gt;Hoping for the best.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, it looks more like:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Release&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create feature flag new-checkout-flow.&lt;/li&gt;
&lt;li&gt;Configure AI configs for the personalized product recommendation model (prompt version, temperature, model selection).&lt;/li&gt;
&lt;li&gt;Configure targeting rules (start with your internal test accounts, because you're not a monster).&lt;/li&gt;
&lt;li&gt;Start a guarded rollout to 10% of production traffic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Flag Delivery&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flag configuration and AI Config hit the FDN globally (this happens in milliseconds, which is faster than you can say "Jenkins pipeline").&lt;/li&gt;
&lt;li&gt;Your mobile and web SDKs receive the update via their streaming connections.&lt;/li&gt;
&lt;li&gt;Flags are evaluated locally in-memory, adding exactly zero latency to your checkout flow.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Observability Does Its Thing&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Session replay starts capturing user interactions with the new flow.&lt;/li&gt;
&lt;li&gt;Traces show checkout API performance in real-time.&lt;/li&gt;
&lt;li&gt;Error monitoring tracks any exceptions (there will be exceptions, there are always exceptions).&lt;/li&gt;
&lt;li&gt;Dashboards update with adoption metrics and AI performance indicators.&lt;/li&gt;
&lt;li&gt;Everything comes back as green.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Analytics &amp;amp; Decision Making&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Metrics automatically track conversion rate, cart abandonment, purchase value.&lt;/li&gt;
&lt;li&gt;AI-specific metrics measure recommendation click-through rates and relevance scores.&lt;/li&gt;
&lt;li&gt;Guarded rollout monitors for regressions (error rate, latency, AI hallucinations, angry user emails).&lt;/li&gt;
&lt;li&gt;A/B test different prompt variations or model parameters to optimize recommendations.&lt;/li&gt;
&lt;li&gt;If metrics look good: automatic progression to 25% -&amp;gt; 50% -&amp;gt; 100%.&lt;/li&gt;
&lt;li&gt;If metrics look bad: automatic rollback before your on-call engineer finishes their coffee.&lt;/li&gt;
&lt;li&gt;You receive a Slack notification.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 5: The Result&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Feature safely released with automated safeguards.&lt;/li&gt;
&lt;li&gt;AI models deployed and optimized without risking production.&lt;/li&gt;
&lt;li&gt;Full visibility into user experience and system health.&lt;/li&gt;
&lt;li&gt;Data-driven decision on whether to keep the feature and which AI configuration performs best.&lt;/li&gt;
&lt;li&gt;Zero code deploys after the initial setup.&lt;/li&gt;
&lt;li&gt;You solved this entire problem without leaving your hammock.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why This Architecture Matters&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;LaunchDarkly's architecture represents a fundamental shift in how we think about software delivery:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Traditional Approach&lt;/strong&gt;: Deploy -&amp;gt; Hope -&amp;gt; React to incidents -&amp;gt; Debug -&amp;gt; Fix -&amp;gt; Deploy again.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;LaunchDarkly Approach&lt;/strong&gt;: Deploy once -&amp;gt; Control via flags -&amp;gt; Monitor continuously -&amp;gt; Experiment safely -&amp;gt; Optimize based on data.&lt;/p&gt;

&lt;p&gt;The key innovation is making &lt;strong&gt;the feature flag the control point&lt;/strong&gt; for delivery, observability, Experimentation, and AI Configs. This creates a closed feedback loop:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Release Management&lt;/strong&gt; provides the controls (the steering wheel).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability&lt;/strong&gt; provides the visibility (the headlights and dashboard).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analytics&lt;/strong&gt; provides the insights (the GPS telling you which route is actually faster).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Configs&lt;/strong&gt; provide the intelligence (the adaptive cruise control that adjusts based on conditions).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feature flags&lt;/strong&gt; tie it all together (they're the guardrails, ensuring the car stays on the road).&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;LaunchDarkly isn't just a feature flag tool; it's a complete feature delivery platform built on four interconnected pillars. The Flag Delivery Network ensures that control decisions made in the dashboard reach your applications globally in milliseconds, while multiple layers of resilience guarantee availability even during outages.&lt;br&gt;
Understanding this architecture helps explain why LaunchDarkly has become essential infrastructure for companies that need to ship software quickly without breaking things.&lt;/p&gt;

&lt;p&gt;The combination of real-time control, comprehensive observability, data-driven experimentation, and intelligent AI configuration management, built on a foundation of reliable, fast feature flag evaluation, enables true continuous delivery. &lt;/p&gt;

&lt;p&gt;Whether you're managing a handful of flags, orchestrating complex progressive rollouts across a global user base, or safely deploying and iterating on AI models with the same rigor as your traditional features, LaunchDarkly's architecture scales to meet your needs while keeping the developer experience simple and the operational risk low.&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>observability</category>
      <category>releases</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Add observability to your React Native application in 5 minutes</title>
      <dc:creator>Alexis Roberson</dc:creator>
      <pubDate>Wed, 01 Oct 2025 19:31:44 +0000</pubDate>
      <link>https://forem.com/launchdarkly/add-observability-to-your-react-native-application-in-5-minutes-9nn</link>
      <guid>https://forem.com/launchdarkly/add-observability-to-your-react-native-application-in-5-minutes-9nn</guid>
      <description>&lt;p&gt;Originally published in the LaunchDarkly &lt;a href="https://launchdarkly.com/docs/tutorials/react-native-observability" rel="noopener noreferrer"&gt;Docs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In modern application development, feature flags are the guardrails that keep experiments controlled and rollbacks safe when conditions shift. If feature flags act as the guardrails, observability provides the visibility: the headlights (traces), mirrors (logs), and dashboard instruments (metrics) that reveal what’s happening in the environment and how well a feature is performing. Together, feature flags and observability unlock powerful insights by correlating code changes with real-time system behavior. This combination reduces time-to-diagnosis and builds greater confidence when rolling out new features.&lt;/p&gt;

&lt;p&gt;In this post, we’ll walk through just how to add observability to a React Native application using LaunchDarkly’s observability SDK. To demonstrate the process, we’ll build on the PlusOne app, a simple counter app that includes increment (+1), reset, and error-triggering buttons. This lightweight demo provides a clean foundation to showcase how logs, traces, and errors can seamlessly flow into LaunchDarkly for monitoring and debugging.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbto4v541tqqyorr5al1i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbto4v541tqqyorr5al1i.png" alt="Screenshot final result of PlusOne app." width="694" height="1374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;LaunchDarkly account. &lt;a href="https://app.launchdarkly.com/signup" rel="noopener noreferrer"&gt;Sign up for a free one here&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Visual Studio Code or another code editor of your choice.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All code from this tutorial can be found &lt;a href="https://github.com/arober39/PlusOne" rel="noopener noreferrer"&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up your environment
&lt;/h2&gt;

&lt;p&gt;Before running a React Native app, make sure your development environment is set up correctly. &lt;a href="https://reactnative.dev/docs/set-up-your-environment?platform=android#cocoapods" rel="noopener noreferrer"&gt;You can find the full setup instructions for both Android and iOS here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In this tutorial, we'll be targeting iOS, but keep in mind that Expo Orbit, the tool we'll use to launch the iOS simulator, requires both Xcode and Android Studio to be installed.&lt;/p&gt;

&lt;p&gt;After going through the instructions, you should have the following installed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Node.js (preferably via &lt;a href="https://www.freecodecamp.org/news/node-version-manager-nvm-install-guide/" rel="noopener noreferrer"&gt;nvm&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Watchman for file monitoring.&lt;/li&gt;
&lt;li&gt;JDK via the Azul Zulu distribution.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.android.com/studio/index.html" rel="noopener noreferrer"&gt;Android Studio&lt;/a&gt;. Don’t forget to set your ANDROID_HOME environment variable.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://apps.apple.com/us/app/xcode/id497799835?mt=12" rel="noopener noreferrer"&gt;Xcode&lt;/a&gt; for the iOS simulator.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://cocoapods.org/" rel="noopener noreferrer"&gt;CocoaPods&lt;/a&gt; for iOS dependency management.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://expo.dev/orbit" rel="noopener noreferrer"&gt;Expo Orbit&lt;/a&gt; for running Expo apps on Android or iOS.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're using Android, don't forget to add your environment variables to your bash or zsh profile.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export ANDROID_HOME=$HOME/Library/Android/sdk
export PATH=$PATH:$ANDROID_HOME/emulator
export PATH=$PATH:$ANDROID_HOME/platform-tools
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Starting up the PlusOne app
&lt;/h2&gt;

&lt;p&gt;To get started, let’s clone the &lt;a href="https://github.com/arober39/PlusOne#" rel="noopener noreferrer"&gt;repo&lt;/a&gt; for the PlusOne app and run &lt;code&gt;npm install&lt;/code&gt; to ensure the proper dependencies are present in our node_modules directory.&lt;/p&gt;

&lt;p&gt;Clone the repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/arober39/PlusOne
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install dependencies using npm&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd PlusOne
npm install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We’ll also need to run both the prebuild command, which generates the native ios directory, and the Expo run command, which launches the iOS simulator.&lt;/p&gt;

&lt;p&gt;Prebuild for iOS&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npx expo prebuild
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run expo app&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm expo run:ios
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can view the iOS app in the iPhone simulator using npm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# iOS
npm run ios

# Android
npm run android
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The app should look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbto4v541tqqyorr5al1i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbto4v541tqqyorr5al1i.png" alt="Screenshot final result of PlusOne app." width="694" height="1374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Feel free to interact with the app to ensure all is working as expected.&lt;/p&gt;

&lt;p&gt;As you can see in the code, we have three buttons: one that adds one to the displayed count, one that resets the count to zero, and an Error button that intentionally throws an error to test error monitoring within the LaunchDarkly UI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/index.tsx&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;StyleSheet&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;TouchableOpacity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;View&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react-native&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Index&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setCount&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handleReset&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setCount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handleIncrement&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setCount&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;prev&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;prev&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;triggerRecordedError&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Simulated controlled error from Plus One app&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nf"&gt;alert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;You intentionally threw an error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;};&lt;/span&gt;

 &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;View&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
     &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;View&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;header&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
       &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Text&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headerText&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Plus&lt;/span&gt; &lt;span class="nx"&gt;One&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/Text&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;     &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/View&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;     &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;View&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;counterWrapper&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
       &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Text&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;counterText&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;count&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/Text&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;     &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/View&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;     &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;View&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;actionsRow&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
       &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ButtonBox&lt;/span&gt; &lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Reset&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;onPress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleReset&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;       &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ButtonBox&lt;/span&gt; &lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;+1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;onPress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;handleIncrement&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;       &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ButtonBox&lt;/span&gt; &lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="nx"&gt;onPress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;triggerRecordedError&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;     &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/View&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;   &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/View&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt; &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;ButtonBoxProps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="na"&gt;label&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
 &lt;span class="nl"&gt;onPress&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;ButtonBox&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;onPress&lt;/span&gt; &lt;span class="p"&gt;}:&lt;/span&gt; &lt;span class="nx"&gt;ButtonBoxProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;TouchableOpacity&lt;/span&gt; &lt;span class="nx"&gt;onPress&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;onPress&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;button&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="nx"&gt;activeOpacity&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="mf"&gt;0.8&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
     &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Text&lt;/span&gt; &lt;span class="nx"&gt;style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;buttonText&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;label&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/Text&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;   &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/TouchableOpacity&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt; &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;span class="cm"&gt;/* The rest of the application code */&lt;/span&gt; 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that we have verified a working app, we can add observability support by downloading the observability React Native &lt;a href="https://launchdarkly.com/docs/sdk/observability/react-native" rel="noopener noreferrer"&gt;SDK&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Install LaunchDarkly SDK dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install @launchdarkly/react-native-client-sdk
npm install @launchdarkly/observability-react-native
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, you’ll need to initialize the React Native LD client in the _layout file. Replace the contents of the layout file with the following code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// app/_layout.tsx&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Observability&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@launchdarkly/observability-react-native&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;AutoEnvAttributes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LDOptions&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;LDProvider&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ReactNativeLDClient&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@launchdarkly/react-native-client-sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Stack&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;expo-router&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;LDOptions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="na"&gt;applicationInfo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Plus-One&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Sample Application&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1.0.0&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="na"&gt;versionName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;v1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="p"&gt;},&lt;/span&gt;
 &lt;span class="na"&gt;debug&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
   &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Observability&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
     &lt;span class="na"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my-react-native-app&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="na"&gt;serviceVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;1.0.0&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="p"&gt;})&lt;/span&gt;
 &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userContext&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;test-hello&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;RootLayout&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setClient&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ReactNativeLDClient&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="c1"&gt;// Initialize client&lt;/span&gt;
   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;featureClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ReactNativeLDClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mob-abc123&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="nx"&gt;AutoEnvAttributes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Enabled&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="p"&gt;);&lt;/span&gt;

   &lt;span class="nx"&gt;featureClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;identify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;userContext&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="na"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;any&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

   &lt;span class="nf"&gt;setClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;featureClient&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

   &lt;span class="c1"&gt;// Cleanup function that runs when component unmounts&lt;/span&gt;
   &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;featureClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
   &lt;span class="p"&gt;};&lt;/span&gt;
 &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;

 &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
   &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;LDProvider&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
     &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Stack&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;
   &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/LDProvider&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt; &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code, you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Imported the Observability SDK along with a few LD libraries used to add options and attributes to the LD client.&lt;/li&gt;
&lt;li&gt;Initialized the SDK and &lt;a href="https://launchdarkly.github.io/observability-sdk/sdk/@launchdarkly/observability-react-native/interfaces/ReactNativeOptions.html" rel="noopener noreferrer"&gt;plugin options&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Defined the user &lt;a href="https://launchdarkly.com/docs/home/flags/contexts" rel="noopener noreferrer"&gt;context&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Lastly, initialized the client.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that you have defined your LD React Native client, you can implement different observability methods within your application logic. &lt;/p&gt;

&lt;p&gt;We can do this by importing the LDObserve library in the app/_layout.tsx file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LDObserve&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@launchdarkly/observability-react-native&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, call the recordError() method within the triggerRecordedError function in the app/_layout.tsx file. This sends error events back to the LD UI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;triggerRecordedError&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Simulated controlled error from Plus One app&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="nx"&gt;LDObserve&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;recordError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;feature&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test-button&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
     &lt;span class="nf"&gt;alert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;You intentionally threw an error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="p"&gt;}&lt;/span&gt;
 &lt;span class="p"&gt;};&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
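&lt;p&gt;Beyond errors, the same pattern extends to custom metrics. The sketch below is hypothetical: the &lt;code&gt;LDObserve.recordMetric&lt;/code&gt; call and its payload shape are assumptions based on LaunchDarkly's observability metrics docs, so verify them against the current API, and the &lt;code&gt;incrementMetric&lt;/code&gt; helper is introduced here purely to keep the instrumentation easy to test.&lt;/p&gt;

```typescript
// Sketch: recording a custom metric each time the counter increments.
// NOTE: LDObserve.recordMetric and its payload shape are assumptions
// based on LaunchDarkly's observability metrics docs; verify against
// the current SDK API. incrementMetric is a helper introduced here.

type MetricPayload = { name: string; value: number };

// Pure helper: building the payload separately keeps it easy to unit test.
function incrementMetric(count: number): MetricPayload {
  return { name: 'counter.increment', value: count };
}

// Inside the component, the handler would update state and record the metric:
//
// const handleIncrement = () => {
//   const next = count + 1;
//   setCount(next);
//   LDObserve.recordMetric(incrementMetric(next)); // assumed API
// };
```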



&lt;p&gt;Before you can receive data in the LD UI, you’ll need to add your mobile key to the React Native LD client. You can find the key by logging in to the LD UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g5pjgutbxvsn7m0v895.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g5pjgutbxvsn7m0v895.png" alt="Screenshot of Sign in page" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once logged in, click the settings button at the bottom left.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy33si9hs6hmubjnhr9lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy33si9hs6hmubjnhr9lw.png" alt="Screenshot of landing page after sign in" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the Projects page and click Create to start a new project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h0sbd3so638gk57gml6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6h0sbd3so638gk57gml6.png" alt="Screenshot of Project page." width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Define the new Project and click Create Project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmzb25558g8i41acm9y9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmmzb25558g8i41acm9y9.png" alt="Screenshot of New project widget page." width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, define the environment where you would like your data to be sent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuh0s6pnqbrttp13l8vwh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuh0s6pnqbrttp13l8vwh.png" alt="Screenshot of page to create new environment." width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, grab the mobile key by pressing the three dots next to the environment and selecting the mobile key, which copies the key to your clipboard. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i8kemq5jgm9hqt4xqo1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i8kemq5jgm9hqt4xqo1.png" alt="Screenshot of steps to copy mobile key" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, add it to the app/_layout file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;   &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;featureClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ReactNativeLDClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
     &lt;span class="err"&gt;‘&lt;/span&gt;&lt;span class="nx"&gt;mob&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;abc123&lt;/span&gt;&lt;span class="err"&gt;’&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="nx"&gt;AutoEnvAttributes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Enabled&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
     &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
   &lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
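&lt;p&gt;Rather than hardcoding the key, you may prefer to load it from an Expo public environment variable. This is a sketch under the assumption that the app is built with Expo, as in this tutorial: &lt;code&gt;EXPO_PUBLIC_LD_MOBILE_KEY&lt;/code&gt; is a made-up variable name, and &lt;code&gt;EXPO_PUBLIC_&lt;/code&gt;-prefixed variables are inlined by Expo at build time, so treat this as keeping the key out of source control rather than making it secret.&lt;/p&gt;

```typescript
// Sketch: resolve the LaunchDarkly mobile key from Expo's public env vars
// instead of hardcoding it. EXPO_PUBLIC_LD_MOBILE_KEY is a hypothetical
// variable name; define it in your .env file or build environment.

function resolveMobileKey(env: Record<string, string | undefined>): string {
  const key = env.EXPO_PUBLIC_LD_MOBILE_KEY;
  if (!key) {
    // Warn loudly in development rather than silently initializing
    // the client with an empty key.
    console.warn('EXPO_PUBLIC_LD_MOBILE_KEY is not set');
    return '';
  }
  return key;
}

// Usage in app/_layout.tsx:
// const featureClient = new ReactNativeLDClient(
//   resolveMobileKey(process.env),
//   AutoEnvAttributes.Enabled,
//   options,
// );
```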



&lt;p&gt;Finally, you can generate data by interacting with your app in the iOS Simulator.&lt;/p&gt;

&lt;p&gt;Feel free to restart the app to ensure data is displaying in real time.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm expo run:ios
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you navigate back to the LD UI, you should be able to see the logs, traces, and errors under the Monitor section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Logs
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xm0in4i7991ewmwv2jc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xm0in4i7991ewmwv2jc.png" alt="Screenshot of final logs page." width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Traces
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz2ux27vj6kwebw8wyc7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz2ux27vj6kwebw8wyc7.png" alt="Screenshot of final traces page." width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Errors
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0yywi6xwmy9h26wtbgz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0yywi6xwmy9h26wtbgz.png" alt="Screenshot of final error page." width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In just a few minutes, we’ve taken the PlusOne React Native app from a simple counter to a fully observable application connected to LaunchDarkly. By setting up the SDK, initializing observability plugins, and recording errors, we now have a live feedback loop where application behavior is visible in the LaunchDarkly UI. This makes it far easier to diagnose issues, validate feature flag rollouts, and ensure smooth user experiences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;Looking ahead, there are many ways to expand on what we’ve built by including features like recording &lt;a href="https://launchdarkly.com/docs/sdk/features/observability-metrics" rel="noopener noreferrer"&gt;custom metrics&lt;/a&gt; and &lt;a href="https://launchdarkly.com/docs/sdk/features/session-replay-config" rel="noopener noreferrer"&gt;session replay&lt;/a&gt;, which provide even deeper insights into app behavior. By integrating observability at the foundation of your React Native projects, you equip your team with the clarity needed to debug faster, ship features more confidently, and deliver reliable experiences to your users.&lt;/p&gt;

&lt;p&gt;You can also read &lt;a href="https://launchdarkly.com/blog/welcome-highlight-to-launchdarkly/" rel="noopener noreferrer"&gt;this article&lt;/a&gt; to learn more about observability and guarded releases. &lt;/p&gt;

</description>
      <category>observability</category>
      <category>reactnative</category>
      <category>sdk</category>
      <category>ios</category>
    </item>
  </channel>
</rss>
