<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Jyoti Thakur</title>
    <description>The latest articles on Forem by Jyoti Thakur (@thakurjyoti05).</description>
    <link>https://forem.com/thakurjyoti05</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1998987%2Fc166c94c-294e-4d4d-a35e-3d02c6fda3f4.jpeg</url>
      <title>Forem: Jyoti Thakur</title>
      <link>https://forem.com/thakurjyoti05</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/thakurjyoti05"/>
    <language>en</language>
    <item>
      <title>Building AI Agents with Strands Agents: My Hands-On Experience from the AWS BeSA Workshop</title>
      <dc:creator>Jyoti Thakur</dc:creator>
      <pubDate>Mon, 09 Mar 2026 16:10:57 +0000</pubDate>
      <link>https://forem.com/thakurjyoti05/building-ai-agents-with-strands-agents-my-hands-on-experience-from-the-aws-besa-workshop-9bk</link>
      <guid>https://forem.com/thakurjyoti05/building-ai-agents-with-strands-agents-my-hands-on-experience-from-the-aws-besa-workshop-9bk</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Artificial Intelligence is evolving rapidly, and one of the most exciting developments is the rise of &lt;strong&gt;&lt;a href="https://en.wikipedia.org/wiki/AI_agent" rel="noopener noreferrer"&gt;AI agents&lt;/a&gt;&lt;/strong&gt;: systems that can reason, plan, and take actions autonomously.&lt;/p&gt;

&lt;p&gt;Recently, I participated in the &lt;strong&gt;BeSA Workshop&lt;/strong&gt;, where we explored how to build intelligent agents using the &lt;strong&gt;&lt;a href="https://strandsagents.com" rel="noopener noreferrer"&gt;Strands Agents&lt;/a&gt; framework&lt;/strong&gt;. The workshop focused on hands-on labs that demonstrated how AI agents can process tasks, interact with tools, and perform multi-step reasoning.&lt;/p&gt;

&lt;p&gt;In this article, I will walk through my experience completing the workshop labs and explain how each lab helped me understand the concepts behind building AI agents.&lt;/p&gt;

&lt;p&gt;By the end of this article, you will have a clear understanding of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What Agentic AI is&lt;/li&gt;
&lt;li&gt;How Strands Agents work&lt;/li&gt;
&lt;li&gt;How to build and run AI agents step by step&lt;/li&gt;
&lt;li&gt;Key insights from each workshop lab&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is Agentic AI?
&lt;/h2&gt;

&lt;p&gt;Agentic AI refers to artificial intelligence systems that can &lt;strong&gt;autonomously plan and execute tasks&lt;/strong&gt; to achieve a goal.&lt;/p&gt;

&lt;p&gt;Unlike traditional AI models that simply respond to prompts, AI agents can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand complex instructions&lt;/li&gt;
&lt;li&gt;Break tasks into smaller steps&lt;/li&gt;
&lt;li&gt;Use tools or external systems&lt;/li&gt;
&lt;li&gt;Make decisions during execution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach allows AI systems to behave more like &lt;strong&gt;intelligent assistants capable of performing real tasks&lt;/strong&gt; rather than just generating text responses.&lt;/p&gt;
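&lt;p&gt;To make the loop concrete, here is a minimal, framework-free sketch of the plan, act, observe cycle described above. Every name in it (the planner, the tool registry, the calculator) is invented for illustration; a real framework such as Strands replaces the hard-coded planning rules with an LLM.&lt;/p&gt;

```python
# A minimal, illustrative agentic loop: plan, act with tools, observe.
# All names here are hypothetical; real frameworks like Strands use an
# LLM for the planning step instead of hard-coded rules.

def plan(goal):
    """Break a goal into tool-call steps (a real agent would ask an LLM)."""
    if "add" in goal:
        return [("calculator", goal)]
    return [("respond", goal)]

def calculator(task):
    # Naive parser for "add X and Y" style tasks.
    numbers = [int(word) for word in task.split() if word.isdigit()]
    return sum(numbers)

TOOLS = {"calculator": calculator, "respond": lambda t: f"Echo: {t}"}

def run_agent(goal):
    observations = []
    for tool_name, task in plan(goal):
        result = TOOLS[tool_name](task)   # act: invoke the chosen tool
        observations.append(result)       # observe: record the outcome
    return observations[-1]               # final answer

print(run_agent("add 2 and 40"))  # → 42
```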

&lt;h2&gt;
  
  
  What Are Strands Agents?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://strandsagents.com" rel="noopener noreferrer"&gt;Strands Agents&lt;/a&gt; is a framework designed to help developers build &lt;strong&gt;AI-powered agents&lt;/strong&gt; capable of reasoning and performing tasks.&lt;/p&gt;

&lt;p&gt;Instead of manually managing prompts and logic, the framework allows developers to define:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agents&lt;/li&gt;
&lt;li&gt;Tools&lt;/li&gt;
&lt;li&gt;Workflows&lt;/li&gt;
&lt;li&gt;Task execution logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes it easier to design AI systems that can &lt;strong&gt;solve problems step by step and interact with external services&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workshop Overview
&lt;/h2&gt;

&lt;p&gt;The BeSA workshop consisted of multiple hands-on labs designed to gradually introduce the concepts of building AI agents.&lt;/p&gt;

&lt;p&gt;Each lab focused on a specific capability, starting from basic agent creation and moving toward more advanced agent workflows.&lt;/p&gt;

&lt;p&gt;The labs covered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agent initialization&lt;/li&gt;
&lt;li&gt;Prompt handling&lt;/li&gt;
&lt;li&gt;Tool integration&lt;/li&gt;
&lt;li&gt;Multi-step task execution&lt;/li&gt;
&lt;li&gt;Advanced agent workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  Disclaimer
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;This article reflects my experience participating in the BeSA Workshop. While I have summarized and explained the labs in my own words, some technical explanations and descriptions were guided with external assistance to ensure clarity. The concepts, architecture diagrams, and lab exercises are based on official AWS BeSA Workshop materials.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Lab 1: Building Your First Strands Agent&lt;/li&gt;
&lt;li&gt;Lab 2: Running Agents Locally with Ollama&lt;/li&gt;
&lt;li&gt;Lab 3: Integrating AI Agents with AWS Services&lt;/li&gt;
&lt;li&gt;Lab 4: Integrating MCP Servers with Strands Agents&lt;/li&gt;
&lt;li&gt;Lab 5: Streaming Agent Responses&lt;/li&gt;
&lt;li&gt;Lab 6: Securing Agents with Guardrails&lt;/li&gt;
&lt;li&gt;Lab 7: Adding Persistent Memory to Agents&lt;/li&gt;
&lt;li&gt;Lab 8: Observability and Evaluation for Agents&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Lab 1: Building Your First Strands Agent
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The goal of this lab was to understand the basic structure of the &lt;a href="https://strandsagents.com" rel="noopener noreferrer"&gt;Strands Agents&lt;/a&gt; framework and how AI agents are initialized.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltsl7agis72v5mjtng6d.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltsl7agis72v5mjtng6d.PNG" alt=" " width="800" height="461"&gt;&lt;/a&gt;&lt;br&gt;
The architecture shows how the agent interacts with the language model to process user requests.&lt;br&gt;
The workshop begins inside JupyterLab, where the first notebook introduces the most fundamental concept of the Strands SDK: the Agent primitive.&lt;/p&gt;

&lt;p&gt;What makes this part exciting is how quickly you can build a working AI agent.&lt;/p&gt;

&lt;p&gt;With just a few lines of Python, you can create an agent powered by Claude Sonnet 4 running on &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;
&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful assistant that provides concise responses.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Tell me a joke.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s all it takes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No infrastructure setup&lt;/li&gt;
&lt;li&gt;No API Gateway configuration&lt;/li&gt;
&lt;li&gt;No complex deployment steps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By default, the Strands SDK connects to &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt; and automatically uses Claude Sonnet 4 in the AWS region configured in your account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Adding Tools to Your Agent
&lt;/h3&gt;

&lt;p&gt;Once the basic agent is running, the lab walks through how to extend the agent’s capabilities using tools.&lt;/p&gt;

&lt;p&gt;There are two types of tools you can add:&lt;/p&gt;

&lt;p&gt;1️⃣ Built-in Tools&lt;/p&gt;

&lt;p&gt;The strands-agents-tools package provides ready-to-use utilities like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Calculator&lt;/li&gt;
&lt;li&gt;Other helper tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools allow your agent to perform tasks beyond simple text generation.&lt;/p&gt;

&lt;p&gt;2️⃣ Custom Tools&lt;/p&gt;

&lt;p&gt;You can also create your own tools using the @tool decorator.&lt;/p&gt;

&lt;p&gt;This allows the AI agent to call your Python functions whenever it needs them.&lt;/p&gt;
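&lt;p&gt;The pattern behind the @tool decorator can be illustrated with plain Python: a decorator that registers a function, along with its docstring as a description, in a registry the agent can consult. This is a conceptual sketch only, not the actual Strands implementation, which also extracts parameter schemas for the model.&lt;/p&gt;

```python
# Conceptual sketch of a @tool-style decorator: it registers a plain
# Python function so an agent can discover and call it by name.
# This mimics the pattern only; the real Strands @tool also builds a
# model-facing parameter schema from the function signature.

TOOL_REGISTRY = {}

def tool(func):
    """Register a function as an agent-callable tool."""
    TOOL_REGISTRY[func.__name__] = {
        "callable": func,
        "description": (func.__doc__ or "").strip(),
    }
    return func

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

# The agent (or application code) can now look the tool up by name.
entry = TOOL_REGISTRY["word_count"]
print(entry["description"])                    # Count the words in a piece of text.
print(entry["callable"]("hello agent world"))  # 3
```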

&lt;h3&gt;
  
  
  Calling Tools Directly
&lt;/h3&gt;

&lt;p&gt;The notebook also demonstrates that tools can be triggered directly from code.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;calculator&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gives developers flexibility to use tools either through the agent’s reasoning process or directly in the application code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this lab, we initialized our first agent and explored how the framework handles prompts and responses.&lt;/p&gt;

&lt;p&gt;The notebook demonstrates how an agent receives input, processes it using the underlying model, and returns a generated output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample Execution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9z9j09cje0uj6zakdwse.PNG" alt="Lab 1 Sample Execution 1" width="800" height="358"&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7gg8rl4ek12i0rbpnboj.PNG" alt="Lab 1 Sample Execution 2" width="800" height="311"&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopmvcvazkyircpcnfrjy.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopmvcvazkyircpcnfrjy.PNG" alt="Lab 1 Sample Execution 3" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Learning
&lt;/h2&gt;

&lt;p&gt;Lab 1 demonstrates how the Strands SDK makes it incredibly simple to build AI agents.&lt;/p&gt;

&lt;p&gt;Instead of worrying about infrastructure or complex setup, developers can focus on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Building intelligent workflows&lt;/li&gt;
&lt;li&gt;Adding useful tools&lt;/li&gt;
&lt;li&gt;Designing agent behavior&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This simplicity is what makes the framework powerful for rapidly building AI-driven applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lab 2: Running Agents Locally with Ollama
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The goal of this lab was to demonstrate that not every AI agent needs to rely on cloud services.&lt;br&gt;
In some situations, such as offline environments, privacy-sensitive tasks, or cost-efficient development, running models locally can be a better option.&lt;/p&gt;

&lt;p&gt;In this lab, we replace Amazon Bedrock with &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt;, allowing the agent to run entirely on a local machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1im68qd9rftp4na3gpf9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1im68qd9rftp4na3gpf9.PNG" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Strands SDK supports multiple model providers through a flexible model abstraction layer. This means developers can easily switch between different model backends without changing the overall agent logic.&lt;/p&gt;

&lt;p&gt;In this lab, the agent is configured to use a local model through the OllamaModel provider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands.models.ollama&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OllamaModel&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;ollama_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OllamaModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;llama3.2:3b&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:11434&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;ollama_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;file_read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;file_write&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;list_directory&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s what’s happening:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The agent connects to a locally running &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; server.&lt;/li&gt;
&lt;li&gt;The model used is Llama 3.2 with 3B parameters.&lt;/li&gt;
&lt;li&gt;The agent is equipped with tools that allow it to interact with the local file system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These tools include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;file_read → read files (including PDFs)&lt;/li&gt;
&lt;li&gt;file_write → create or modify files&lt;/li&gt;
&lt;li&gt;list_directory → explore folders&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With these capabilities, the agent becomes a local file operations assistant. It can perform tasks such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Summarizing documents like a shareholder letter&lt;/li&gt;
&lt;li&gt;Generating a project README&lt;/li&gt;
&lt;li&gt;Creating or editing files automatically&lt;/li&gt;
&lt;li&gt;Navigating directories on your system&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The most interesting part is that everything happens locally, meaning no data is sent to external APIs or cloud services.&lt;/p&gt;
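&lt;p&gt;The file tools above can be pictured as ordinary Python functions that the local agent is allowed to call. The sketch below uses simplified stand-ins (not the real strands_tools implementations) just to show their shape; note that nothing here touches the network.&lt;/p&gt;

```python
# Illustrative stand-ins for local file tools an agent might call.
# These are simplified sketches, not the real strands_tools functions.
import os
import tempfile

def file_write(path, content):
    """Create or overwrite a file with the given content."""
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)

def file_read(path):
    """Return the text content of a file."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

def list_directory(path):
    """List entries in a folder, sorted for stable output."""
    return sorted(os.listdir(path))

# Everything stays on the local machine: no data leaves the system.
with tempfile.TemporaryDirectory() as workdir:
    readme = os.path.join(workdir, "README.md")
    file_write(readme, "# My Project\n")
    print(file_read(readme))        # # My Project
    print(list_directory(workdir))  # ['README.md']
```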

&lt;p&gt;&lt;strong&gt;Sample Execution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqjn85ffcm0z3rci7h08.PNG" alt="Lab 2 Sample Execution 1" width="800" height="340"&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bpjv6oa1k8ff6r8k5wl.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bpjv6oa1k8ff6r8k5wl.PNG" alt="Lab 2 Sample Execution 2" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Learning
&lt;/h2&gt;

&lt;p&gt;One of the biggest advantages of the Strands SDK is model portability. Thanks to its provider abstraction layer, switching between models is very simple. For example, you can move from Claude on &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt; to Llama running locally on &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; with only a small code change. This flexibility allows developers to run agents in the cloud or locally depending on needs such as privacy, offline usage, or cost-efficient development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lab 3: Integrating AI Agents with AWS Services
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The objective of this lab was to build a Restaurant Assistant that can answer menu-related questions and manage table reservations.&lt;/p&gt;

&lt;p&gt;To achieve this, the agent integrates with two AWS managed services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt; Knowledge Bases for retrieving information from restaurant menus&lt;/li&gt;
&lt;li&gt;Amazon DynamoDB for handling reservation data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This lab demonstrates how agents can interact with external systems using tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96hozh9a4ntku1nckzgj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F96hozh9a4ntku1nckzgj.PNG" alt=" " width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1️⃣ Agent Layer&lt;br&gt;
The assistant is built using the &lt;a href="https://strandsagents.com" rel="noopener noreferrer"&gt;Strands Agents&lt;/a&gt; SDK with a model running on Amazon Bedrock.&lt;br&gt;
The agent receives user requests and decides which tool to use.&lt;/p&gt;

&lt;p&gt;2️⃣ Knowledge Retrieval (RAG)&lt;br&gt;
For menu-related questions, the agent uses the retrieve tool connected to &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt; Knowledge Bases to fetch relevant menu information.&lt;/p&gt;

&lt;p&gt;3️⃣ Reservation Management&lt;br&gt;
Reservation operations are handled through custom tools that interact with Amazon DynamoDB to create, retrieve, or delete bookings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The assistant is created using the Strands Agents SDK, and multiple tools are added to extend its capabilities.&lt;/p&gt;

&lt;p&gt;The lab introduces three different ways to define tools:&lt;/p&gt;

&lt;p&gt;1️⃣ Inline Tool Definition&lt;/p&gt;

&lt;p&gt;Using the @tool decorator directly within the agent code.&lt;/p&gt;

&lt;p&gt;2️⃣ Standalone Tool Module&lt;/p&gt;

&lt;p&gt;Defining the tool in a separate file and importing it into the project.&lt;/p&gt;

&lt;p&gt;3️⃣ TOOL_SPEC Schema&lt;/p&gt;

&lt;p&gt;Using a structured dictionary similar to the &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt; Converse API schema.&lt;/p&gt;

&lt;p&gt;This method allows precise control over:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Required and optional parameters&lt;/li&gt;
&lt;li&gt;Success and error response structures&lt;/li&gt;
&lt;/ul&gt;
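&lt;p&gt;To illustrate the idea, here is what a TOOL_SPEC-style definition can look like as a plain dictionary, together with a small validation helper. The fields are modeled loosely on the Bedrock Converse API tool schema; the exact field names and the validator below are illustrative sketches, not the SDK's code.&lt;/p&gt;

```python
# An illustrative TOOL_SPEC-style definition for a booking tool,
# loosely modeled on the Bedrock Converse API tool schema.
# Field names and the validator are sketches, not the SDK's code.

CREATE_BOOKING_SPEC = {
    "name": "create_booking",
    "description": "Create a restaurant table reservation.",
    "inputSchema": {
        "json": {
            "type": "object",
            "properties": {
                "date": {"type": "string", "description": "Reservation date"},
                "name": {"type": "string", "description": "Guest name"},
                "num_guests": {"type": "integer", "description": "Party size"},
            },
            "required": ["date", "name", "num_guests"],
        }
    },
}

def validate_input(spec, payload):
    """Check that all required parameters are present."""
    schema = spec["inputSchema"]["json"]
    missing = [key for key in schema["required"] if key not in payload]
    if not missing:
        return {"status": "success"}
    return {"status": "error", "missing": missing}

print(validate_input(CREATE_BOOKING_SPEC, {"date": "2025-05-01", "name": "Ana"}))
# {'status': 'error', 'missing': ['num_guests']}
```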

&lt;p&gt;Example agent setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands_tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;current_time&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;retrieve&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;retrieve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;current_time&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;get_booking_details&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;create_booking&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;delete_booking&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One particularly useful tool is retrieve, which automatically connects to the &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt; Knowledge Base when the KNOWLEDGE_BASE_ID environment variable is configured.&lt;/p&gt;

&lt;p&gt;This means the agent can perform Retrieval-Augmented Generation (RAG) without building a custom retrieval pipeline.&lt;/p&gt;
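&lt;p&gt;The retrieve-then-generate flow itself can be sketched without any AWS services. In the toy example below, an in-memory list of menu snippets stands in for a Bedrock Knowledge Base, a naive keyword match stands in for vector retrieval, and string formatting stands in for the model call; only the flow is the point.&lt;/p&gt;

```python
# A toy Retrieval-Augmented Generation flow. The in-memory list stands
# in for a Bedrock Knowledge Base, keyword overlap stands in for vector
# search, and string formatting stands in for the LLM call.
import re

MENU_DOCS = [
    "Starters: bruschetta, soup of the day.",
    "Mains: margherita pizza, mushroom risotto.",
    "Desserts: tiramisu, panna cotta.",
]

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, docs, top_k=1):
    """Rank documents by naive keyword overlap with the query."""
    query_tokens = tokens(query)
    scored = sorted(docs,
                    key=lambda d: len(query_tokens.intersection(tokens(d))),
                    reverse=True)
    return scored[:top_k]

def answer(query):
    context = " ".join(retrieve(query, MENU_DOCS))
    # A real agent would pass this context to the LLM as grounding.
    return f"Based on the menu: {context}"

print(answer("what desserts do you have"))
```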

&lt;p&gt;&lt;strong&gt;Sample Execution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyn7gqef2v8lll9sxvmwy.PNG" alt="Lab 3 Sample Execution 1" width="800" height="342"&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffaaetr93xh4wquxgy18o.PNG" alt="Lab 3 Sample Execution 2" width="800" height="337"&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkg6avcdsof5tocjy1kc.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqkg6avcdsof5tocjy1kc.PNG" alt="Lab 3 Sample Execution 3" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Learning
&lt;/h2&gt;

&lt;p&gt;Lab 3 shows how the &lt;a href="https://strandsagents.com" rel="noopener noreferrer"&gt;Strands Agents&lt;/a&gt; SDK integrates with AWS services using tools and boto3. The built-in retrieve tool enables quick RAG with Amazon Bedrock Knowledge Bases, while custom tools allow interaction with Amazon DynamoDB. The TOOL_SPEC approach adds structured, production-ready tool definitions. Together, these features enable building AI assistants that work with real-time data and external systems. &lt;/p&gt;

&lt;h2&gt;
  
  
  Lab 4: Integrating MCP Servers with Strands Agents
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this lab, we explore how Model Context Protocol (MCP) servers can extend Strands Agents by exposing external tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F758fxxhidjz7cv2m4lp5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F758fxxhidjz7cv2m4lp5.PNG" alt=" " width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this setup, the Strands Agents SDK connects to an MCP server that exposes tools.&lt;/p&gt;

&lt;p&gt;The flow works like this:&lt;/p&gt;

&lt;p&gt;User Query → Agent → MCP Server → External Tool → Agent Response&lt;/p&gt;

&lt;p&gt;The MCP server can run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Locally using stdio&lt;/li&gt;
&lt;li&gt;Remotely using HTTP-based MCP servers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes it easy for agents to access tools such as documentation, APIs, or databases.&lt;/p&gt;
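&lt;p&gt;The plug-and-play idea can be sketched in plain Python: a "server" object exposes list-tools and call-tool operations, and the client discovers tools at runtime instead of importing them. This mirrors the MCP pattern only conceptually; the real protocol exchanges JSON-RPC messages over stdio or HTTP, and the tool names below are invented.&lt;/p&gt;

```python
# Conceptual sketch of the MCP pattern: a server advertises a tool
# catalog, and the client discovers and invokes tools by name.
# The real protocol speaks JSON-RPC over stdio or HTTP; this sketch
# keeps only the discovery/invocation shape, with invented tools.

class ToyMCPServer:
    def __init__(self):
        self._tools = {
            "search_docs": lambda query: f"Top result for '{query}': (stub doc)",
            "get_time": lambda _: "2025-01-01T00:00:00Z",
        }

    def list_tools(self):
        """Advertise available tools, as an MCP server would."""
        return sorted(self._tools)

    def call_tool(self, name, arg):
        """Invoke one tool by name with a single argument."""
        return self._tools[name](arg)

# The client never imports the tools; it discovers them at runtime.
server = ToyMCPServer()
print(server.list_tools())  # ['get_time', 'search_docs']
print(server.call_tool("search_docs", "Bedrock pricing"))
```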

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48urqdo4fiqnk1iz8qwi.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F48urqdo4fiqnk1iz8qwi.PNG" alt=" " width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the lab, an MCP client connects to a server that provides AWS documentation tools.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands.tools.mcp&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;MCPClient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;mcp&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;StdioServerParameters&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stdio_client&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;mcp_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;MCPClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;stdio_client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nc"&gt;StdioServerParameters&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;command&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;uvx&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;args&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;awslabs.aws-documentation-mcp-server@latest&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;mcp_client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;mcp_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list_tools_sync&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is Amazon Bedrock&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s pricing model?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The lab also demonstrates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating custom MCP servers using FastMCP&lt;/li&gt;
&lt;li&gt;Direct tool execution with call_tool_sync&lt;/li&gt;
&lt;li&gt;Timeout configuration&lt;/li&gt;
&lt;li&gt;Connecting multiple MCP servers to a single agent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Sample Execution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faegpqrnqyp2v4ooarvwj.PNG" alt="Lab 4 Sample Execution 1" width="800" height="339"&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft50lj9a8i9oui4sfx68b.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft50lj9a8i9oui4sfx68b.PNG" alt="Lab 4 Sample Execution 2" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Learning
&lt;/h2&gt;

&lt;p&gt;MCP enables a plug-and-play tool ecosystem for AI agents. Any MCP-compatible server—such as documentation services, APIs, or databases—can instantly become a tool for the agent without writing extra integration code. Multiple MCP servers can also be connected to one agent for more powerful workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lab 5: Streaming Agent Responses
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The goal of this lab was to demonstrate how AI agents can stream responses in real time instead of waiting for the full output. This is useful for building interactive applications such as dashboards, APIs, or chat interfaces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwi0wap2z6qhfhbv3j8dn.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwi0wap2z6qhfhbv3j8dn.PNG" alt="Lab 5 Streaming Architecture" width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In a streaming architecture, the Strands Agents SDK sends response events continuously as the model processes a request.&lt;/p&gt;

&lt;p&gt;The flow looks like this:&lt;/p&gt;

&lt;p&gt;User Request → Agent → Streaming Events → Application UI/API&lt;/p&gt;

&lt;p&gt;Each event contains structured information such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;message output&lt;/li&gt;
&lt;li&gt;tool usage&lt;/li&gt;
&lt;li&gt;intermediate data&lt;/li&gt;
&lt;li&gt;final result&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows applications to display partial responses, progress updates, and tool activity in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr18qdxde03o2ykjutz29.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr18qdxde03o2ykjutz29.PNG" alt="Streaming events flowing from the agent to the application" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The lab demonstrates two ways to stream agent responses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1️⃣ Async Streaming (stream_async)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Best suited for async frameworks like FastAPI.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stream_async&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Calculate 2+2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;current_tool_use&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Tool: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;current_tool_use&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach lets developers process events as they arrive and build real-time streaming applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2️⃣ Callback Handlers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For synchronous environments such as scripts or CLI applications, a callback_handler function can be used to capture events as they occur without using async code.&lt;/p&gt;
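&lt;p&gt;A minimal sketch of the callback approach, assuming event payloads shaped like the stream_async events above (the exact keyword arguments Strands passes to a callback_handler may differ):&lt;/p&gt;

```python
# Sketch of a synchronous callback handler. The event keys ("data",
# "current_tool_use") mirror the async example; the real Strands callback
# arguments may differ.
collected = []

def callback_handler(**kwargs):
    if "data" in kwargs:
        collected.append(kwargs["data"])          # partial model output
    elif "current_tool_use" in kwargs:
        collected.append(f"[tool: {kwargs['current_tool_use']['name']}]")

# In Strands this would be wired up via Agent(callback_handler=callback_handler);
# here we replay a hypothetical event sequence to show the control flow.
for event in [
    {"data": "The answer "},
    {"current_tool_use": {"name": "calculator"}},
    {"data": "is 4."},
]:
    callback_handler(**event)

print("".join(collected))
```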

&lt;p&gt;&lt;strong&gt;Sample Execution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F51x8b9p32977vfq0gl84.PNG" alt="Lab 5 Sample Execution 1" width="800" height="341"&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmjfmt7m7qycdbkc9t4l5.PNG" alt="Lab 5 Sample Execution 2" width="800" height="339"&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falkr04yq7w2apnt744bb.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Falkr04yq7w2apnt744bb.PNG" alt="Lab 5 Sample Execution 3" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Learning
&lt;/h2&gt;

&lt;p&gt;Lab 5 shows how agents can stream responses using async iterators or callback handlers. The stream_async method with FastAPI is ideal for production APIs, while callback handlers provide a simple option for scripts and command-line tools. Both methods give developers real-time visibility into agent responses and tool usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lab 6: Securing Agents with Guardrails
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The objective of this lab was to build a safe and compliant AI assistant by integrating Amazon Bedrock Guardrails with the Strands Agents SDK.&lt;br&gt;
The guardrails ensure the agent cannot respond to harmful or restricted requests such as financial advice, hate speech, or sharing sensitive data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts4q5j2qog12tkxb9u52.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts4q5j2qog12tkxb9u52.PNG" alt="Lab 6 Guardrails Architecture" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this setup, guardrails are placed between the user input and the AI model.&lt;/p&gt;

&lt;p&gt;User Request → Guardrails Check → AI Model → Safe Response&lt;/p&gt;

&lt;p&gt;The guardrails filter or block unsafe content before the model generates a response. If a policy is violated, the system returns a predefined blocked message instead of the model output.&lt;/p&gt;
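&lt;p&gt;The flow can be illustrated with a small Python sketch. This is conceptual only: with Bedrock Guardrails, the policy check happens server-side once a guardrail_id is attached to the model, not in application code.&lt;/p&gt;

```python
# Conceptual sketch of the flow: User Request -> Guardrails Check -> Model ->
# Safe Response. This is NOT the Bedrock Guardrails API; Bedrock enforces
# these policies server-side when a guardrail_id is attached to the model.

BLOCKED_TOPICS = ["financial advice", "hate speech"]
BLOCKED_MESSAGE = "Sorry, I can't help with that request."

def guarded_invoke(user_input, model_fn):
    # Check the request against policy before the model ever sees it.
    if any(topic in user_input.lower() for topic in BLOCKED_TOPICS):
        return BLOCKED_MESSAGE      # predefined blocked message, not model output
    return model_fn(user_input)     # safe: forward to the model

def fake_model(text):
    """Stand-in for the real model call."""
    return f"Model answer to: {text}"

print(guarded_invoke("Give me financial advice", fake_model))
print(guarded_invoke("What is S3?", fake_model))
```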

&lt;p&gt;&lt;strong&gt;Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The guardrails are attached directly to the &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt; model configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;bedrock_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BedrockModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us.anthropic.claude-sonnet-4-5-20250929-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;guardrail_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;guardrail_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;guardrail_version&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DRAFT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;guardrail_trace&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;enabled&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;guardrail_redact_input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;guardrail_redact_input_message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Guardrail Intervened and Redacted&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bedrock_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[...])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The guardrails apply multiple safety controls, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Topic policies → block restricted topics like financial advice&lt;/li&gt;
&lt;li&gt;Content policies → filter hate, violence, or harmful prompts&lt;/li&gt;
&lt;li&gt;Word policies → block specific words or phrases&lt;/li&gt;
&lt;li&gt;Blocked messaging → show a custom message when a rule is triggered&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A useful feature is automatic input redaction, where blocked user input is replaced with a neutral placeholder in the conversation history.&lt;/p&gt;
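&lt;p&gt;Conceptually, input redaction works like the following sketch (illustrative names only; Bedrock performs this automatically when guardrail_redact_input is enabled):&lt;/p&gt;

```python
# Sketch of automatic input redaction: blocked user input is replaced with a
# neutral placeholder in the conversation history, so the model never re-reads
# the unsafe text on later turns. Names here are illustrative, not the
# Bedrock API; Bedrock does this itself when guardrail_redact_input=True.

REDACTED = "Guardrail Intervened and Redacted"

def add_to_history(history, user_input, was_blocked):
    entry = REDACTED if was_blocked else user_input
    history.append({"role": "user", "content": entry})
    return history

history = []
add_to_history(history, "What is Amazon Bedrock?", was_blocked=False)
add_to_history(history, "some unsafe request", was_blocked=True)
print(history[1]["content"])   # the neutral placeholder, not the unsafe text
```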

&lt;p&gt;&lt;strong&gt;Sample Execution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejzu5fj6vwdybnikvc8d.PNG" alt="Lab 6 Sample Execution 1" width="800" height="335"&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmekltp56ugwik3orfbd.PNG" alt="Lab 6 Sample Execution 2" width="800" height="342"&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40jpfz8rkspvvp2mo6uh.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F40jpfz8rkspvvp2mo6uh.PNG" alt="Lab 6 Sample Execution 3" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Learning
&lt;/h2&gt;

&lt;p&gt;Lab 6 shows how &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt; Guardrails can protect AI applications by enforcing safety policies at the model level. With a simple BedrockModel configuration, the entire agent gains enterprise-grade security and content filtering, making it safer for real-world applications like customer support systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lab 7: Adding Persistent Memory to Agents
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The objective of this lab was to enable long-term memory for agents so they can remember user preferences across conversations. This is achieved using Mem0, integrated as a tool in the &lt;a href="https://strandsagents.com" rel="noopener noreferrer"&gt;Strands Agents&lt;/a&gt; SDK.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pzy2xf20fqw4n0ff44s.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pzy2xf20fqw4n0ff44s.PNG" alt="Lab 7 Memory Architecture" width="800" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this setup, the agent connects to a memory layer that stores user information.&lt;/p&gt;

&lt;p&gt;User Interaction → Agent → Memory Tool → Memory Database → Personalized Response&lt;/p&gt;

&lt;p&gt;The agent can store and retrieve user preferences, enabling more personalized interactions over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The memory functionality is added using the built-in mem0_memory tool.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands_tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;mem0_memory&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;memory_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;SYSTEM_PROMPT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;mem0_memory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;websearch&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Store a preference
&lt;/span&gt;&lt;span class="n"&gt;memory_agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mem0_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;store&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I prefer tea over coffee.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;USER_ID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Retrieve it later
&lt;/span&gt;&lt;span class="n"&gt;memory_agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mem0_memory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;retrieve&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;drink preferences&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;USER_ID&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The memory backend can be configured using different storage options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/documentation-overview/opensearch-service/" rel="noopener noreferrer"&gt;Amazon OpenSearch&lt;/a&gt; Serverless for scalable cloud deployments&lt;/li&gt;
&lt;li&gt;FAISS for local development&lt;/li&gt;
&lt;li&gt;Mem0 Platform API for a fully managed service&lt;/li&gt;
&lt;/ul&gt;
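&lt;p&gt;The actions behind the tool can be pictured with a small in-memory sketch. This is not the real mem0_memory implementation, which persists memories to one of the backends above and retrieves them with semantic search; the sketch uses naive word matching just to show the store/retrieve/list flow.&lt;/p&gt;

```python
# In-memory sketch of the store / retrieve / list actions used in this lab.
# NOT the real mem0_memory tool: the real tool persists to OpenSearch, FAISS,
# or the Mem0 Platform and retrieves memories with semantic search, not the
# naive word matching used here.

class MemorySketch:
    def __init__(self):
        self._store = {}   # user_id -> list of stored memories

    def __call__(self, action, user_id, content=None, query=None):
        memories = self._store.setdefault(user_id, [])
        if action == "store":
            memories.append(content)
            return "stored"
        if action == "retrieve":
            # Naive relevance: keep memories sharing any word with the query.
            words = set(query.lower().split())
            return [m for m in memories if words & set(m.lower().split())]
        if action == "list":
            return list(memories)
        raise ValueError(f"unknown action: {action}")

memory = MemorySketch()
memory("store", user_id="u1", content="I prefer tea over coffee.")
print(memory("retrieve", user_id="u1", query="tea preferences"))
print(memory("list", user_id="u1"))
```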

&lt;p&gt;&lt;strong&gt;Sample Execution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk5zfritetf9fx5bfriho.PNG" alt="Lab 7 Sample Execution 1" width="800" height="335"&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrudojj9lalsypss610v.PNG" alt="Lab 7 Sample Execution 2" width="800" height="329"&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks3cljmk65juio775k2b.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fks3cljmk65juio775k2b.PNG" alt="Lab 7 Sample Execution 3" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Learning
&lt;/h2&gt;

&lt;p&gt;Lab 7 shows how agents can maintain persistent memory using the mem0_memory tool. It supports three main actions—store, retrieve, and list—allowing agents to remember user preferences without building a custom database. When combined with Amazon OpenSearch Serverless, it provides a scalable, serverless memory layer for personalized AI applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lab 8: Observability and Evaluation for Agents
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Objective&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The objective of this lab was to add observability and evaluation to an AI agent so developers can monitor its behavior and measure response quality. This is achieved using &lt;a href="https://www.langfuse.com/" rel="noopener noreferrer"&gt;Langfuse&lt;/a&gt; for tracing and &lt;a href="https://ragas.io" rel="noopener noreferrer"&gt;RAGAS&lt;/a&gt; for automated evaluation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8btnfhmv7m2wmhfvhskm.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8btnfhmv7m2wmhfvhskm.PNG" alt="Lab 8 Observability Architecture" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture introduces a monitoring layer around the agent.&lt;/p&gt;

&lt;p&gt;User Query → Agent → Trace Data → Langfuse → Evaluation (RAGAS) → Score Results&lt;/p&gt;

&lt;p&gt;Every agent action—such as tool calls, reasoning steps, and model responses—is captured as a trace. These traces are then evaluated using metrics to measure the quality and relevance of the agent’s output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Tracing is enabled using OpenTelemetry by setting environment variables:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OTEL_EXPORTER_OTLP_ENDPOINT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;langfuse_endpoint&lt;/span&gt;
&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;OTEL_EXPORTER_OTLP_HEADERS&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authorization=Basic &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;auth_token&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
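&lt;p&gt;The auth_token above is standard HTTP Basic credentials built from the Langfuse project keys. A sketch of how it is typically constructed (the key values shown are placeholders, not real credentials):&lt;/p&gt;

```python
import base64

# The auth_token used in the snippet above is standard HTTP Basic
# credentials: the Langfuse public and secret keys joined with ":" and
# base64-encoded. The key values below are placeholders, not real keys.
public_key = "pk-lf-placeholder"   # from the Langfuse project settings
secret_key = "sk-lf-placeholder"

auth_token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
print(auth_token)
```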





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;retrieve&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;current_time&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...],&lt;/span&gt;
    &lt;span class="n"&gt;trace_attributes&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session.id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;abc-1234&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user.id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user@domain.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;langfuse.tags&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent-SDK&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Observability&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this setup, the Strands Agents SDK automatically sends traces to &lt;a href="https://www.langfuse.com/" rel="noopener noreferrer"&gt;Langfuse&lt;/a&gt; without modifying the agent code.&lt;/p&gt;

&lt;p&gt;For evaluation, &lt;a href="https://ragas.io" rel="noopener noreferrer"&gt;RAGAS&lt;/a&gt; metrics analyze the responses using an LLM judge such as Amazon Nova Premier.&lt;/p&gt;

&lt;p&gt;The system evaluates responses using metrics like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AspectCritic → checks if the response meets specific criteria&lt;/li&gt;
&lt;li&gt;RubricsScore → evaluates response quality using multi-level scoring&lt;/li&gt;
&lt;li&gt;Context Relevance → checks if retrieved documents match the query&lt;/li&gt;
&lt;li&gt;Response Groundedness → verifies if the answer is based on retrieved context&lt;/li&gt;
&lt;/ul&gt;
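&lt;p&gt;RAGAS computes these scores with an LLM judge, but the intuition behind a metric like Response Groundedness can be sketched with simple token overlap. This is illustrative only; it is not how RAGAS actually scores responses.&lt;/p&gt;

```python
# Illustrative only: RAGAS uses an LLM judge, but the intuition behind
# "Response Groundedness" can be sketched as word overlap between the
# answer and the retrieved context. Real RAGAS scores are NOT computed
# this way.

def groundedness_sketch(response, context):
    """Fraction of response words that also appear in the retrieved context."""
    resp_words = response.lower().split()
    ctx_words = set(context.lower().split())
    if not resp_words:
        return 0.0
    return sum(w in ctx_words for w in resp_words) / len(resp_words)

context = "amazon bedrock is a fully managed service for foundation models"
grounded = "bedrock is a fully managed service"
ungrounded = "the moon is made of cheese"

print(round(groundedness_sketch(grounded, context), 2))    # high score
print(round(groundedness_sketch(ungrounded, context), 2))  # low score
```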

&lt;p&gt;Evaluation scores are then written back to the Langfuse trace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;langfuse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;trace_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;trace_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rag_context_relevance&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.92&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sample Execution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1cydq3zx2emhplad31z.PNG" alt="Lab 8 Sample Execution 1" width="800" height="335"&gt;&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1xxrcyw0im6lh51m04cl.PNG" alt="Lab 8 Sample Execution 2" width="800" height="333"&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo8aau5cckieo4g35mit2.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo8aau5cckieo4g35mit2.PNG" alt="Lab 8 Sample Execution 3" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Learning
&lt;/h2&gt;

&lt;p&gt;Lab 8 demonstrates how to build a complete monitoring and evaluation pipeline for AI agents. By combining &lt;a href="https://www.langfuse.com/" rel="noopener noreferrer"&gt;Langfuse&lt;/a&gt; for tracing and &lt;a href="https://ragas.io" rel="noopener noreferrer"&gt;RAGAS&lt;/a&gt; for response evaluation, developers can track not only what the agent did, but also how well it performed, creating a powerful feedback loop for improving AI applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The BeSA Workshop provided a hands-on introduction to building intelligent AI agents using the &lt;a href="https://strandsagents.com" rel="noopener noreferrer"&gt;Strands Agents&lt;/a&gt; SDK.&lt;/p&gt;

&lt;p&gt;Throughout the labs, we explored how agents can be progressively enhanced with powerful capabilities such as tool integration, local model execution, AWS service connectivity, streaming responses, safety guardrails, persistent memory, and observability.&lt;/p&gt;

&lt;p&gt;Each lab demonstrated how modern AI systems are moving beyond simple prompt-response models toward agentic architectures that can reason, interact with tools, and perform real-world tasks.&lt;/p&gt;

&lt;p&gt;By combining &lt;a href="https://strandsagents.com" rel="noopener noreferrer"&gt;Strands Agents&lt;/a&gt; with AWS services like &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt;, DynamoDB, and OpenSearch Serverless, developers can build scalable AI applications that are both powerful and production-ready.&lt;/p&gt;

&lt;p&gt;The workshop clearly shows that the future of AI development lies in building intelligent agents that can plan, act, and improve continuously through feedback and monitoring.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Workshop Taught Me
&lt;/h2&gt;

&lt;p&gt;This workshop helped me understand how AI is evolving from simple prompt-based systems to agents that can actually perform tasks. Instead of just generating responses, agents can now use tools, access external data, and make decisions step by step.&lt;/p&gt;

&lt;p&gt;One thing that stood out to me was how easy the &lt;a href="https://strandsagents.com" rel="noopener noreferrer"&gt;Strands Agents&lt;/a&gt; SDK makes it to build these systems. Starting with a simple agent, we gradually added capabilities like tool integration, streaming responses, guardrails, memory, and observability.&lt;/p&gt;

&lt;p&gt;Overall, the workshop gave me a clearer picture of how developers can build more practical and reliable AI applications by combining agent frameworks with cloud services. It was a great hands-on experience that made the concepts of agentic AI much easier to understand.&lt;/p&gt;

&lt;h2&gt;
  
  
  Acknowledgement
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Note: All architecture diagrams and lab materials referenced in this article are from the BeSA Workshop provided by AWS. I sincerely thank the BeSA team for designing these learning labs and enabling hands-on experience with Strands Agents and AWS services.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>ai</category>
      <category>devops</category>
    </item>
  </channel>
</rss>
