<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Rini Susan V S</title>
    <description>The latest articles on Forem by Rini Susan V S (@rinisvs).</description>
    <link>https://forem.com/rinisvs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2941959%2Ff68023f9-651c-49e3-b2cc-bd5066933247.jpg</url>
      <title>Forem: Rini Susan V S</title>
      <link>https://forem.com/rinisvs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/rinisvs"/>
    <language>en</language>
    <item>
      <title>Tying It All Together: How Strands Agents Enhance Retail Agent Performance Analysis</title>
      <dc:creator>Rini Susan V S</dc:creator>
      <pubDate>Tue, 30 Sep 2025 07:45:21 +0000</pubDate>
      <link>https://forem.com/aws-builders/tying-it-all-together-how-strands-agents-weave-together-retail-performance-analysis-50p4</link>
      <guid>https://forem.com/aws-builders/tying-it-all-together-how-strands-agents-weave-together-retail-performance-analysis-50p4</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;Strands Agents is a simple-to-use, code-first framework for building agents, released by Amazon Web Services (AWS) as an open-source SDK. A Strands agent comprises three key components: a language model, a system prompt, and a set of tools. Strands supports multiple agent architecture patterns, scaling from a single agent up to complex networks of agents.&lt;/p&gt;

&lt;p&gt;Strands is not tied to a single LLM provider: it works with models on Amazon Bedrock by default and also supports other model providers such as LlamaAPI, Ollama, and OpenAI. Strands agents can run in various environments, including Amazon EC2, AWS Lambda, AWS Fargate, and Amazon Bedrock AgentCore.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69gqurb9v2nx0hw2myuw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69gqurb9v2nx0hw2myuw.jpg" alt=" " width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Software performance testing evaluates how retail applications behave under various workloads and conditions, ensuring a reliable customer experience. This is crucial for systems such as e-commerce platforms, point-of-sale systems, and inventory management. Identifying and resolving performance bottlenecks before they impact users minimizes lost sales.&lt;/p&gt;

&lt;p&gt;The rise of Large Language Models (LLMs) and Generative AI presents new challenges for performance testing and engineering. Unlike traditional applications, testers now deal with dynamic, context-aware AI agents that interact with different knowledge bases and tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Use Case: A Retail Customer Support Agent
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Preamble&lt;/strong&gt;&lt;br&gt;
Imagine a customer support agent for an e-commerce company that can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Answer policy questions by searching the Knowledge Base.&lt;/li&gt;
&lt;li&gt;Check order status by calling an API through an Action Group.&lt;/li&gt;
&lt;li&gt;Consolidate the information and provide a helpful response to the customer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In production, performance is critical, and even a 10-second delay can lead to a frustrated customer and a lost sale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Performance Puzzle&lt;/strong&gt;&lt;br&gt;
The total time a user waits for an answer is the sum of several processes, and the Strands Agents framework provides the visibility needed to understand each of these steps. The agent's trace typically includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Orchestration &amp;amp; Reasoning: The agent's underlying Foundation Model (FM) interprets the user's prompt and decides what to do.&lt;/li&gt;
&lt;li&gt;Knowledge Base Retrieval: If it's a policy question, the agent queries the knowledge base.&lt;/li&gt;
&lt;li&gt;Action Group Invocation: To check an order, the agent triggers a Lambda function that calls an internal Order Status API.&lt;/li&gt;
&lt;li&gt;Final Response Generation: The retrieved information is passed back to the LLM, which generates the final response.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Strands Agents in Action
&lt;/h2&gt;

&lt;p&gt;The Jupyter notebook below walks through creating an agent based on the e-commerce customer support use case described earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An active AWS account.&lt;/li&gt;
&lt;li&gt;An Amazon Bedrock Agent created with:
&lt;ul&gt;
&lt;li&gt;A Knowledge Base attached&lt;/li&gt;
&lt;li&gt;An Action Group configured to invoke a Lambda function for checking order status&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;agentId&lt;/code&gt; and &lt;code&gt;agentAliasId&lt;/code&gt; of your created agent.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;boto3&lt;/code&gt; library installed and configured with appropriate IAM permissions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Helper Function to Invoke the Agent&lt;/strong&gt;&lt;br&gt;
Using the Strands SDK's tool interfaces, we can build custom tools: any Python function can be turned into a tool with the &lt;code&gt;@tool&lt;/code&gt; decorator.&lt;br&gt;
Next, create a reusable function to invoke the agent. This function captures the agent's response as well as the full trace of its internal operations, which is crucial for performance analysis.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from strands.tools import tool
@tool
def invoke_bedrock_agent(prompt: str, session_id: str):
    """
    Invokes the Bedrock agent, captures the response, and returns the full event stream.

    Args:
        prompt (str): The user's query for the agent.
        session_id (str): A unique identifier for the conversation session.

    Returns:
        list: A list of all events received from the agent's response stream.
    """
    print(f"\nUser prompt: '{prompt}'")

    events = []
    start_time = time.time()

    try:
        response = bedrock_agent_runtime_client.invoke_agent(
            agentId=AGENT_ID,
            agentAliasId=AGENT_ALIAS_ID,
            sessionId=session_id,
            inputText=prompt,
            enableTrace=True # CRITICAL: This enables the detailed trace!
        )

        event_stream = response['completion']
        for event in event_stream:
            events.append(event)

        final_response = ""

        trace_data = None
        # Extract the final response and the trace data
        for event in events:
          if 'chunk' in event:
            final_response += event['chunk']['bytes'].decode('utf-8')
          if 'trace' in event:
            trace_data = event['trace']['trace']

    except Exception as e:
        print(f"An error occurred: {e}")
        return None
    finally:
        end_time = time.time()
        print("")
        print(f"Total Latency: {end_time - start_time:.2f} seconds")

    return events
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Performance Test Scenarios&lt;/strong&gt;&lt;br&gt;
Let’s run a few tests to establish a baseline for different types of queries.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Scenario A: Knowledge Base Query&lt;br&gt;
This tests the agent's ability to retrieve information from the attached knowledge base. The primary latency here will be in the &lt;code&gt;Retrieve&lt;/code&gt; step.&lt;br&gt;
&lt;code&gt;session_id_kb = str(uuid.uuid4())&lt;br&gt;
prompt_kb = "What is the return policy for clothes?"&lt;br&gt;
kb_events = invoke_bedrock_agent(prompt_kb, session_id_kb)&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scenario B: Action Group Query (API Call)&lt;br&gt;
This tests the agent's ability to invoke an external tool (our order status Lambda). Latency will be a combination of reasoning and the actual Lambda/API execution time.&lt;br&gt;
&lt;code&gt;session_id_ag = str(uuid.uuid4())&lt;br&gt;
prompt_ag = "Can you check the status for order #B-98765?"&lt;br&gt;
ag_events = invoke_bedrock_agent(prompt_ag, session_id_ag)&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
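&lt;p&gt;A single run is noisy, so to establish a real baseline each scenario should be repeated and its latencies summarized. Below is a minimal sketch of such a summary step (the helper name, repeat counts, and percentile choice are illustrative, not part of the original notebook):&lt;/p&gt;

```python
import statistics

def summarize_latencies(latencies):
    """Summarize a list of per-run latencies (in seconds) into baseline stats."""
    ordered = sorted(latencies)
    # Nearest-rank p95 over the sorted samples
    p95_index = max(0, int(round(0.95 * (len(ordered) - 1))))
    return {
        "runs": len(ordered),
        "mean": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "p95": ordered[p95_index],
    }

# Example: latencies collected from repeated invoke_bedrock_agent() calls
baseline = summarize_latencies([2.1, 2.4, 2.2, 9.8, 2.3])
print(baseline)
```

&lt;p&gt;The median versus p95 gap is what surfaces intermittent slow steps, such as a Lambda cold start, that a single test run would miss.&lt;/p&gt;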

&lt;p&gt;&lt;strong&gt;Analyzing the Performance Trace&lt;/strong&gt;&lt;br&gt;
The real value comes from parsing the &lt;code&gt;trace&lt;/code&gt; data returned in the event stream. Create a function, &lt;code&gt;trace_analysis()&lt;/code&gt;, decorated with &lt;code&gt;@tool&lt;/code&gt;, to extract and analyze this data.&lt;/p&gt;
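&lt;p&gt;The full implementation is in the notebook; below is a minimal sketch of what such a parser might look like. The field names (&lt;code&gt;orchestrationTrace&lt;/code&gt;, &lt;code&gt;rationale&lt;/code&gt;, &lt;code&gt;knowledgeBaseLookupOutput&lt;/code&gt;, &lt;code&gt;actionGroupInvocationOutput&lt;/code&gt;) follow the Bedrock agent trace format as I understand it and should be verified against your actual trace output; in the notebook the function would also carry the &lt;code&gt;@tool&lt;/code&gt; decorator:&lt;/p&gt;

```python
def trace_analysis(events):
    """Count which agent steps appear in a Bedrock agent trace event stream."""
    steps = {"reasoning": 0, "kb_retrieval": 0, "action_group": 0, "final_response": 0}
    for event in events:
        # Each trace event nests its payload under event['trace']['trace']
        orch = event.get("trace", {}).get("trace", {}).get("orchestrationTrace", {})
        if "rationale" in orch:
            steps["reasoning"] += 1
        observation = orch.get("observation", {})
        if "knowledgeBaseLookupOutput" in observation:
            steps["kb_retrieval"] += 1
        if "actionGroupInvocationOutput" in observation:
            steps["action_group"] += 1
        if "finalResponse" in observation:
            steps["final_response"] += 1
    return steps

# Example with two synthetic trace events
sample = [
    {"trace": {"trace": {"orchestrationTrace": {"rationale": {"text": "look up policy"}}}}},
    {"trace": {"trace": {"orchestrationTrace": {"observation": {"knowledgeBaseLookupOutput": {}}}}}},
]
print(trace_analysis(sample))
```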

&lt;p&gt;Invoke the &lt;code&gt;trace_analysis&lt;/code&gt; function with the knowledge base events and the action group events.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;trace_analysis(kb_events)&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1x10imc1x7i0bzohkv4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn1x10imc1x7i0bzohkv4.png" alt=" " width="609" height="215"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;trace_analysis(ag_events)&lt;/code&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1tdkrn3d74mh62n4826.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1tdkrn3d74mh62n4826.png" alt=" " width="611" height="246"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Strands Agent Function&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;from strands import Agent&lt;br&gt;
agent = Agent(&lt;br&gt;
    tools=[trace_analysis, invoke_bedrock_agent]&lt;br&gt;
)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strands Agent - Knowledge Base Query&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F035oqipijwbim55kr4ao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F035oqipijwbim55kr4ao.png" alt=" " width="598" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Strands Agent – Lambda Function Invocation&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzibtrpe0cvyk99nuwjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffzibtrpe0cvyk99nuwjt.png" alt=" " width="549" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudWatch Monitoring
&lt;/h2&gt;

&lt;p&gt;The token count and invocation latency can also be observed in Amazon CloudWatch, under the GenAI Observability section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkzr0otdloiiah5rase7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpkzr0otdloiiah5rase7.png" alt=" " width="626" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbfspcrcyrp1y59bx3xm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsbfspcrcyrp1y59bx3xm.png" alt=" " width="626" height="183"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance Insights and Optimization Actions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Knowledge Base Bottleneck: If the "Knowledge Base Retrieval Time" is high, investigate the knowledge base settings. Are you retrieving too many chunks? Is the vector database under-provisioned?&lt;/li&gt;
&lt;li&gt;API Bottleneck: If the "Lambda/API Call Time" is high, the performance issue lies outside Bedrock. Use tools like AWS X-Ray and CloudWatch Logs to optimize the Lambda function and any downstream services it calls.&lt;/li&gt;
&lt;li&gt;Model Latency: If the "Final Response Generation Latency" is high, consider switching to a faster, more cost-effective model, or refine the agent's instructions to produce more concise answers, thereby reducing the &lt;code&gt;outputTokenCount&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
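&lt;p&gt;Once the per-step timings are extracted from the trace, this triage can be automated. A toy helper (the step names and timing values here are illustrative):&lt;/p&gt;

```python
def find_bottleneck(step_times):
    """Return the step with the largest share of total latency, as a percentage."""
    total = sum(step_times.values())
    slowest = max(step_times, key=step_times.get)
    return slowest, round(100 * step_times[slowest] / total)

# Hypothetical per-step timings (seconds) from a single agent invocation
timings = {
    "orchestration": 1.2,
    "kb_retrieval": 0.8,
    "lambda_api_call": 4.5,
    "response_generation": 1.5,
}
print(find_bottleneck(timings))  # prints ('lambda_api_call', 56)
```

&lt;p&gt;Here the Lambda/API call dominates, so the next step would be X-Ray and CloudWatch Logs rather than tuning the knowledge base or the model.&lt;/p&gt;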

&lt;p&gt;&lt;strong&gt;References&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://strandsagents.com/latest/documentation/docs/" rel="noopener noreferrer"&gt;https://strandsagents.com/latest/documentation/docs/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/strands-agents/sdk-python" rel="noopener noreferrer"&gt;https://github.com/strands-agents/sdk-python&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>performance</category>
      <category>llm</category>
    </item>
    <item>
      <title>Amazon Nova Canvas: opening up new canvases</title>
      <dc:creator>Rini Susan V S</dc:creator>
      <pubDate>Mon, 19 May 2025 07:28:26 +0000</pubDate>
      <link>https://forem.com/aws-builders/amazon-nova-canvas-opening-up-new-canvases-3mgi</link>
      <guid>https://forem.com/aws-builders/amazon-nova-canvas-opening-up-new-canvases-3mgi</guid>
      <description>&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;Amazon Nova is a new generation of foundation models available on Amazon Bedrock. Nova includes four understanding models, two creative content generation models, and one speech-to-speech model. The content generation models include Amazon Nova Canvas for image generation and Amazon Nova Reel for video generation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazon Nova Canvas
&lt;/h2&gt;

&lt;p&gt;Amazon Nova Canvas is an image generation model that creates professional-grade images from text and image inputs. Amazon Nova Canvas is ideal for various applications such as marketing and reporting. It supports features like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Text-to-image generation – Input a text prompt and generate an image as output.&lt;/li&gt;
&lt;li&gt;Image editing options – inpainting, outpainting, generating variations, and automatic editing.&lt;/li&gt;
&lt;li&gt;Color-guided content – input a list of hex color codes with a text prompt.&lt;/li&gt;
&lt;li&gt;Background removal – identifies multiple objects in the input image and removes the background.&lt;/li&gt;
&lt;/ul&gt;
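&lt;p&gt;As an illustration of the API shape, a color-guided request body might look like the following. The field names follow the Nova Canvas &lt;code&gt;COLOR_GUIDED_GENERATION&lt;/code&gt; task type as I understand it; treat them as an assumption and check the user guide before relying on them:&lt;/p&gt;

```python
import json

# Sketch of a request body for color-guided image generation:
# a text prompt plus a palette of hex color codes to steer the output.
request_body = {
    "taskType": "COLOR_GUIDED_GENERATION",
    "colorGuidedGenerationParams": {
        "text": "A minimalist product banner for a summer sale",
        "colors": ["#FF9900", "#232F3E", "#FFFFFF"],  # hex palette (assumed limit: up to 10)
    },
    "imageGenerationConfig": {"numberOfImages": 1, "height": 512, "width": 1024},
}
print(json.dumps(request_body, indent=2))
```

&lt;p&gt;The body would then be passed to &lt;code&gt;invoke_model&lt;/code&gt; exactly as in the text-to-image example later in this post.&lt;/p&gt;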

&lt;h2&gt;
  
  
  Nova Canvas in Amazon Bedrock Playground
&lt;/h2&gt;

&lt;p&gt;It was a fun experience to try out the Nova Canvas model in the Amazon Bedrock playground. Being interested in photography, I wanted to evaluate how realistic the Nova Canvas images were and whether their quality matched Amazon's claims.&lt;/p&gt;

&lt;p&gt;Below is a photo of Yosemite National Park that I took near the Tunnel View point.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqc6wuc7bigz8tbmgwdo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqc6wuc7bigz8tbmgwdo.png" alt="Image description" width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I tried to generate a similar image in Amazon Bedrock Playground, using the Nova Canvas model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxk6pa3boz8nriok1jiui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxk6pa3boz8nriok1jiui.png" alt="Image description" width="800" height="342"&gt;&lt;/a&gt;&lt;br&gt;
I was amazed at how easily we could generate quality images from a few prompts. Yes, viewing the natural wonder in person is definitely more rewarding, but the time, effort, and cost are far lower with this option. The Amazon Nova Canvas model can come in handy for weekly or monthly newsletters that need catchy cover pages or topic-related images.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1gpvorr4x97jf98l034.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk1gpvorr4x97jf98l034.png" alt="Image description" width="800" height="506"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Nova Canvas in Python Notebook
&lt;/h2&gt;

&lt;p&gt;Apart from the Bedrock playground, Amazon Nova also supports image generation from Python notebooks. One method of invoking the Amazon Nova models is the &lt;code&gt;InvokeModel&lt;/code&gt; API. Below is the Python code to generate the above image programmatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Install libraries
!pip install boto3

# Import necessary libraries
import base64
import json
import random
import boto3
from PIL import Image
import io
import os

# Get AWS Credentials from secrets
AWS_ACCESS_KEY_ID=os.getenv('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY=os.getenv('AWS_SECRET_ACCESS_KEY')
AWS_DEFAULT_REGION=os.getenv('AWS_DEFAULT_REGION')

# Create a Bedrock Runtime client
client = boto3.client(
    "bedrock-runtime",
    region_name=AWS_DEFAULT_REGION,
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY
)

# Set the model ID for Nova Canvas
model_id = "amazon.nova-canvas-v1:0"

# Define the image generation prompt
prompt = "Generate a realistic-looking image of Yosemite from the tunnel view. The picture was taken during the summer season at noon."

# Set a fixed seed for reproducible results
seed = 12

# Format the request payload
request_body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": prompt},
    "imageGenerationConfig": {
        "seed": seed,
        "quality": "standard",
        "height": 1024,
        "width": 512,
        "numberOfImages": 1,
        "cfgScale": 8.0
    }
}

# Invoke the model
response = client.invoke_model(
    body=json.dumps(request_body),
    modelId=model_id,
    contentType="application/json",
    accept="application/json"
)
# Parse the response
response_body = json.loads(response["body"].read())
image_data = base64.b64decode(response_body["images"][0])

# Save the image
output_dir = "generated_images"
os.makedirs(output_dir, exist_ok=True)
output_path = os.path.join(output_dir, f"nova_canvas_image_{seed}.png")

image = Image.open(io.BytesIO(image_data))
image.save(output_path)
print(f"Image saved to: {output_path}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Security Guardrails in Amazon Nova
&lt;/h2&gt;

&lt;p&gt;The Nova Canvas model available through Amazon Bedrock comes with integrated security guardrails. Guardrails help evaluate user inputs and model responses based on specific policies and provide safeguards to help build generative AI applications securely. &lt;/p&gt;

&lt;p&gt;For example, if you try a non-safe prompt in Nova Canvas, the image won’t be generated, and a warning text message will be displayed as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxav1whdf09kfr1298vlw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxav1whdf09kfr1298vlw.png" alt="Image description" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Areas of Improvement
&lt;/h2&gt;

&lt;p&gt;The following are some aspects of the Amazon Nova Canvas model that need enhancement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Input text size limit (1,024 characters)&lt;/li&gt;
&lt;li&gt;Input image size limit (longest side must not exceed 4,096 pixels)&lt;/li&gt;
&lt;li&gt;No 3D image generation (currently 2D only)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;The Amazon Nova Canvas model is an excellent content generation model that can aid in prototyping, social media marketing, and advertising campaigns. It can be easily integrated into Generative AI applications. The security guardrails can ensure that the input prompt and the generated response don’t violate any safety policies. The foundation models are evolving rapidly, and the Amazon Nova Canvas model also has scope to improve. &lt;/p&gt;

&lt;h3&gt;
  
  
  References
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/exploring-creative-possibilities-a-visual-guide-to-amazon-nova-canvas/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/machine-learning/exploring-creative-possibilities-a-visual-guide-to-amazon-nova-canvas/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/nova/latest/userguide/image-generation.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/nova/latest/userguide/image-generation.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>awscommunitybuilder</category>
      <category>ai</category>
      <category>amazonnovamodel</category>
    </item>
  </channel>
</rss>
