<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Hafiz Syed Ashir Hassan</title>
    <description>The latest articles on Forem by Hafiz Syed Ashir Hassan (@ashirhs).</description>
    <link>https://forem.com/ashirhs</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1139509%2Fc1196de7-d3ea-4de0-bff7-d54ea6d4b4cd.png</url>
      <title>Forem: Hafiz Syed Ashir Hassan</title>
      <link>https://forem.com/ashirhs</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ashirhs"/>
    <language>en</language>
    <item>
      <title>Automating CloudWatch Log Analysis with Amazon Strands Agent: Meet the CloudWatch Analyzer</title>
      <dc:creator>Hafiz Syed Ashir Hassan</dc:creator>
      <pubDate>Sat, 14 Jun 2025 18:24:54 +0000</pubDate>
      <link>https://forem.com/aws-builders/automating-cloudwatch-log-analysis-with-amazon-strands-agent-meet-the-cloudwatch-analyzer-299k</link>
      <guid>https://forem.com/aws-builders/automating-cloudwatch-log-analysis-with-amazon-strands-agent-meet-the-cloudwatch-analyzer-299k</guid>
      <description>&lt;h1&gt;
  
  
  📈 The Problem: The Developer Time Drain
&lt;/h1&gt;

&lt;p&gt;Ever wonder how much time we developers spend debugging, monitoring, and resolving issues, time that could be better spent building?&lt;/p&gt;

&lt;p&gt;Developers spend a significant amount of time, anywhere from 20% to 75%, on debugging, monitoring, and resolving issues. This includes identifying, understanding, and fixing bugs in code, as well as monitoring application performance and troubleshooting issues.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Debugging:&lt;/strong&gt; This is a core part of a developer's work, involving finding and fixing errors in code. Studies suggest that developers can spend anywhere from 20% to 75% of their time on debugging.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring:&lt;/strong&gt; This involves tracking application performance and behaviour to identify potential issues before they impact users. Monitoring tools and techniques help developers proactively address problems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resolving Issues:&lt;/strong&gt; This encompasses the entire process of addressing bugs, performance issues, and other problems that arise in software. This can include debugging, but also involves communication, collaboration, and potentially working with other teams to resolve complex issues.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  🤖 What is Amazon Strands Agent?
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Open‑source, model‑first agent framework created by AWS; lives on GitHub and is installable via pip.&lt;/li&gt;
&lt;li&gt;Lets you define an agent with three artefacts only: a prompt, a model provider (Bedrock, Anthropic, Ollama, etc.), and a list of tools.&lt;/li&gt;
&lt;li&gt;Ships with a deployment toolkit for Lambda, Fargate, containers, or local dev.&lt;/li&gt;
&lt;li&gt;Key design goals: minimal boilerplate, production‑ready, pluggable LLMs, clear separation of planning vs. execution.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Strands Agents SDK in 60 seconds
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;pip install strands-agents gets you started.&lt;/li&gt;
&lt;li&gt;Create an agent in ~10 lines of code: import SDK → write prompt → register tools → run.&lt;/li&gt;
&lt;li&gt;Out‑of‑the‑box adapters for Bedrock models, Anthropic Claude, Meta Llama, and more.&lt;/li&gt;
&lt;/ul&gt;
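&lt;p&gt;The three artefacts above can be sketched as a toy loop. This is a hypothetical, stdlib-only stand-in, not the real strands API; it only mirrors the same shape: a prompt, a model provider, and a list of tools, with planning kept apart from execution.&lt;/p&gt;

```python
# Hypothetical, dependency-free sketch of the prompt + model + tools shape.
# It is NOT the real strands-agents API, just the same three artefacts.

def get_time(query: str) -> str:
    """A 'tool': any plain function the agent may call."""
    return "2025-06-14T18:24:54Z"

class ToyModel:
    """Stand-in for a model provider (Bedrock, Anthropic, Ollama, ...)."""
    def complete(self, system_prompt: str, user_prompt: str) -> str:
        # A real provider would call an LLM here.
        return f"[{system_prompt}] answer to: {user_prompt}"

class ToyAgent:
    """Planning (pick a tool) is kept separate from execution (run it)."""
    def __init__(self, model, tools, system_prompt):
        self.model = model
        self.tools = {t.__name__: t for t in tools}
        self.system_prompt = system_prompt

    def __call__(self, user_prompt: str) -> str:
        # Trivial 'planner': use a tool if its name appears in the prompt.
        for name, tool in self.tools.items():
            if name.replace("_", " ") in user_prompt.lower():
                return tool(user_prompt)
        return self.model.complete(self.system_prompt, user_prompt)

agent = ToyAgent(ToyModel(), tools=[get_time], system_prompt="Be concise.")
print(agent("What is the get time right now?"))
```

&lt;p&gt;The real SDK swaps ToyModel for a Bedrock or Anthropic provider and registers actual tools; the point here is only how little wiring the three artefacts need.&lt;/p&gt;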

&lt;h1&gt;
  
  
  👨‍💻 Agents for Amazon Bedrock vs. Strands Agents: Key Differences
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh69tdggyi8tluecyvdf0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh69tdggyi8tluecyvdf0.png" alt="Amazon Bedrock Agent vs Amazon Strands Agent" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  🖥️ Introducing the CloudWatch Analyzer
&lt;/h1&gt;

&lt;p&gt;A tool built using Amazon Strands Agent, powered by an Amazon Nova model (which can be swapped for another provider), that fetches, analyzes, and provides solutions for issues found in CloudWatch. An easy-to-use tool that monitors the logs, debugs, and suggests fixes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fecj5nqyuqnrzrme2eyuk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fecj5nqyuqnrzrme2eyuk.png" alt="Cloudwatch Analyzer" width="800" height="405"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  💡 Key Features
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Log Source Selection: Pick any CloudWatch log group, or all of them.&lt;/li&gt;
&lt;li&gt;Time Window: Specify how many hours back to scan the logs.&lt;/li&gt;
&lt;li&gt;AI‑Powered Analysis: The Strands Agent summarises logs &amp;amp; pinpoints root causes.&lt;/li&gt;
&lt;li&gt;Resolution Suggestions: Returns fixes with code snippets where applicable.&lt;/li&gt;
&lt;li&gt;Knowledge‑Base Add‑on: Optionally hook in internal docs for context-aware solutions.&lt;/li&gt;
&lt;/ul&gt;
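&lt;p&gt;As an illustration, these features map naturally onto a small request object. The names below are hypothetical, not the analyzer's actual code:&lt;/p&gt;

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnalysisRequest:
    """Illustrative shape of one CloudWatch Analyzer run (hypothetical names)."""
    log_group: str                      # Log Source Selection
    hours_back: int = 24                # Time Window
    filter_pattern: Optional[str] = None
    use_knowledge_base: bool = False    # Knowledge-Base Add-on

    def describe(self) -> str:
        kb = "with" if self.use_knowledge_base else "without"
        return (f"Analyze '{self.log_group}' for the past "
                f"{self.hours_back}h {kb} knowledge base")

req = AnalysisRequest(log_group="/aws/lambda/checkout", hours_back=6)
print(req.describe())
```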

&lt;h1&gt;
  
  
  📐 Architecture Walk‑through
&lt;/h1&gt;

&lt;p&gt;1- The user selects a log group and time range in the UI.&lt;br&gt;
2- Logs are fetched via Boto3 using the tools provided to the agent.&lt;br&gt;
3- The Amazon Strands Agent processes the logs.&lt;/p&gt;

&lt;p&gt;4- The agent returns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;li&gt;Error insight&lt;/li&gt;
&lt;li&gt;Recommended fix&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;5- (Optional) Knowledge Base is queried for tailored help.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g4z6ph55tb8m4ky9sgc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6g4z6ph55tb8m4ky9sgc.png" alt="Flow Architecture" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;
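&lt;p&gt;The flow in the diagram can be sketched end-to-end with stubbed stages (purely illustrative; in the real tool, step 2 is Boto3 and step 3 is the Strands Agent):&lt;/p&gt;

```python
def fetch_logs(log_group: str, hours: int) -> list:
    # Stub for step 2 (Boto3 in the real tool).
    return [f"{log_group}: ERROR timeout calling payments API"]

def analyze(log_lines: list) -> dict:
    # Stub for step 3; the Strands Agent does this in the real tool.
    errors = [line for line in log_lines if "ERROR" in line]
    return {
        "summary": f"{len(log_lines)} lines, {len(errors)} errors",  # step 4: Summary
        "error_insight": errors,                                     # step 4: Error insight
        "recommended_fix": "Increase the payments API timeout",      # step 4: Recommended fix
    }

def run_pipeline(log_group: str, hours: int, knowledge_base=None) -> dict:
    result = analyze(fetch_logs(log_group, hours))
    if knowledge_base:  # step 5, optional
        result["kb_articles"] = knowledge_base.get(result["recommended_fix"], [])
    return result

print(run_pipeline("/aws/lambda/checkout", hours=6))
```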
&lt;h4&gt;
  
  
  Video Link:
&lt;/h4&gt;

&lt;p&gt;Click here to watch the Amazon CloudWatch Analyzer walkthrough.&lt;/p&gt;
&lt;h1&gt;
  
  
  ❓ Why Strands Agent Was the Perfect Fit
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Local prototyping: Iterate without redeploying to the console.&lt;/li&gt;
&lt;li&gt;Tool abstraction layer: Lets you bolt on the Knowledge Base search later without rewriting prompts.&lt;/li&gt;
&lt;li&gt;Multi‑model freedom: Lets you benchmark Claude‑3, Titan‑Text, and Gemini side‑by‑side.&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  🛠️ Tech Stack
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Python&lt;/li&gt;
&lt;li&gt;Boto3 for CloudWatch log access&lt;/li&gt;
&lt;li&gt;Strands Agents SDK with Amazon Bedrock&lt;/li&gt;
&lt;li&gt;Amazon Bedrock Knowledge Bases&lt;/li&gt;
&lt;li&gt;Streamlit for UI&lt;/li&gt;
&lt;/ul&gt;
&lt;h1&gt;
  
  
  ⚙️ Code Highlights
&lt;/h1&gt;

&lt;p&gt;Create Agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    agent = Agent(
        model=model,
        tools=tools,
        system_prompt=get_system_prompt(use_knowledge_base)
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fetch Logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = self.client.describe_log_streams(
                logGroupName=log_group_name,
                limit=1
            )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
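&lt;p&gt;describe_log_streams above checks that a stream exists; pulling a full time window of events typically means paginating. A minimal sketch, with a stub standing in for the Boto3 CloudWatch Logs client so it runs anywhere:&lt;/p&gt;

```python
def fetch_all_events(client, log_group_name: str, start_time_ms: int) -> list:
    """Drain filter_log_events pages until nextToken disappears."""
    events, token = [], None
    while True:
        kwargs = {"logGroupName": log_group_name, "startTime": start_time_ms}
        if token:
            kwargs["nextToken"] = token
        page = client.filter_log_events(**kwargs)
        events.extend(page.get("events", []))
        token = page.get("nextToken")
        if not token:
            return events

# Stub standing in for boto3.client("logs"), so the sketch runs anywhere.
class StubLogsClient:
    def __init__(self, pages):
        self._pages = list(pages)
    def filter_log_events(self, **kwargs):
        return self._pages.pop(0)

pages = [
    {"events": [{"message": "ERROR a"}], "nextToken": "t1"},
    {"events": [{"message": "ERROR b"}]},
]
print(len(fetch_all_events(StubLogsClient(pages), "/aws/lambda/x", 0)))  # 2 events
```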



&lt;p&gt;Display Result:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;prompt = f"""
            Get logs from the CloudWatch log group '{log_group}' for the past {hours} hours
            {f"with filter pattern '{filter_pattern}'" if filter_pattern else ""}.

            Then analyze these logs to identify errors and issues.

            For each identified issue:
            1. Provide a clear description of the problem
            2. Assess the severity (Critical, High, Medium, Low)
            3. Recommend solutions to fix the issue
            {"4. Reference relevant knowledge base articles if available" if use_kb else ""}

            Organize your response in a clear, structured format.
            """

response = agent(prompt)
print(response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  🚀 Github Link:
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/ashirsyed/cloudwatch-logs-analyzer/" rel="noopener noreferrer"&gt;https://github.com/ashirsyed/cloudwatch-logs-analyzer/&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  🧠 Resources:
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/strands-agents" rel="noopener noreferrer"&gt;https://github.com/strands-agents&lt;/a&gt;&lt;br&gt;
&lt;a href="https://strandsagents.com/latest/" rel="noopener noreferrer"&gt;https://strandsagents.com/latest/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/opensource/introducing-strands-agents-an-open-source-ai-agents-sdk/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/opensource/introducing-strands-agents-an-open-source-ai-agents-sdk/&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  🙌 Impact &amp;amp; Use Cases
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Saves developers' time&lt;/li&gt;
&lt;li&gt;Faster incident resolution&lt;/li&gt;
&lt;li&gt;Onboarding support: new devs get guided suggestions&lt;/li&gt;
&lt;li&gt;Can be plugged into existing dashboards or tools (Slack, Teams)&lt;/li&gt;
&lt;li&gt;Reduced Mean‑Time‑To‑Resolution by 40%&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  🔚 Conclusion
&lt;/h1&gt;

&lt;p&gt;If any of this resonates, give the Strands Agent a try, build your own analyzer, and explore how AI can hand developers their time back.&lt;/p&gt;

</description>
      <category>cloudwatch</category>
      <category>amazonstrands</category>
      <category>agents</category>
      <category>ai</category>
    </item>
    <item>
      <title>Prompt Engineering with Generative AI</title>
      <dc:creator>Hafiz Syed Ashir Hassan</dc:creator>
      <pubDate>Sun, 15 Sep 2024 16:51:25 +0000</pubDate>
      <link>https://forem.com/aws-builders/prompt-engineering-with-generative-ai-1lij</link>
      <guid>https://forem.com/aws-builders/prompt-engineering-with-generative-ai-1lij</guid>
      <description>&lt;p&gt;See any GenAI model as a genius boy who is dump to answer. He has all the knowledge but don't know what and how to answer. The tool to fetch your answer in most accurate and well structured way is called Prompt Engineering.&lt;/p&gt;

&lt;p&gt;Here are three techniques for effective prompt engineering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clear Prompting:&lt;/strong&gt;&lt;br&gt;
We keep a lot in our heads that we never say out loud because it is 'common sense.' As a wise man said, "common sense is not common", so we need to be clear, direct, well structured, and thorough when writing prompts.&lt;br&gt;
The example below shows that when we ask a vague question, the model may get confused or give more detail than needed. But when we are direct, it gives a better answer.&lt;/p&gt;

&lt;p&gt;Indirect Prompt:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw35wmn5c2z5d5mfel1h2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw35wmn5c2z5d5mfel1h2.png" alt="Image description" width="800" height="653"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Direct Prompt:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0c5n7poy2z3rybg3z42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0c5n7poy2z3rybg3z42.png" alt="Image description" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ask to think Step-By-Step:&lt;/strong&gt;&lt;br&gt;
This method, also referred to as chain-of-thought (CoT) prompting, can greatly enhance the accuracy and depth of a model's responses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgj2f50f62tlboptdh2dk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgj2f50f62tlboptdh2dk.png" alt="Image description" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now ask to think step by step:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcsb5yv4h6soyn9wesei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcsb5yv4h6soyn9wesei.png" alt="Image description" width="800" height="711"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduce Hallucination:&lt;/strong&gt;&lt;br&gt;
To reduce hallucinations, you can follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ask the model to answer only if it is confident and sure about the response.&lt;/li&gt;
&lt;li&gt;Give the model space to think before responding by encouraging step-by-step reasoning.&lt;/li&gt;
&lt;li&gt;Ask the model to say "I don't know the answer" if it is not sure or doesn't have the answer.&lt;/li&gt;
&lt;/ol&gt;
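&lt;p&gt;As a sketch, the three steps fold into one reusable prompt wrapper (an illustrative helper, not from any library):&lt;/p&gt;

```python
def hedged_prompt(question: str) -> str:
    """Wrap a question with the three hallucination-reducing instructions."""
    return (
        "Answer only if you are confident in your response. "        # step 1
        "Think through the problem step by step before answering. "  # step 2
        "If you are not sure, say 'I don't know the answer.'\n\n"    # step 3
        f"Question: {question}"
    )

print(hedged_prompt("Who won the 1994 chess world championship?"))
```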

</description>
      <category>generativeai</category>
      <category>genai</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>From Raw to Refined: Using AWS Glue to Curate Data in S3 for Athena Queries</title>
      <dc:creator>Hafiz Syed Ashir Hassan</dc:creator>
      <pubDate>Sun, 15 Sep 2024 15:46:32 +0000</pubDate>
      <link>https://forem.com/aws-builders/from-raw-to-refined-using-aws-glue-to-curate-data-in-s3-for-athena-queries-j9b</link>
      <guid>https://forem.com/aws-builders/from-raw-to-refined-using-aws-glue-to-curate-data-in-s3-for-athena-queries-j9b</guid>
      <description>&lt;p&gt;Being a data engineer is fun in own ways. Thinking of a different, easiest and optimised way to solve a problem is 90% of the effort. Few years back, when I and AWS Glue was young, created a whole ETL pipeline where a single trigger (manual or automatic) can fetch the data, curate and ready to use in just few minutes.&lt;/p&gt;

&lt;p&gt;Let's start with the architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9veqyppeeiz27d619pn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz9veqyppeeiz27d619pn.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The invocation can be manual or on an automatic schedule; it executes a Glue job named Driver. Driver's main responsibility is to check whether any other job is running, fetch the requirements and configuration, and pass them to another Glue job called Controller.&lt;/li&gt;
&lt;li&gt;Controller owns the whole execution: it tracks whether the pipeline ends successfully or fails and whether a retry is needed, and it triggers each worker Glue job when the previous one finishes.&lt;/li&gt;
&lt;li&gt;Amazon RDS keeps a record of every step, which in short is our logging database. If a retry is needed, the latest state is fetched from RDS so we know where the job should start again.&lt;/li&gt;
&lt;li&gt;The 1st worker job, Fetch CSV, fetches the data in CSV format from the source, which can be RDS, S3, Data Streams, or anything else, and stores it in S3.&lt;/li&gt;
&lt;li&gt;The 2nd worker job, 'Convert to Parquet', is triggered when the 1st completes; it fetches the CSV files from S3 and converts them to Parquet, which is lighter, easier to curate, and smaller on disk.&lt;/li&gt;
&lt;li&gt;The 3rd worker, 'Curate Data', executes after the 2nd finishes. It fetches the Parquet data from S3, curates it with a Spark job, and stores it in the final S3 bucket.&lt;/li&gt;
&lt;li&gt;Meanwhile, Glue Crawlers run over S3 to collect the metadata for Athena.&lt;/li&gt;
&lt;li&gt;Lastly, Athena is used to query the data in S3.&lt;/li&gt;
&lt;/ol&gt;
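&lt;p&gt;The retry behaviour in step 3 can be sketched with a plain dict standing in for the RDS logging table (illustrative only): the Controller reads the last completed step and resumes from the next worker.&lt;/p&gt;

```python
WORKERS = ["fetch_csv", "convert_to_parquet", "curate_data"]

# Stand-in for the RDS logging table: run_id mapped to last completed step.
run_log = {}

def controller(run_id: str) -> list:
    """Execute (or resume) the worker chain, checkpointing each step."""
    done = run_log.get(run_id)
    start = WORKERS.index(done) + 1 if done in WORKERS else 0
    executed = []
    for step in WORKERS[start:]:
        executed.append(step)   # a real Controller starts a Glue job here
        run_log[run_id] = step  # checkpoint to 'RDS'
    return executed

print(controller("run-1"))          # fresh run: full pipeline
run_log["run-2"] = "fetch_csv"      # simulate a failure after step 1
print(controller("run-2"))          # resumes from step 2
```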

&lt;p&gt;This is easy to implement and maintain. It can work on structured or unstructured data.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Reduce ETL Time by Converting Sequential Code to Parallel AWS Lambda Execution</title>
      <dc:creator>Hafiz Syed Ashir Hassan</dc:creator>
      <pubDate>Sun, 15 Sep 2024 13:29:52 +0000</pubDate>
      <link>https://forem.com/aws-builders/reduce-etl-time-by-converting-sequential-code-to-parallel-aws-lambda-execution-25bp</link>
      <guid>https://forem.com/aws-builders/reduce-etl-time-by-converting-sequential-code-to-parallel-aws-lambda-execution-25bp</guid>
      <description>&lt;p&gt;Few years back, when I was quite fresh in the cloud world, I was given an ETL problem that the current code is written in Java that executes on linux server and the whole ETL time was more than 8 hours minimum. As a cloud enthusiastic, my challenge was to reduce the ETL time.&lt;/p&gt;

&lt;p&gt;The code was using the Google AdWords API to extract the data and store it on servers, from where the data was sent to a data warehouse. The whole process used the Pentaho tool to perform the ETL.&lt;br&gt;
For a quick resolution, I had 2 options: AWS Lambda or AWS Glue. I chose AWS Lambda because the ETL time per Google AdWords account would never exceed 10 minutes, even in the worst case.&lt;/p&gt;

&lt;p&gt;The architecture is below: &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodsaxjkt35ina8hmr2qr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodsaxjkt35ina8hmr2qr.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The Pentaho tool invokes the AWS Lambda function named 'data-sync-automate' with an accountID as payload.&lt;/li&gt;
&lt;li&gt;That function invokes 10 other AWS Lambdas in parallel, each associated with one Google Ads metric; each fetches its records and stores them in S3.&lt;/li&gt;
&lt;li&gt;Once everything is fetched, 'data-sync-automate' sends a message to SQS.&lt;/li&gt;
&lt;li&gt;Pentaho picks up the message and downloads the data from S3 for that particular accountID.&lt;/li&gt;
&lt;/ol&gt;
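&lt;p&gt;The fan-out in step 2 is where the speed-up comes from. A sketch with threads standing in for the 10 parallel Lambda invocations and a queue.Queue standing in for SQS (all names hypothetical):&lt;/p&gt;

```python
import queue
from concurrent.futures import ThreadPoolExecutor

METRICS = [f"metric_{i}" for i in range(10)]
sqs = queue.Queue()  # stand-in for the real SQS queue

def worker_lambda(account_id: str, metric: str) -> str:
    # A real worker fetches this metric from the Google Ads API into S3.
    return f"s3://bucket/{account_id}/{metric}.csv"

def data_sync_automate(account_id: str) -> None:
    """Fan out one worker per metric, then signal completion via 'SQS'."""
    with ThreadPoolExecutor(max_workers=10) as pool:
        keys = list(pool.map(lambda m: worker_lambda(account_id, m), METRICS))
    sqs.put({"accountID": account_id, "objects": keys})

data_sync_automate("acct-42")
print(sqs.get()["accountID"])
```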

&lt;p&gt;The whole ETL time was reduced from 8 hours to less than 50 minutes.&lt;/p&gt;

&lt;p&gt;Below is an example of how to fetch a Google AdWords keyword report:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import boto3
import json
from google.ads.google_ads.client import GoogleAdsClient
from google.ads.google_ads.errors import GoogleAdsException

# Initialize S3 client
s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Environment variables for configuration
    developer_token = os.environ['GOOGLE_ADS_DEVELOPER_TOKEN']
    client_id = os.environ['GOOGLE_ADS_CLIENT_ID']
    client_secret = os.environ['GOOGLE_ADS_CLIENT_SECRET']
    refresh_token = os.environ['GOOGLE_ADS_REFRESH_TOKEN']
    login_customer_id = os.environ['GOOGLE_ADS_LOGIN_CUSTOMER_ID']
    s3_bucket_name = os.environ['S3_BUCKET_NAME']

    # Initialize Google Ads client
    client = GoogleAdsClient.load_from_dict({
        'developer_token': developer_token,
        'client_id': client_id,
        'client_secret': client_secret,
        'refresh_token': refresh_token,
        'login_customer_id': login_customer_id,
        'use_proto_plus': True
    })

    # Define the customer ID
    customer_id = 'YOUR_CUSTOMER_ID'  # Replace with the correct customer ID

    # Define the Google Ads Query Language (GAQL) query for keywords report
    query = """
        SELECT
          campaign.id,
          ad_group.id,
          ad_group_criterion.keyword.text,
          ad_group_criterion.keyword.match_type,
          metrics.impressions,
          metrics.clicks,
          metrics.cost_micros
        FROM
          keyword_view
        WHERE
          segments.date DURING LAST_30_DAYS
        LIMIT 100
    """

    try:
        # Fetch the keywords report data
        response_data = fetch_keywords_report(client, customer_id, query)

        # Upload the data to S3
        upload_to_s3(s3_bucket_name, 'google_ads_keywords_report.json', json.dumps(response_data))

        return {
            'statusCode': 200,
            'body': json.dumps('Keywords report fetched and stored in S3 successfully!')
        }
    except GoogleAdsException as ex:
        return {
            'statusCode': 500,
            'body': f"An error occurred: {ex}"
        }

def fetch_keywords_report(client, customer_id, query):
    ga_service = client.get_service("GoogleAdsService")
    response = ga_service.search(customer_id=customer_id, query=query)
    results = []

    # Process the response
    for row in response:
        results.append({
            'campaign_id': row.campaign.id,
            'ad_group_id': row.ad_group.id,
            'keyword_text': row.ad_group_criterion.keyword.text,
            'match_type': row.ad_group_criterion.keyword.match_type.name,
            'impressions': row.metrics.impressions,
            'clicks': row.metrics.clicks,
            'cost_micros': row.metrics.cost_micros
        })

    return results

def upload_to_s3(bucket_name, file_name, data):
    s3_client.put_object(
        Bucket=bucket_name,
        Key=file_name,
        Body=data
    )

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>lambda</category>
      <category>aws</category>
      <category>etl</category>
    </item>
    <item>
      <title>Amazon Bedrock: Anthropic’s model, Claude 2.1 - The Key Difference</title>
      <dc:creator>Hafiz Syed Ashir Hassan</dc:creator>
      <pubDate>Tue, 30 Jan 2024 19:55:00 +0000</pubDate>
      <link>https://forem.com/aws-builders/amazon-bedrock-anthropics-model-claude-21-the-key-difference-ldn</link>
      <guid>https://forem.com/aws-builders/amazon-bedrock-anthropics-model-claude-21-the-key-difference-ldn</guid>
      <description>&lt;p&gt;Anthropic’s model, Claude 2.1 was introduced in November 2023. It targets the enterprise level applications that can be used commercially.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Differences and enhancements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supports 200,000 tokens, roughly 150,000 words or around 500 pages of documents, whereas Claude 2.0 supports 100,000 tokens&lt;/li&gt;
&lt;li&gt;Better at summarising, answering Q&amp;amp;A, forecasting trends across various factors, and running stats and comparisons over multiple documents&lt;/li&gt;
&lt;li&gt;50% fewer hallucinations&lt;/li&gt;
&lt;li&gt;30% reduction in incorrect answers&lt;/li&gt;
&lt;li&gt;3–4 times lower rate of mistakes when comparing documents&lt;/li&gt;
&lt;li&gt;Can integrate with different APIs&lt;/li&gt;
&lt;/ul&gt;
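&lt;p&gt;The context-window figures hang together arithmetically, assuming the common rules of thumb of roughly 0.75 words per token and about 300 words per page:&lt;/p&gt;

```python
tokens = 200_000
words_per_token = 0.75      # assumption: common English rule of thumb
words = int(tokens * words_per_token)
pages = words // 300        # assumption: ~300 words per page
print(words, pages)         # 150000 words, 500 pages, matching the figures above
```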

&lt;p&gt;The model is available in AWS Bedrock in the US East (N. Virginia) and US West (Oregon) Regions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Making a request using AWS Bedrock:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock(
    aws_access_key="&amp;lt;aws access key&amp;gt;",
    aws_secret_key="&amp;lt;aws secret key&amp;gt;",
    aws_session_token="&amp;lt;aws_session_token&amp;gt;",
    aws_region="us-east-1",  # N. Virginia, one of the Regions where Claude 2.1 is available
)

prompt_completion = client.completions.create(
    model="anthropic.claude-v2:1",
    max_tokens_to_sample=256,
    prompt=f"{anthropic_bedrock.HUMAN_PROMPT} Tell me top 10 happenings on 10th January in the history of the world! {anthropic_bedrock.AI_PROMPT}",
)
print(prompt_completion.completion)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>generativeai</category>
      <category>aws</category>
      <category>bedrock</category>
    </item>
  </channel>
</rss>
