<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ayman Aly Mahmoud</title>
    <description>The latest articles on Forem by Ayman Aly Mahmoud (@aymanmahmoud33).</description>
    <link>https://forem.com/aymanmahmoud33</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F891974%2F2d4ab269-8d36-40bb-a6a5-df15e0f582b0.jpg</url>
      <title>Forem: Ayman Aly Mahmoud</title>
      <link>https://forem.com/aymanmahmoud33</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/aymanmahmoud33"/>
    <language>en</language>
    <item>
      <title>Demystifying API Gateway integration types</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Wed, 14 Jan 2026 12:25:14 +0000</pubDate>
      <link>https://forem.com/aws-builders/demystifying-api-integration-types-2eia</link>
      <guid>https://forem.com/aws-builders/demystifying-api-integration-types-2eia</guid>
      <description>&lt;p&gt;&lt;strong&gt;API gateway&lt;/strong&gt; integrations connect the gateway to various backend services, such as Lambda functions, HTTP/S endpoints, or other cloud services. The specific integration types and configurations depend on the chosen API gateway provider and the target backend. &lt;br&gt;
In this article I will explain the integration types and when to use each one.&lt;/p&gt;

&lt;p&gt;I will divide them into three use cases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The "Builder" tools (Lambda &amp;amp; Mock integrations)&lt;/strong&gt;
This one focuses on how to connect to serverless functions or create a fake backend for testing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The "Connector" Tools (HTTP &amp;amp; Private)&lt;/strong&gt;
Here the focus is on how to talk to other public websites or secure services hidden inside a private network (VPC).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The "Power User" Tool (AWS Service)&lt;/strong&gt; 
Focus on the advanced method of connecting directly to other AWS services (like DynamoDB) without writing any code.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's jump into the &lt;strong&gt;Builder Tools&lt;/strong&gt;.&lt;br&gt;
In this category we integrate with an AWS Lambda function, or sometimes just use a Mock integration. With a Lambda integration, your code executes the logic: it talks to databases or processes data. With a Mock integration, the API simply returns static, hardcoded responses without running any backend code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Lambda Integration&lt;/strong&gt;&lt;br&gt;
This is the most popular integration. When a user hits your API endpoint (e.g., &lt;code&gt;GET /users&lt;/code&gt;), API Gateway triggers a specific AWS Lambda function.&lt;br&gt;
The crucial decision is choosing between &lt;em&gt;Proxy&lt;/em&gt; and &lt;em&gt;Non-Proxy&lt;/em&gt;; this is the single most important setting to understand here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4d6apbma1madfvdhxi0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4d6apbma1madfvdhxi0.jpg" alt="Proxy vs non-proxy integration" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Proxy Integration (The Pipe)&lt;/th&gt;
&lt;th&gt;Non-Proxy Integration (The Prep Chef)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;How it works&lt;/td&gt;
&lt;td&gt;API Gateway passes the entire raw request (headers, body, query params) directly to your Lambda function.&lt;/td&gt;
&lt;td&gt;API Gateway transforms the request before sending it. It can filter data, rename parameters, or change JSON to XML.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Who does the work?&lt;/td&gt;
&lt;td&gt;Your Lambda code must parse the request and format the response perfectly.&lt;/td&gt;
&lt;td&gt;API Gateway handles the messy parsing; your Lambda just receives clean data.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for...&lt;/td&gt;
&lt;td&gt;Modern, standard APIs where you want full control in your code.&lt;/td&gt;
&lt;td&gt;Legacy systems or when you need to clean up data before your code sees it.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
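
&lt;p&gt;To make the proxy side concrete, here is a minimal sketch of a proxy-integration Lambda handler in Python (the field names shown are the standard proxy event shape; the route and response content are illustrative, not from a specific project):&lt;/p&gt;

```python
import json

# Minimal sketch of a handler behind a Lambda *proxy* integration.
# API Gateway passes the raw request through, so the code must parse
# it and return the exact response shape API Gateway expects.
def handler(event, context):
    # Query parameters arrive as a dict (or None when absent).
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # The response must include statusCode, headers, and a *string* body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "hello, " + name}),
    }
```

&lt;p&gt;With a non-proxy integration, by contrast, mapping templates would hand the function pre-cleaned input and reshape its output for you.&lt;/p&gt;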

&lt;p&gt;&lt;strong&gt;2. Mock Integration (The Placeholder)&lt;/strong&gt;&lt;br&gt;
This is exactly what it sounds like. You configure the API to say: "If someone calls this endpoint, just send back this specific JSON."&lt;br&gt;
Why would you use this?&lt;br&gt;
A Mock returns fake data, so the frontend team can keep working while the backend is still being developed.&lt;br&gt;
It also lets you test how your app handles a "500 Server Error" without actually breaking your server.&lt;/p&gt;
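
&lt;p&gt;As a sketch of what this looks like in practice, a mock integration can be declared with the &lt;code&gt;x-amazon-apigateway-integration&lt;/code&gt; OpenAPI extension; the static response below is purely illustrative:&lt;/p&gt;

```json
{
  "x-amazon-apigateway-integration": {
    "type": "mock",
    "requestTemplates": {
      "application/json": "{\"statusCode\": 200}"
    },
    "responses": {
      "default": {
        "statusCode": "200",
        "responseTemplates": {
          "application/json": "{\"status\": \"ok\", \"users\": []}"
        }
      }
    }
  }
}
```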

&lt;p&gt;&lt;strong&gt;The "Connector" Tools (HTTP &amp;amp; Private)&lt;/strong&gt;&lt;br&gt;
Now, let's look at how we connect to services that already exist somewhere else (not Lambda functions you just wrote).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. HTTP Integration (The Messenger)&lt;/strong&gt;&lt;br&gt;
Think of this as a "pass-through." You already have a web application running somewhere else (like a legacy server or a third-party API like Google Maps). You want API Gateway to sit in front of it.&lt;br&gt;
The API Gateway receives the request and forwards it straight to another URL.&lt;br&gt;
Use this if you are migrating an old API to AWS. You can put API Gateway in front of your old server. To the user, it looks like a modern AWS API, but behind the scenes, it's still talking to the old server until you're ready to upgrade it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzltiu1po7rgcnxd668r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjzltiu1po7rgcnxd668r.jpg" alt="HTTP integration" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Private Integration (The Secure Tunnel)&lt;/strong&gt;&lt;br&gt;
This is for when your backend is &lt;em&gt;hidden&lt;/em&gt; and not accessible via the public internet.&lt;br&gt;
You may have a database or service running on an EC2 instance inside an &lt;em&gt;Amazon VPC&lt;/em&gt;. It is secure with no public IP address.&lt;br&gt;
Since it's private, API Gateway can't normally reach it.&lt;br&gt;
Private Integration uses a component called &lt;em&gt;VPC Link&lt;/em&gt; to create a secure tunnel into your private network to talk to that hidden service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84ud0zijyqn3zxavhsce.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F84ud0zijyqn3zxavhsce.jpg" alt="Private integration" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The "Power User" Tool (AWS Service)&lt;/strong&gt;&lt;br&gt;
This is the final and often most misunderstood integration type.&lt;br&gt;
Most people think: "If I want to save data to a database, I need a Lambda function to do it." &lt;em&gt;AWS Service Integration&lt;/em&gt; says: "No, you don't."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. AWS Service Integration (The Shortcut)&lt;/strong&gt;&lt;br&gt;
This allows API Gateway to talk directly to other AWS services like DynamoDB, SQS, SNS, or Kinesis; there is no need for a Lambda function.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How it works: API Gateway acts as the "client." When a request comes in, API Gateway translates it into the specific format that the AWS service (like DynamoDB) expects and sends it.&lt;/li&gt;
&lt;li&gt;The "Cost": You have to set up Mapping Templates (using a language called VTL). You have to explicitly tell API Gateway: "Take the 'user_id' from the URL and put it into a DynamoDB &lt;code&gt;PutItem&lt;/code&gt; command."&lt;/li&gt;
&lt;/ul&gt;
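
&lt;p&gt;For a flavor of what that mapping template looks like, here is a sketch of a VTL request template for a direct DynamoDB &lt;code&gt;PutItem&lt;/code&gt; integration (the table and attribute names are hypothetical):&lt;/p&gt;

```json
{
  "TableName": "ContactRequests",
  "Item": {
    "user_id": { "S": "$input.params('user_id')" },
    "message": { "S": "$util.escapeJavaScript($input.path('$.message'))" }
  }
}
```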

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw3lraa057r77p1ncbaz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuw3lraa057r77p1ncbaz.jpg" alt="AWS service integration" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Why not just use Lambda?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You don't pay for Lambda execution time.&lt;/li&gt;
&lt;li&gt;One less hop means lower latency.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;A common use case is a high-traffic "Contact Us" form on a website.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Old Way: API GW → Lambda → SQS Queue.&lt;/li&gt;
&lt;li&gt;The Better Way: API GW → SQS Queue.&lt;/li&gt;
&lt;/ul&gt;
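
&lt;p&gt;In the better way, API Gateway itself builds the SQS call. A sketch of the integration request template (with &lt;code&gt;Content-Type&lt;/code&gt; set to &lt;code&gt;application/x-www-form-urlencoded&lt;/code&gt;; the execution role must also allow &lt;code&gt;sqs:SendMessage&lt;/code&gt;):&lt;/p&gt;

```
Action=SendMessage&amp;amp;MessageBody=$util.urlEncode($input.body)
```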

&lt;p&gt;&lt;strong&gt;Summary of the five integration types:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Integration Type&lt;/th&gt;
&lt;th&gt;Best Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Lambda&lt;/td&gt;
&lt;td&gt;Running custom business logic or calculations.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HTTP&lt;/td&gt;
&lt;td&gt;Proxying to existing web apps or 3rd party APIs.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mock&lt;/td&gt;
&lt;td&gt;Testing, unblocking front-end teams, or simulating error responses.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Private&lt;/td&gt;
&lt;td&gt;Accessing internal/private resources inside a VPC.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;AWS Service&lt;/td&gt;
&lt;td&gt;High-performance, direct actions without code.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;You have now seen all five API Gateway integration types. Are there specific API Gateway topics you'd like me to cover in more detail? Let me know in the comments!&lt;/p&gt;

</description>
      <category>apigateway</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Would Lambda Managed Instances reduce your cost?</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Thu, 08 Jan 2026 12:04:46 +0000</pubDate>
      <link>https://forem.com/aws-builders/would-lambda-managed-instance-reduce-your-cost-36gj</link>
      <guid>https://forem.com/aws-builders/would-lambda-managed-instance-reduce-your-cost-36gj</guid>
      <description>&lt;p&gt;Since AWS pioneered Lambda, the decision between EC2 and Lambda was about a simple trade-off: Flexibility &amp;amp; Cost (EC2) vs. Simplicity &amp;amp; Speed (Lambda).&lt;br&gt;
If you had a high-throughput, steady-state workload, you likely had to use containers or EC2 to save money, sacrificing the features of Lambda and the serverless architectures.&lt;br&gt;
That era ends now. With the launch of AWS Lambda Managed Instances, AWS has given us the missing link: the ability to run Lambda functions on specific EC2 infrastructure that we select, but they manage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are Lambda Managed Instances?&lt;/strong&gt;&lt;br&gt;
This is a new deployment model where your function code runs on a fleet of EC2 instances provisioned for your account.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What has changed?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Pricing Models: You can apply Compute Savings Plans and Reserved Instances to your Lambda workloads, unlocking discounts of up to 72%.&lt;/li&gt;
&lt;li&gt;No Duration Charges: You stop paying for GB-seconds; instead, you pay for the underlying instance capacity plus a management fee (a 15% premium on the EC2 On-Demand instance price).&lt;/li&gt;
&lt;li&gt;Multi-Concurrency: Unlike standard Lambda, where 1 event = 1 invocation, Managed Instances support true multi-threading. A single instance can handle multiple concurrent requests, which maximizes CPU utilization.&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Let’s look at a practical scenario where this feature can reduce cost while keeping Lambda’s features.&lt;br&gt;
You run a Real-Time Log Ingestion Service.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The traffic is a steady stream of data, 24/7.&lt;/li&gt;
&lt;li&gt;The volume is 50 Million requests per month.&lt;/li&gt;
&lt;li&gt;I/O-heavy workload (validating JSON, writing to Kinesis/DynamoDB).&lt;/li&gt;
&lt;li&gt;The execution time: Average is 200ms.&lt;/li&gt;
&lt;li&gt;The required memory is 1024 MB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Option A: Standard AWS Lambda&lt;/strong&gt;&lt;br&gt;
In the standard model, you pay for every millisecond the code runs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requests: 50,000,000&lt;/li&gt;
&lt;li&gt;Duration: 200ms @ 1024MB&lt;/li&gt;
&lt;li&gt;Compute Cost: ~$166.00 (approx. based on standard x86 pricing)&lt;/li&gt;
&lt;li&gt;Request Cost: $10.00&lt;/li&gt;
&lt;li&gt;Total Monthly Cost: &lt;strong&gt;~$176.00&lt;/strong&gt;
While $176 isn't huge, imagine this at enterprise scale (500M or 5B requests). The linear scaling hurts.&lt;/li&gt;
&lt;/ul&gt;
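
&lt;p&gt;The arithmetic behind those numbers can be checked with a few lines of Python (the unit prices are the public us-east-1 x86 rates at the time of writing; treat them as illustrative assumptions):&lt;/p&gt;

```python
# Re-creating the standard Lambda estimate above.
GB_SECOND_PRICE = 0.0000166667     # USD per GB-second of duration
PRICE_PER_MILLION_REQUESTS = 0.20  # USD

requests = 50_000_000
duration_s = 0.2   # 200 ms average execution
memory_gb = 1.0    # 1024 MB

compute_cost = requests * duration_s * memory_gb * GB_SECOND_PRICE
request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
total = compute_cost + request_cost

print(f"compute ~${compute_cost:.2f}, requests ${request_cost:.2f}, total ~${total:.2f}")
```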

&lt;p&gt;&lt;strong&gt;Option B: Lambda Managed Instances&lt;/strong&gt;&lt;br&gt;
Now, let's switch to Managed Instances. Since the workload is I/O bound, we can leverage multi-concurrency. We don't need a new container for every request; we just need enough threads.&lt;br&gt;
We choose 2x c7g.large (Graviton) instances to handle the baseline load with high availability.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instance Cost: c7g.large (approx $0.07/hr On-Demand).

&lt;ul&gt;
&lt;li&gt;$0.07 x 2 instances x 730 hours = $102.20&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Management Fee: AWS charges a ~15% fee on the On-Demand instance price for managing the fleet.

&lt;ul&gt;
&lt;li&gt;15% of $102.20 = $15.33&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Request Cost: Standard $0.20 per million.

&lt;ul&gt;
&lt;li&gt;50M x $0.20 = $10.00&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Total Monthly Cost (On-Demand): &lt;strong&gt;$127.53&lt;/strong&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Wait, it gets better.&lt;br&gt;
Because these are standard EC2 instances under the hood, we can apply a Compute Savings Plan (1-year, No Upfront). Graviton instances often see ~30-40% savings here.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Discounted Instance Cost: ~$65.00&lt;/li&gt;
&lt;li&gt;Management Fee: Remains ~$15.33.&lt;/li&gt;
&lt;li&gt;Request Cost: $10.00&lt;/li&gt;
&lt;li&gt;Total Monthly Cost (Savings Plan): &lt;strong&gt;~$90.33&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;
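
&lt;p&gt;The same back-of-the-envelope check for Option B (the hourly price, 15% fee, and Savings Plan figure are the assumptions used above, not official quotes):&lt;/p&gt;

```python
# Re-creating the Managed Instances estimate above.
hourly_price = 0.07   # c7g.large On-Demand, approx USD/hour
instances = 2
hours_per_month = 730

instance_cost = hourly_price * instances * hours_per_month   # about $102.20
management_fee = 0.15 * instance_cost                        # about $15.33
request_cost = 50 * 0.20          # 50M requests at $0.20 per million
on_demand_total = instance_cost + management_fee + request_cost

# With a 1-year, No-Upfront Compute Savings Plan the instance cost
# drops to roughly $65; the fee stays pegged to the On-Demand price.
savings_plan_total = 65.00 + management_fee + request_cost

print(f"on-demand ~${on_demand_total:.2f}, with savings plan ~${savings_plan_total:.2f}")
```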

&lt;p&gt;&lt;strong&gt;The Result&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standard Lambda: $176.00&lt;/li&gt;
&lt;li&gt;Managed Instances (Savings Plan): $90.33&lt;/li&gt;
&lt;li&gt;Total Savings: ~48%&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When to use which?&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Standard Lambda&lt;/th&gt;
&lt;th&gt;Lambda Managed Instances&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Traffic Pattern&lt;/td&gt;
&lt;td&gt;Spiky, Unpredictable, "Scale to Zero"&lt;/td&gt;
&lt;td&gt;Steady-state, Predictable, High-Volume&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Billing&lt;/td&gt;
&lt;td&gt;Per Millisecond (Duration)&lt;/td&gt;
&lt;td&gt;Per Instance Hour (Capacity)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Concurrency&lt;/td&gt;
&lt;td&gt;1 Request per Execution Environment&lt;/td&gt;
&lt;td&gt;Multi-threaded (many requests per instance)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best For&lt;/td&gt;
&lt;td&gt;APIs, Cron jobs, Event triggers&lt;/td&gt;
&lt;td&gt;Data streams, High-throughput APIs, Batch processing&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Summary&lt;/strong&gt;&lt;br&gt;
AWS Lambda Managed Instances is not a replacement for standard Lambda; it is an evolution for mature workloads. If the use case is suitable, it allows us to evolve our high-volume serverless functions to a more cost-effective model without rewriting them for containers.&lt;/p&gt;

&lt;p&gt;Have you used Lambda Managed Instances? Tell us in the comments if you found it cheaper for your workload!&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Stop Building Chatbots: The Case for Infrastructure-Driven AI Agents</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Mon, 05 Jan 2026 13:08:30 +0000</pubDate>
      <link>https://forem.com/aymanmahmoud33/stop-building-chatbots-the-case-for-infrastructure-driven-ai-agents-1n0a</link>
      <guid>https://forem.com/aymanmahmoud33/stop-building-chatbots-the-case-for-infrastructure-driven-ai-agents-1n0a</guid>
      <description>&lt;p&gt;Everyone is building chatbots right now. They are the &lt;em&gt;“Hello World”&lt;/em&gt; of the GenAI era. But in the real world applications, the real value of AI is not in conversation — it’s in execution.&lt;br&gt;
Real business value comes from AI agents that take actions, make decisions for you when possible, integrate with tools and systems, and of course operate within strict governance and audit boundaries.&lt;br&gt;
When people build AI agents they have two options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Include the multi-step reasoning and decision logic inside the application code.&lt;/li&gt;
&lt;li&gt;Use an agentic workflow with serverless services like AWS Step Functions and AWS Lambda.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first option takes time and effort and eventually becomes a liability because of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No clear audit trail&lt;/li&gt;
&lt;li&gt;Limited observability&lt;/li&gt;
&lt;li&gt;Fragile retries&lt;/li&gt;
&lt;li&gt;Painful debugging&lt;/li&gt;
&lt;li&gt;Hard-to-enforce human approval&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post, I explain the &lt;strong&gt;agentic workflow&lt;/strong&gt;. Instead of orchestrating AI in code, we move orchestration into &lt;strong&gt;infrastructure&lt;/strong&gt; using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Step Functions for explicit control flow&lt;/li&gt;
&lt;li&gt;Amazon Bedrock for multimodal and text reasoning&lt;/li&gt;
&lt;li&gt;AWS Lambda for integration and validation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The result is an &lt;strong&gt;enterprise-grade AI orchestration pattern&lt;/strong&gt; that is observable, auditable, secure, and production-ready.&lt;br&gt;
I prefer to explain the concepts using hands-on and realistic use cases, so let's talk about a practical use case!&lt;/p&gt;
&lt;h3&gt;
  
  
  Automated Insurance Claim Processing
&lt;/h3&gt;

&lt;p&gt;We are building an &lt;strong&gt;Automated Insurance Claim Processor&lt;/strong&gt; that:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Analyzes an uploaded image of a car accident (multimodal AI).&lt;/li&gt;
&lt;li&gt;Estimates repair cost and damage severity.&lt;/li&gt;
&lt;li&gt;Retrieves the customer’s policy limits.&lt;/li&gt;
&lt;li&gt;Decides whether to:

&lt;ul&gt;
&lt;li&gt;Auto-approve the claim, or&lt;/li&gt;
&lt;li&gt;Pause for human review based on confidence and risk.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is not “just prompting.” It’s a multi-step decision workflow with financial and regulatory impact.&lt;/p&gt;
&lt;h4&gt;
  
  
  1. The Architecture:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzitu1exeglaufcz8ag8r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzitu1exeglaufcz8ag8r.jpg" alt="Automated Insurance Claim Processor" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow Breakdown&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1. Trigger (Image Upload): An S3 upload event initiates the Step Functions state machine.&lt;br&gt;
2. Validate Input (AWS Lambda): Checks file integrity before invoking AI models.&lt;br&gt;
3. AI Vision Analysis (Amazon Bedrock): Uses Claude 3.5 or Amazon Nova to analyze the damage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;- Pro-Tip:&lt;/em&gt;&lt;/strong&gt; Native Structured Output: Bedrock now supports JSON Mode via the Converse API. By providing a schema, you guarantee valid JSON output, eliminating the need for "cleaning" or "validation" Lambdas.&lt;br&gt;
To implement this, you define a schema that Bedrock uses to constrain its response. Here is the specific JSON Schema you can use:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "type": "object",
  "properties": {
    "damage_type": { "type": "string" },
    "estimated_cost": { "type": "number" },
    "severity_score": { "type": "integer", "minimum": 1, "maximum": 5 },
    "confidence_score": { "type": "number", "minimum": 0, "maximum": 1 }
  },
  "required": ["damage_type", "estimated_cost", "severity_score", "confidence_score"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;4. Fetch Policy Data (AWS Lambda &amp;amp; DynamoDB): Retrieves coverage limits for comparison.&lt;br&gt;
5. Choice State (Risk Assessment): The orchestrator compares AI estimates against policy data to decide between Auto-Approval and Human Review.&lt;br&gt;
6. Final Update (DynamoDB): Records the outcome and full audit trail.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Human-in-the-loop with callback tasks
&lt;/h4&gt;

&lt;p&gt;This is where Step Functions truly shines.&lt;br&gt;
If the AI estimates that the cost is above policy auto-approval limits, or the confidence is below a defined threshold, then the workflow must not finalize the claim automatically.&lt;br&gt;
Why do we use the Callback Pattern (&lt;code&gt;.waitForTaskToken&lt;/code&gt;)?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A Choice state detects high risk.&lt;/li&gt;
&lt;li&gt;The workflow pauses execution and waits.&lt;/li&gt;
&lt;li&gt;An SNS notification is sent to a human adjuster.&lt;/li&gt;
&lt;li&gt;When the adjuster responds, the task token is returned and the workflow resumes exactly where it paused.&lt;/li&gt;
&lt;/ul&gt;
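
&lt;p&gt;As a sketch, the pause-and-notify state might look like this in Amazon States Language (the state names, topic ARN, and payload fields are illustrative):&lt;/p&gt;

```json
{
  "NotifyAdjuster": {
    "Type": "Task",
    "Resource": "arn:aws:states:::sns:publish.waitForTaskToken",
    "Parameters": {
      "TopicArn": "arn:aws:sns:us-east-1:123456789012:claims-review",
      "Message": {
        "claimId.$": "$.claimId",
        "taskToken.$": "$$.Task.Token"
      }
    },
    "TimeoutSeconds": 86400,
    "Next": "RecordDecision"
  }
}
```

&lt;p&gt;The execution then stays paused (up to the timeout) until the adjuster's system calls &lt;code&gt;SendTaskSuccess&lt;/code&gt; or &lt;code&gt;SendTaskFailure&lt;/code&gt; with that token.&lt;/p&gt;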

&lt;h4&gt;
  
  
  3. Context Checkpointing (Managing Token &amp;amp; Payload Limits)
&lt;/h4&gt;

&lt;p&gt;Long-running agent workflows suffer from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;LLM context window limits&lt;/li&gt;
&lt;li&gt;Exploding token costs&lt;/li&gt;
&lt;li&gt;Step Functions payload size limits (256 KB)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We solve this with context checkpointing.&lt;br&gt;
In this Checkpointing pattern we use DynamoDB to store summarized agent context, and S3 to store large artifacts like images.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does it work?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After a major reasoning step, the agent state is summarized.&lt;/li&gt;
&lt;li&gt;A History Manager Lambda uses a smaller, cheaper model to produce a concise summary.&lt;/li&gt;
&lt;li&gt;The summary is stored in DynamoDB.&lt;/li&gt;
&lt;li&gt;The next step retrieves only the summary.&lt;/li&gt;
&lt;/ol&gt;
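
&lt;p&gt;The item written in step 3 might look like this; a minimal sketch, assuming a hypothetical table keyed on execution id and step (the real summary text would come from the cheaper Bedrock model):&lt;/p&gt;

```python
import time

# Illustrative sketch of the checkpoint item a "History Manager" Lambda
# could write to DynamoDB. Table layout and attribute names are
# hypothetical, not from a specific implementation.
def build_checkpoint(execution_id, step, summary, ttl_days=30):
    # A TTL attribute lets DynamoDB expire old context automatically,
    # which doubles as the data-retention mechanism mentioned above.
    expires_at = int(time.time()) + ttl_days * 24 * 3600
    return {
        "execution_id": execution_id,  # partition key
        "step": step,                  # sort key: which reasoning step this is
        "summary": summary,            # concise context, not the raw transcript
        "ttl": expires_at,             # epoch seconds, used by DynamoDB TTL
    }
```

&lt;p&gt;The next state then reads back only &lt;code&gt;summary&lt;/code&gt;, keeping the Step Functions payload far below the 256 KB limit.&lt;/p&gt;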

&lt;p&gt;&lt;strong&gt;This ensures that:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Step Functions payloads stay small&lt;/li&gt;
&lt;li&gt;DynamoDB items stay under the 400 KB limit&lt;/li&gt;
&lt;li&gt;Bedrock token usage is predictable&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We can also use TTLs in DynamoDB to enforce data retention policies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why this wins over script-based agents
&lt;/h3&gt;

&lt;p&gt;As serverless builders, we prefer explicit orchestration over hidden loops.&lt;br&gt;
&lt;strong&gt;Visibility&lt;/strong&gt;&lt;br&gt;
Failures are visible directly in the Step Functions console. You see which state failed and why, no log archaeology required.&lt;br&gt;
&lt;strong&gt;Retries &amp;amp; Resilience&lt;/strong&gt;&lt;br&gt;
AI APIs fail. Networks glitch. Throttling happens.&lt;br&gt;
Step Functions provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built-in retries with exponential backoff&lt;/li&gt;
&lt;li&gt;Explicit failure paths&lt;/li&gt;
&lt;li&gt;Idempotent replays&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Governance &amp;amp; Compliance&lt;/strong&gt;&lt;br&gt;
Human approval is auditable, secure, and enforceable. Not a while loop buried in a container.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
The future of AI systems isn’t just better models, it’s better orchestration.&lt;br&gt;
By treating AI prompts, decisions, and approvals as explicit infrastructure steps, we move from experimental demos to enterprise-grade autonomous systems.&lt;br&gt;
The question is no longer: “How smart is the model?”&lt;br&gt;
But: “Can we trust, observe, govern, and replay its decisions?”&lt;br&gt;
Are you orchestrating your AI agents in code — or in infrastructure? Let’s discuss in the comments.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>architecture</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Simplifying Serverless Workflows with EventBridge Pipes</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Mon, 10 Feb 2025 12:02:18 +0000</pubDate>
      <link>https://forem.com/aws-builders/simplifying-serverless-workflows-with-eventbridge-pipes-3mai</link>
      <guid>https://forem.com/aws-builders/simplifying-serverless-workflows-with-eventbridge-pipes-3mai</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
Event-driven architectures (EDAs) are essential when creating scalable, decoupled systems. By allowing services to communicate asynchronously, they reduce bottlenecks and enable flexible scaling. &lt;br&gt;
AWS offers many services for building EDAs; in this post, I will focus on two different combinations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The traditional SNS/SQS combo&lt;/li&gt;
&lt;li&gt;The newer EventBridge Pipes. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We’ll explore how to build a serverless workflow using EventBridge Pipes (integrating SQS, DynamoDB Streams, and Lambda) and compare it to SNS/SQS setups.&lt;/p&gt;

&lt;p&gt;What are EventBridge Pipes?&lt;br&gt;
EventBridge Pipes is a serverless integration service that connects AWS services (such as SQS, DynamoDB Streams, and Lambda) without requiring &lt;em&gt;intermediate code&lt;/em&gt;. With support for filtering, enrichment (for example, via a Lambda function), and transformations, EventBridge Pipes simplifies creating event-driven workflows. &lt;/p&gt;

&lt;p&gt;The diagram below illustrates the workflow of AWS EventBridge Pipes, which enables seamless integration between event sources and targets with optional processing steps.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4z04uojhue3rifel1pf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4z04uojhue3rifel1pf.png" alt="sample-workflow" width="800" height="259"&gt;&lt;/a&gt;&lt;br&gt;
Here's a breakdown of the components and their roles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Event Source:&lt;/strong&gt;
This is the origin of events, such as AWS services (e.g., S3, DynamoDB, Kinesis), SaaS applications, or custom applications. Events are ingested into the pipe from here.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Filter:&lt;/strong&gt; Events are evaluated against predefined criteria (e.g., JSON-based rules). Only events that match the filter conditions proceed to the next step, reducing unnecessary processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enrichment:&lt;/strong&gt; Optional step to enhance the event data. For example, you might invoke an AWS Lambda function to fetch additional information from a database or transform the event payload before sending it to the target.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Target:&lt;/strong&gt; The final destination where processed events are delivered. Targets can be AWS services (e.g., Lambda, SNS, SQS, EventBridge event buses), HTTP endpoints, or other resources.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;How it works together:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The event source generates events (e.g., an S3 bucket upload).&lt;/li&gt;
&lt;li&gt;The filter picks out relevant events.&lt;/li&gt;
&lt;li&gt;The enrichment step adds/modifies data.&lt;/li&gt;
&lt;li&gt;The target receives the refined event for further action (e.g., triggering a Lambda function to process the file).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;A practical use case&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now that we know how it works, let's learn by looking at an &lt;strong&gt;E-Commerce Order Fulfillment&lt;/strong&gt; system.&lt;br&gt;
In this e-commerce platform, every time a customer places an order, the order details are stored in a DynamoDB table. Changes in the table (e.g., new orders) are captured by DynamoDB Streams. Instead of writing custom integration code to filter and route these events, you can use EventBridge Pipes. With Pipes, you configure the following workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;DynamoDB Streams as the Source:
When a new order is added to the DynamoDB table, the change event is captured. &lt;/li&gt;
&lt;li&gt;EventBridge Pipes Filtering &amp;amp; Routing:
The Pipe ingests events from DynamoDB Streams and applies a filter to select only orders with a status of "NEW." It then routes these filtered events directly to a Lambda function. &lt;/li&gt;
&lt;li&gt;Lambda Function for Order Processing:
The Lambda function processes the order (for example, updating inventory, initiating shipping, and sending notifications).&lt;/li&gt;
&lt;/ol&gt;
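
&lt;p&gt;The "status is NEW" filter in step 2 can be sketched as an EventBridge Pipes filter pattern on the stream record (the attribute name is illustrative; note the DynamoDB type descriptors in the pattern):&lt;/p&gt;

```json
{
  "eventName": ["INSERT"],
  "dynamodb": {
    "NewImage": {
      "status": { "S": ["NEW"] }
    }
  }
}
```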

&lt;p&gt;This approach is compared to the traditional setup where events are published to an SNS topic and then pushed to an SQS queue before being processed by a Lambda function.&lt;/p&gt;

&lt;p&gt;Let's view both architectures and explain the advantages of each one, and when to use each one of them:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;EventBridge Pipes approach&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4xl97wmuchvw29oi2ab.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4xl97wmuchvw29oi2ab.png" alt="EventBridge Pipes approach" width="763" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Diagram Explanation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sources:

&lt;ol&gt;
&lt;li&gt;DynamoDB Streams: Capture changes from a table (for example, new or updated order records).&lt;/li&gt;
&lt;li&gt;SQS Queue (if needed): Acts as an additional source that may buffer events from various systems.&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;EventBridge Pipes: Serves as the central hub that filters, transforms, and routes incoming events from both DynamoDB Streams and SQS.&lt;/li&gt;
&lt;li&gt;Lambda Function: Consumes the refined events to process business logic (e.g., updating order status or triggering further workflows).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Traditional SNS-SQS approach&lt;/em&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwiwv22d7iae14lkf4vn8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwiwv22d7iae14lkf4vn8.png" alt="Traditional SNS-SQS approach" width="800" height="184"&gt;&lt;/a&gt;&lt;br&gt;
Diagram Explanation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SNS Topic: Typically used to broadcast events to multiple subscribers.&lt;/li&gt;
&lt;li&gt;SQS Queue: Buffers messages and provides a pull-based delivery mechanism.&lt;/li&gt;
&lt;li&gt;Lambda Function: Processes the events retrieved from the SQS queue.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Comparing EventBridge Pipes with SNS/SQS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EventBridge Pipes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrated filtering &amp;amp; transformation: you can set up rules directly within Pipes to process events without extra Lambda functions.&lt;/li&gt;
&lt;li&gt;Simpler configuration: Pipes directly connect sources like DynamoDB Streams or SQS to Lambda with minimal setup.&lt;/li&gt;
&lt;li&gt;Reduced operational complexity: extra intermediary services are eliminated.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When to Use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When your workflow is relatively linear or requires simple filtering.&lt;/li&gt;
&lt;li&gt;When you prefer a code-free, declarative integration between event sources and targets.&lt;/li&gt;
&lt;li&gt;For pipelines where transformation and routing can be managed entirely within EventBridge Pipes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Traditional SNS/SQS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fan-out capabilities: SNS natively supports broadcasting a single event to multiple subscribers.&lt;/li&gt;
&lt;li&gt;Service reliability: fine-grained control over delivery and retry behavior.&lt;/li&gt;
&lt;li&gt;Suitable for complex scenarios that require multiple consumer endpoints.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When to Use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When your architecture needs to fan out events to multiple consumers.&lt;/li&gt;
&lt;li&gt;If you require detailed control over message delivery policies and retry behaviors.&lt;/li&gt;
&lt;li&gt;When your existing setup is already built around SNS/SQS patterns and migrating to Pipes isn’t practical.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cost Components of Each Approach&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EventBridge Pipes Approach&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;AWS Service&lt;/th&gt;
&lt;th&gt;Cost Component&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DynamoDB Streams&lt;/td&gt;
&lt;td&gt;First 2.5M reads free per month, then $0.02 per 100,000 reads&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EventBridge Pipes&lt;/td&gt;
&lt;td&gt;$0.40 per 1M events processed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lambda (Order Processing)&lt;/td&gt;
&lt;td&gt;$0.20 per 1M requests + execution time cost (depends on memory &amp;amp; duration)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;SNS/SQS Approach&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;AWS Service&lt;/th&gt;
&lt;th&gt;Cost Component&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;DynamoDB Streams&lt;/td&gt;
&lt;td&gt;$0.02 per 100,000 reads&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SNS&lt;/td&gt;
&lt;td&gt;$0.50 per 1M publishes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SQS (FIFO Queue)&lt;/td&gt;
&lt;td&gt;$0.50 per 1M requests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lambda (Order Processing)&lt;/td&gt;
&lt;td&gt;$0.20 per 1M requests + execution time cost&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
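Using the list prices from the tables above, a quick back-of-the-envelope comparison makes the difference concrete. This sketch assumes a hypothetical 10 million order events per month, ignores the DynamoDB Streams free tier and Lambda execution-time charges (identical in both designs), and treats one event as one request per service, which understates real SQS polling costs:

```python
EVENTS = 10_000_000  # hypothetical monthly order events

ddb_stream_reads = EVENTS / 100_000 * 0.02    # $0.02 per 100k stream reads
lambda_requests = EVENTS / 1_000_000 * 0.20   # $0.20 per 1M Lambda requests

# EventBridge Pipes approach: DynamoDB Streams -> Pipe -> Lambda
pipes_cost = ddb_stream_reads + EVENTS / 1_000_000 * 0.40 + lambda_requests

# Traditional approach: DynamoDB Streams -> SNS -> SQS (FIFO) -> Lambda
sns_sqs_cost = (ddb_stream_reads
                + EVENTS / 1_000_000 * 0.50   # SNS publishes
                + EVENTS / 1_000_000 * 0.50   # SQS FIFO requests
                + lambda_requests)

print(f"Pipes:   ${pipes_cost:.2f}/month")
print(f"SNS/SQS: ${sns_sqs_cost:.2f}/month")
```

Under these assumptions the Pipes pipeline comes to roughly $8/month versus $14/month for SNS/SQS, simply because the pipe replaces two billed hops with one.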

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
EventBridge Pipes offer a modern, efficient way to build decoupled, event-driven workflows on AWS. By integrating sources like DynamoDB Streams and SQS directly with Lambda, you can streamline your architecture and reduce operational overhead. However, traditional SNS/SQS setups remain a good choice for scenarios that require broad event broadcasting and detailed control over message delivery.&lt;/p&gt;

&lt;p&gt;The right choice depends on your specific use case: go for EventBridge Pipes when you seek simplicity and reduced code, or stick with SNS/SQS for more complex, fan-out scenarios. Either way, understanding these patterns is key to building scalable and resilient serverless applications.&lt;/p&gt;

</description>
      <category>eventbridge</category>
      <category>serverless</category>
      <category>microservices</category>
      <category>sns</category>
    </item>
    <item>
      <title>Python Lambda function example:</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Tue, 22 Oct 2024 16:16:00 +0000</pubDate>
      <link>https://forem.com/aymanmahmoud33/python-lambda-function-example-49gp</link>
      <guid>https://forem.com/aymanmahmoud33/python-lambda-function-example-49gp</guid>
      <description>&lt;p&gt;Python Lambda function example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
from PIL import Image
import io

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Get bucket and object key from the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    # Download the image from S3
    response = s3.get_object(Bucket=bucket, Key=key)
    image = Image.open(io.BytesIO(response['Body'].read()))

    # Create a thumbnail
    image.thumbnail((100, 100))

    # Save the thumbnail to a new S3 key
    thumbnail_key = f"thumbnails/{key}"
    buffer = io.BytesIO()
    image.save(buffer, 'JPEG')
    buffer.seek(0)
    s3.put_object(Bucket=bucket, Key=thumbnail_key, Body=buffer, ContentType='image/jpeg')

    return {'statusCode': 200, 'body': f'Thumbnail saved to {thumbnail_key}'}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Explanation of Important Parts:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Imports (&lt;em&gt;boto3&lt;/em&gt; and &lt;em&gt;PIL&lt;/em&gt;):

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;boto3&lt;/em&gt; is the AWS SDK for Python, allowing interaction with AWS services (S3 in this case).&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;PIL&lt;/em&gt; (Pillow) is a library for image processing, used here to create a thumbnail.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;AWS S3 Client &lt;em&gt;(s3 = boto3.client('s3'))&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;Creates an S3 client to interact with Amazon S3.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Handler Function &lt;em&gt;(lambda_handler)&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;The main entry point for the Lambda function, triggered by an S3 event when an image is uploaded to a bucket.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Extracting S3 Information (&lt;em&gt;bucket&lt;/em&gt; and &lt;em&gt;key&lt;/em&gt;):&lt;/p&gt;

&lt;p&gt;The event contains information about the S3 bucket and the object (image) that triggered the Lambda function.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Download the Image &lt;em&gt;(s3.get_object)&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;Downloads the image from the S3 bucket using the &lt;em&gt;get_object&lt;/em&gt; method.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Image Processing &lt;em&gt;(image.thumbnail)&lt;/em&gt;:&lt;/p&gt;

&lt;p&gt;Uses the Pillow library to create a thumbnail of the image with a maximum size of 100x100 pixels.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Upload the Thumbnail (&lt;em&gt;s3.put_object&lt;/em&gt;):&lt;/p&gt;

&lt;p&gt;Saves the thumbnail back to the S3 bucket under a new key (&lt;em&gt;thumbnails/{key}&lt;/em&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Return Statement:&lt;/p&gt;

&lt;p&gt;Returns a simple JSON response indicating where the thumbnail was saved.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Lambda layers&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah3kj18ekswhvawa91nt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fah3kj18ekswhvawa91nt.png" alt="Lambda layers" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without layers:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Two Lambda functions (function 1 and function 2), each with its own function code and dependencies. Function 1 also includes a custom runtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With layers:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lambda function 1&lt;/strong&gt; and &lt;strong&gt;Lambda function 2&lt;/strong&gt; include only function code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda function 1&lt;/strong&gt; uses &lt;strong&gt;Lambda layer 1&lt;/strong&gt; for its custom runtime and &lt;strong&gt;Lambda layer 2&lt;/strong&gt; for its code dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lambda function 2&lt;/strong&gt; uses &lt;strong&gt;Lambda layer 2&lt;/strong&gt; for its code dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A Lambda layer is a .zip file archive that contains supplementary code or data. Layers usually contain library dependencies, a custom runtime, or configuration files. You can also package your own custom runtime in a Lambda layer if you prefer a different runtime from those provided by the Lambda service.&lt;/p&gt;
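For Python runtimes, Lambda expects a layer's libraries to sit under a `python/` prefix inside the .zip archive, because layers are unpacked into `/opt` and `/opt/python` is added to the module search path. A minimal sketch of building such an archive (the `mylib` module is a stand-in for real pip-installed dependencies):

```python
import io
import zipfile

# Build an in-memory layer archive; in practice you would zip the output of
# `pip install <package> -t python/` with exactly the same directory layout.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as zf:
    # Modules must live under the python/ prefix to be importable at runtime.
    zf.writestr("python/mylib/__init__.py", "VERSION = '1.0'\n")

# Inspect the archive to confirm the layout Lambda expects.
with zipfile.ZipFile(io.BytesIO(buffer.getvalue())) as zf:
    names = zf.namelist()
print(names)  # ['python/mylib/__init__.py']
```

The resulting .zip is what you would upload with the PublishLayerVersion API or attach to functions in the console.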

&lt;p&gt;&lt;strong&gt;There are multiple reasons why you might consider using layers:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Reduce the size of your deployment packages:&lt;/strong&gt; Instead of including all of your function dependencies along with your function code in your deployment package, put them in a layer. This keeps deployment packages small and organized.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separate core function logic from dependencies:&lt;/strong&gt; With layers, you can update your function dependencies independent of your function code, and vice versa. This promotes separation of concerns and helps you focus on your function logic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Share dependencies across multiple functions:&lt;/strong&gt; After you create a layer, you can apply it to any number of functions in your account. Without layers, you need to include the same dependencies in each individual deployment package.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use the Lambda console code editor:&lt;/strong&gt; The code editor is a useful tool for testing minor function code updates quickly. However, you can’t use the editor if your deployment package size is too large. Using layers reduces your package size and can unlock usage of the code editor.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Summary of what we learned in this series&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda is a service to run code functions without provisioning or managing servers.&lt;/li&gt;
&lt;li&gt;A Lambda function can run inside a VPC owned by the AWS Lambda service or as Lambda@Edge in an Amazon CloudFront regional cache.&lt;/li&gt;
&lt;li&gt;A Lambda function can be configured to connect to your VPC to access AWS services inside the VPC.&lt;/li&gt;
&lt;li&gt;A Lambda function can be invoked synchronously, asynchronously, and with event source mappings for queues and streams.&lt;/li&gt;
&lt;li&gt;Use Lambda layers to package code dependencies or custom runtimes to be re-used by all Lambda functions in the Region.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Invoking a synchronous Lambda function</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Sat, 19 Oct 2024 13:15:00 +0000</pubDate>
      <link>https://forem.com/aws-builders/invoking-a-synchronous-lambda-function-3n2m</link>
      <guid>https://forem.com/aws-builders/invoking-a-synchronous-lambda-function-3n2m</guid>
      <description>&lt;p&gt;Invoking a synchronous Lambda function&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1o4zrdxqs8uav87ds8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1o4zrdxqs8uav87ds8i.png" alt="Invoking a synchronous Lambda function&amp;lt;br&amp;gt;
" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you invoke a function synchronously, the Lambda service runs the function and waits for a response. In this example, an API request is received from a browser client to Amazon API Gateway. API Gateway invokes the Lambda function by calling the AWS Lambda service. While the function runs, it can optionally call other AWS services. When the function completes, the Lambda service returns the response from the function's code with additional data, such as the version of the function that was invoked. If the function encounters an error, the error is returned as the response. The Lambda service returns the response to API Gateway which in turn sends the response to the browser client.&lt;/p&gt;

&lt;p&gt;Another option for a synchronous call to Lambda is to use a function URL. A function URL is a dedicated HTTP(S) endpoint for your Lambda function. In the example above, the URL request is sent directly to the Lambda service from a browser client. The Lambda service invokes the Lambda function which returns a URL response. The response is returned to the browser client.&lt;/p&gt;

&lt;p&gt;When you create a function URL, Lambda automatically generates a unique URL endpoint for you. After you create a function URL, its URL endpoint never changes. Lambda function URLs use resource-based policies for security and access control. Function URLs also support cross-origin resource sharing (CORS) configuration options. CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.&lt;/p&gt;

&lt;p&gt;While function URLs are simple and easy to set up, the browser application will require an update if the URL changes. When a function changes behind Amazon API Gateway, no changes are required to the browser application.&lt;/p&gt;

&lt;p&gt;Invoking an asynchronous Lambda function&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyc2kdw0314gwnf2kvgc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyc2kdw0314gwnf2kvgc.png" alt="Invoking an asynchronous Lambda function" width="800" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you invoke a function asynchronously, you don't wait for a response from the function code. You hand off the event to Lambda, and Lambda handles the rest. The Lambda service places the event in a queue and returns a success response without additional information. A separate process reads events from the queue and sends them to the function. You can configure how Lambda handles errors and can send invocation records to a downstream resource such as Amazon Simple Queue Service (Amazon SQS) or Amazon EventBridge to chain together components of your application.&lt;/p&gt;

&lt;p&gt;Several AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Simple Notification Service (Amazon SNS), invoke functions asynchronously to process events. If scheduled events are required, you can use EventBridge as a scheduler to invoke Lambda functions. If an AWS service doesn’t have a direct integration with the Lambda service, EventBridge can serve as the event bus to route the request. An example of a scheduled event is daily reporting or any recurring process that should be activated.&lt;/p&gt;
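When you invoke a function yourself, both modes go through the same Lambda Invoke API; only the InvocationType parameter differs. A minimal sketch, assuming a hypothetical function name and payload — with real AWS credentials the resulting dictionary would be passed to `boto3.client('lambda').invoke(**params)`:

```python
import json

def build_invoke_params(function_name: str, payload: dict, wait_for_response: bool) -> dict:
    """Assemble arguments for the Lambda Invoke API.

    'RequestResponse' runs the function synchronously and returns its result;
    'Event' queues the event and returns immediately with a 202 status.
    """
    return {
        "FunctionName": function_name,
        "InvocationType": "RequestResponse" if wait_for_response else "Event",
        "Payload": json.dumps(payload).encode(),
    }

sync_params = build_invoke_params("order-api", {"orderId": "o-1"}, wait_for_response=True)
async_params = build_invoke_params("order-api", {"orderId": "o-1"}, wait_for_response=False)
print(sync_params["InvocationType"], async_params["InvocationType"])
```

Services like API Gateway use the synchronous form under the hood, while S3 and SNS use the asynchronous one.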

&lt;p&gt;Lambda event source mappings for queues and streams&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhr5ydzt2btorsonf2fi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhr5ydzt2btorsonf2fi.png" alt="Lambda event source mappings for queues and streams&amp;lt;br&amp;gt;
" width="800" height="148"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An event source mapping is a Lambda resource that reads from an event source and invokes a Lambda function. You can use event source mappings to process items from a stream or queue in services that don't invoke Lambda functions directly. The Lambda service can poll AWS services like Amazon DynamoDB, Amazon Kinesis, Amazon SQS, and Amazon DocumentDB (with MongoDB compatibility).&lt;/p&gt;

&lt;p&gt;In the above diagram, the target event source is a DynamoDB stream. When the Lambda service polls the DynamoDB stream for events, a number of events are returned to the Lambda service. The Lambda service batches records together in a single payload and invokes the Lambda function with the payload. You can configure the maximum batching window and batch size for the Lambda function. The payload can’t exceed the Lambda function input limit of 6 MB.&lt;/p&gt;
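Those batching controls map directly onto the event source mapping configuration. A sketch of the arguments you would pass to boto3's `create_event_source_mapping` — the stream ARN and function name below are placeholders, and here we only assemble and inspect them locally:

```python
# Hypothetical ARN and function name, for illustration only.
mapping_params = {
    "FunctionName": "process-orders",
    "EventSourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/orders/stream/2024-01-01T00:00:00.000",
    "StartingPosition": "LATEST",        # read only new stream records
    "BatchSize": 100,                    # max records per invocation payload
    "MaximumBatchingWindowInSeconds": 5, # wait up to 5s to fill a batch
}
# With credentials: boto3.client("lambda").create_event_source_mapping(**mapping_params)
print(sorted(mapping_params))
```

Tuning BatchSize and the batching window trades invocation frequency against per-invocation latency.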

&lt;p&gt;An example scenario would be if a customer order changed status to delivery in progress in the DynamoDB orders table, then the Lambda function is invoked to send a notification to the customer and do some financial processing.&lt;/p&gt;

&lt;p&gt;In the next article in this series, we will look at an example Lambda function and learn about Lambda layers.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building serverless architectures with AWS Lambda</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Wed, 16 Oct 2024 12:10:24 +0000</pubDate>
      <link>https://forem.com/aws-builders/building-serverless-architectures-with-aws-lambda-1k7l</link>
      <guid>https://forem.com/aws-builders/building-serverless-architectures-with-aws-lambda-1k7l</guid>
      <description>&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging.&lt;/p&gt;

&lt;p&gt;You configure your Lambda function with the runtime language you prefer, the amount of memory that the function needs, and the maximum length of function timeout. The amount of memory determines the amount of virtual CPU and network bandwidth. Currently, the minimum and maximum amount of memory that can be allocated is 128MB and 10,240MB respectively. A Lambda function can’t exceed 15 minutes in duration, so 15 minutes is the maximum timeout setting. This is an AWS hard limit and can’t be changed.&lt;/p&gt;

&lt;p&gt;You create code for the function and upload the code using a deployment package. Lambda supports two types of deployment packages: container images and .zip file archives. The Lambda service invokes the function when an event occurs. Lambda runs multiple instances of your function in parallel, governed by concurrency and scaling limits. You only pay for the compute time that you consume—there is no charge when your code isn’t running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foub7s1y2m4rkr2bd1g2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foub7s1y2m4rkr2bd1g2r.png" alt="lambda" width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A Lambda function can run inside a VPC owned by the AWS Lambda service or in an Amazon CloudFront regional cache.&lt;/p&gt;

&lt;p&gt;When you create a Lambda function, you deploy it to the AWS Region where you want your Lambda function to run. When a Lambda function is invoked, the Lambda service instantiates an isolated Firecracker virtual machine (VM) on an Amazon Elastic Compute Cloud (Amazon EC2) instance in the Lambda service VPC.&lt;/p&gt;

&lt;p&gt;Lambda@Edge is an extension of AWS Lambda, a compute service that lets you run functions that customize the content that CloudFront delivers. You can author Node.js or Python functions in one Region, US East (N. Virginia), and then run them in AWS regional edge locations globally that are closer to the viewer. Processing requests at AWS locations closer to the viewer instead of on origin servers significantly reduces latency and improves the user experience. &lt;/p&gt;

&lt;p&gt;An example of a Lambda@Edge use case is a retail website that sells bags. If you use cookies to indicate which color a user chose for a small bag, a Lambda function can change the request so that CloudFront returns the image of a bag in the selected color.&lt;/p&gt;
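The bag-color idea can be sketched as a viewer-request handler. Everything specific here (cookie name, URI scheme) is illustrative; the handler receives CloudFront events in the `Records[0].cf` shape shown in the simulated event:

```python
def lambda_handler(event, context):
    """Viewer-request handler: rewrite the image URI based on a color cookie."""
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})
    # CloudFront delivers cookies as a list of header values, e.g. "bag-color=red"
    for header in headers.get("cookie", []):
        for cookie in header["value"].split(";"):
            name, _, value = cookie.strip().partition("=")
            if name == "bag-color" and request["uri"] == "/images/bag.jpg":
                request["uri"] = f"/images/bag-{value}.jpg"
    return request

# Simulated CloudFront viewer-request event
event = {"Records": [{"cf": {"request": {
    "uri": "/images/bag.jpg",
    "headers": {"cookie": [{"key": "Cookie", "value": "bag-color=red"}]},
}}}]}
result = lambda_handler(event, None)
print(result["uri"])  # /images/bag-red.jpg
```

Because the rewrite happens at the edge before the cache lookup, each color variant is cached and served as a distinct object.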

&lt;p&gt;&lt;strong&gt;Lambda@Edge&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It is a serverless computing service that allows you to run AWS Lambda functions at AWS Edge locations. It integrates with Amazon CloudFront to run application code closer to your customers, to improve performance and reduce latency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Executing code closer to your customers reduces latency and improves performance.&lt;/li&gt;
&lt;li&gt;You can scale your application and make it more available for customers around the globe.&lt;/li&gt;
&lt;li&gt;You can modify request and response behavior for web applications in real time, which lets you customize content delivery.&lt;/li&gt;
&lt;li&gt;Your code runs in a secure, isolated environment, supporting custom authentication and authorization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic content personalization: tailor content based on user attributes (such as language or location).&lt;/li&gt;
&lt;li&gt;A/B testing: serve different versions of content to different user groups for testing.&lt;/li&gt;
&lt;li&gt;Access control: implement custom authentication for web content.&lt;/li&gt;
&lt;li&gt;SEO optimization: modify URLs and headers for search engine optimization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How It Works&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trigger Points: Runs in response to events generated by CloudFront, such as viewer request, viewer response, origin request, and origin response.&lt;/li&gt;
&lt;li&gt;Deployment: Deploy code to AWS regions, and Lambda@Edge replicates it to edge locations globally.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Connecting a Lambda function to your VPC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2n6nlhte53s4ydfkpc2s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2n6nlhte53s4ydfkpc2s.png" alt="Connecting a Lambda function to your VPC" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sometimes you might have a requirement to implement an architecture that has serverless components and components running in your own VPC. When designing this type of architecture, pay close attention to scaling as some components can cause a bottleneck.&lt;/p&gt;

&lt;p&gt;By default, a Lambda function isn't connected to VPCs in your account. If your Lambda function needs to access the resources in your account VPC, you can configure the function to connect to your VPC. The Lambda service provides managed resources named Hyperplane elastic network interfaces (ENIs) which are created when the function is configured to connect to a VPC. When invoked, the Lambda function in the Lambda VPC connects to an ENI in your account VPC. Hyperplane ENIs provide NAT capabilities from the Lambda VPC to your account VPC using VPC-to-VPC NAT (V2N). V2N provides connectivity from the Lambda VPC to your account VPC, but not in the other direction.&lt;/p&gt;

&lt;p&gt;When you connect a function to a VPC in your account, the function can't access the internet unless your VPC provides access. To give your function access to the internet, route outbound traffic to a NAT gateway in a public subnet. The NAT gateway has a public IP address and can connect to the internet through the VPC's internet gateway.&lt;/p&gt;

&lt;p&gt;In the example above, the database and the Amazon EC2 application instance can cause bottlenecks if the Lambda functions aggressively scale. Lambda functions 1 and 2 connect to an EC2 application instance and an Amazon Relational Database Service (Amazon RDS) proxy deployed in a private subnet in the customer’s VPC using the VPC-to-VPC NAT and the ENI. To scale the EC2 instance, deploy it behind an application load balancer in an Amazon EC2 Auto Scaling group.&lt;/p&gt;

&lt;p&gt;To scale the database, you can use Amazon RDS proxy that manages a connection pool to the Amazon RDS database. Because Lambda functions can scale rapidly, the connections to an Amazon RDS database can be saturated. Lambda functions that rapidly open and close database connections can cause the database to fall behind. When no more connections are available the function will produce an error. When using MySQL and Aurora Amazon RDS databases, you can solve this challenge with RDS proxy. The Lambda functions connect to RDS proxy, which will have an open connection to the database ready to be used.&lt;/p&gt;

&lt;p&gt;Here is a summary of what we discussed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda is a serverless compute service that runs code without server management. It automatically scales, and you only pay for the compute time used. Functions can run in a VPC and are triggered by events.&lt;/li&gt;
&lt;li&gt;Lambda@Edge extends Lambda to AWS edge locations for reduced latency, useful for content personalization and improving performance.&lt;/li&gt;
&lt;li&gt;To access resources in your VPC, Lambda uses Hyperplane ENIs. For scaling, RDS Proxy helps manage database connections, preventing bottlenecks during rapid function scaling.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the next article we will talk about "Identifying Lambda serverless scenarios" and "How to invoke lambda functions"&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>serverless</category>
      <category>aws</category>
    </item>
    <item>
      <title>Image Labeling with Amazon Rekognition</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Wed, 21 Feb 2024 11:46:34 +0000</pubDate>
      <link>https://forem.com/aws-builders/image-labeling-with-amazon-rekognition-2enn</link>
      <guid>https://forem.com/aws-builders/image-labeling-with-amazon-rekognition-2enn</guid>
      <description>&lt;p&gt;&lt;strong&gt;Amazon Rekognition&lt;/strong&gt; facilitates object and scene detection in images, offering a secure, stateless API that returns a list of related labels along with confidence levels.&lt;/p&gt;

&lt;p&gt;In this tutorial, you'll build a serverless system to perform object detection upon image uploads to an Amazon S3 bucket. AWS Lambda will handle the processing logic, while Amazon DynamoDB will serve as the storage solution for the label detection results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Object Detection Context and Limitations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Object and Scene Detection involves identifying objects and their context within an image. Object Detection locates specific instances of objects like humans, cars, buildings, etc. In Amazon Rekognition, Object Detection is akin to Image Labeling, extracting semantic labels from images. This approach also lends itself to scene detection, identifying both individual objects and overall scene attributes, such as "person" or "beach".&lt;/p&gt;

&lt;p&gt;In AWS, Object Detection with Amazon Rekognition involves utilizing its DetectLabels API. You can input the image as either a binary string or an S3 Object reference. Submitting images as S3 Objects offers advantages such as avoiding redundant uploads and supporting images up to 15MB in size, compared to the 5MB limit.&lt;/p&gt;

&lt;p&gt;The response typically follows a JSON structure similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Labels": [
        {
            "Confidence": 97,
            "Name": "Person"
        },
        {
            "Confidence": 96,
            "Name": "Animal"
        },
        ...
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The API provides an ordered list of labels, ranked by confidence level, starting from the highest.&lt;/p&gt;

&lt;p&gt;The quantity of labels returned is determined by two parameters:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;MaxLabels: This sets the maximum number of labels returned by the API.&lt;/li&gt;
&lt;li&gt;MinConfidence: Labels with a confidence score below this threshold will not be included in the response.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's crucial to note that setting low values for MaxLabels alongside high values for MinConfidence could result in empty responses.&lt;/p&gt;
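The interaction between the two parameters can be seen by applying them to the sample response shown earlier. The real call would be `boto3.client('rekognition').detect_labels(...)` with an S3 Object reference; here we just reproduce the server-side trimming locally on hypothetical labels:

```python
def trim_labels(labels, max_labels, min_confidence):
    """Mimic MaxLabels/MinConfidence on a label list already sorted by confidence."""
    kept = [l for l in labels if l["Confidence"] >= min_confidence]
    return kept[:max_labels]

labels = [{"Confidence": 97, "Name": "Person"},
          {"Confidence": 96, "Name": "Animal"},
          {"Confidence": 70, "Name": "Beach"}]

kept_loose = trim_labels(labels, max_labels=2, min_confidence=80)
kept_strict = trim_labels(labels, max_labels=1, min_confidence=99)
print(kept_loose)   # Person and Animal survive the 80% threshold
print(kept_strict)  # empty: no label reaches 99% confidence
```

As the second call shows, a high MinConfidence can empty the response regardless of MaxLabels, which is the pitfall noted above.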

&lt;p&gt;Throughout the tutorial, we'll use an S3 bucket to store images, leveraging the API to extract labels from each newly uploaded image. We'll store each image-label pair in a DynamoDB table.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's start&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;1) Create a DynamoDB table named "images"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qbkhilan7vfe1ualvjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3qbkhilan7vfe1ualvjt.png" alt="table" width="800" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2) Create an S3 bucket&lt;/em&gt;&lt;br&gt;
Choose a unique name for the bucket; I will use the name "demo21-2".&lt;br&gt;
Make sure to select ACLs Enabled.&lt;br&gt;
Create a folder in the bucket and name it "images".&lt;/p&gt;

&lt;p&gt;&lt;em&gt;3) Create a Lambda function as follows:&lt;/em&gt;&lt;br&gt;
Choose "Use a blueprint" when creating the function.&lt;br&gt;
For the blueprint name, choose "Use Rekognition to detect faces".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rsgkvro6mtq8ywvfgx9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rsgkvro6mtq8ywvfgx9.png" alt="lambda" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will need to give the lambda function permissions to access the following services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"CloudWatch" to put logs&lt;/li&gt;
&lt;li&gt;"DynamoDB" to put, update, describe item&lt;/li&gt;
&lt;li&gt;"Rekognition" to detect labels and faces&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To do this, create a role with the following IAM policies and attach it to the Lambda function, which assumes the role when it runs.&lt;/p&gt;

&lt;p&gt;Basic execution role&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lambda policy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DescribeStream",
                "dynamodb:GetRecords",
                "dynamodb:GetShardIterator",
                "dynamodb:ListStreams"
            ],
            "Resource": "arn:aws:dynamodb:us-west-2:909737842772:table/images",
            "Effect": "Allow"
        },
        {
            "Action": [
                "rekognition:DetectLabels",
                "rekognition:DetectFaces"
            ],
            "Resource": "*",
            "Effect": "Allow"
        },
        {
            "Action": [
                "s3:*"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the S3 trigger section enter the following:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16o84pfw2noi8xojwh78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F16o84pfw2noi8xojwh78.png" alt="s3" width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You are now ready to test the function and verify that it is triggered successfully and calls Amazon Rekognition.&lt;br&gt;
Upload a picture to the "images" folder in your bucket.&lt;br&gt;
Then check the CloudWatch log group for your function; you should see a log similar to the one below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklx04p55r6fbauf1dzp7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklx04p55r6fbauf1dzp7.png" alt="log" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing the Object Detection Logic&lt;/strong&gt;&lt;br&gt;
In the Code source section, double-click the lambda_function.py file, and replace the code with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3, urllib.parse

rekognition = boto3.client('rekognition', 'us-west-2')
table = boto3.resource('dynamodb').Table('images')

def detect_labels(bucket, key):
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,
        MinConfidence=80,
    )

    labels = [label_prediction['Name'] for label_prediction in response['Labels']]

    table.put_item(Item={
        'PK': key,
        'Labels': labels,
    })

    return response


def lambda_handler(event, context):
    data = event['Records'][0]['s3']
    bucket = data['bucket']['name']
    key = urllib.parse.unquote_plus(data['object']['key'])
    try:
        response = detect_labels(bucket, key)
        print(response)
        return response
    except Exception as e:
        print(e)
        raise e
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each image uploaded triggers the creation of a new DynamoDB item, with the S3 Object Key serving as the primary key.&lt;/li&gt;
&lt;li&gt;The item includes the corresponding labels retrieved from Amazon Rekognition, stored as a list of strings in DynamoDB.&lt;/li&gt;
&lt;li&gt;Storing labels in DynamoDB enables repeated retrieval without additional queries to Amazon Rekognition.&lt;/li&gt;
&lt;li&gt;Labels can be retrieved either by their primary key or by scanning the DynamoDB table and filtering for specific labels using a CONTAINS DynamoDB filter.&lt;/li&gt;
&lt;/ul&gt;
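&lt;p&gt;As a sketch of that retrieval pattern (assuming the "images" table and "PK" key used in this tutorial, with the table resource passed in, e.g. boto3.resource('dynamodb').Table('images')), the key lookup and the CONTAINS scan could look like this:&lt;/p&gt;

```python
def contains_label(item, label):
    # Local mirror of the DynamoDB CONTAINS semantics, handy for unit tests.
    return label in item.get('Labels', [])

def labels_for_image(table, key):
    # Direct lookup by primary key (the S3 object key).
    return table.get_item(Key={'PK': key}).get('Item', {}).get('Labels', [])

def images_with_label(table, label):
    # Scan with a CONTAINS filter. A scan reads the whole table, which is
    # fine for a demo but can be slow and costly on large tables.
    from boto3.dynamodb.conditions import Attr
    response = table.scan(FilterExpression=Attr('Labels').contains(label))
    return [item['PK'] for item in response['Items']]
```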

&lt;p&gt;You can now test the function by uploading one or more images to the "images" folder in your bucket.&lt;/p&gt;

&lt;p&gt;Go to the DynamoDB table and click Explore items in the left pane; you will find the items returned with the labels recognized by Amazon Rekognition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fherelz1r2xg5kmt7oc2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fherelz1r2xg5kmt7oc2j.png" alt="table1" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
In this tutorial, you set up an Amazon S3 bucket and configured an AWS Lambda function to trigger when images are uploaded to the bucket. You implemented the Lambda function to call the Amazon Rekognition API, label the images, and store the results in Amazon DynamoDB. Finally, you demonstrated how to search DynamoDB for image labels.&lt;/p&gt;

&lt;p&gt;This serverless setup offers flexibility, allowing for customization of the Lambda function to address more complex scenarios with minimal effort.&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>awsrekognition</category>
    </item>
    <item>
      <title>Configuring Lambda functions properly</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Tue, 06 Feb 2024 12:17:22 +0000</pubDate>
      <link>https://forem.com/aws-builders/configuring-lambda-functions-properly-43pg</link>
      <guid>https://forem.com/aws-builders/configuring-lambda-functions-properly-43pg</guid>
      <description>&lt;p&gt;When developing and evaluating a function, it's essential to define three key configuration parameters: memory allocation, timeout duration, and concurrency level. These settings play a crucial role in measuring the performance of the function. Determining the optimal configuration for memory, timeout, and concurrency involves testing in real-world scenarios and under peak loads. Continuously monitoring your functions allows for adjustments to be made to optimize costs and maintain the desired customer experience within your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F451gbahw8w3unjgsfjyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F451gbahw8w3unjgsfjyd.png" alt="mtc" width="800" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, let's talk about how important it is to configure your memory and timeout values. Following that, we'll discuss the billing considerations associated with these values, then explore concurrency and strategies for optimizing it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lambda functions allow for allocating up to 10 GB of memory. Lambda allocates CPU and other resources in direct proportion to the amount of memory configured. Scaling up the memory size results in a corresponding increase in available CPU resources for your function. To determine the optimal memory configuration for your functions, consider utilizing the &lt;a href="https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:451282441545:applications~aws-lambda-power-tuning" rel="noopener noreferrer"&gt;AWS Lambda Power Tuning tool.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AWS Lambda Power Tuning, is a state machine powered by AWS Step Functions, optimizes Lambda functions for cost and performance. It's language agnostic and suggests the best power configuration for your function, based on multiple invocations across various memory settings (128MB to 10GB), aiming to minimize costs or maximize performance.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Since Lambda charges are directly proportional to the configured memory and function duration (measured in GB-seconds), the additional costs incurred by using more memory might be balanced out by reduced duration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Timeout:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AWS Lambda timeout value specifies the duration a function can run before Lambda terminates it. Currently capped at 900 seconds, this limit means a Lambda function invocation cannot exceed 15 minutes.&lt;/p&gt;

&lt;p&gt;Set the timeout to the maximum only after thorough function testing. Many scenarios require quick failure rather than waiting for the full timeout.&lt;/p&gt;

&lt;p&gt;Analyzing function duration aids in identifying issues causing invocations to surpass expected lengths. Load testing helps determine the optimal timeout value.&lt;/p&gt;

&lt;p&gt;Billing for Lambda functions is based on runtime in 1-ms increments. Avoid lengthy timeouts to prevent billing for idle time during timeouts.&lt;/p&gt;
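&lt;p&gt;As a sketch, both values are set through the standard update_function_configuration API; the function name and the numbers below are placeholders to adjust after your own testing:&lt;/p&gt;

```python
# Placeholder configuration; adjust to your own function and measurements.
CONFIG = {
    "FunctionName": "my-function",  # placeholder name
    "MemorySize": 512,              # MB; CPU scales proportionally with memory
    "Timeout": 30,                  # seconds; must stay at or below the 900 s cap
}

def apply_config(config=CONFIG):
    import boto3  # imported here so the sketch can be read without AWS set up
    boto3.client("lambda").update_function_configuration(**config)
```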

&lt;p&gt;&lt;strong&gt;Lambda Billing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS Lambda charges are based on usage, including the number of requests and duration. Each time your code runs in response to an event or an invoke call counts as a request. Duration is rounded up to the nearest 1 ms, and pricing depends on memory allocation, not actual usage: if you allocate 4 GB but use only 1 GB, you'll be charged for the full 4 GB. That is another reason why testing with various memory allocations is crucial to optimize both function performance and budget.&lt;/p&gt;

&lt;p&gt;Increasing memory allocates proportional CPU power and resources. The AWS Lambda Free Tier includes 1 million free requests and 400,000 GB-seconds of compute time per month.&lt;br&gt;
For more visibility into costs, refer to &lt;a href="https://calculator.aws/" rel="noopener noreferrer"&gt;https://calculator.aws/&lt;/a&gt;&lt;/p&gt;
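&lt;p&gt;Here is a quick back-of-the-envelope example of GB-second billing. The per-GB-second and per-request prices below are illustrative assumptions; check the calculator for current figures in your region:&lt;/p&gt;

```python
PRICE_PER_GB_SECOND = 0.0000166667  # assumed illustrative price
PRICE_PER_REQUEST = 0.0000002       # assumed illustrative price

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    # Billed duration is measured in GB-seconds: seconds of runtime
    # multiplied by the *configured* memory in GB, not the memory used.
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# 2 million invocations a month, 120 ms average duration, 512 MB configured:
print(round(monthly_cost(2_000_000, 120, 512), 2))  # → 2.4
```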

&lt;p&gt;&lt;strong&gt;Concurrency and Scaling:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Concurrency is a crucial configuration that affects your function's performance and scalability. It refers to the number of invocations your function can handle simultaneously. When invoked, Lambda launches an instance to process the event, and upon completion, it can handle additional requests. If new invocations occur while previous ones are still processing, Lambda allocates additional instances, leading to concurrent invocations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concurrent Invocations:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consider a bus with a capacity of 50 passengers. Only 50 passengers can occupy seats at any given time. Passengers who arrive when the bus is full must wait until a seat becomes available. With a reservation system, if a group reserves 10 seats, only 40 of the 50 seats remain available for other passengers. Similarly, Lambda functions have a concurrency limit comparable to the bus's capacity, and a reservation system can allocate runtime for specific instances.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Concurrency types&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Unreserved concurrency&lt;/em&gt;&lt;br&gt;
The amount of concurrency that is not reserved for specific functions. Lambda keeps a minimum of 100 units of unreserved concurrency, ensuring that functions without reserved concurrency can still execute. If all concurrency could be reserved for one or two functions, none would remain for the others; maintaining at least 100 units of unreserved concurrency ensures all functions can run upon invocation.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Reserved concurrency&lt;/em&gt; &lt;br&gt;
Reserved concurrency guarantees the maximum number of concurrent instances for a function. Once reserved, this concurrency is exclusively allocated to that function, preventing other functions from utilizing it. There is no charge for configuring reserved concurrency for a function.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Provisioned concurrency&lt;/em&gt;&lt;br&gt;
Provisioned concurrency initializes a specified number of runtime environments, ensuring they are ready to promptly respond to your function's invocations. This option is ideal for achieving high performance and low latency.&lt;br&gt;
You're charged for the provisioned concurrency amount and the duration it's configured. For instance, you may elevate provisioned concurrency in anticipation of a traffic surge. To avoid paying for unused warm environments, you can scale back down once the event subsides.&lt;/p&gt;
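&lt;p&gt;As a sketch, reserved and provisioned concurrency are set through two separate Lambda APIs (the function name and alias below are placeholders), and the unreserved pool is simply what is left of the account limit:&lt;/p&gt;

```python
def unreserved_pool(account_limit, reservations):
    # What remains for functions without reserved concurrency.
    return account_limit - sum(reservations)

def reserve_concurrency(function_name, reserved):
    import boto3
    boto3.client("lambda").put_function_concurrency(
        FunctionName=function_name,
        ReservedConcurrentExecutions=reserved,
    )

def provision_concurrency(function_name, qualifier, provisioned):
    import boto3
    boto3.client("lambda").put_provisioned_concurrency_config(
        FunctionName=function_name,
        Qualifier=qualifier,  # provisioned concurrency targets a version or alias
        ProvisionedConcurrentExecutions=provisioned,
    )
```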

&lt;p&gt;&lt;strong&gt;Testing Concurrency:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ensuring your concurrency, memory, and timeout settings are optimal requires thorough testing against real-world scenarios. Here are some recommendations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Conduct performance tests mimicking peak invocation levels, monitor metrics to view throttling occurrences during performance peaks.&lt;/li&gt;
&lt;li&gt;Assess whether your backend can handle the incoming request velocity.&lt;/li&gt;
&lt;li&gt;Test comprehensively; if interfacing with Amazon RDS, verify that your function's concurrency levels align with the database's processing capabilities.&lt;/li&gt;
&lt;li&gt;Validate error handling functionality; include tests to push the application beyond concurrency limits to confirm proper error management.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;In conclusion,&lt;/strong&gt; we talked about optimizing AWS Lambda functions for performance, cost-efficiency, and scalability. We've explored key configuration settings such as memory allocation, timeout values, concurrency, and provisioned concurrency. &lt;br&gt;
Additionally, we've discussed strategies for testing concurrency and ensuring that our functions perform effectively under real-world conditions. By implementing these best practices, we aim to enhance the reliability and efficiency of our serverless applications on AWS.&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>serverless</category>
      <category>eventdriven</category>
    </item>
    <item>
      <title>AWS Lambda Function Permissions</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Tue, 26 Dec 2023 09:19:52 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-lambda-function-permissions-43nn</link>
      <guid>https://forem.com/aws-builders/aws-lambda-function-permissions-43nn</guid>
      <description>&lt;p&gt;In this article we will explore the permissions and connection capabilities that allow &lt;em&gt;AWS Lambda&lt;/em&gt; and other &lt;em&gt;AWS services&lt;/em&gt; to connect to each other.&lt;/p&gt;

&lt;p&gt;Lambda functions involve two aspects that determine the required permissions:&lt;br&gt;
&lt;em&gt;Execution Permission:&lt;/em&gt; Authorization for the Lambda function to interact with other services.&lt;br&gt;
&lt;em&gt;Invoking Permission:&lt;/em&gt; Permission for principals to invoke the function, controlled using an IAM resource-based policy.&lt;/p&gt;

&lt;p&gt;Let's break it down&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Execution role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rdr0c27xez9web4n6x0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rdr0c27xez9web4n6x0.png" alt="Execution role" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The execution role gives your function permissions to interact with other services. You provide this role when you create a function, and Lambda assumes the role when your function is invoked.&lt;/p&gt;

&lt;p&gt;Execution role definitions&lt;br&gt;
Let's assume you have an S3 bucket that your Lambda function needs to read from and write to. Your execution role would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The trust policy defines who (or what) can assume the role. In the case of AWS Lambda, the principal is the Lambda service, and it's given permission to assume the role using the &lt;em&gt;AWS STS AssumeRole&lt;/em&gt; action. Here's an example trust policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy explicitly states that the AWS Lambda service (identified by the service principal "lambda.amazonaws.com") is allowed to assume the role by calling the &lt;em&gt;sts:AssumeRole&lt;/em&gt; action.&lt;br&gt;
When you create an IAM role for your Lambda function, you would attach this trust policy to the role. It ensures that the Lambda service has the necessary permissions to assume the role and execute the Lambda function on behalf of your AWS account.&lt;/p&gt;
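&lt;p&gt;As a sketch, creating such a role programmatically attaches the trust policy at creation time; the role name below is a placeholder:&lt;/p&gt;

```python
import json

# The same trust policy as above, expressed as a Python dict.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

def create_execution_role(role_name="my-lambda-role"):  # placeholder name
    import boto3
    iam = boto3.client("iam")
    role = iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(TRUST_POLICY),
    )
    return role["Role"]["Arn"]
```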

&lt;p&gt;Here are screenshots showing where to configure both the function's "Execution role" (its permissions) and the "Trust policy":&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jwnwgey6wmszqn8qci7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jwnwgey6wmszqn8qci7.png" alt="Execution Role screenshot" width="710" height="757"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v5kuqy21bkj3im5myv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2v5kuqy21bkj3im5myv9.png" alt="Trust policy" width="695" height="752"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource-based policy&lt;/strong&gt;&lt;br&gt;
A resource policy (also called a function policy) tells the Lambda service which principals have permission to invoke the Lambda function. An AWS principal may be a user, role, another AWS service, or another AWS account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92loesnzgd4hfnugcjmp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F92loesnzgd4hfnugcjmp.png" alt="Resource-based policy" width="568" height="244"&gt;&lt;/a&gt;&lt;br&gt;
Here is an example of a Resource-based policy&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Id": "default",
    "Statement": [
        {
            "Sid": "lambda-allow-s3-my-function",
            "Effect": "Allow",
            "Principal": {
              "Service": "s3.amazonaws.com"
            },
            "Action": "lambda:InvokeFunction",
            "Resource":  "arn:aws:lambda:us-east-2:123456789012:function:my-function",
            "Condition": {
              "StringEquals": {
                "AWS:SourceAccount": "123456789012"
              },
              "ArnLike": {
                "AWS:SourceArn": "arn:aws:s3:::my-bucket"
              }
            }
        }
     ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Read more about Resource-based policy from &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; &lt;/p&gt;
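&lt;p&gt;Resource-based statements like the one above are typically added with the Lambda add_permission API. Here is a sketch that builds the equivalent grant; the function name, bucket ARN, and account ID are placeholders:&lt;/p&gt;

```python
def s3_invoke_permission(function_name, bucket_arn, account_id):
    # Arguments for add_permission granting S3 the right to invoke the function.
    return {
        "FunctionName": function_name,
        "StatementId": "lambda-allow-s3-my-function",
        "Action": "lambda:InvokeFunction",
        "Principal": "s3.amazonaws.com",
        "SourceArn": bucket_arn,        # e.g. arn:aws:s3:::my-bucket
        "SourceAccount": account_id,    # guards against the confused-deputy problem
    }

def allow_s3_invoke(function_name, bucket_arn, account_id):
    import boto3
    boto3.client("lambda").add_permission(
        **s3_invoke_permission(function_name, bucket_arn, account_id)
    )
```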

&lt;p&gt;&lt;em&gt;What if I need a function to connect to an AWS resource in a VPC, like an EC2 or RDS instance?&lt;br&gt;
And is it possible to call the function from an EC2 instance in a private subnet?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessing resources in a VPC&lt;/strong&gt;&lt;br&gt;
Enabling your Lambda function to access resources inside your VPC requires additional VPC-specific configuration information, such as VPC subnet IDs and security group IDs. This functionality allows Lambda to access resources in the VPC. It does not change how the function is secured. You also need an execution role with permissions to create, describe, and delete elastic network interfaces. Lambda provides a permissions policy for this purpose named "AWSLambdaVPCAccessExecutionRole".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gbk8dj7cs06mxrc76nl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3gbk8dj7cs06mxrc76nl.png" alt="Accessing resources in a VPC" width="774" height="178"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Learn more about attaching your Lambda functions to a VPC in &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html" rel="noopener noreferrer"&gt;the AWS Lambda Developer Guide.&lt;/a&gt;&lt;/p&gt;
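&lt;p&gt;As a sketch, the VPC attachment itself is a single update_function_configuration call with subnet and security group IDs; all IDs below are placeholders:&lt;/p&gt;

```python
def vpc_config(subnet_ids, security_group_ids):
    # The VpcConfig structure expected by update_function_configuration.
    return {
        "SubnetIds": list(subnet_ids),
        "SecurityGroupIds": list(security_group_ids),
    }

def attach_to_vpc(function_name, subnet_ids, security_group_ids):
    import boto3
    boto3.client("lambda").update_function_configuration(
        FunctionName=function_name,
        VpcConfig=vpc_config(subnet_ids, security_group_ids),
    )

# Example (placeholder IDs):
# attach_to_vpc("my-function", ["subnet-0abc"], ["sg-0def"])
```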

&lt;p&gt;&lt;strong&gt;Lambda and AWS PrivateLink&lt;/strong&gt;&lt;br&gt;
AWS Lambda supports AWS PrivateLink, enabling you to securely create, manage, and invoke Lambda functions within your VPC or on-premises data centers without exposing traffic to the public Internet.&lt;br&gt;
For a private link between your VPC and Lambda, set up an interface VPC endpoint. This utilizes AWS PrivateLink, allowing private access to Lambda APIs without the need for an internet gateway, NAT device, VPN, or AWS Direct Connect. No public IP addresses are required for instances in your VPC to communicate with Lambda APIs, ensuring that traffic between your VPC and Lambda stays within the AWS network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4razlchd6h2gs15zodr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4razlchd6h2gs15zodr5.png" alt="Lambda and AWS PrivateLink" width="706" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Learn more about AWS PrivateLink and AWS Lambda in &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc-endpoints.html" rel="noopener noreferrer"&gt;the AWS Lambda Developer Guide.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
In this article we covered the permissions that control who can do what with AWS Lambda and other AWS services.&lt;br&gt;
We then discussed the ability of AWS Lambda to connect to AWS services in a VPC, and how a resource in a private VPC can call Lambda functions privately.&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Unleashing the Power of Serverless: A Journey into Event-Driven Architectures with AWS</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Mon, 18 Dec 2023 13:19:14 +0000</pubDate>
      <link>https://forem.com/aws-builders/unleashing-the-power-of-serverless-a-journey-into-event-driven-architectures-with-aws-3ppi</link>
      <guid>https://forem.com/aws-builders/unleashing-the-power-of-serverless-a-journey-into-event-driven-architectures-with-aws-3ppi</guid>
      <description>&lt;p&gt;Welcome, everyone! are you ready to improve your approach for application development?&lt;br&gt;
In this series I will cover how to use serverless and event driven architecture in your applications.&lt;br&gt;
I will explain the concept and how to use AWS services in this architecture, and I will add hand-on guides if you are like me love to learn by doing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let’s start with some concepts first:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Serverless advantages&lt;/strong&gt;&lt;br&gt;
Cloud computing's main advantage is removing the need to manage data center components yourself, leaving that to the cloud provider.&lt;br&gt;
In a serverless environment you don't even manage servers. Of course, servers still run your applications, but you don't need to manage and maintain them; AWS takes care of that, allowing you to focus on your code, which is what shapes your clients' experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Lambda&lt;/strong&gt;&lt;br&gt;
AWS Lambda is a serverless compute service that lets you run your code without the need to manage servers. It runs on highly available infrastructure, and it handles scaling, code monitoring, and logging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In short, Lambda offers the following benefits&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Serverless operation&lt;/li&gt;
&lt;li&gt;Event-triggered functions&lt;/li&gt;
&lt;li&gt;Automatic scaling&lt;/li&gt;
&lt;li&gt;Integrated monitoring and logging&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are new to AWS Lambda, and you need to learn by doing, then please read my previous post on &lt;a href="https://dev.to/aws-builders/how-to-use-lambda-functions-1d8e"&gt;how to use AWS Lambda&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event-driven architectures&lt;/strong&gt;&lt;br&gt;
Event-driven architecture decouples your application into many services that communicate by sending events to one another; each service can consume events and run code in the context of an event.&lt;br&gt;
But what is an event?&lt;br&gt;
An event reflects a change, a user request, or an update, like adding an item to a shopping cart on an e-commerce site. Once an event happens, its data is sent on for other services to use. In this architecture, sending events is the main way different services exchange information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Producers, pathways, consumers&lt;/strong&gt;&lt;br&gt;
There are many AWS services that produce events, which we can call an event sources for AWS Lambda. In response to events, Lambda executes the code, which is crafted to handle those events, and when executed, Lambda functions can trigger additional actions or subsequent events.&lt;/p&gt;

&lt;p&gt;The following diagram illustrates how services work together in a serverless architecture &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyra9cnzrcrn9jc3q8y0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyra9cnzrcrn9jc3q8y0v.png" alt="serverless architecture " width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Notice: An event is a state change in what you're monitoring, such as an updated shopping cart or a new file uploaded to Amazon S3.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;After covering the fundamentals of serverless architecture and AWS Lambda, let's dive into the details of how AWS Lambda operates through event sources and triggers.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does AWS Lambda work?&lt;/strong&gt;&lt;br&gt;
To understand event-driven architectures and services like AWS Lambda, you need to understand the events themselves. This section dives into how events trigger Lambda functions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invocation models for running Lambda functions&lt;/strong&gt;&lt;br&gt;
Event sources can trigger Lambda functions through three different invocation models, and each model suits specific requirements:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Synchronous invocation&lt;/li&gt;
&lt;li&gt;Asynchronous invocation&lt;/li&gt;
&lt;li&gt;Polling invocation&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;Now let’s learn about each invocation model&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Synchronous invocation&lt;/strong&gt;&lt;br&gt;
In this model, you wait for the function to process the event and return a response. When the function completes, Lambda returns the response from the function's code along with additional data. &lt;/p&gt;

&lt;p&gt;The following diagram illustrates clients invoking a Lambda function synchronously. Events go directly to the function, and the function response returns directly to the invoker. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpga8sb9px6knhxs6o4u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpga8sb9px6knhxs6o4u.png" alt="synchronously" width="345" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following AWS services invoke Lambda synchronously:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon API Gateway&lt;/li&gt;
&lt;li&gt;Amazon Cognito&lt;/li&gt;
&lt;li&gt;AWS CloudFormation&lt;/li&gt;
&lt;li&gt;Amazon Alexa&lt;/li&gt;
&lt;li&gt;Amazon Lex&lt;/li&gt;
&lt;li&gt;Amazon CloudFront&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;More information about &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-sync.html" rel="noopener noreferrer"&gt;Synchronous invocation&lt;/a&gt; in the AWS Lambda Developer Guide.&lt;/em&gt;&lt;/p&gt;
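&lt;p&gt;As a minimal sketch of the synchronous pattern (the event shape and names here are illustrative, not a specific service's payload), the caller blocks until the handler's return value comes back as the response:&lt;/p&gt;

```python
import json

def lambda_handler(event, context):
    # Synchronous invocation: the caller waits, and this return value
    # is delivered back to it as the response payload.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Simulating a synchronous invoke locally: the response is available immediately.
response = lambda_handler({"name": "Ayman"}, None)
print(response["statusCode"])  # 200
```

&lt;p&gt;This request-and-wait shape is why services like API Gateway use it: the HTTP client is itself waiting for an answer.&lt;/p&gt;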

&lt;p&gt;&lt;strong&gt;Asynchronous invocation&lt;/strong&gt;&lt;br&gt;
Several AWS services, such as &lt;em&gt;S3&lt;/em&gt; and &lt;em&gt;Amazon SNS&lt;/em&gt;, invoke functions asynchronously to process events. When you invoke a function asynchronously, you don't wait for a response from the function code. You hand off the event to Lambda and Lambda handles the rest. You can configure how Lambda handles errors, and can send invocation records to a downstream resource such as &lt;em&gt;Amazon SQS&lt;/em&gt; or &lt;em&gt;EventBridge&lt;/em&gt; to chain together components of your application. &lt;em&gt;More about destinations is coming below.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The following diagram illustrates clients invoking a Lambda function asynchronously. Events are queued before being sent to the function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkpl8e3a4znqq2xwlfm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frkpl8e3a4znqq2xwlfm0.png" alt="asynchronously" width="510" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Destinations&lt;/strong&gt;&lt;br&gt;
Destinations send records of asynchronous invocations to other services. You can set up separate destinations for events that fail processing and for events that succeed.&lt;br&gt;
Destinations provide a way to handle errors and successes without requiring additional code. &lt;br&gt;
More information about &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html#invocation-async-destinations" rel="noopener noreferrer"&gt;Configuring destinations for asynchronous invocation&lt;/a&gt; in the AWS Lambda Developer Guide.&lt;/p&gt;

&lt;p&gt;The following diagram shows a function handling asynchronous invocation. If the function responds successfully or exits without errors, Lambda sends an invocation record to an EventBridge event bus. In case of repeated processing failures, Lambda forwards an invocation record to an Amazon Simple Queue Service (Amazon SQS) queue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3a9em3vkk6jc9e9y9uf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm3a9em3vkk6jc9e9y9uf.png" alt="failures" width="510" height="295"&gt;&lt;/a&gt;&lt;/p&gt;
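&lt;p&gt;The invocation record Lambda delivers to a destination wraps the original request together with the outcome. As a sketch (the field names follow the documented record format, but the payload values below are made up), a downstream consumer can pull the retry condition and the original event straight out of the record:&lt;/p&gt;

```python
def summarize_destination_record(record):
    # On-failure destination records carry the retry condition and count
    # alongside the original event, so no extra error-handling code is
    # needed in the function itself.
    ctx = record["requestContext"]
    return {
        "condition": ctx["condition"],
        "attempts": ctx["approximateInvokeCount"],
        "original_event": record["requestPayload"],
    }

# A trimmed-down on-failure record; the requestId and payload are placeholders.
failure_record = {
    "version": "1.0",
    "requestContext": {
        "requestId": "abc-123",
        "condition": "RetriesExhausted",
        "approximateInvokeCount": 3,
    },
    "requestPayload": {"orderId": 42},
}
print(summarize_destination_record(failure_record))
```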

&lt;p&gt;&lt;strong&gt;Asynchronous AWS service integration&lt;/strong&gt; &lt;br&gt;
The following AWS services invoke Lambda asynchronously: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon SNS &lt;/li&gt;
&lt;li&gt;Amazon S3&lt;/li&gt;
&lt;li&gt;Amazon EventBridge&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;More information about &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html" rel="noopener noreferrer"&gt;Asynchronous invocation&lt;/a&gt; in the AWS Lambda Developer Guide.&lt;/p&gt;
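&lt;p&gt;For example, an S3-triggered function receives the event record, but S3 never sees its return value; the uploader has already moved on. A minimal sketch (the bucket and key values are illustrative, and the sample event is trimmed down from the real S3 notification shape):&lt;/p&gt;

```python
def lambda_handler(event, context):
    # Asynchronous invocation: S3 queues the event and does not wait for
    # this return value; it is only visible in logs or to a destination.
    keys = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        keys.append(f"{bucket}/{key}")
    return {"processed": keys}

# A trimmed-down S3 put event for local experimentation.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"},
                "object": {"key": "uploads/photo.png"}}}
    ]
}
print(lambda_handler(sample_event, None))
```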

&lt;p&gt;And one last invocation model:&lt;br&gt;
&lt;strong&gt;Polling invocation&lt;/strong&gt;&lt;br&gt;
This invocation model lets you integrate with AWS stream- and queue-based services with no code or server management. Lambda polls the following services on your behalf, retrieves records, and invokes your functions. &lt;br&gt;
The following services are supported:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon Kinesis&lt;/li&gt;
&lt;li&gt;Amazon SQS&lt;/li&gt;
&lt;li&gt;Amazon DynamoDB Streams&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With this type of integration, AWS manages the poller on your behalf and invokes your function synchronously. The retry behavior for this model is based on data expiration in the data source. For example, Kinesis Data Streams stores records for 24 hours by default (and for up to 168 hours).&lt;/p&gt;
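&lt;p&gt;With polling, your code only ever sees the batch of records Lambda retrieved for you. As a sketch of an SQS batch handler (the message bodies are illustrative; the &lt;code&gt;batchItemFailures&lt;/code&gt; response follows the documented partial-batch-response format), returning the IDs of failed messages tells the poller to retry only those:&lt;/p&gt;

```python
import json

def process(body):
    # Placeholder business logic: reject messages without an orderId.
    if "orderId" not in body:
        raise ValueError("missing orderId")

def lambda_handler(event, context):
    # Lambda polls SQS for you and invokes this handler synchronously with
    # a batch of messages. Listing messageIds under "batchItemFailures"
    # makes only those messages visible again for retry.
    failures = []
    for record in event["Records"]:
        try:
            process(json.loads(record["body"]))
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

# A trimmed-down SQS batch: one good message, one unparseable one.
batch = {"Records": [
    {"messageId": "m1", "body": json.dumps({"orderId": 1})},
    {"messageId": "m2", "body": "not-json"},
]}
print(lambda_handler(batch, None))
```

&lt;p&gt;Without the partial-batch response, one bad message would force the whole batch to be retried.&lt;/p&gt;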

&lt;p&gt;&lt;em&gt;Event source mapping&lt;/em&gt;&lt;br&gt;
The configuration of services as event triggers is known as event source mapping. You configure an event source to launch your Lambda function and set up the IAM permissions that allow the events to flow; for these polling sources, the function's execution role needs permission to read from the source. &lt;/p&gt;

&lt;p&gt;Lambda reads events from the following services:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon DynamoDB&lt;/li&gt;
&lt;li&gt;Amazon Kinesis&lt;/li&gt;
&lt;li&gt;Amazon MQ&lt;/li&gt;
&lt;li&gt;Amazon Managed Streaming for Apache Kafka (MSK)&lt;/li&gt;
&lt;li&gt;self-managed Apache Kafka&lt;/li&gt;
&lt;li&gt;Amazon SQS&lt;/li&gt;
&lt;li&gt;Amazon DocumentDB&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;More information about &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html" rel="noopener noreferrer"&gt;Lambda event source mappings&lt;/a&gt; in the AWS Lambda Developer Guide.&lt;/p&gt;
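&lt;p&gt;Creating a mapping is a single API call. The sketch below only builds the parameter set that would be passed to boto3's &lt;code&gt;create_event_source_mapping&lt;/code&gt;; the queue ARN and function name are placeholders, and no AWS call is made here:&lt;/p&gt;

```python
# Parameters for lambda_client.create_event_source_mapping(**mapping).
# The ARN and function name below are placeholders.
mapping = {
    "EventSourceArn": "arn:aws:sqs:us-east-1:123456789012:orders-queue",
    "FunctionName": "process-orders",
    "BatchSize": 10,   # how many records Lambda hands to each invocation
    "Enabled": True,
}
# For stream sources (Kinesis, DynamoDB Streams) you would also set
# "StartingPosition": "LATEST" or "TRIM_HORIZON".
print(sorted(mapping))
```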

&lt;p&gt;&lt;strong&gt;In conclusion&lt;/strong&gt;, we've explored the principles of serverless architecture and dived into how Lambda fits into such an architecture, examining its functionality through event sources and triggers. &lt;br&gt;
By understanding the different invocation models, from the synchronous and asynchronous approaches to the seamless integration with AWS streaming and queuing services through polling, you're ready to jump in and start using serverless computing. &lt;br&gt;
Because AWS Lambda abstracts away the infrastructure layer and offers diverse invocation options, it helps developers focus on crafting code that drives unique and innovative business solutions. &lt;br&gt;
Stay tuned for the upcoming articles in this series for more fun with serverless!&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>eventdriven</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Introduction to AWS Fargate</title>
      <dc:creator>Ayman Aly Mahmoud</dc:creator>
      <pubDate>Thu, 16 Mar 2023 08:58:16 +0000</pubDate>
      <link>https://forem.com/aws-builders/introduction-to-aws-fargate-4iaf</link>
      <guid>https://forem.com/aws-builders/introduction-to-aws-fargate-4iaf</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is AWS Fargate?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As described by AWS, AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.&lt;/p&gt;

&lt;p&gt;That means you focus more on developing your applications, and your business value, and spare the time to manage and maintain the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeetg2uwfjxt3e7kxhj1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeetg2uwfjxt3e7kxhj1.png" alt="fargate" width="800" height="440"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Components&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster&lt;/strong&gt;&lt;br&gt;
With Amazon ECS, we use clusters to group tasks or services together and to isolate applications. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task definitions&lt;/strong&gt;&lt;br&gt;
A task definition is a text file that describes the container or containers that form your application. It is a JSON file, and a single task definition can describe up to ten containers.&lt;br&gt;
The file contains information such as which port to open for the application.&lt;/p&gt;
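&lt;p&gt;As an illustration (the family name, image URI, and port below are placeholders, not values from this tutorial), a minimal Fargate-compatible task definition looks like this:&lt;/p&gt;

```python
import json

# A minimal Fargate task definition built as a Python dict.
task_definition = {
    "family": "sample-web",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # required for Fargate tasks
    "cpu": "256",              # 0.25 vCPU
    "memory": "512",           # 512 MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/sample-web:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}
print(json.dumps(task_definition, indent=2))
```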

&lt;p&gt;&lt;strong&gt;Task&lt;/strong&gt;&lt;br&gt;
A Task is created when you run a task directly, which launches the container(s) defined in the task definition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service&lt;/strong&gt;&lt;br&gt;
A Service is responsible for creating Tasks. It guarantees that a given number of Tasks is running at all times; if a Task's container fails due to an error, or the underlying EC2 instance fails, the ECS service replaces it by launching another Task based on your task definition.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What do we mean by managing apps and services at the container level?&lt;/strong&gt;&lt;br&gt;
To better understand this, here is a simple architecture of Fargate in the Amazon ECS stack.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4twjhpquqzcq8odjcw6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4twjhpquqzcq8odjcw6.png" alt="fargate-arch" width="800" height="920"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As illustrated above, with ECS and Fargate it starts with a task definition; you, as the customer, use the task definition to define the container, CPU, memory, and image, along with other details that tell the container how it should run.&lt;br&gt;
A task then represents one or more containers that make up the application.&lt;br&gt;
To run the task, you simply use the run task API and choose the launch type: either EC2 instances that you manage, or the managed environment provided by Fargate.&lt;/p&gt;
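&lt;p&gt;To make that concrete, here is a sketch of the parameter set that would be passed to boto3's &lt;code&gt;run_task&lt;/code&gt;; the cluster name, subnet, and security group IDs are placeholders, and no AWS call is made here:&lt;/p&gt;

```python
# Parameters for ecs_client.run_task(**params); names and IDs are placeholders.
params = {
    "cluster": "sample-cluster",
    "taskDefinition": "sample-web",  # family, or family:revision
    "launchType": "FARGATE",         # or "EC2" to use instances you manage
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
}
print(params["launchType"])
```

&lt;p&gt;Switching the same workload between Fargate and self-managed EC2 capacity is largely a matter of changing that one &lt;code&gt;launchType&lt;/code&gt; value.&lt;/p&gt;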

&lt;p&gt;You can use AWS Fargate from AWS Console, AWS CLI, and Amazon ECS CLI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Networking with Fargate&lt;/strong&gt;&lt;br&gt;
AWS Fargate runs in your VPC; you select the VPC, subnets, and security groups that you need to attach to the tasks that run your containerized applications.&lt;/p&gt;

&lt;p&gt;Fargate supports Application Load Balancers (ALB) and Network Load Balancers (NLB).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security with Fargate&lt;/strong&gt;&lt;br&gt;
There is no SSH access: because you offload management of the infrastructure to AWS, there is no need for SSH access, which reduces the attack surface.&lt;br&gt;
You also get cluster-level isolation of your containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fargate use cases&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long running services&lt;/li&gt;
&lt;li&gt;Highly available workloads&lt;/li&gt;
&lt;li&gt;Monolithic applications&lt;/li&gt;
&lt;li&gt;Microservices applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;When should you not use Fargate?&lt;/strong&gt;&lt;br&gt;
You should go for the EC2 launch type if you have Reserved Instances running your applications.&lt;br&gt;
Because Fargate charges you per second of CPU and memory consumed, there is no way to translate that into Reserved Instance savings.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Notice&lt;/strong&gt;: Before 2019, Spot pricing was not supported; then at AWS re:Invent 2019, AWS announced AWS Fargate Spot, a new capability on AWS Fargate that can run interruption-tolerant Amazon ECS Tasks at up to a 70% discount off the Fargate price.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container services on AWS&lt;/strong&gt;&lt;br&gt;
If you are new to containers and want to make use of Fargate, it is recommended that you first learn more about the container services on AWS, and there is no better way than reading about them in the AWS documentation. Here is what you need to read.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html" rel="noopener noreferrer"&gt;Amazon ECS&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html" rel="noopener noreferrer"&gt;Amazon EKS&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html" rel="noopener noreferrer"&gt;Amazon ECR&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tutorial: How to use AWS Fargate&lt;/strong&gt;&lt;br&gt;
Now that you understand the basics and want some hands-on experience, I will walk you through this tutorial on deploying a sample container using AWS Fargate. Let's jump in.&lt;/p&gt;

&lt;p&gt;1- Sign in to the console and search for Amazon ECS, then click on "Get Started".&lt;br&gt;
The first component to create is the cluster that will group tasks together.&lt;br&gt;
2- Click on "Create Cluster".&lt;br&gt;
Enter the cluster name and choose the VPC and subnets you want to create the cluster in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw0o7xvbdasawk7odgb4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flw0o7xvbdasawk7odgb4.png" alt="cluster" width="800" height="561"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the infrastructure section, you will find AWS Fargate is selected for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnaq4j9xphs9upy60kfl3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnaq4j9xphs9upy60kfl3.png" alt="fargate-selected" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click "Create"&lt;/p&gt;

&lt;p&gt;3- Click on the cluster name to open its page, and then from the service tab, click "Create" to create a new service.&lt;/p&gt;

&lt;p&gt;4- Select the "Launch Type" Fargate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69yhy8kcbbbkwye4rc42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69yhy8kcbbbkwye4rc42.png" alt="Launch-Type" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5- Scroll down to the "Deployment configuration" section, and click on "Task definitions".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnkduk6igtcigy8lxkbb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqnkduk6igtcigy8lxkbb.png" alt="Task-definitions" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will open another browser tab to create the task definition.&lt;/p&gt;

&lt;p&gt;6- Click on "Create new task definition".&lt;br&gt;
Enter a "Task definition family" name.&lt;br&gt;
In the container details section, enter the container name and the image URI.&lt;br&gt;
If you don't have an image in Amazon ECR, please get one first.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zjxn6p6p7znwb840l87.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zjxn6p6p7znwb840l87.png" alt="Task-definitions2" width="800" height="559"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;7- Review the "Port mappings", then click "Next"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4q3q9yak09nhw3s9ft0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4q3q9yak09nhw3s9ft0.png" alt="Port-mappings" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8- In the "Environment" section, make sure AWS Fargate is selected, and choose the CPU and memory that work for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hmlp6pr2yyubc61h7qg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hmlp6pr2yyubc61h7qg.png" alt="Environment" width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;9- Accept all the default settings and scroll down, then click "Next"&lt;/p&gt;

&lt;p&gt;10- On the "Review and create" page, review everything and click "Create".&lt;/p&gt;

&lt;p&gt;11- Now back to the "Create service" page, you will need to refresh the page to load the task definition you just created and select it, then enter a "Service name".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g1js1nuv1fs74ysgno4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2g1js1nuv1fs74ysgno4.png" alt="def" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;12- Review the Networking and Load balancing sections if needed, and finally click "Create".&lt;/p&gt;

&lt;p&gt;13- Click on the service to open its page, and review its health.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56syblaxr2e8dtju1pac.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F56syblaxr2e8dtju1pac.png" alt="health" width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;14- Click on "Configuration and task"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyyssdhc7lhii1kg7aim.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyyssdhc7lhii1kg7aim.png" alt="Configuration-task" width="800" height="89"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;15- Scroll down, check that the task is running, then click on its name.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwztux0oyco6rui01zzxo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwztux0oyco6rui01zzxo.png" alt="task" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;16- Click on "open address" to confirm that your service is running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyd2w07zm6y6udcnsd2a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyd2w07zm6y6udcnsd2a.png" alt="running" width="500" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To be able to connect as in step 16, you will need to add a rule to the security group that allows inbound traffic on port 80.&lt;/p&gt;

&lt;p&gt;That will be all.&lt;br&gt;
Make sure you delete the container and clean up all the resources to avoid unexpected costs on your bill.&lt;/p&gt;

&lt;p&gt;I hope this has been beneficial to you, and I would like to thank you for reading.&lt;/p&gt;

&lt;p&gt;Follow me for more articles and tutorials in serverless.&lt;/p&gt;

</description>
      <category>awsfargate</category>
      <category>serverless</category>
      <category>containers</category>
    </item>
  </channel>
</rss>
