<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Kishan</title>
    <description>The latest articles on Forem by Kishan (@am_i_dev).</description>
    <link>https://forem.com/am_i_dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1218401%2F145673b1-31ee-4035-88bb-4e0e34387b1d.jpg</url>
      <title>Forem: Kishan</title>
      <link>https://forem.com/am_i_dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/am_i_dev"/>
    <language>en</language>
    <item>
      <title>AWS AppSync: The Game-Changing Access Method For Environment Variables</title>
      <dc:creator>Kishan</dc:creator>
      <pubDate>Thu, 08 Feb 2024 03:26:18 +0000</pubDate>
      <link>https://forem.com/am_i_dev/aws-appsync-the-game-changing-access-method-for-environment-variables-3h5p</link>
      <guid>https://forem.com/am_i_dev/aws-appsync-the-game-changing-access-method-for-environment-variables-3h5p</guid>
      <description>&lt;p&gt;Excitingggggg news! AWS has just rolled out support for environment variables in AppSync. Let's delve into incorporating this feature in our SAM &amp;amp; Serverless Framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; When defining the AppSync API, we can now add environment variables just as we do when creating a Lambda function. In the Properties section, include EnvironmentVariables and add key-value pairs as needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  UserAppsync:
    Type: "AWS::AppSync::GraphQLApi"
    Properties:
      Name: "user-int-appsync"
      AuthenticationType: "AWS_IAM"
      EnvironmentVariables:
        USERTABLENAME: "user-int-table"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
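The intro mentions both SAM and the Serverless Framework; the Serverless Framework accepts the same CloudFormation resource verbatim under its resources section. A minimal sketch, reusing the hypothetical names from the template above:

```yaml
# serverless.yml (sketch) - the Serverless Framework passes the resources
# section through as raw CloudFormation, so the same properties apply.
resources:
  Resources:
    UserAppsync:
      Type: "AWS::AppSync::GraphQLApi"
      Properties:
        Name: "user-int-appsync"
        AuthenticationType: "AWS_IAM"
        EnvironmentVariables:
          USERTABLENAME: "user-int-table"
```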



&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Having defined the environment variables in the YAML file, let's access them in our resolver file. Once environment variables are specified, they are passed to both the request and response handlers within the ctx (context) function parameter.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;//accessing the environment variable(NEW METHOD)
const tableName = ctx.env.USERTABLENAME
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Accessing it inside the resolver is demonstrated below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/**
 * @param {*} ctx - Context
 * @returns {Object} - Request object
 */

export function request(ctx) {
  //accessing the environment variable(NEW METHOD)
  const tableName = ctx.env.USERTABLENAME
  return {
    operation: 'BatchGetItem',
    tables: {
      [tableName]: { keys },
    },
  };
}

/**
 * @param {*} ctx - Context
 * @returns {Object} - Response object
 */
export function response(ctx) {
  return {};
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As discussed, the environment variables are now passed within the ctx of both the request and response handlers. This enhancement simplifies passing and accessing environment variables with the latest update.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>appsync</category>
      <category>lambda</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Serverless Pattern: Lambda Error Handling Simplified With SQS and Dead Letter Queue (DLQ)</title>
      <dc:creator>Kishan</dc:creator>
      <pubDate>Sun, 07 Jan 2024 06:17:13 +0000</pubDate>
      <link>https://forem.com/am_i_dev/aws-pattern-mastering-serverless-error-handling-with-sqs-and-dead-letter-queue-dlq-4b10</link>
      <guid>https://forem.com/am_i_dev/aws-pattern-mastering-serverless-error-handling-with-sqs-and-dead-letter-queue-dlq-4b10</guid>
      <description>&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;You have just started with Lambda, your function receives a lot of requests, some of them fail, and you are not able to trace them. Let's use Simple Queue Service (SQS) and a Dead Letter Queue (DLQ) to build a pattern that catches all the failed requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture pattern
&lt;/h2&gt;

&lt;p&gt;To catch every request that fails, we build a pattern where the Lambda is triggered from an SQS queue, with a DLQ attached to catch the Lambda failures. Below is the architecture design for the solution.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxabs5jsrg6rw2ds4gko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxabs5jsrg6rw2ds4gko.png" alt="Pattern" width="730" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's implement the pattern by understanding the resources to be created:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. normalQueue:&lt;/strong&gt; This queue is the starting point and triggers the Lambda.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. FailedLambda:&lt;/strong&gt; This Lambda receives the data from SQS; for this demo it is designed to throw an error.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. DeadLetterQueue:&lt;/strong&gt; When the Lambda fails, the message is stored in this DLQ.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Implement?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Create an SQS DLQ named "deadLetterQueueTest" with the Standard type and a retention period of 14 days.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felcbbnhcpn4u4r4xrbrp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Felcbbnhcpn4u4r4xrbrp.png" alt="DLQName" width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To make this queue available for other queues to use as a dead letter queue (DLQ), we need to configure its redrive allow policy, which here permits all queues in the region to use it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0ycmrfta9g4jyb7cj82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0ycmrfta9g4jyb7cj82.png" alt="DLQ" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;
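The same wiring can be expressed in code. Below is a sketch of the CreateQueue parameters that attach the DLQ via a redrive policy; the account ID in the ARN is a placeholder, and the maxReceiveCount of 3 (how many failed receives before a message moves to the DLQ) is an assumption, not a value from this post.

```javascript
// Placeholder ARN - substitute your own account ID and region.
const dlqArn = 'arn:aws:sqs:us-east-1:123456789012:deadLetterQueueTest';

// CreateQueue parameters for "normalQueue" with the DLQ attached.
const createQueueParams = {
  QueueName: 'normalQueue',
  Attributes: {
    MessageRetentionPeriod: '1209600', // 14 days, in seconds
    // After 3 failed receives, SQS moves the message to the DLQ.
    RedrivePolicy: JSON.stringify({
      deadLetterTargetArn: dlqArn,
      maxReceiveCount: '3',
    }),
  },
};

console.log(createQueueParams.Attributes.RedrivePolicy);
```

These are the same settings the console screenshots configure; with the aws-sdk v2 client used later in this post, they would be passed to sqs.createQueue(createQueueParams).promise().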

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Next, create a queue named "normalQueue", which will be used as the source that triggers the Lambda function. Set the type to Standard and the retention period to 14 days.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0yn5fhgc14wt2qqwjlw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0yn5fhgc14wt2qqwjlw9.png" alt="QueueName" width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Dead-letter queue section, select the DLQ created in Step 1 and click the Create queue button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhucvmqu3axtuc6yn2rb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhucvmqu3axtuc6yn2rb3.png" alt="Queue" width="800" height="558"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Now let's create a Lambda function named "failedLambda" with permission to receive SQS messages. Then navigate to SQS and attach this Lambda as a trigger to "normalQueue", as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwy4eo4ufd0ig6pzmjjp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwy4eo4ufd0ig6pzmjjp.png" alt="AttachDLQ" width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Lambda function code throws an error, such as an incorrect URL in an Axios call, to simulate a failed request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const axios = require('axios');

exports.handler = async (event) =&amp;gt; {
    try {
      console.log('[INFO] Event', event);
      const result = await axios.get('https://jsonplaceholdr.typicode.com/todos/1');
      return result.data;
    } catch (err) {
      console.log('[ERROR] Error', err)
      throw err
    }
  };

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Test
&lt;/h2&gt;

&lt;p&gt;To test, navigate to "normalQueue", click &lt;strong&gt;Send and receive messages&lt;/strong&gt; in the message tab, and send data that the Lambda can process. The message should look something like the one below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmmq2t6pf4oir2v0kq06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpmmq2t6pf4oir2v0kq06.png" alt="QueueMessage" width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;
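For repeatable tests, the same message can be sent from code instead of the console. This is a sketch only; the queue URL and the message body are placeholders, not values from this post.

```javascript
// Placeholder queue URL - copy the real one from the SQS console.
const params = {
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/normalQueue',
  MessageBody: JSON.stringify({ userId: 1, action: 'process' }),
};

// With the aws-sdk v2 client used elsewhere in this post, the send would be:
// const aws = require('aws-sdk');
// const sqs = new aws.SQS({ region: 'us-east-1' });
// await sqs.sendMessage(params).promise();

console.log('[INFO] Message body to send', params.MessageBody);
```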

&lt;p&gt;This triggers the Lambda function, which fails. As a result, the corresponding message is moved to the DLQ. To verify, navigate to the DLQ, click &lt;strong&gt;Send and receive messages&lt;/strong&gt;, and observe the failed message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhnsih6oznl02t6g5g6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhnsih6oznl02t6g5g6k.png" alt="DLQMessage" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Following this pattern to catch Lambda failures makes a developer's job easier: it becomes clear which request failed and what caused the failure. Using an SQS queue &amp;amp; a Dead Letter Queue (DLQ) makes catching errors and failure scenarios far more effective.&lt;/p&gt;

</description>
      <category>sqs</category>
      <category>aws</category>
      <category>lambda</category>
      <category>serverless</category>
    </item>
    <item>
      <title>AWS SQS: Decode the Secret Power Of Short &amp; Long Polling</title>
      <dc:creator>Kishan</dc:creator>
      <pubDate>Tue, 19 Dec 2023 06:25:53 +0000</pubDate>
      <link>https://forem.com/am_i_dev/aws-sqs-marvels-decode-the-secrets-of-short-long-pulling-5a0m</link>
      <guid>https://forem.com/am_i_dev/aws-sqs-marvels-decode-the-secrets-of-short-long-pulling-5a0m</guid>
      <description>&lt;p&gt;Discover the dynamics of Short &amp;amp; Long Polling in AWS SQS. When using the ReceiveMessage API call, SQS can be configured to perform short or long Polling, which will retrieve data from distributed components. Short Polling fetches a subset of servers, while long Polling queries all servers for message retrieval. let's understand each Polling mechanism for seamless integration into your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Short &amp;amp; Long Polling in SQS?
&lt;/h2&gt;

&lt;p&gt;When invoking the ReceiveMessage API call in SQS, the option to configure short or long polling arises.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;SQS is a distributed queue: messages are stored redundantly across multiple servers. Short &amp;amp; long polling capitalise on this architecture, differing in how many of those servers are queried when fetching data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short Polling:&lt;/strong&gt; This polling method optimises for a fast user experience by returning immediately, extracting messages from only a subset of the servers that store them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long Polling:&lt;/strong&gt; In contrast to short polling, this method searches all the servers for messages, which takes more time even when there are no messages on those servers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2qs29odnzx2k1i5h59p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2qs29odnzx2k1i5h59p.png" alt="Mind map of polling" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Consuming Message through Short Polling
&lt;/h2&gt;

&lt;p&gt;We will invoke SQS in two scenarios: an empty queue and a queue with data. For short polling, set WaitTimeSeconds to 0.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code&lt;/strong&gt;&lt;br&gt;
At the code level, to make a short-polling API call you specify the parameter &lt;code&gt;WaitTimeSeconds&lt;/code&gt; as 0 and &lt;code&gt;MaxNumberOfMessages&lt;/code&gt; as 3 (you can set it up to a maximum of 10).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const aws = require('aws-sdk')
const sqs = new aws.SQS({ region: 'us-east-1' });

const shortPolling = async () =&amp;gt; {
    try {
        const params = {
            QueueUrl: 'Queue-URL',
            WaitTimeSeconds: 0,
            MaxNumberOfMessages: 3
        }
        const data = await sqs.receiveMessage(params).promise();
        console.log('[INFO] Received messages from SQS by short polling', JSON.stringify(data));
    } catch (error) {
        console.log('[ERROR] Error while short polling the sqs', error);
        throw error;
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Scenario 1:&lt;/strong&gt;&lt;br&gt;
When the queue is empty, SQS quickly samples a subset of the servers and returns an empty response.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtdlafc0xktyo2cobbd2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtdlafc0xktyo2cobbd2.png" alt="Short Polling Empty scenario" width="800" height="29"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2:&lt;/strong&gt;&lt;br&gt;
Let's assume you have 5 messages in your queue and make a query to fetch them. SQS samples a subset of the servers and returns messages such as MSG0 and MSG3 (messages with the number postfixes 0 and 3).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjva114na3zdpfxraa0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjva114na3zdpfxraa0b.png" alt="Response Short Polling" width="800" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For subsequent calls, it returns MSG2, as the next sampled subset of servers contains that message.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjji2cm3ra48jgi90458.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpjji2cm3ra48jgi90458.png" alt="Second Short polling response" width="800" height="137"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Consuming Message through Long Polling
&lt;/h2&gt;

&lt;p&gt;Let's invoke SQS in two scenarios: an empty queue and a queue with data. For long polling, set WaitTimeSeconds to 20.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code&lt;/strong&gt;&lt;br&gt;
The code below is used for both scenarios. Here I have specified WaitTimeSeconds as 20 seconds and MaxNumberOfMessages as 3, which makes this a long-polling request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const aws = require('aws-sdk')
const sqs = new aws.SQS({ region: 'us-east-1' });

const longPolling = async () =&amp;gt; {
    try {
        const startTime = new Date().getTime();
        const params = {
            QueueUrl: 'https://sqs.us-east-1.amazonaws.com/693651664797/PollingQueue',
            WaitTimeSeconds: 20,
            MaxNumberOfMessages: 3
        }
        const data = await sqs.receiveMessage(params).promise();
        const endTime = new Date().getTime();
        console.log('[INFO] Time taken for long polling', `${endTime - startTime} ms`);
        console.log('[INFO] Received messages from SQS by long polling', JSON.stringify(data));
    } catch (error) {
        console.log('[ERROR] Error while long polling the sqs', error);
        throw error;  
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Scenario 1:&lt;/strong&gt;&lt;br&gt;
When invoking a queue with no data on the servers, SQS patiently waits for up to 20 seconds, scanning all the distributed servers to fulfil the request, and ultimately responds with an empty result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foodfj9rt5j8fuwj1jxge.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foodfj9rt5j8fuwj1jxge.png" alt="Long Polling no message" width="800" height="29"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2:&lt;/strong&gt;&lt;br&gt;
With messages in the queue, long polling queries all the servers and returns the collected messages within the 20-second wait window, as the response below shows.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u6v42bmjruz40lyd3k2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u6v42bmjruz40lyd3k2.png" alt="Long Polling response" width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;
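Rather than setting WaitTimeSeconds on every ReceiveMessage call, long polling can also be enabled at the queue level through the ReceiveMessageWaitTimeSeconds attribute, so every consumer inherits it. A minimal sketch; the queue URL is a placeholder:

```javascript
// Placeholder queue URL - substitute your own.
const setAttributesParams = {
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/PollingQueue',
  Attributes: {
    // Any value from 1 to 20 seconds enables long polling for all consumers.
    ReceiveMessageWaitTimeSeconds: '20',
  },
};

// With the aws-sdk v2 client used above, apply it with:
// const aws = require('aws-sdk');
// const sqs = new aws.SQS({ region: 'us-east-1' });
// await sqs.setQueueAttributes(setAttributesParams).promise();

console.log(setAttributesParams.Attributes.ReceiveMessageWaitTimeSeconds);
```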

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Harness the potential of short &amp;amp; long polling in AWS SQS. Grasp the trade-offs of each polling mechanism and tailor your SQS interactions to your use case: short polling for the fastest possible response, long polling to avoid empty responses and reduce the number of API calls.&lt;/p&gt;

</description>
      <category>sqs</category>
      <category>aws</category>
      <category>serverless</category>
      <category>awsbigdata</category>
    </item>
    <item>
      <title>AWS DynamoDB: Comparing Lambda &amp; Step Functions To Scan 1MB Data</title>
      <dc:creator>Kishan</dc:creator>
      <pubDate>Sun, 10 Dec 2023 10:46:50 +0000</pubDate>
      <link>https://forem.com/am_i_dev/aws-dynamodb-showdown-lambda-vs-step-functions-for-1mb-data-scans-41ja</link>
      <guid>https://forem.com/am_i_dev/aws-dynamodb-showdown-lambda-vs-step-functions-for-1mb-data-scans-41ja</guid>
      <description>&lt;p&gt;while working In a project i was exploring the &lt;strong&gt;Lambda, DynamoDB, and step function&lt;/strong&gt;, while my aim was to fetch data from DDB in both the Lambda and in Step Function, which led me to an idea of comparing both the method with time, memory usage, and cost for each method. to this idea i added another layer of scanning 1MB of data from DynamoDB. now without wasting time let's explore the setup, execution and cost which should be considered.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;To test both scenarios, let's set up the required Lambda &amp;amp; Step Function. For both methods I will share the code to scan the table records; the Lambda creation and the Step Function definition are done in the AWS console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step Function definition:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Comment": "Scan all the DB records",
  "StartAt": "Scan User Data",
  "States": {
    "Scan User Data": {
      "Type": "Task",
      "Parameters": {
        "TableName": "userInfo",
        "Limit": 50
      },
      "Resource": "arn:aws:states:::aws-sdk:dynamodb:scan",
      "End": true
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Lambda Logic:&lt;/strong&gt; The code below scans the data from DynamoDB. At the start of the process the time is noted, and again on completion of the task; the difference between the two gives the total time taken.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { DynamoDBClient, ScanCommand } = require("@aws-sdk/client-dynamodb");

const dynamoDB = new DynamoDBClient({ region: 'us-east-1' });

const scanWholeTable = async (startKey) =&amp;gt; {
    let items = [];
    const startTime = new Date().getTime();
    try {
      const params = {
        TableName: 'userInfo',
        ExclusiveStartKey: startKey,
        Limit: 25,
      };

      // prepare the params to scan the table
      const command = new ScanCommand(params);
      const data = await dynamoDB.send(command);

      items = items.concat(data.Items);

      // If there are more items to scan, recursively call the scanDynamoDBTable function with the last evaluated key
      if (data.LastEvaluatedKey) {
        return scanWholeTable(data.LastEvaluatedKey);
      }

      // check the time taken to complete the scan
      const endTime = new Date().getTime();
      const totalTime = endTime - startTime
      console.log('[INFO] time taken to complete the db scan', totalTime)
      return items;
    } catch (error) {
      console.log('[ERROR] Error while scanning the table', error);
      throw error;
    }
}

module.exports.handler = async (event, context) =&amp;gt; {
  try {
      const finalUserInfo = await scanWholeTable(null);
      console.log('[INFO] UserInfo', finalUserInfo);
      return {
        statusCode: 200,
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({userData:finalUserInfo, length: finalUserInfo.length}),
      }
    } catch (error) {
      console.info('Error in the handler:', error);
      return error;
    }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Time to complete the task.
&lt;/h2&gt;

&lt;p&gt;Once the setup was done, I triggered both scenarios multiple times to determine how long each method takes to scan 1MB of data from DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lambda:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I triggered the Lambda multiple times.&lt;/li&gt;
&lt;li&gt;Across the runs, the Lambda's execution time for scanning 1MB of data ranged between &lt;strong&gt;500-1100 milliseconds&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;End to end, completing the task took &lt;strong&gt;1-2 seconds&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;The first (cold-start) trigger took &lt;strong&gt;1253 milliseconds&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieeixz6gn3h14svpd0sv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieeixz6gn3h14svpd0sv.png" alt="First Trigger" width="800" height="70"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With repeated triggers, the warmed-up Lambda showed a reduced time of 544 milliseconds.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmfd9yif6ld9sbohz9ry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvmfd9yif6ld9sbohz9ry.png" alt="Optimised Response" width="800" height="61"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step Function:&lt;/strong&gt; &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Triggering the Step Function repeatedly, the scan task surprisingly took only &lt;strong&gt;266 milliseconds&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Task initialisation and completion took &lt;strong&gt;400-500 milliseconds&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Apart from some lag, both the Lambda and Step Function times are worth considering.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5ikvdv8f3g6gnxwv8pk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5ikvdv8f3g6gnxwv8pk.png" alt="StepFunction time" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Consideration.
&lt;/h2&gt;

&lt;p&gt;Let's use the &lt;a href="https://calculator.aws/#/" rel="noopener noreferrer"&gt;AWS Price Calculator&lt;/a&gt; to estimate the cost of scanning 1MB on DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lambda Cost:&lt;/strong&gt; &lt;br&gt;
Let's configure the region (us-east-1), architecture (x86), memory (512 MB), timeout (30 s), and triggers per day (let's keep it at 50). Once the configuration is set, let's break down the cost.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For 50 requests per day, the total is 1,520 requests per month, with &lt;strong&gt;608.33&lt;/strong&gt; GB-seconds of compute.&lt;/li&gt;
&lt;li&gt;AWS provides &lt;strong&gt;400,000&lt;/strong&gt; GB-seconds per month as the compute free tier, resulting in a compute cost of &lt;strong&gt;0 USD&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;AWS Lambda currently includes &lt;strong&gt;1 million&lt;/strong&gt; requests in the free tier, so our request volume also costs &lt;strong&gt;0 USD&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Running the Lambda costs 0 USD until it exceeds &lt;strong&gt;400,000&lt;/strong&gt; GB-seconds of compute and &lt;strong&gt;1 million&lt;/strong&gt; requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9rsvtfostao0ntpb7m7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9rsvtfostao0ntpb7m7.png" alt="lambda cost" width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step Function Cost:&lt;/strong&gt;&lt;br&gt;
Now configure the region (us-east-1), select the Standard workflow, and set the workflow requests to 50 per day with 5 state transitions per execution; let's assume there are 5 states that transform the data and send back the result.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1,520 requests are triggered to the Step Function per month.&lt;/li&gt;
&lt;li&gt;AWS provides &lt;strong&gt;4,000&lt;/strong&gt; state transitions as a monthly free tier for Step Functions, and our workflow totals 7,604 transitions per month.&lt;/li&gt;
&lt;li&gt;As a result, the billable amount is &lt;strong&gt;3,604&lt;/strong&gt; transitions (7,604 - 4,000).&lt;/li&gt;
&lt;li&gt;The cost of one state transition is &lt;strong&gt;0.000025 USD&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;This leads to an estimated cost of &lt;strong&gt;0.09 USD&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
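The arithmetic behind that estimate can be checked directly, using the figures from the AWS Pricing Calculator above:

```javascript
// Figures taken from the AWS Pricing Calculator breakdown above.
const totalTransitions = 7604;       // about 1,520 executions x 5 transitions per month
const freeTierTransitions = 4000;    // monthly free tier for standard workflows
const costPerTransition = 0.000025;  // USD per state transition

// Only transitions beyond the free tier are billed.
const billableTransitions = totalTransitions - freeTierTransitions; // 3604
const estimatedCost = billableTransitions * costPerTransition;

console.log(`Billable: ${billableTransitions} transitions, cost: ${estimatedCost.toFixed(2)} USD`);
```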

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31ojknzwejuyh8uc2jcd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F31ojknzwejuyh8uc2jcd.png" alt="step function cost" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Step Function&lt;/th&gt;
&lt;th&gt;Lambda&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Data Size(MB)&lt;/td&gt;
&lt;td&gt;1MB&lt;/td&gt;
&lt;td&gt;1MB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time in Milliseconds&lt;/td&gt;
&lt;td&gt;266&lt;/td&gt;
&lt;td&gt;500 - 1100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Compute(GB)&lt;/td&gt;
&lt;td&gt;686&lt;/td&gt;
&lt;td&gt;not-specified&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost(USD)&lt;/td&gt;
&lt;td&gt;0.09&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Disadvantage&lt;/td&gt;
&lt;td&gt;Step Functions passes the marshalled DynamoDB item as input to the next state, and correctly un-marshalling the "Null" type from the payload in subsequent states is problematic.&lt;/td&gt;
&lt;td&gt;N/A&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
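
&lt;p&gt;To illustrate that disadvantage: when a state receives the raw marshalled item from DynamoDB, something has to un-marshal it. A minimal sketch using the &lt;code&gt;unmarshall&lt;/code&gt; helper from &lt;code&gt;@aws-sdk/util-dynamodb&lt;/code&gt; (the attribute names here are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const { unmarshall } = require('@aws-sdk/util-dynamodb');

// A marshalled item as passed between states; note the "NULL" type
const marshalledItem = {
  id: { S: '1' },
  name: { S: 'kishan' },
  deletedAt: { NULL: true },
};

const item = unmarshall(marshalledItem);
// { id: '1', name: 'kishan', deletedAt: null }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;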

&lt;p&gt;Comparing &lt;strong&gt;Lambda&lt;/strong&gt; and the Step Function for scanning 1MB of data from DynamoDB yields insightful observations on the time and cost of both methods: Lambda's free-tier benefits overshadow the Step Function's consistent performance.&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>stepfunctions</category>
      <category>serverless</category>
      <category>costoptimisation</category>
    </item>
    <item>
      <title>AWS IAM Auth: Calling AppSync Mutations from Lambda (Step-by-Step-Guide)</title>
      <dc:creator>Kishan</dc:creator>
      <pubDate>Sun, 03 Dec 2023 15:23:25 +0000</pubDate>
      <link>https://forem.com/am_i_dev/iam-authentication-calling-appsync-mutations-from-lambda-step-by-step-guide-2doi</link>
      <guid>https://forem.com/am_i_dev/iam-authentication-calling-appsync-mutations-from-lambda-step-by-step-guide-2doi</guid>
<description>&lt;p&gt;Have you recently started working with AppSync and wondered how to trigger mutations or queries from Lambda functions? While you could simply call the endpoint with Axios, we will use IAM authentication to call mutations from Lambda, which adds an extra layer of protection. This blog aims to make dev life easier by walking through how to implement the scenario below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;In this blog, I'll introduce the mutation schema we will call from inside the Lambda, then use IAM auth to sign the request so that the Lambda is allowed to call the mutation. Now let's explore the approach.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Create the required AppSync mutation.&lt;/li&gt;
&lt;li&gt;Set up a Lambda function that will make the mutation call.&lt;/li&gt;
&lt;li&gt;Ensure necessary permissions are in place to trigger the mutation from Lambda.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step1:&lt;/strong&gt; Let's create an AppSync mutation that will be used to create a user in DDB. Below is the schema for the createUser mutation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      mutation MyMutation(
        $age: Int!, 
        $id: String!, 
        $isDeleted: String!, 
        $name: String!
      ) {
        createUser(
            age: $age, 
            id: $id, 
            isDeleted: $isDeleted, 
            name: $name
        ) {
          body {
            age
            id
            isDeleted
            name
          }
          message
          status
        }
      }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step2:&lt;/strong&gt; Now that the mutation has been created, your Lambda needs an IAM policy that allows it to trigger the AppSync mutation. Specify the region, accountId, and apiId in the resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": [
                "appsync:GraphQL"
            ],
            "Resource": [
                "arn:aws:appsync:region:accountId:apis/apiId/types/Mutation/fields/createUser"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step3:&lt;/strong&gt; Once all the necessary setup is done, go to your Lambda, install the required libraries, and paste the code below, which will call the mutation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;'axios', '@aws-crypto/sha256-js', '@aws-sdk/credential-provider-node',  '@aws-sdk/signature-v4', '@aws-sdk/protocol-http'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
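
&lt;p&gt;Assuming your Lambda is packaged with npm, installing those libraries could look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install axios @aws-crypto/sha256-js @aws-sdk/credential-provider-node @aws-sdk/signature-v4 @aws-sdk/protocol-http
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;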



&lt;p&gt;Now, copy the below Lambda code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const axios = require('axios');
const crypto = require('@aws-crypto/sha256-js');
const { defaultProvider } = require('@aws-sdk/credential-provider-node');
const { SignatureV4 } = require('@aws-sdk/signature-v4');
const { HttpRequest } = require('@aws-sdk/protocol-http');
const { Sha256 } = crypto;

const callUserMutation = async (mutationInput) =&amp;gt; {
    try {
        const { id, name, age, isDeleted } = mutationInput;
      const query = `
      mutation MyMutation(
        $age: Int!, 
        $id: String!, 
        $isDeleted: String!, 
        $name: String!
      ) {
        createUser(
            age: $age, 
            id: $id, 
            isDeleted: $isDeleted, 
            name: $name
        ) {
          body {
            age
            id
            isDeleted
            name
          }
          message
          status
        }
      }`;

      const APPSYNC_MUTATION_URL = 'your-appsync-mutation-url';
      const signer = new SignatureV4({
        credentials: defaultProvider(),
        region: 'us-east-1',
        service: 'appsync',
        sha256: Sha256,
      });

      const { host, pathname } = new URL(APPSYNC_MUTATION_URL);
      const params = {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Host: host,
        },
        body: JSON.stringify({
          query,
          variables: {
            age,
            id,
            isDeleted,
            name,
          },
          operationName: 'MyMutation',
        }),
        hostname: host,
        path: pathname,
      };
      const requestToBeSigned = new HttpRequest(params);
      console.log('[INFO] requestToBeSigned', requestToBeSigned)

      // Sign the request to call mutation
      const signedRequest = await signer.sign(requestToBeSigned);

      // sending the signed request using Axios to call mutation.
      const response = await axios({
        url: APPSYNC_MUTATION_URL,
        method: signedRequest.method,
        data: signedRequest.body,
        headers: signedRequest.headers,
      });
      const { data } = response.data;
      console.log('[INFO] data from the mutation', data);
      return data;
    } catch (error) {
      console.log('[ERROR] Error while calling the mutation', JSON.stringify(error));  
      return error;
    }
};

module.exports.handler = async (event) =&amp;gt; {
    try {
        // Await the mutation call so the Lambda doesn't exit before it completes
        const mutationData = await callUserMutation({ age: 22, id: '1', isDeleted: 'false', name: 'kishan' });
        console.log('[INFO] mutationData', mutationData);
        return mutationData;
    } catch (error) {
        console.log('[ERROR] Error', error);
        return error;
    }
};

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Breaking Down the Code:
&lt;/h2&gt;

&lt;p&gt;Does the code look a little confusing after a first read? Let's understand the source code by splitting it up.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Initially, you must configure the parameters required to call the mutation, such as the query, variables, method, and URL.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      const { host, pathname } = new URL(APPSYNC_MUTATION_URL);
      const params = {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Host: host,
        },
        body: JSON.stringify({
          query,
          variables: {
            age,
            id,
            isDeleted,
            name,
          },
          operationName: 'MyMutation',
        }),
        hostname: host,
        path: pathname,
      };
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Once you've built the parameters, the next step is signing the request so that the Lambda is authorised to invoke the mutation. To achieve this, use the &lt;strong&gt;SignatureV4()&lt;/strong&gt; class and call &lt;strong&gt;signer.sign()&lt;/strong&gt;. This returns a signed request whose authorisation headers can be merged into your parameters so that the call has access to the mutation.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      const signer = new SignatureV4({
        credentials: defaultProvider(),
        region: 'us-east-1',
        service: 'appsync',
        sha256: Sha256,
      });

      // Sign the request to call mutation
      const signedRequest = await signer.sign(requestToBeSigned);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;After the signed authorisation headers have been added to your parameters, the next step involves utilising Axios to initiate the mutation call and retrieve the response. The following code does the task.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      // sending the signed request using Axios to call mutation.
      const response = await axios({
        url: APPSYNC_MUTATION_URL,
        method: signedRequest.method,
        data: signedRequest.body,
        headers: signedRequest.headers,
      });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By following these steps, you now know how to call an AppSync mutation from Lambda with IAM security protection.&lt;/p&gt;

</description>
      <category>appsync</category>
      <category>graphql</category>
      <category>serverless</category>
      <category>awsappsync</category>
    </item>
    <item>
      <title>Solving the Puzzle: How to Pass Environment Variables in AWS AppSync Resolvers using Serverless</title>
      <dc:creator>Kishan</dc:creator>
      <pubDate>Sat, 25 Nov 2023 05:29:04 +0000</pubDate>
      <link>https://forem.com/am_i_dev/solving-the-puzzle-how-to-pass-environment-variables-in-aws-appsync-resolvers-using-serverless-kl0</link>
      <guid>https://forem.com/am_i_dev/solving-the-puzzle-how-to-pass-environment-variables-in-aws-appsync-resolvers-using-serverless-kl0</guid>
      <description>&lt;p&gt;Are you just diving into the world of AWS AppSync and puzzled about passing environment variables to the resolver using the Serverless Framework? If you've scoured countless blogs without finding a solution, you've landed on the right page!&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;Imagine you have an AppSync endpoint triggering a mutation. In this scenario, you need to validate user information and perhaps trigger an SQS with a unique accountID for each environment or use the tableName in the resolver function to fetch data from DynamoDB. Whatever your use case, you can follow these steps to seamlessly pass environment variables to your resolver function.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step1:-&lt;/strong&gt; Add a property called substitutions to the resolver you've created. This is where you'll pass your environment variables as key-value pairs. In the example below, I'm using tableName as an environment variable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc77w6mft3k31mmyo61xa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc77w6mft3k31mmyo61xa.png" alt="Passing Env's in resolver" width="800" height="164"&gt;&lt;/a&gt;&lt;/p&gt;
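
&lt;p&gt;In case the image above is hard to read, here is a sketch of what this can look like in &lt;code&gt;serverless.yml&lt;/code&gt; with the serverless-appsync-plugin (the resolver, data source, and table names are placeholders, and the exact layout depends on your plugin version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;appSync:
  resolvers:
    Query.getUser:
      dataSource: userTable
      kind: UNIT
      code: resolvers/getUser.js
      substitutions:
        tableName: ${self:custom.tableName}
        accountId: ${aws:accountId}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;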

&lt;p&gt;&lt;strong&gt;Step2:-&lt;/strong&gt; Now that you've successfully passed your environment variables to the resolver, you'll need to access them within the resolver function. While in a Lambda function, you might use &lt;strong&gt;'process.env.tableName'&lt;/strong&gt;, in the resolver function, you'll need to add &lt;strong&gt;"#"&lt;/strong&gt; at the start and end of the name you passed in Step 1. Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const tableName = '#tableName#';
const accountId = '#accountId#';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ensure you add this before the request and response functions in your resolver.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5pg5a4iue9zuy1duytg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5pg5a4iue9zuy1duytg.png" alt="Accessing Env's in Resolver Function" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;
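
&lt;p&gt;As in the image above, the placeholders are declared before the &lt;code&gt;request&lt;/code&gt; and &lt;code&gt;response&lt;/code&gt; functions. A minimal APPSYNC_JS resolver sketch (the GetItem logic is a placeholder for your own resolver code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { util } from '@aws-appsync/utils';

// The plugin replaces #tableName# / #accountId# with the values
// from substitutions before the resolver code is deployed.
const tableName = '#tableName#';
const accountId = '#accountId#';

export function request(ctx) {
  return {
    operation: 'GetItem',
    key: util.dynamodb.toMapValues({ id: ctx.args.id }),
  };
}

export function response(ctx) {
  return ctx.result;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;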

&lt;p&gt;&lt;strong&gt;Step3:-&lt;/strong&gt; With the setup complete, you now have the knowledge to pass environment variables seamlessly to your AppSync resolver function.&lt;/p&gt;

&lt;p&gt;This blog aims to simplify the process, ensuring you can efficiently manage and utilize environment variables in your AppSync projects. Feel free to explore, experiment, and elevate your serverless development experience!&lt;/p&gt;

</description>
      <category>appsync</category>
      <category>serverless</category>
      <category>javascript</category>
      <category>resolverfunction</category>
    </item>
  </channel>
</rss>
