<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: eelayoubi</title>
    <description>The latest articles on Forem by eelayoubi (@eelayoubi).</description>
    <link>https://forem.com/eelayoubi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F457773%2F7248447e-4382-4aa6-b1d3-fec6276ee9ea.png</url>
      <title>Forem: eelayoubi</title>
      <link>https://forem.com/eelayoubi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/eelayoubi"/>
    <language>en</language>
    <item>
      <title>AWS Text-To-Speech Serverless Application</title>
      <dc:creator>eelayoubi</dc:creator>
      <pubDate>Thu, 01 Dec 2022 16:00:43 +0000</pubDate>
      <link>https://forem.com/eelayoubi/aws-text-to-speech-serverless-application-n6f</link>
      <guid>https://forem.com/eelayoubi/aws-text-to-speech-serverless-application-n6f</guid>
      <description>&lt;p&gt;In this article we will go through deploying a Text-To-Speech Serverless Application that contains two main flows presented in the following section.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;h3&gt;
  
  
  New Post
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihnafoq67nhenz97w6gw.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihnafoq67nhenz97w6gw.jpeg" alt="New Post" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user calls the API Gateway RESTful endpoint that invokes the NewPost Lambda function.&lt;/li&gt;
&lt;li&gt;The NewPost Lambda function inserts information about the post into an Amazon DynamoDB table, where information about all posts is stored.&lt;/li&gt;
&lt;li&gt;Then, the NewPost Lambda function publishes the post to the SNS topic we create, so it can be converted asynchronously.&lt;/li&gt;
&lt;li&gt;The Convert to Audio Lambda function is subscribed to the SNS topic and is triggered whenever a new message appears (which means that a new post should be converted into an audio file).&lt;/li&gt;
&lt;li&gt;The Convert to Audio Lambda function uses Amazon Polly to convert the text into an audio file in the specified language (the same as the language of the text).&lt;/li&gt;
&lt;li&gt;The new MP3 file is saved in a dedicated S3 bucket.&lt;/li&gt;
&lt;li&gt;Information about the post is updated in the DynamoDB table (the URL to the audio file stored in the S3 bucket is saved alongside the previously stored data).&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Get Post
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frk603q40vmycs7pulfgq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frk603q40vmycs7pulfgq.jpeg" alt="Get Post" width="800" height="146"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user calls the API Gateway RESTful endpoint that invokes the GetPost Lambda function, which contains the logic for retrieving the post data.&lt;/li&gt;
&lt;li&gt;The GetPost Lambda function retrieves information about the post (including the reference to Amazon S3) from the DynamoDB table and returns the information.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Deploying The Resources
&lt;/h2&gt;

&lt;p&gt;You will find the code repository &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech#deploying-the-application" rel="noopener noreferrer"&gt;To deploy&lt;/a&gt; the application, simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Breaking it down
&lt;/h2&gt;

&lt;p&gt;We are using &lt;a href="//www.terraform.io/"&gt;Terraform&lt;/a&gt; to deploy the application.&lt;/p&gt;

&lt;p&gt;As you saw in the architecture section, we have two flows: in the first, a user submits a post through the API Gateway; in the second, the user can request one post or all posts.&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech/blob/main/main.tf" rel="noopener noreferrer"&gt;main.tf&lt;/a&gt;, we are using the &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech/tree/main/modules" rel="noopener noreferrer"&gt;terraform modules&lt;/a&gt; that we create, to provision the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DynamoDB Table for storing the posts metadata&lt;/li&gt;
&lt;li&gt;NewPost Lambda function&lt;/li&gt;
&lt;li&gt;GetPost Lambda function&lt;/li&gt;
&lt;li&gt;ConvertToAudio Lambda Function&lt;/li&gt;
&lt;li&gt;The API Gateway&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech/blob/main/functions/GetPost/index.js" rel="noopener noreferrer"&gt;iam.tf&lt;/a&gt; is where we define the &lt;strong&gt;least-privilege permissions&lt;/strong&gt; for the various lambda functions we have.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;public&lt;/strong&gt; S3 bucket that is used to store all the audio posts, is declared in the &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech/blob/main/s3.tf" rel="noopener noreferrer"&gt;s3.tf&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The SNS topic used to decouple the application into an asynchronous flow is defined in the &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech/blob/main/sns.tf" rel="noopener noreferrer"&gt;sns.tf&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are creating a topic called &lt;strong&gt;new_posts&lt;/strong&gt;, with a Lambda function as a &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech/blob/main/sns.tf#L5" rel="noopener noreferrer"&gt;subscriber&lt;/a&gt;. Every time this topic receives a message, it invokes the &lt;strong&gt;ConvertToAudio&lt;/strong&gt; lambda function with that message as an event.&lt;/p&gt;

&lt;p&gt;To allow SNS to invoke the &lt;strong&gt;ConvertToAudio&lt;/strong&gt; lambda function, we also create a lambda permission &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech/blob/main/sns.tf#L11" rel="noopener noreferrer"&gt;resource&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  NewPost Lambda Function
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech/blob/main/functions/NewPost/index.js" rel="noopener noreferrer"&gt;NewPost&lt;/a&gt; lambda function does two things:&lt;br&gt;
1- Creates an item in the posts table with the following schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;id: recordId,
text: string,
voice: string,
status: 'PROCESSING'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The status, as you can see, is &lt;strong&gt;PROCESSING&lt;/strong&gt;, since the post still needs to be converted to audio.&lt;/p&gt;

&lt;p&gt;2- Publishes the item id to the SNS topic (so it can be processed by ConvertToAudio).&lt;/p&gt;
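&lt;p&gt;The two steps above can be sketched as pure parameter builders (a minimal sketch, not the repo's exact code; the table and topic names are placeholders supplied by the caller):&lt;/p&gt;

```javascript
// Sketch of the NewPost logic, split into pure parameter builders so the
// shapes are easy to see. Names here are illustrative, not the repo's.
function buildPostItem(tableName, id, text, voice) {
  return {
    TableName: tableName,
    Item: {
      id: { S: id },
      text: { S: text },
      voice: { S: voice },
      status: { S: 'PROCESSING' } // the audio has not been generated yet
    }
  };
}

function buildPublishParams(topicArn, id) {
  // Only the item id travels through SNS; ConvertToAudio re-reads the
  // full post from the table.
  return { TopicArn: topicArn, Message: id };
}
```

&lt;p&gt;These objects would then be passed to the aws-sdk's &lt;code&gt;ddb.putItem(...).promise()&lt;/code&gt; and &lt;code&gt;sns.publish(...).promise()&lt;/code&gt; calls respectively.&lt;/p&gt;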

&lt;h3&gt;
  
  
  ConvertToAudio Lambda Function
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech/blob/main/functions/ConvertToAudio/index.js" rel="noopener noreferrer"&gt;ConvertToAudio&lt;/a&gt; lambda function does the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Fetches the post metadata from the posts table&lt;/li&gt;
&lt;li&gt;Splits the text into chunks (if text is larger than 2600 characters)&lt;/li&gt;
&lt;li&gt;Invokes &lt;a href="https://docs.aws.amazon.com/polly/latest/dg/API_SynthesizeSpeech.html" rel="noopener noreferrer"&gt;Polly synthesizeSpeech&lt;/a&gt; with the text and saves them to a local file&lt;/li&gt;
&lt;li&gt;Uploads the audio file to the S3 bucket&lt;/li&gt;
&lt;li&gt;Updates the post metadata in the posts table with the S3 audio url and sets the status to &lt;strong&gt;UPDATED&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
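&lt;p&gt;The chunking step (2) can be sketched as follows (a hypothetical helper that prefers breaking at whitespace; the repo's implementation may differ):&lt;/p&gt;

```javascript
// Hypothetical helper: split text into chunks no longer than the per-request
// size used in the article (2600 characters), breaking at the last space
// inside the window so words are not cut in half.
function splitText(text, max = 2600) {
  const chunks = [];
  let remaining = text;
  while (remaining.length > max) {
    let cut = remaining.lastIndexOf(' ', max);
    if (cut <= 0) cut = max; // no space found: hard-cut at the limit
    chunks.push(remaining.slice(0, cut));
    remaining = remaining.slice(cut).trimStart();
  }
  if (remaining.length > 0) chunks.push(remaining);
  return chunks;
}
```

&lt;p&gt;Each chunk would then be passed to a separate synthesizeSpeech call, and the resulting audio streams appended to the local file in order.&lt;/p&gt;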

&lt;h3&gt;
  
  
  GetPost Lambda Function
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/eelayoubi/aws-serverless-text-to-speech/blob/main/functions/ConvertToAudio/index.js" rel="noopener noreferrer"&gt;GetPost&lt;/a&gt; lambda function retrieves the post(s):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If postId (querystring) is "*", the function will return all posts in the posts table&lt;/li&gt;
&lt;li&gt;If postId is a post Id, the function will return that specific post (if it exists) or an empty array.&lt;/li&gt;
&lt;/ul&gt;
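&lt;p&gt;The branching above could look like this (a sketch with assumed names, separating the decision between a table Scan for all posts and a Query by id):&lt;/p&gt;

```javascript
// Hypothetical sketch: pick the DynamoDB operation and parameters based on
// the postId query string ('*' means "all posts").
function buildGetPostParams(tableName, postId) {
  if (postId === '*') {
    return { op: 'scan', params: { TableName: tableName } };
  }
  return {
    op: 'query',
    params: {
      TableName: tableName,
      KeyConditionExpression: 'id = :id',
      ExpressionAttributeValues: { ':id': { S: postId } }
    }
  };
}
```

&lt;p&gt;Either way the result's Items array is returned to the caller, which is why a missing post yields an empty array rather than an error.&lt;/p&gt;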

&lt;h2&gt;
  
  
  Clean-up
&lt;/h2&gt;

&lt;p&gt;Don't forget to clean everything up by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform destroy -auto-approve
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;In this article I presented a Text-To-Speech serverless application. In the next article, I will go through how to handle errors that our Lambda functions could encounter, and how to process them in API Gateway.&lt;/p&gt;

&lt;p&gt;Until then, thank you for reading this article. I hope you found it useful 👋🏻.&lt;/p&gt;

</description>
      <category>watercooler</category>
    </item>
    <item>
      <title>What were your goals for this week?</title>
      <dc:creator>eelayoubi</dc:creator>
      <pubDate>Fri, 21 Oct 2022 08:36:05 +0000</pubDate>
      <link>https://forem.com/eelayoubi/what-were-your-goals-for-this-week-268k</link>
      <guid>https://forem.com/eelayoubi/what-were-your-goals-for-this-week-268k</guid>
      <description>&lt;p&gt;Hello all,&lt;/p&gt;

&lt;p&gt;I would like us to start some discussions around this week's work. To do that, here are a couple of questions to kick off the discussion: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What were your goals for this week?&lt;/li&gt;
&lt;li&gt;Did you achieve all of them?&lt;/li&gt;
&lt;li&gt;If not, why is that? Did some unplanned work block you from achieving your goals?&lt;/li&gt;
&lt;li&gt;Does it happen often? Is it a recurrent pattern?&lt;/li&gt;
&lt;li&gt;How do you plan on mitigating it?&lt;/li&gt;
&lt;li&gt;What are your takeaways from this week?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's end this week with some nice discussions and help each other move forward 🎉&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>goals</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Using AWS Step Functions To Implement The SAGA Pattern</title>
      <dc:creator>eelayoubi</dc:creator>
      <pubDate>Sun, 16 Oct 2022 07:27:01 +0000</pubDate>
      <link>https://forem.com/eelayoubi/using-aws-step-functions-to-implement-the-saga-pattern-14o7</link>
      <guid>https://forem.com/eelayoubi/using-aws-step-functions-to-implement-the-saga-pattern-14o7</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In this post I will walk you through how to leverage AWS Step Functions to implement the SAGA Pattern.&lt;/p&gt;

&lt;p&gt;Put simply, the Saga pattern is a failure-management pattern: it gives us the means to establish semantic consistency in our distributed applications by providing a compensating transaction for every transaction that involves more than one collaborating service or function.&lt;/p&gt;

&lt;p&gt;For our use case, imagine we have a workflow that goes as the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The user books a hotel&lt;/li&gt;
&lt;li&gt;If that succeeds, we want to book a flight&lt;/li&gt;
&lt;li&gt;If booking a flight succeeds we want to book a rental&lt;/li&gt;
&lt;li&gt;If booking a rental succeeds, we consider the flow a success.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As you may have guessed, this is the happy scenario, where everything goes right (shockingly ...).&lt;/p&gt;

&lt;p&gt;However, if any of the steps fails, we want to undo the changes introduced by the failed step, and undo all the prior steps if any.&lt;/p&gt;

&lt;p&gt;What if booking the hotel step failed? How do we proceed? What if the booking hotel step passes but booking a flight fails? We need to be able to revert the changes.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The user books a hotel successfully&lt;/li&gt;
&lt;li&gt;Booking the flight fails&lt;/li&gt;
&lt;li&gt;Cancel the flight (assuming the failure happened after we saved the flight record in the database)&lt;/li&gt;
&lt;li&gt;Cancel the hotel record&lt;/li&gt;
&lt;li&gt;Fail the state machine&lt;/li&gt;
&lt;/ol&gt;
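&lt;p&gt;In code, the compensation ordering the saga requires can be sketched as a toy helper (not part of the repo): undo the failed step first, then every previously completed step in reverse.&lt;/p&gt;

```javascript
// Toy sketch: given the ordered booking steps and the index of the step
// that failed, return the compensating steps in the order they must run.
function compensationsFor(steps, failedIndex) {
  return steps
    .slice(0, failedIndex + 1) // every step up to and including the failure
    .reverse()                 // undo in reverse order
    .map((s) => `Cancel${s}`);
}
```

&lt;p&gt;For the example above (flight booking fails), this yields CancelFlight followed by CancelHotel, which is exactly the path the state machine takes.&lt;/p&gt;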

&lt;p&gt;&lt;strong&gt;AWS Step Functions&lt;/strong&gt; can help us here, since we can implement these functionalities as steps (or tasks), and Step Functions can orchestrate all these transitions easily.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying The Resources
&lt;/h2&gt;

&lt;p&gt;You will find the code repository &lt;a href="https://github.com/eelayoubi/aws/tree/main/saga-pattern" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Please refer to &lt;a href="https://github.com/eelayoubi/aws/tree/main/saga-pattern#how-to-run-it" rel="noopener noreferrer"&gt;this&lt;/a&gt; section to deploy the resources.&lt;/p&gt;

&lt;p&gt;For the full list of the resources deployed, check out this &lt;a href="https://github.com/eelayoubi/aws/tree/main/saga-pattern#resources-deployed" rel="noopener noreferrer"&gt;table&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB Tables
&lt;/h2&gt;

&lt;p&gt;In our example, we are deploying 3 DynamoDB tables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;BookHotel&lt;/li&gt;
&lt;li&gt;BookFlight&lt;/li&gt;
&lt;li&gt;BookRental&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The following is the code responsible for creating the &lt;strong&gt;BookHotel&lt;/strong&gt; table:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "book_hotel_ddb" {
  source         = "./modules/dynamodb"
  table_name     = var.book_hotel_ddb_name
  billing_mode   = var.billing_mode
  read_capacity  = var.read_capacity
  write_capacity = var.write_capacity
  hash_key       = var.hash_key
  hash_key_type  = var.hash_key_type

  additional_tags = var.book_hotel_ddb_additional_tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Lambda Functions
&lt;/h2&gt;

&lt;p&gt;We will be relying on 6 Lambda functions to implement our example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;BookHotel&lt;/li&gt;
&lt;li&gt;BookFlight&lt;/li&gt;
&lt;li&gt;BookRental&lt;/li&gt;
&lt;li&gt;CancelHotel&lt;/li&gt;
&lt;li&gt;CancelFlight&lt;/li&gt;
&lt;li&gt;CancelRental&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The functions are pretty simple and straightforward.&lt;/p&gt;

&lt;h3&gt;
  
  
  BookHotel Function
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exports.handler = async (event) =&amp;gt; {
  ...
  const {
    confirmation_id,
    checkin_date,
    checkout_date
  } = event

...

  try {
    await ddb.putItem(params).promise();
    console.log('Success')

  } catch (error) {
    console.log('Error: ', error)
    throw new Error("Unexpected Error")
  }

  if (confirmation_id.startsWith("11")) {
    throw new BookHotelError("Expected Error")
  }

  return {
    confirmation_id,
    checkin_date,
    checkout_date
  };
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the full code, please check out the &lt;a href="https://github.com/eelayoubi/aws/blob/main/saga-pattern/functions/BookHotel/index.js" rel="noopener noreferrer"&gt;index.js&lt;/a&gt; file.&lt;/p&gt;

&lt;p&gt;As you can see, the function expects an input of the following format:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;confirmation_id&lt;/li&gt;
&lt;li&gt;checkin_date&lt;/li&gt;
&lt;li&gt;checkout_date&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The function creates an item in the &lt;strong&gt;BookHotel table&lt;/strong&gt; and returns the input as its output.&lt;/p&gt;

&lt;p&gt;To trigger an error, you can pass a &lt;em&gt;confirmation_id&lt;/em&gt; that starts with '11'; this will throw a custom error that the step function will catch.&lt;/p&gt;
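&lt;p&gt;A minimal sketch of such a custom error (the repo defines its own version in index.js): for a Node.js Lambda, Step Functions matches a Catch rule's ErrorEquals entries against the thrown error's name.&lt;/p&gt;

```javascript
// Sketch of the custom error class used to simulate an "expected" business
// failure. Setting this.name is what makes the Catch block's
// ErrorEquals: ["BookHotelError"] match.
class BookHotelError extends Error {
  constructor(message) {
    super(message);
    this.name = 'BookHotelError';
  }
}

// Hypothetical guard mirroring the check described above.
function checkConfirmation(confirmationId) {
  if (confirmationId.startsWith('11')) {
    throw new BookHotelError('Expected Error');
  }
  return confirmationId;
}
```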

&lt;h3&gt;
  
  
  CancelHotel Function
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const AWS = require("aws-sdk")
const ddb = new AWS.DynamoDB({ apiVersion: '2012-08-10' });

const TABLE_NAME = process.env.TABLE_NAME

exports.handler = async (event) =&amp;gt; {

    var params = {
        TableName: TABLE_NAME,
        Key: {
            'id': { S: event.confirmation_id }
        }
    };

    try {
        await ddb.deleteItem(params).promise();
        console.log('Success')
        return {
            statusCode: 201,
            body: "Cancel Hotel Success",
        };
    } catch (error) {
        console.log('Error: ', error)
        throw new Error("ServerError")
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function simply deletes the item that was created by the BookHotel function using the &lt;em&gt;confirmation_id&lt;/em&gt; as a key.&lt;/p&gt;

&lt;p&gt;We could have checked whether the item exists first. But to keep it simple, I am assuming that the failure of the Booking functions always happens after the records were created in the tables.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;&lt;em&gt;NOTE&lt;/em&gt;&lt;/strong&gt;: The same logic goes for all the other &lt;em&gt;Book&lt;/em&gt; and &lt;em&gt;Cancel&lt;/em&gt; functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reservation Step Function
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Step Function
module "step_function" {
  source = "terraform-aws-modules/step-functions/aws"

  name = "Reservation"

  definition = templatefile("${path.module}/state-machine/reservation.asl.json", {
    BOOK_HOTEL_FUNCTION_ARN    = module.book_hotel_lambda.function_arn,
    CANCEL_HOTEL_FUNCTION_ARN  = module.cancel_hotel_lambda.function_arn,
    BOOK_FLIGHT_FUNCTION_ARN   = module.book_flight_lambda.function_arn,
    CANCEL_FLIGHT_FUNCTION_ARN = module.cancel_flight_lambda.function_arn,
    BOOK_RENTAL_LAMBDA_ARN     = module.book_rental_lambda.function_arn,
    CANCEL_RENTAL_LAMBDA_ARN   = module.cancel_rental_lambda.function_arn
  })

  service_integrations = {
    lambda = {
      lambda = [
        module.book_hotel_lambda.function_arn,
        module.book_flight_lambda.function_arn,
        module.book_rental_lambda.function_arn,
        module.cancel_hotel_lambda.function_arn,
        module.cancel_flight_lambda.function_arn,
        module.cancel_rental_lambda.function_arn,
      ]
    }
  }

  type = "STANDARD"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://github.com/eelayoubi/aws/blob/main/saga-pattern/main.tf#L147" rel="noopener noreferrer"&gt;This&lt;/a&gt; is the code that creates the step function. I am relying on a terraform &lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/step-functions/aws/latest" rel="noopener noreferrer"&gt;module&lt;/a&gt; to create it.&lt;/p&gt;

&lt;p&gt;This piece of code will create a step function with the &lt;a href="https://github.com/eelayoubi/aws/blob/main/saga-pattern/state-machine/reservation.asl.json" rel="noopener noreferrer"&gt;reservation.asl.json&lt;/a&gt; file as its definition. In the &lt;strong&gt;service_integrations&lt;/strong&gt; block, we are giving the step function permission to invoke the lambda functions (since these functions are all part of the step function workflow).&lt;/p&gt;

&lt;p&gt;Below is the full diagram for the step function:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lsexkwc1ykir98j90ds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lsexkwc1ykir98j90ds.png" alt="Step Function Diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The reservation.asl.json is written in the &lt;a href="https://states-language.net/spec.html" rel="noopener noreferrer"&gt;Amazon States Language&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you open the file, you will notice on the second line &lt;code&gt;"StartAt" : "BookHotel"&lt;/code&gt;. This tells the step function to start at the &lt;strong&gt;BookHotel&lt;/strong&gt; state.&lt;/p&gt;
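&lt;p&gt;To try the state machine out, you can build a startExecution input whose fields match what BookHotel destructures from its event (a sketch with assumed names; the real state machine ARN comes from the terraform output):&lt;/p&gt;

```javascript
// Sketch (names assumed, not from the repo): build the parameters for
// starting a Reservation execution. The input fields mirror what the
// BookHotel function destructures from its event.
function buildStartParams(stateMachineArn, confirmationId, checkinDate, checkoutDate) {
  return {
    stateMachineArn,
    input: JSON.stringify({
      confirmation_id: confirmationId,
      checkin_date: checkinDate,
      checkout_date: checkoutDate
    })
  };
}
```

&lt;p&gt;The result can then be passed to &lt;code&gt;new AWS.StepFunctions().startExecution(params).promise()&lt;/code&gt; (aws-sdk v2, as used in the repo's functions).&lt;/p&gt;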

&lt;h3&gt;
  
  
  Happy Scenario
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"BookHotel": {
    "Type": "Task",
    "Resource": "${BOOK_HOTEL_FUNCTION_ARN}",
    "TimeoutSeconds": 10,
    "Retry": [
        {
            "ErrorEquals": [
                "States.Timeout",
                "Lambda.ServiceException",
                "Lambda.AWSLambdaException",
                "Lambda.SdkClientException"
            ],
            "IntervalSeconds": 2,
            "MaxAttempts": 3,
            "BackoffRate": 1.5
        }
    ],
    "Catch": [
        {
            "ErrorEquals": [
                "BookHotelError"
            ],
            "ResultPath": "$.error-info",
            "Next": "CancelHotel"
        }
    ],
    "Next": "BookFlight"
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;BookHotel&lt;/strong&gt; state is a &lt;a href="https://states-language.net/spec.html#task-state" rel="noopener noreferrer"&gt;Task&lt;/a&gt;, with a &lt;em&gt;"Resource"&lt;/em&gt; that terraform resolves to the BookHotel Lambda function.&lt;/p&gt;

&lt;p&gt;As you might have noticed, I am using a &lt;a href="https://states-language.net/spec.html#errors" rel="noopener noreferrer"&gt;Retry&lt;/a&gt; block, where the step function will retry executing the BookHotel function up to 3 times (after the first attempt) when the error equals any of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"States.Timeout"&lt;/li&gt;
&lt;li&gt;"Lambda.ServiceException"&lt;/li&gt;
&lt;li&gt;"Lambda.AWSLambdaException"&lt;/li&gt;
&lt;li&gt;"Lambda.SdkClientException"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can ignore the "Catch" block for now, we will get back to it in the unhappy scenario section.&lt;/p&gt;
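&lt;p&gt;With the values above, the wait before each retry attempt is IntervalSeconds multiplied by BackoffRate raised to (attempt - 1), i.e. 2, 3, and 4.5 seconds (a small illustrative helper, not part of the repo):&lt;/p&gt;

```javascript
// Illustrative only: compute the delay before each retry attempt under a
// Step Functions Retry rule (IntervalSeconds * BackoffRate^(attempt - 1)).
function retryDelays(intervalSeconds, backoffRate, maxAttempts) {
  const delays = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    delays.push(intervalSeconds * Math.pow(backoffRate, attempt - 1));
  }
  return delays;
}
```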

&lt;p&gt;After the &lt;strong&gt;BookHotel&lt;/strong&gt; task is done, the step function will transition to the &lt;strong&gt;BookFlight&lt;/strong&gt;, as specified in the "Next" field.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"BookFlight": {
    "Type": "Task",
    "Resource": "${BOOK_FLIGHT_FUNCTION_ARN}",
    "TimeoutSeconds": 10,
    "Retry": [
        {
            "ErrorEquals": [
                "States.Timeout",
                "Lambda.ServiceException",
                "Lambda.AWSLambdaException",
                "Lambda.SdkClientException"
            ],
            "IntervalSeconds": 2,
            "MaxAttempts": 3,
            "BackoffRate": 1.5
        }
    ],
    "Catch": [
        {
            "ErrorEquals": [
                "BookFlightError"
            ],
            "ResultPath": "$.error-info",
            "Next": "CancelFlight"
        }
    ],
    "Next": "BookRental"
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;BookFlight&lt;/strong&gt; state follows the same pattern: we retry invoking the BookFlight function if we face any of the errors specified in the Retry block. If no error is thrown, the step function will transition to the &lt;strong&gt;BookRental&lt;/strong&gt; state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"BookRental": {
    "Type": "Task",
    "Resource": "${BOOK_RENTAL_LAMBDA_ARN}",
    "TimeoutSeconds": 10,
    "Retry": [
        {
            "ErrorEquals": [
                "States.Timeout",
                "Lambda.ServiceException",
                "Lambda.AWSLambdaException",
                "Lambda.SdkClientException"
            ],
            "IntervalSeconds": 2,
            "MaxAttempts": 3,
            "BackoffRate": 1.5
        }
    ],
    "Catch": [
        {
            "ErrorEquals": [
                "BookRentalError"
            ],
            "ResultPath": "$.error-info",
            "Next": "CancelRental"
        }
    ],
    "Next": "ReservationSucceeded"
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;BookRental&lt;/strong&gt; state follows the same pattern. Again, we retry invoking the BookRental function if we face any of the errors specified in the Retry block. If no error is thrown, the step function will transition to the &lt;strong&gt;ReservationSucceeded&lt;/strong&gt; state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"ReservationSucceeded": {
    "Type": "Succeed"
 },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;ReservationSucceeded&lt;/strong&gt; state has the &lt;a href="https://states-language.net/spec.html#succeed-state" rel="noopener noreferrer"&gt;Succeed&lt;/a&gt; type.&lt;br&gt;
In this case it terminates the state machine successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28nlfdhaqsch83wapnmf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F28nlfdhaqsch83wapnmf.png" alt="Happy scenario"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Unhappy Scenarios
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Oh no BookHotel failed
&lt;/h4&gt;

&lt;p&gt;As you recall, in the &lt;strong&gt;BookHotel&lt;/strong&gt; state, I included a &lt;em&gt;Catch block&lt;/em&gt;. In the BookHotel function, if the confirmation_id starts with &lt;a href="https://github.com/eelayoubi/aws/blob/main/saga-pattern/functions/BookHotel/index.js#L39" rel="noopener noreferrer"&gt;11&lt;/a&gt;, a custom error of &lt;strong&gt;BookHotelError&lt;/strong&gt; type will be thrown. The Catch block will catch it and transition to the state named in its "Next" field, which is &lt;strong&gt;CancelHotel&lt;/strong&gt; in this case.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"CancelHotel": {
    "Type": "Task",
    "Resource": "${CANCEL_HOTEL_FUNCTION_ARN}",
    "ResultPath": "$.output.cancel-hotel",
    "TimeoutSeconds": 10,
    "Retry": [
        {
            "ErrorEquals": [
                "States.Timeout",
                "Lambda.ServiceException",
                "Lambda.AWSLambdaException",
                "Lambda.SdkClientException"
            ],
            "IntervalSeconds": 2,
            "MaxAttempts": 3,
            "BackoffRate": 1.5
        }
    ],
    "Next": "ReservationFailed"
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;CancelHotel&lt;/strong&gt; state is a "Task" as well, and has a Retry block to retry invoking the function in case of an unexpected error. The "Next" field instructs the step function to transition to the "ReservationFailed" state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"ReservationFailed": {
    "Type": "Fail"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;"ReservationFailed"&lt;/strong&gt; state is a &lt;a href="https://states-language.net/spec.html#fail-state" rel="noopener noreferrer"&gt;Fail&lt;/a&gt; type, it will terminate the machine and mark it as "Failed".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6re0vdk2s2kk28wrli1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6re0vdk2s2kk28wrli1.png" alt="BookHotel failed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  BookFlight is failing
&lt;/h4&gt;

&lt;p&gt;We can instruct the BookFlight lambda function to throw an error by passing a confirmation_id that starts with &lt;a href="https://github.com/eelayoubi/aws/blob/main/saga-pattern/functions/BookFlight/index.js#L38" rel="noopener noreferrer"&gt;22&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;BookFlight&lt;/strong&gt; step function task has a Catch block that will catch the &lt;a href="https://github.com/eelayoubi/aws/blob/main/saga-pattern/state-machine/reservation.asl.json#L73" rel="noopener noreferrer"&gt;BookFlightError&lt;/a&gt; and instruct the step function to transition to the CancelFlight state.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"CancelFlight": {
    "Type": "Task",
    "Resource": "${CANCEL_FLIGHT_FUNCTION_ARN}",
    "ResultPath": "$.output.cancel-flight",
    "TimeoutSeconds": 10,
    "Retry": [
      {
        "ErrorEquals": [
          "States.Timeout",
          "Lambda.ServiceException",
          "Lambda.AWSLambdaException",
          "Lambda.SdkClientException"
        ],
        "IntervalSeconds": 2,
        "MaxAttempts": 3,
        "BackoffRate": 1.5
      }
    ],
    "Next": "CancelHotel"
  },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similar to &lt;strong&gt;CancelHotel&lt;/strong&gt;, the &lt;strong&gt;CancelFlight&lt;/strong&gt; state will trigger the &lt;em&gt;CancelFlight&lt;/em&gt; lambda function to undo the changes. Then it will instruct the step function to go to the next step, &lt;strong&gt;CancelHotel&lt;/strong&gt;. And we saw earlier that &lt;strong&gt;CancelHotel&lt;/strong&gt; will undo the changes introduced by BookHotel, and will then transition to &lt;strong&gt;ReservationFailed&lt;/strong&gt; to terminate the machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3z7dkwohfhkvyzemzj2j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3z7dkwohfhkvyzemzj2j.png" alt="BookFlight Failed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  BookRental is failing
&lt;/h4&gt;

&lt;p&gt;The BookRental lambda function will throw the BookRentalError error if the confirmation_id starts with &lt;a href="https://github.com/eelayoubi/aws/blob/main/saga-pattern/functions/BookRental/index.js#L38" rel="noopener noreferrer"&gt;33&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This error is caught by the Catch block in the &lt;strong&gt;BookRental&lt;/strong&gt; task, which instructs the step function to transition to the &lt;strong&gt;CancelRental&lt;/strong&gt; state.&lt;br&gt;
&lt;/p&gt;
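&lt;p&gt;The Catch field that performs this routing looks roughly like the following (a sketch; the exact field values in the repository may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Catch": [
  {
    "ErrorEquals": ["ErrorBookRental"],
    "ResultPath": "$.error",
    "Next": "CancelRental"
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;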

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"CancelRental": {
    "Type": "Task",
    "Resource": "${CANCEL_RENTAL_LAMBDA_ARN}",
    "ResultPath": "$.output.cancel-rental",
    "TimeoutSeconds": 10,
    "Retry": [
        {
            "ErrorEquals": [
                "States.Timeout",
                "Lambda.ServiceException",
                "Lambda.AWSLambdaException",
                "Lambda.SdkClientException"
            ],
            "IntervalSeconds": 2,
            "MaxAttempts": 3,
            "BackoffRate": 1.5
        }
    ],
    "Next": "CancelFlight"
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Similar to the &lt;strong&gt;CancelFlight&lt;/strong&gt; state, the &lt;strong&gt;CancelRental&lt;/strong&gt; state triggers the CancelRental lambda function to undo the changes, then instructs the step function to move to the next step, &lt;strong&gt;CancelFlight&lt;/strong&gt;. After cancelling the flight, &lt;strong&gt;CancelFlight&lt;/strong&gt;'s Next field transitions the step function to the &lt;strong&gt;CancelHotel&lt;/strong&gt; state, which undoes the hotel booking and calls the &lt;strong&gt;ReservationFailed&lt;/strong&gt; state to terminate the machine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslerihmc48l39363ucv2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslerihmc48l39363ucv2.png" alt="BookRental failed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post, we saw how we can leverage AWS Step Functions to orchestrate a failure-management strategy that establishes semantic consistency in our distributed reservation application.&lt;/p&gt;

&lt;p&gt;I hope you found this article beneficial. Thank you for reading ... 🤓&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/step-functions/" rel="noopener noreferrer"&gt;https://aws.amazon.com/step-functions/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://states-language.net/spec.html" rel="noopener noreferrer"&gt;https://states-language.net/spec.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.cs.cornell.edu/andru/cs711/2002fa/reading/sagas.pdf" rel="noopener noreferrer"&gt;https://www.cs.cornell.edu/andru/cs711/2002fa/reading/sagas.pdf&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://explore.skillbuilder.aws/learn/course/12715/designing-event-driven-architectures" rel="noopener noreferrer"&gt;https://explore.skillbuilder.aws/learn/course/12715/designing-event-driven-architectures&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>tutorial</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Switching between multiple Terraform versions</title>
      <dc:creator>eelayoubi</dc:creator>
      <pubDate>Mon, 03 Oct 2022 09:45:28 +0000</pubDate>
      <link>https://forem.com/eelayoubi/switching-between-multiple-terraform-versions-5e4f</link>
      <guid>https://forem.com/eelayoubi/switching-between-multiple-terraform-versions-5e4f</guid>
      <description>&lt;p&gt;Hey All,&lt;/p&gt;

&lt;p&gt;If you use Terraform to provision and manage your infrastructure, you normally install a specific version on your machine (or on your CI servers).&lt;/p&gt;

&lt;p&gt;But what if you want to try out another Terraform version?&lt;/p&gt;

&lt;p&gt;Say you have multiple environments in the same codebase, for example dev and prod, and you deployed both of them using a pinned Terraform version (1.2.7).&lt;/p&gt;

&lt;p&gt;After a while, a new Terraform version (1.3.1) becomes available. You could update the version on your machine, test that everything still works, and re-install the older version if it doesn't ... which can be a bit cumbersome.&lt;/p&gt;

&lt;p&gt;Or, you can use &lt;a href="https://github.com/tfutils/tfenv"&gt;tfenv&lt;/a&gt;, a Terraform version manager.&lt;/p&gt;

&lt;p&gt;After installing tfenv, you can use the &lt;strong&gt;tfenv list&lt;/strong&gt; command to list the Terraform versions you have installed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~ tfenv list
  1.2.7
No default set. Set with 'tfenv use &amp;lt;version&amp;gt;'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A new Terraform version is available that I would like to try out.&lt;/p&gt;

&lt;p&gt;To install it, simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tfenv install 1.3.1
Installing Terraform v1.3.1
Downloading release tarball from https://releases.hashicorp.com/terraform/1.3.1/terraform_1.3.1_darwin_amd64.zip
######################################################################### 100.0%
Downloading SHA hash file from https://releases.hashicorp.com/terraform/1.3.1/terraform_1.3.1_SHA256SUMS
Not instructed to use Local PGP (/usr/local/Cellar/tfenv/3.0.0/use-{gpgv,gnupg}) &amp;amp; No keybase install found, skipping OpenPGP signature verification
Archive:  /var/folders/jl/tfyd0yns2xxgb_58fj3ljslc0000gn/T/tfenv_download.XXXXXX.ouHdm2P4/terraform_1.3.1_darwin_amd64.zip
  inflating: /usr/local/Cellar/tfenv/3.0.0/versions/1.3.1/terraform  
Installation of terraform v1.3.1 successful. **To make this your default version, run 'tfenv use 1.3.1'**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List all the installed Terraform versions again:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tfenv list         
* 1.3.1 (set by /usr/local/Cellar/tfenv/3.0.0/version)
  1.2.7
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see that the newly installed version is now the selected default.&lt;/p&gt;

&lt;p&gt;If you'd like to switch back to the old version, you simply run:&lt;br&gt;
&lt;code&gt;tfenv use VERSION&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;So:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;tfenv use 1.2.7&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now you can see the default selected version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tfenv use 1.2.7
Switching default version to v1.2.7
Default version (when not overridden by .terraform-version or TFENV_TERRAFORM_VERSION) is now: 1.2.7
➜  ~ tfenv list     
  1.3.1
* 1.2.7 (set by /usr/local/Cellar/tfenv/3.0.0/version)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way, you can easily test a newer version against your dev codebase by simply switching versions.&lt;/p&gt;
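&lt;p&gt;The switch output above also mentions a &lt;code&gt;.terraform-version&lt;/code&gt; file: tfenv reads it from the current directory (or a parent), so you can pin a version per project instead of switching the global default by hand. A minimal sketch:&lt;/p&gt;

```shell
# Pin Terraform 1.2.7 for this project; tfenv picks the file up automatically
echo "1.2.7" > .terraform-version
cat .terraform-version
```

&lt;p&gt;With this file committed, running terraform inside the repository uses 1.2.7, while other projects keep their own versions.&lt;/p&gt;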

&lt;p&gt;I hope you enjoyed this short blog.&lt;/p&gt;

&lt;p&gt;Thank you for reading!&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>devops</category>
      <category>opensource</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Deploying the full three-tier application</title>
      <dc:creator>eelayoubi</dc:creator>
      <pubDate>Wed, 06 Apr 2022 13:03:54 +0000</pubDate>
      <link>https://forem.com/eelayoubi/building-a-ha-aws-architecture-using-terraform-part-2-30gm</link>
      <guid>https://forem.com/eelayoubi/building-a-ha-aws-architecture-using-terraform-part-2-30gm</guid>
      <description>&lt;p&gt;In the previous &lt;a href="https://dev.to/eelayoubi/building-a-ha-aws-architecture-using-terraform-part-1-876"&gt;post&lt;/a&gt;, we deployed the public side of our infrastructure. In this part, we will deploy the whole stack. Which consists of the followings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;1 VPC&lt;/li&gt;
&lt;li&gt;2 Public subnets&lt;/li&gt;
&lt;li&gt;2 Private subnets&lt;/li&gt;
&lt;li&gt;2 Autoscaling groups&lt;/li&gt;
&lt;li&gt;5 Security Groups&lt;/li&gt;
&lt;li&gt;2 Load Balancers, (one private, one public)&lt;/li&gt;
&lt;li&gt;2 Private EC2 instances (representing our application tier)&lt;/li&gt;
&lt;li&gt;2 Public EC2 instances (representing our presentation tier)&lt;/li&gt;
&lt;li&gt;2 NAT Gateways (so private instances can connect to the internet)&lt;/li&gt;
&lt;li&gt;2 Elastic IP addresses, one for each NAT Gateway&lt;/li&gt;
&lt;li&gt;1 RDS instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can check out the source code for this part &lt;a href="https://github.com/eelayoubi/aws-ha-app/tree/part-2" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The following is the diagram of our infrastructure by the end of this part:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlug5u00pgpyq649o36r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlug5u00pgpyq649o36r.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Our Application
&lt;/h2&gt;

&lt;p&gt;Our application consists of 2 tiers that will be deployed to the EC2 instances as docker containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/eelayoubi/aws-ha-app/tree/part-2/presentation-tier" rel="noopener noreferrer"&gt;presentation tier&lt;/a&gt; (this represents normally the customer facing part of the application, so what the customer interacts with)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/eelayoubi/aws-ha-app/tree/part-2/application-tier" rel="noopener noreferrer"&gt;application tier&lt;/a&gt; (this is where we have our business logics)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To keep it simple, our presentation tier simply forwards requests to the application tier, which in turn runs SQL queries against the RDS instance.&lt;/p&gt;

&lt;p&gt;There is a &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/setup-ecrs.sh" rel="noopener noreferrer"&gt;setup-ecrs.sh&lt;/a&gt; script that will build these application images and push them to separate ECR repositories. You can inspect the script for more details.&lt;/p&gt;

&lt;p&gt;To run the script, first run &lt;code&gt;chmod +x setup-ecrs.sh&lt;/code&gt; to make it executable. Then make sure the AWS CLI is installed and configured and that Docker is running, and simply type: &lt;code&gt;./setup-ecrs.sh&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Our new Infrastructure
&lt;/h2&gt;

&lt;p&gt;Building on what we did in part 1, we will add another Load Balancer. This one is an internal LB, that will forward the requests coming from the public presentation instances to the private application instances.&lt;/p&gt;

&lt;p&gt;It is also worth mentioning that we created 2 &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/terraform/ec2.tf" rel="noopener noreferrer"&gt;Launch Templates&lt;/a&gt; and 2 &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/terraform/autoscaling-groups.tf" rel="noopener noreferrer"&gt;autoscaling groups&lt;/a&gt; (one for the presentation tier, and one for the application tier).&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Groups
&lt;/h3&gt;

&lt;p&gt;To allow traffic between the load balancers, public, and private instances, we added a security group for each component, these can be viewed &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/terraform/vpc.tf#L37" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Basically, the front-facing load balancer's security group allows HTTP connections from anywhere.&lt;br&gt;
The presentation tier security group allows connections from the front-facing LB, and so on.&lt;/p&gt;
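&lt;p&gt;This chaining can be expressed in Terraform by referencing one security group from another's ingress rule. A minimal sketch (the resource names here are illustrative, not necessarily the ones used in the repository):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "presentation_tier" {
  name   = "presentation-tier-sg"
  vpc_id = aws_vpc.main.id

  # Only the front-facing load balancer's security group may reach port 80
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.front_end_lb.id]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;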
&lt;h3&gt;
  
  
  RDS
&lt;/h3&gt;

&lt;p&gt;We added a &lt;a href="https://github.com/eelayoubi/aws-ha-app/tree/part-2/terraform/modules/rds" rel="noopener noreferrer"&gt;module&lt;/a&gt; for RDS, which provisions an RDS instance that the application EC2 instances will query for data.&lt;/p&gt;

&lt;p&gt;This module will create 3 resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A DB subnet group&lt;/li&gt;
&lt;li&gt;A security group&lt;/li&gt;
&lt;li&gt;An RDS instance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The module exports the RDS instance's address, which the application tier needs.&lt;/p&gt;
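&lt;p&gt;Exporting a value from a Terraform module is done with an output block. A sketch of what this could look like inside the module (the actual resource name may differ):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# modules/rds/outputs.tf
output "rds_address" {
  description = "Hostname of the RDS instance"
  value       = aws_db_instance.main.address
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The root configuration then reads it as &lt;code&gt;module.rds.rds_address&lt;/code&gt;.&lt;/p&gt;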
&lt;h3&gt;
  
  
  Gateways
&lt;/h3&gt;

&lt;p&gt;We added 2 &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/terraform/nat-gateways.tf" rel="noopener noreferrer"&gt;NAT Gateways&lt;/a&gt; that allow the EC2 instances in the private subnets (the application tier) to access the internet and download the packages needed to deploy the docker image.&lt;/p&gt;

&lt;p&gt;The presentation tier EC2 instances are public and already have access to the internet, so they can download the needed packages without extra steps.&lt;/p&gt;

&lt;p&gt;We assigned an &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/terraform/eip.tf" rel="noopener noreferrer"&gt;Elastic IP address&lt;/a&gt; to each of the NAT Gateways.&lt;/p&gt;
&lt;h3&gt;
  
  
  EC2
&lt;/h3&gt;

&lt;p&gt;Our EC2 instances need to access ECR to pull the docker images of our applications. To allow this, we created an instance &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/terraform/ec2.tf#L19" rel="noopener noreferrer"&gt;role&lt;/a&gt;, that gives EC2 access to ECR.&lt;/p&gt;

&lt;p&gt;As we said already, we have 2 launch templates, one for the application tier and another for the presentation tier.&lt;/p&gt;

&lt;p&gt;We created 2 user data files that the launch templates will reference.&lt;/p&gt;

&lt;p&gt;Presentation user data &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/user-data/user-data-presentation-tier.sh" rel="noopener noreferrer"&gt;script&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
sudo yum update -y
sudo yum install docker -y
sudo service docker start
sudo systemctl enable docker
sudo usermod -a -G docker ec2-user
aws ecr get-login-password --region ${region}  | docker login --username AWS --password-stdin ${ecr_url}
docker run --restart always -e APPLICATION_LOAD_BALANCER=${application_load_balancer} -p 3000:3000 -d ${ecr_url}/${ecr_repo_name}:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This script installs Docker and starts it. It also logs in to the ECR registry so we can run our presentation tier docker container.&lt;/p&gt;

&lt;p&gt;As you can see, we are passing the &lt;strong&gt;APPLICATION_LOAD_BALANCER&lt;/strong&gt; DNS name as an environment variable to the docker container, since our application needs it to forward client requests to the application tier.&lt;/p&gt;

&lt;p&gt;In the Launch template we are &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/terraform/ec2.tf#L88" rel="noopener noreferrer"&gt;referencing&lt;/a&gt; this script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user_data = base64encode(templatefile("./../user-data/user-data-presentation-tier.sh", {
    application_load_balancer = aws_lb.application_tier.dns_name,
    ecr_url                   = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.region}.amazonaws.com"
    ecr_repo_name             = var.ecr_presentation_tier,
    region                    = var.region
  }))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;strong&gt;ecr_url&lt;/strong&gt; is the url for the presentation ECR repository that was created when running the &lt;code&gt;setup-ecrs.sh&lt;/code&gt; script.&lt;/p&gt;

&lt;p&gt;The application launch template passes additional variables to the &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/user-data/user-data-application-tier.sh" rel="noopener noreferrer"&gt;user data application tier&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user_data = base64encode(templatefile("./../user-data/user-data-application-tier.sh", {
    rds_hostname  = module.rds.rds_address,
    rds_username  = var.rds_db_admin,
    rds_password  = var.rds_db_password,
    rds_port      = 3306,
    rds_db_name   = var.db_name
    ecr_url       = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.region}.amazonaws.com"
    ecr_repo_name = var.ecr_application_tier,
    region        = var.region
  }))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are passing the RDS instance details, such as the username, password, and so on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying the stack
&lt;/h2&gt;

&lt;p&gt;As mentioned earlier, make sure to run &lt;code&gt;./setup-ecrs.sh&lt;/code&gt; prior to deploying the infrastructure, since our EC2 instances will download the docker images from the ECR repositories.&lt;/p&gt;

&lt;p&gt;If you haven't initialized the terraform project yet, simply navigate to the terraform folder and run &lt;code&gt;terraform init&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We have a terraform.example.tfvars file that holds all our variable values. Rename it to terraform.tfvars so that it is picked up automatically when we run terraform apply.&lt;/p&gt;

&lt;p&gt;Once complete, go ahead and run &lt;code&gt;terraform apply&lt;/code&gt;, and type &lt;strong&gt;&lt;em&gt;yes&lt;/em&gt;&lt;/strong&gt; to approve the changes.&lt;/p&gt;

&lt;p&gt;It might take a while, since we are provisioning quite a few resources, and the RDS instance in particular takes some time.&lt;/p&gt;

&lt;p&gt;If all goes as planned, the deployment succeeds and you get the DNS name of the front-facing load balancer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.........
Apply complete! Resources: 39 added, 0 changed, 0 destroyed.

Outputs:

lb_dns_url = "front-end-lb-**********.us-east-1.elb.amazonaws.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing the Application
&lt;/h2&gt;

&lt;p&gt;Now that we have the front-end Load Balancer's DNS name, we can test the application by simply visiting &lt;code&gt;front-end-lb-**********.us-east-1.elb.amazonaws.com/&lt;/code&gt;, which will print out a hello message with the server's hostname.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuwsvcizy4appbei298w8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuwsvcizy4appbei298w8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can call the &lt;code&gt;front-end-lb-**********.us-east-1.elb.amazonaws.com/init&lt;/code&gt; endpoint, which forwards the request to the presentation layer, which in turn forwards it to the application layer (via the internal Load Balancer). The application layer finally creates a table called &lt;strong&gt;users&lt;/strong&gt; and adds 2 users to it; you can view the code &lt;a href="https://github.com/eelayoubi/aws-ha-app/blob/part-2/application-tier/src/index.js#L21" rel="noopener noreferrer"&gt;here&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;app.get('/init', async (req, res) =&amp;gt; {
  connection.query('CREATE TABLE IF NOT EXISTS users (id INT(5) NOT NULL AUTO_INCREMENT PRIMARY KEY, lastname VARCHAR(40), firstname VARCHAR(40), email VARCHAR(30));');
  connection.query('INSERT INTO users (lastname, firstname, email) VALUES ( "Tony", "Sam", "tonysam@whatever.com"), ( "Doe", "John", "john.doe@whatever.com" );');
  res.send({ message: "init step done" })
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To view the &lt;strong&gt;users table&lt;/strong&gt; you can call &lt;code&gt;front-end-lb-**********.us-east-1.elb.amazonaws.com/users&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kexf0djg00xsyfhrrww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3kexf0djg00xsyfhrrww.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Destroying the infrastructure
&lt;/h2&gt;

&lt;p&gt;Keep in mind that some of these services incur charges, so don't forget to clean up the environment. You can do so by running &lt;code&gt;terraform apply -destroy -auto-approve&lt;/code&gt;, followed by &lt;code&gt;./destroy-ecrs.sh&lt;/code&gt;, which deletes the ECR repositories that store our docker images.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post, we completed creating the infrastructure. We created a main VPC with public and private subnets, an Internet Gateway, NAT Gateways, Load Balancers, Autoscaling Groups, and so on.&lt;/p&gt;

&lt;p&gt;We saw how to provision and destroy our application using terraform.&lt;/p&gt;

&lt;p&gt;However, you might have noticed that the application's lifecycle is &lt;strong&gt;&lt;em&gt;tightly coupled&lt;/em&gt;&lt;/strong&gt; with the infrastructure's lifecycle. &lt;strong&gt;In the coming blog&lt;/strong&gt;, we will see how we can split them to their own lifecycles and add some refactoring to our scripts.&lt;/p&gt;

&lt;p&gt;Feel free to comment and leave your thoughts 🙏🏻!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>architecture</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Introducing the architecture &amp; Deploying the public side</title>
      <dc:creator>eelayoubi</dc:creator>
      <pubDate>Fri, 18 Mar 2022 15:27:44 +0000</pubDate>
      <link>https://forem.com/eelayoubi/building-a-ha-aws-architecture-using-terraform-part-1-876</link>
      <guid>https://forem.com/eelayoubi/building-a-ha-aws-architecture-using-terraform-part-1-876</guid>
      <description>&lt;p&gt;In this serie of blog posts, we will be building a highly available application on top of AWS cloud services.&lt;/p&gt;

&lt;p&gt;We will be using Terraform to provision and deploy our infrastructure.&lt;/p&gt;

&lt;p&gt;By the end of this series, we will have the following architecture deployed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c92xt0hzi3c3jqp2gh6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1c92xt0hzi3c3jqp2gh6.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure Setup
&lt;/h2&gt;

&lt;p&gt;For the first part of this application, we will be deploying the public feature as displayed in the below image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgz2u4nv3r24yd5f3ran.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhgz2u4nv3r24yd5f3ran.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will have multiple resources to deploy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;2 Public Subnets&lt;/li&gt;
&lt;li&gt;Internet Gateway&lt;/li&gt;
&lt;li&gt;Route Table&lt;/li&gt;
&lt;li&gt;Application Load Balancer&lt;/li&gt;
&lt;li&gt;2 EC2 instances&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can check out the source code for this part &lt;a href="https://github.com/eelayoubi/aws-ha-app/tree/part-1" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Resources
&lt;/h2&gt;

&lt;p&gt;Under the terraform folder in the GitHub repository, you will notice a couple of files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;vpc.tf&lt;/strong&gt; -&amp;gt; creates the vpc, public subnets, internet gateway, security group, route table&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;lb.tf&lt;/strong&gt; -&amp;gt; creates the application load balancer, the listener, and the target group&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ec2.tf&lt;/strong&gt; -&amp;gt; creates the compute instances&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;main.tf&lt;/strong&gt; -&amp;gt; declares the providers to use (only the aws provider for now)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;variables.tf&lt;/strong&gt; -&amp;gt; declares the variables used in the different resources&lt;/li&gt;
&lt;/ul&gt;
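&lt;p&gt;For reference, the provider declaration in main.tf looks roughly like this (a sketch; the version constraint matches the &lt;code&gt;~&amp;gt; 4.5.0&lt;/code&gt; selection visible in the terraform init output later in this post, and the region variable is assumed to come from variables.tf):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.5.0"
    }
  }
}

provider "aws" {
  region = var.region
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;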

&lt;h3&gt;
  
  
  Creating the infrastructure
&lt;/h3&gt;

&lt;p&gt;Our current infrastructure will consist of a VPC resource named main, declared with a CIDR block of "10.0.0.0/16".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "main" {
  cidr_block = var.main_cidr_block

  enable_dns_hostnames = true
  enable_dns_support   = true
  tags = {
    Name = "main"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will have 2 public subnets in different availability zones (to achieve a highly available architecture)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# we are looping over the number of subnets we have and creating public subnets accordingly
resource "aws_subnet" "public_subnets" {
  count                   = length(var.public_cidr_blocks)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_cidr_blocks[count.index]
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "public_subnet_${count.index + 1}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We need to create a security group that allows &lt;strong&gt;HTTP&lt;/strong&gt; traffic in and out of the instances:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "main_sg" {
  name        = "allow_connection"
  description = "Allow HTTP"
  vpc_id      = aws_vpc.main.id

  ingress {
    description      = "HTTP from anywhere"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "allow_http"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since our VPC will need to connect to the internet, we will create an Internet Gateway and attach it to our freshly created VPC as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We will also create a route table and associate our public subnets with it, so these subnets have a route to the internet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_route_table" "public_route" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "public_route"
  }
}

# we are creating two associations one for each subnet
resource "aws_route_table_association" "public_route_association" {
  count          = length(var.public_cidr_blocks)
  subnet_id      = aws_subnet.public_subnets[count.index].id
  route_table_id = aws_route_table.public_route.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating the Application Load Balancer
&lt;/h3&gt;

&lt;p&gt;For this part, you can refer to the lb.tf file, which creates an application load balancer named "front-end-lb":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# you can see here that we are referring to the security group and subnets that we have created earlier
resource "aws_lb" "front_end" {
  name               = "front-end-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.main_sg.id]
  subnets            = aws_subnet.public_subnets.*.id

  enable_deletion_protection = false
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rest of the file creates a load balancer listener, a target group on &lt;strong&gt;port 80&lt;/strong&gt;, and target group attachments that attach the instances we will create to the load balancer's target group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lb_listener" "front_end" {
  load_balancer_arn = aws_lb.front_end.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.front_end.arn
  }
}

resource "aws_lb_target_group" "front_end" {
  name     = "front-end-lb-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_target_group_attachment" "front_end" {
  count            = length(aws_subnet.public_subnets)
  target_group_arn = aws_lb_target_group.front_end.arn
  target_id        = aws_instance.front_end[count.index].id
  port             = 80
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Creating the Compute instances
&lt;/h3&gt;

&lt;p&gt;The last part here is the EC2 instances that will be provisioned when Terraform applies the ec2.tf file. This creates 2 instances in the public subnets, with the security group that allows HTTP traffic attached to them:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_instance" "front_end" {
  count                       = length(aws_subnet.public_subnets)
  ami                         = data.aws_ami.amazon_linux_2.id
  instance_type               = "t2.nano"
  associate_public_ip_address = true
  subnet_id                   = aws_subnet.public_subnets[count.index].id

  vpc_security_group_ids = [
    aws_security_group.main_sg.id,
  ]

  user_data = &amp;lt;&amp;lt;-EOF
    #!/bin/bash
    sudo su
    yum update -y
    yum install -y httpd.x86_64
    systemctl start httpd.service
    systemctl enable httpd.service
    echo “Hello World from $(hostname -f)” &amp;gt; /var/www/html/index.html
        EOF

  tags = {
    Name = "HelloWorld_${count.index + 1}"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
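The security group referenced above, aws_security_group.main_sg, is defined elsewhere in the project and is not shown in this excerpt. A minimal sketch of a group that allows inbound HTTP could look like the following — the attribute values here are assumptions, not the repository's actual definition:

```hcl
# Hypothetical sketch: a security group allowing inbound HTTP on port 80
# and all outbound traffic. Attribute values are assumptions, not the
# repository's actual definition.
resource "aws_security_group" "main_sg" {
  name   = "main-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```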



&lt;h2&gt;
  
  
  Deploying the infrastructure + application
&lt;/h2&gt;

&lt;p&gt;Please make sure to have &lt;a href="https://learn.hashicorp.com/tutorials/terraform/install-cli" rel="noopener noreferrer"&gt;terraform installed&lt;/a&gt;, and have &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html" rel="noopener noreferrer"&gt;AWS Credentials configured locally&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Navigate to the terraform folder in your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will initialise the backend, install the AWS provider plugin, and prepare Terraform.&lt;/p&gt;

&lt;p&gt;You should see the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~&amp;gt; 4.5.0"...
- Installing hashicorp/aws v4.5.0...
- Installed hashicorp/aws v4.5.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can run a &lt;a href="https://www.terraform.io/cloud-docs/run#speculative-plans" rel="noopener noreferrer"&gt;terraform speculative plan&lt;/a&gt; to have an overall view of what will be created.&lt;/p&gt;

&lt;p&gt;We will skip this part and directly run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will be prompted to approve the changes. Type &lt;strong&gt;yes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It will take a couple of minutes for everything to be ready. If all goes well, Terraform will exit without an error and you will see something like the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.........

Apply complete! Resources: 15 added, 0 changed, 0 destroyed.

Outputs:

lb_dns_url = "front-end-lb-*********.us-east-1.elb.amazonaws.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, run &lt;code&gt;terraform output&lt;/code&gt; to print the outputs declared in &lt;code&gt;outputs.tf&lt;/code&gt;. You should see the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lb_dns_url = "front-end-lb-1873116014.us-east-1.elb.amazonaws.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
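The outputs.tf file itself is not shown in this excerpt; a minimal version producing the lb_dns_url value above might look like this (it assumes the load balancer resource is the aws_lb.front_end referenced by the listener earlier):

```hcl
# Sketch of outputs.tf (assumes the ALB resource is named aws_lb.front_end)
output "lb_dns_url" {
  description = "DNS name of the front-end load balancer"
  value       = aws_lb.front_end.dns_name
}
```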



&lt;p&gt;Pasting the URL into the browser, you should see something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;“Hello World from ip-10-0-1-68.ec2.internal”
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When we refresh the page, we should hit the other instance and see the hostname of the second EC2 instance (if not, do a hard refresh).&lt;/p&gt;

&lt;h3&gt;
  
  
  Destroying the infrastructure
&lt;/h3&gt;

&lt;p&gt;Keep in mind that some of these services incur charges, so don't forget to clean up the environment. You can do so by running &lt;code&gt;terraform apply -destroy -auto-approve&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;In this post, we created the infrastructure and the public part of our application: a main VPC with 2 public subnets, an Internet Gateway, a Load Balancer, and 2 compute instances.&lt;br&gt;
We saw how to provision and destroy our application using Terraform.&lt;/p&gt;

&lt;p&gt;In the next blog, we will deploy the private part of the infrastructure, along with some refactoring of our terraform code (for example, using modules). Stay tuned, and I hope you have enjoyed &lt;strong&gt;Part 1&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Feel free to comment and leave your thoughts 🙏🏻!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>architecture</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Serverless Server Side Rendering with Angular on AWS Lambda@Edge</title>
      <dc:creator>eelayoubi</dc:creator>
      <pubDate>Mon, 14 Dec 2020 12:47:59 +0000</pubDate>
      <link>https://forem.com/eelayoubi/serverless-server-side-rendering-with-angular-on-aws-lambda-edge-57g5</link>
      <guid>https://forem.com/eelayoubi/serverless-server-side-rendering-with-angular-on-aws-lambda-edge-57g5</guid>
      <description>&lt;p&gt;In this article we will look at how we can enable server side rendering on an Angular application and make it run serverless on 'AWS Lambda@Edge'.&lt;br&gt;
How do we go from running a non-server-side-rendered static Angular application on AWS S3 to enabling SSR and deploying it to Lambda@Edge and S3, with CloudFront in front of it?&lt;/p&gt;
&lt;h3&gt;
  
  
  Lambda@Edge to the rescue
&lt;/h3&gt;

&lt;p&gt;I was recently interested in seeing how to server side render an Angular app with no server, using Lambda@Edge.&lt;/p&gt;

&lt;p&gt;Lambda@Edge is an extension of AWS Lambda, a compute service that lets you execute functions that customize the content that CloudFront delivers (&lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-at-the-edge.html" rel="noopener noreferrer"&gt;more info&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Lambda@Edge functions can be triggered in 4 ways:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Viewer Request&lt;/li&gt;
&lt;li&gt;Origin Request (we will be using this for SSR 🤓)&lt;/li&gt;
&lt;li&gt;Origin Response&lt;/li&gt;
&lt;li&gt;Viewer Response&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this example, I am using: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Angular 11&lt;/li&gt;
&lt;li&gt;Express js for SSR&lt;/li&gt;
&lt;li&gt;AWS S3 for storing the application build&lt;/li&gt;
&lt;li&gt;AWS Cloudfront as the CDN&lt;/li&gt;
&lt;li&gt;and of course the famous Lambda@Edge&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This post already assumes the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;having an AWS account&lt;/li&gt;
&lt;li&gt;having the &lt;a href="https://aws.amazon.com/cli/" rel="noopener noreferrer"&gt;AWS CLI configured&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;having the &lt;a href="https://www.serverless.com/framework/docs/getting-started/" rel="noopener noreferrer"&gt;serverless framework installed&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;being already familiar with Angular SSR&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is the &lt;a href="https://github.com/eelayoubi/angular-lambda-ssr" rel="noopener noreferrer"&gt;Github repo&lt;/a&gt;&lt;br&gt;
And the application is deployed &lt;a href="//d2htk1pm9r9gbg.cloudfront.net"&gt;here&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Introducing the sample application
&lt;/h2&gt;

&lt;p&gt;The application is pretty simple, as we have 2 modules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SearchModule&lt;/li&gt;
&lt;li&gt;AnimalModule (lazy loaded)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you navigate to the application, you are presented with an input field, where you can type a name (ex: Oliver, Leo ...) or an animal (ex: dog, cat). You will be presented with a list of the findings, and you can click on an animal to see its details in the animal component.&lt;/p&gt;

&lt;p&gt;As simple as that. Just to demonstrate the SSR on Lambda@Edge.&lt;/p&gt;

&lt;p&gt;You can clone the &lt;a href="https://github.com/eelayoubi/angular-lambda-ssr" rel="noopener noreferrer"&gt;repo&lt;/a&gt; to check it out&lt;/p&gt;
&lt;h2&gt;
  
  
  Enabling SSR on the application
&lt;/h2&gt;

&lt;p&gt;Okay ... Off to the SSR part. The first thing to do is to run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ng add @nguniversal/express-engine&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This will generate a couple of files (&lt;a href="https://angular.io/guide/universal#universal-tutorial" rel="noopener noreferrer"&gt;more on this here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;To run the default ssr application, just type:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;yarn build:ssr &amp;amp;&amp;amp; yarn serve:ssr&lt;/code&gt; and navigate to &lt;a href="http://localhost:4000" rel="noopener noreferrer"&gt;http://localhost:4000&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will notice that Angular generated a file called 'server.ts'. This is the Express web server. If you are familiar with Lambda, you know that there are no servers to manage ... you just provide the code, and Lambda runs it.&lt;/p&gt;

&lt;p&gt;To keep the Angular SSR generated files intact, I made a copy of the following files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;server.ts -&amp;gt; serverless.ts&lt;/li&gt;
&lt;li&gt;tsconfig.server.json -&amp;gt; tsconfig.serverless.json&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the serverless.ts I got rid of the 'listen' part (no server ... no listener 🤷🏻‍♂️).&lt;/p&gt;

&lt;p&gt;The server.ts file uses ngExpressEngine to bootstrap the application. However, in serverless.ts I replaced that with 'renderModule', which comes from '@angular/platform-server' (more flexibility).&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;In tsconfig.serverless.json, on line 12, instead of including server.ts in the 'files' property, we are including our own serverless.ts.&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;In the angular.json file I added the following part:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"serverless": {
          "builder": "@angular-devkit/build-angular:server",
          "options": {
            "outputPath": "dist/angular-lambda-ssr/serverless",
            "main": "serverless.ts",
            "tsConfig": "tsconfig.serverless.json"
          },
          "configurations": {
            "production": {
              "outputHashing": "media",
              "fileReplacements": [
                {
                  "replace": "src/environments/environment.ts",
                  "with": "src/environments/environment.prod.ts"
                }
              ],
              "sourceMap": false,
              "optimization": true
            }
          }
        }


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Then in the package.json I added the following property:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;"build:sls": "ng build --prod &amp;amp;&amp;amp; ng run angular-lambda-ssr:serverless:production"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;As you can see, in the 'options' property we are pointing to our customised main and tsConfig. So when running &lt;code&gt;yarn build:sls&lt;/code&gt;, these configs will be used to generate &lt;code&gt;dist/angular-lambda-ssr/serverless&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Creating the Lambda function to execute SSR
&lt;/h2&gt;

&lt;p&gt;I added a new file called 'lambda.js'. This is the file that contains the Lambda function that will be executed on every request from CloudFront to the origin (&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html" rel="noopener noreferrer"&gt;Origin Request&lt;/a&gt;).&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;



&lt;p&gt;I'm using the &lt;a href="https://github.com/eelayoubi/serverless-http/" rel="noopener noreferrer"&gt;serverless-http&lt;/a&gt; package, which is a fork of the original &lt;a href="https://github.com/dougmoscrop/serverless-http" rel="noopener noreferrer"&gt;repo&lt;/a&gt;. The main repo maps API Gateway requests; I added Lambda@Edge support, which can be viewed in this &lt;a href="https://github.com/dougmoscrop/serverless-http/pull/192" rel="noopener noreferrer"&gt;PR&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As you can see on line 8, we are passing the app (which is the Express app) to the serverless function, and it returns a function that accepts the incoming event and a context. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On line 18 the magic happens: the request is mapped and passed to the app instance, which returns the response (the SSR response).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then on line 19 we are minifying the body, since there is a 1MB limit on the Lambda@Edge &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-requirements-limits.html#lambda-requirements-see-limits" rel="noopener noreferrer"&gt;origin-request&lt;/a&gt; response.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, on line 27 we return the response to the user. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
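The steps above can be sketched, dependency-free, as follows. This is a hypothetical illustration of the flow, not the repo's lambda.js — the real file delegates the request mapping to the serverless-http fork:

```javascript
// Rough sketch of the lambda.js flow (hypothetical, dependency-free):
// the real file wraps the Express app with serverless-http; here a
// stand-in renderApp makes the event flow visible on its own.
function makeHandler(renderApp) {
  return async (event) => {
    // A CloudFront origin-request event carries the request here
    const request = event.Records[0].cf.request;
    // Map the request into the app and get the SSR response body
    const body = await renderApp(request.uri);
    return {
      status: '200',
      headers: { 'content-type': [{ key: 'Content-Type', value: 'text/html' }] },
      body, // must stay under the 1MB origin-request response limit
    };
  };
}

// Usage with a stand-in renderer:
const handler = makeHandler(async (uri) => 'rendered ' + uri);
handler({ Records: [{ cf: { request: { uri: '/animal/3' } } }] })
  .then((res) => console.log(res.status, res.body)); // 200 rendered /animal/3
```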

&lt;p&gt;Keep in mind that we are only doing SSR for requests to index.html or for any request that doesn't have an extension.&lt;/p&gt;

&lt;p&gt;If the request contains an extension, it means you are requesting a file... so we pass the request to S3 to serve it.&lt;/p&gt;
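That routing decision can be sketched as a small helper — a hypothetical illustration of the rule just described, not the repo's exact code:

```javascript
// Hypothetical helper illustrating the SSR-vs-S3 decision described above:
// render when the URI is the app shell or has no file extension,
// otherwise let S3 serve the static asset.
function shouldServerSideRender(uri) {
  if (uri.endsWith('/index.html')) return true; // the app shell is rendered
  const lastSegment = uri.split('/').pop();
  return !lastSegment.includes('.'); // no extension means an app route
}

console.log(shouldServerSideRender('/animal/3')); // true  (SSR)
console.log(shouldServerSideRender('/main.js'));  // false (serve from S3)
```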

&lt;h3&gt;
  
  
  Deploying to AWS
&lt;/h3&gt;

&lt;p&gt;You will notice in the repo 2 files: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;serverless-distribution.yml&lt;/li&gt;
&lt;li&gt;serverless.yml&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We will first deploy the serverless-distribution.yml:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;This will deploy the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloudfront Identity (used by S3 and Cloudfront to ensure that objects in S3 are only accessible via Cloudfront)&lt;/li&gt;
&lt;li&gt;Cloudfront Distribution&lt;/li&gt;
&lt;li&gt;S3 bucket that will store the application build&lt;/li&gt;
&lt;li&gt;A Bucket Policy that Allows the CloudFront Identity to Get the S3 objects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To deploy this stack, on line 58 change the bucket name to something unique to you, since S3 bucket names are global. Then just run the following command: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;serverless deploy --config serverless-distribution.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This may take a few minutes. When the deployment is done, we need to get the cloudfront endpoint. You can do that by going to the console or by running:&lt;br&gt;
&lt;code&gt;aws cloudformation describe-stacks --stack-name angular-lambda-ssr-distribution-dev&lt;/code&gt;&lt;br&gt;
The endpoint will have the following format:&lt;br&gt;
&lt;strong&gt;d1234244112324.cloudfront.net&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now we need to add the cloudfront endpoint to the search.service.ts:&lt;br&gt;
&lt;/p&gt;
&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;On line 15, replace "/assets/data/animals.json" with "&lt;a href="https://cloudfrontendpointhere/assets/data/animals.json" rel="noopener noreferrer"&gt;https://cloudfrontendpointhere/assets/data/animals.json&lt;/a&gt;"&lt;/p&gt;

&lt;p&gt;Now that we have that done, we need to build the app with our serverless.ts (if you have already built it, you need to build it again, since we changed the endpoint used to fetch the data), so run: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;yarn build:sls&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That will generate the dist folder containing the Angular app that we need to sync to S3 (since S3 will serve the static content, such as the JS, CSS ...)&lt;/p&gt;

&lt;p&gt;After the dist is generated, go to the browser folder in the dist:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cd dist/angular-lambda-ssr/browser&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then run the following command to copy the files to S3:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws s3 sync . s3://replacewithyourbucketname&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Be sure to replace the placeholder with your S3 bucket Name.&lt;/p&gt;

&lt;p&gt;Once this is done, we need to deploy the Lambda function, which is defined in serverless.yml. Simply run:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;serverless deploy&lt;/code&gt;&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;This will deploy the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Lambda Function&lt;/li&gt;
&lt;li&gt;The Lambda execution role&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the stack is created, we need to deploy the Lambda@Edge function to the Cloudfront behaviour we just created, so copy and paste this link in a browser tab (make sure you are logged in to the aws console)&lt;br&gt;
&lt;a href="https://console.aws.amazon.com/lambda/home?region=us-east-1#/functions/angular-lambda-ssr-dev-ssr-origin-req/versions/$LATEST?tab=configuration" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/lambda/home?region=us-east-1#/functions/angular-lambda-ssr-dev-ssr-origin-req/versions/$LATEST?tab=configuration&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;⚠️ Make sure the $LATEST version is selected&lt;/p&gt;

&lt;p&gt;1- Click on 'Actions'&lt;br&gt;
2- Click on 'Deploy to Lambda@Edge'&lt;br&gt;
3- Choose the distribution we created&lt;br&gt;
4- Choose the Default behaviour (there is only one for our distribution)&lt;br&gt;
5- For the Cloudfront Event, choose 'Origin Request'&lt;br&gt;
6- Leave 'Include Body' unticked&lt;br&gt;
7- Tick the Acknowledge box&lt;br&gt;
8- Click Deploy&lt;/p&gt;

&lt;p&gt;It will take a couple of minutes to deploy this function to all the cloudfront edge locations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing
&lt;/h3&gt;

&lt;p&gt;You can navigate to the cloudfront endpoint again and access the application. You should see that the SSR is working as expected.&lt;/p&gt;

&lt;p&gt;You can see that the animal/3 request was served from the Express server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6tj24vgvx6eh730ckrw9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F6tj24vgvx6eh730ckrw9.png" alt="Screenshot 2020-12-14 at 9.47.07 AM"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And the main js is served from S3 (it is cached on Cloudfront this time)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj4puo800gvebuqwukvi9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj4puo800gvebuqwukvi9.png" alt="Screenshot 2020-12-14 at 9.52.51 AM"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  Cleanup
&lt;/h3&gt;

&lt;p&gt;To return the AWS account to its previous state, it would be a good idea to delete our created resources.&lt;/p&gt;

&lt;p&gt;Note that in terms of spending, this will not be expensive. If you have an AWS Free Tier account, you won't be charged unless you go above the limits (&lt;a href="https://aws.amazon.com/lambda/pricing/" rel="noopener noreferrer"&gt;lambda pricing&lt;/a&gt;, &lt;a href="https://aws.amazon.com/cloudfront/pricing/" rel="noopener noreferrer"&gt;cloudfront pricing&lt;/a&gt;) &lt;/p&gt;

&lt;p&gt;First we need to empty the S3 bucket, since deleting the Cloudformation stack with a non-empty bucket will fail.&lt;br&gt;
So run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws s3 rm s3://replacewithyourbucketname --recursive&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Now we are ready to delete the serverless-distribution stack. Run the following command:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;serverless remove --config serverless-distribution.yml&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We must wait a while before we can delete the serverless.yml stack. If you try to delete it now, you will run into an error, as the Lambda function is still deployed on Cloudfront.&lt;/p&gt;

&lt;p&gt;After a while, run the following:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;serverless remove&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Some Gotchas
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We could have combined the two stacks (serverless-distribution &amp;amp; serverless) in one file. However, deleting the stack would fail, as it would delete all resources except the Lambda function, since, as explained, we need to wait until the replicas are deleted, which might take some time (&lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-delete-replicas.html" rel="noopener noreferrer"&gt;more info&lt;/a&gt;) &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We could have more complicated logic in the Lambda function to render specific pages, for specific browsers ... I tried to keep it simple in this example&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Be aware that Lambda@Edge origin-request has some limits:&lt;br&gt;
Size of a response that is generated by a Lambda function, including headers and body : 1MB&lt;br&gt;
Function timeout: 30 seconds&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-requirements-limits.html#lambda-requirements-see-limits" rel="noopener noreferrer"&gt;more info&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We can test the Lambda function locally, thanks to serverless framework, we can invoke our lambda. To do so, run the following command:&lt;br&gt;
&lt;code&gt;serverless invoke local --function ssr-origin-req --path event.json&lt;/code&gt;&lt;br&gt;
You will see the result returned contains the app ssr rendered.&lt;br&gt;
The event.json file contains an origin-request cloudfront request, in other words, the event the Lambda function expects in the parameter. &lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-event-structure.html#lambda-event-structure-request" rel="noopener noreferrer"&gt;more info&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
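A trimmed event.json for that local invoke has roughly this shape — a sketch of the CloudFront origin-request event structure; real events carry more fields, and the values below are placeholders:

```json
{
  "Records": [
    {
      "cf": {
        "config": { "eventType": "origin-request" },
        "request": {
          "method": "GET",
          "uri": "/animal/3",
          "headers": {
            "host": [{ "key": "Host", "value": "example.cloudfront.net" }]
          }
        }
      }
    }
  ]
}
```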

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post we saw how we can leverage Lambda@Edge to server-side render our Angular application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We have a simple Angular app&lt;/li&gt;
&lt;li&gt;We enabled SSR with some customisation&lt;/li&gt;
&lt;li&gt;We created the Lambda function that will be executed on every request to the origin (S3 in our case)&lt;/li&gt;
&lt;li&gt;We deployed the serverless-distribution stack&lt;/li&gt;
&lt;li&gt;We deployed the Lambda stack and associated the Lambda with the Cloudfront behaviour&lt;/li&gt;
&lt;li&gt;We tested that everything is working as expected&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope you found this article beneficial. Thank you for reading ... 🤓 &lt;/p&gt;

</description>
      <category>angular</category>
      <category>serverless</category>
      <category>aws</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Decoupling an Angular app using AWS IOT</title>
      <dc:creator>eelayoubi</dc:creator>
      <pubDate>Tue, 25 Aug 2020 20:08:21 +0000</pubDate>
      <link>https://forem.com/eelayoubi/decoupling-an-app-using-aws-iot-577a</link>
      <guid>https://forem.com/eelayoubi/decoupling-an-app-using-aws-iot-577a</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;In this blog, I will walk you through how we can use an AWS IOT thing to decouple the frontend application from the backend.&lt;/p&gt;

&lt;p&gt;Basically, the frontend talks to an API Gateway through a rest endpoint. We have two methods: one to get all the animals in the database, and another to insert an animal.&lt;/p&gt;

&lt;p&gt;This is a configuration walkthrough blog, meaning the frontend app is very minimalistic.&lt;br&gt;
The frontend consists of a simple Angular 10 application.&lt;br&gt;
To check out the full code, here is the &lt;a href="https://github.com/eelayoubi/aws-examples"&gt;GitHub repo&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Architecture
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h5eEPFxI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/eelayoubi/aws-examples/master/diagram.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h5eEPFxI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/eelayoubi/aws-examples/master/diagram.png" alt="Alt diagram"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, the backend consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an API Gateway with a rest endpoint with two methods&lt;/li&gt;
&lt;li&gt;A DynamoDB table with Streams enabled on it&lt;/li&gt;
&lt;li&gt;An AlertIOTFunction that gets triggered on the stream changes&lt;/li&gt;
&lt;li&gt;An IOT topic that the AlertIOTFunction publishes a message to&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So on a high level, we can imagine a system where a customer does an action, in this case adds an animal to the database. This insert triggers a stream that calls a Lambda, which can trigger a process for a payment, a confirmation, or something else that can take some time ⏳.&lt;/p&gt;

&lt;p&gt;In our case, this process only takes the newly added animal, and publishes it to an IOT topic. And we can see it in the client's console and act on it if needed (which is most likely to happen 🙄 )&lt;/p&gt;
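The AlertIOTFunction itself is not shown in this excerpt. Before publishing, a handler like it has to flatten the DynamoDB stream's NewImage (attribute-type) format into a plain object. A minimal sketch of that step follows — a hypothetical helper, handling only S and N types for brevity:

```javascript
// Hypothetical helper: flatten a DynamoDB stream NewImage such as
// { name: { S: 'cat' }, age: { N: '1' } } into a plain object before
// publishing it to the IoT topic. Only S and N types for brevity;
// the AWS SDK's Converter.unmarshall covers the full type set.
function unmarshallNewImage(newImage) {
  const out = {};
  for (const [key, attr] of Object.entries(newImage)) {
    if ('S' in attr) out[key] = attr.S;
    else if ('N' in attr) out[key] = Number(attr.N);
  }
  return out;
}

console.log(unmarshallNewImage({ name: { S: 'cat' }, age: { N: '1' } }));
// { name: 'cat', age: 1 }
```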
&lt;h1&gt;
  
  
  Code Examples
&lt;/h1&gt;
&lt;h4&gt;
  
  
  Frontend
&lt;/h4&gt;

&lt;p&gt;For the frontend, everything is in the aws-examples folder inside the GitHub repo. To run it you can follow the &lt;a href="https://github.com/eelayoubi/aws-examples/blob/master/README.md"&gt;README&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To subscribe to the IOT topic, we are using an AWS library called &lt;em&gt;&lt;a href="https://github.com/aws/aws-iot-device-sdk-js"&gt;aws-iot-device-sdk&lt;/a&gt;&lt;/em&gt; (we could use &lt;a href="https://github.com/mqttjs/MQTT.js"&gt;MQTT.js&lt;/a&gt; directly if we wanted).&lt;/p&gt;

&lt;p&gt;To make it work with the frontend application, I have added the following in the package.json:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"browser": {
   "fs": false,
   "tls": false,
   "path": false
},
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without this piece, running the app will result in build errors: &lt;code&gt;ERROR in ./node_modules/aws-iot-device-sdk/common/lib/tls-reader.js&lt;br&gt;
Module not found: Error: Can't resolve 'fs' in '/Users/.../aws-examples/aws-examples/node_modules/aws-iot-device-sdk/common/lib'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Plus, we have to add the following piece in polyfills.ts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(window as any)['global'] = window;
global.Buffer = global.Buffer || require('buffer').Buffer;

import * as process from 'process';
window['process'] = process;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without it, the browser will complain that &lt;code&gt;index.js:43 Uncaught ReferenceError: global is not defined&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The code is pretty straightforward. In the &lt;em&gt;app.component.ts&lt;/em&gt;&lt;br&gt;
in the constructor we are connecting to the &lt;em&gt;IOT Topic&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;ℹ️ As you know, anything that needs access to an AWS service needs credentials. This is why we are using Cognito: we use it to generate temporary credentials so the application can subscribe to the IOT topic.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// 1
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: this.AWSConfiguration.poolId
})

const clientId = 'animals-' + (Math.floor((Math.random() * 100000) + 1)); // Generating a clientID for every browser

// 2
this.mqttClient = new AWSIoTData.device({
    region: AWS.config.region,
    host: this.AWSConfiguration.host,
    clientId: clientId,
    protocol: 'wss',
    maximumReconnectTimeMs: 8000,
    debug: false,
    secretKey: '', // needs to be sent as an empty string, otherwise it will throw an error
    accessKeyId: '' // needs to be sent as an empty string, otherwise it will throw an error
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On '1', the IdentityPoolId comes from the backend, where we deploy a template with some Cognito resources; it is explained below 🤓.&lt;/p&gt;

&lt;p&gt;On '2', we are trying to connect to the IOT endpoint (explained in the &lt;a href="https://github.com/eelayoubi/aws-examples/blob/master/README.md"&gt;README&lt;/a&gt;) &lt;/p&gt;

&lt;p&gt;Moving to the ngOnInit, we can see the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;this.mqttClient.on('connect', () =&amp;gt; { // 1
    console.log('mqttClient connected')
    this.mqttClient.subscribe('animals-realtime')
});

this.mqttClient.on('error', (err) =&amp;gt; { // 2
    console.log('mqttClient error:', err);
    this.getCreds();
});

this.mqttClient.on('message', (topic, payload) =&amp;gt; { // 3
    const msg = JSON.parse(payload.toString())
    console.log('IoT msg: ', topic, msg)
});

this.http.get(`${this.api}get-animals` // 4
)
    .subscribe((data) =&amp;gt; {
        console.log('data: ', data)
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On '1', we are listening to the connect event; if the connection is established correctly, we subscribe to the IOT topic created in AWS.&lt;/p&gt;

&lt;p&gt;On '2', in case of an error we are calling the getCreds method. It is interesting to know that the first time we run the app, connecting to the IOT topic will throw an error, because the credentials are not passed to the &lt;em&gt;mqttClient&lt;/em&gt;, so in the error event we call the getCreds method to set the credentials correctly.&lt;/p&gt;

&lt;p&gt;On '3', we are listening to the messages that are published to the IOT topic; here we are just console logging them to keep things simple.&lt;/p&gt;

&lt;p&gt;On '4', we are just making a request to the API Gateway endpoint to get the animals from DynamoDB.&lt;/p&gt;

&lt;p&gt;Moving to the getCreds method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const cognitoIdentity = new AWS.CognitoIdentity(); // 1
(AWS.config.credentials as any).get((err, data) =&amp;gt; {
    if (!err) {
        console.log('retrieved identity: ' + (AWS.config.credentials as any).identityId)
        var params = {
            IdentityId: (AWS.config.credentials as any).identityId as any
        }
        // 2
        cognitoIdentity.getCredentialsForIdentity(params, (err, data) =&amp;gt; {
            if (!err) {
                // 3
                this.mqttClient.updateWebSocketCredentials(data.Credentials.AccessKeyId,
                    data.Credentials.SecretKey,
                    data.Credentials.SessionToken,
                    data.Credentials.Expiration
                )
            }
        })
    } else {
        console.log('Error retrieving identity:' + err)
    }
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On '1', we create a Cognito Identity client.&lt;br&gt;
On '2', we fetch the temporary credentials issued by Cognito.&lt;br&gt;
On '3', we update the &lt;em&gt;mqttClient&lt;/em&gt; with the retrieved credentials.&lt;/p&gt;
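&lt;p&gt;As a side note, the credentials Cognito hands back are temporary and carry an expiration. A small helper like the following (a hypothetical sketch, not part of the project's code) could decide when getCreds needs to run again, instead of waiting for the next connection error:&lt;/p&gt;

```typescript
// Hypothetical helper: track the temporary credentials passed to the
// mqttClient and decide when they must be refreshed via getCreds.
interface CachedCredentials {
    accessKeyId: string;
    secretKey: string;
    sessionToken: string;
    expiration: Date; // Cognito credentials are temporary
}

// Returns true when no credentials are cached, or when they expire within
// `marginMs` milliseconds, meaning getCreds should be called again.
function needsRefresh(creds: CachedCredentials | null, now: Date, marginMs: number = 60_000): boolean {
    if (creds === null) {
        return true;
    }
    return now.getTime() + marginMs >= creds.expiration.getTime();
}
```

&lt;p&gt;Refreshing slightly before the expiration avoids the WebSocket connection dropping mid-session.&lt;/p&gt;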

&lt;p&gt;To test this, we have a simple button; when clicked, it calls the insertAnimal method, which posts an animal to the database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;insertAnimal() {
    this.http.post(`${this.api}add-animal`, {
        "name": "cat",
        "age": 1
        // other fields ...
    }
    )
        .subscribe((data) =&amp;gt; {
            console.log('data: ', data)
        });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a couple of seconds we will see a message in the console logs: &lt;code&gt;IoT msg:  animals-realtime ...&lt;/code&gt; 🎉&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/eelayoubi/aws-examples/blob/master/demo.gif"&gt;demo&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Backend
&lt;/h4&gt;

&lt;p&gt;The backend code lives in /backend/iot.&lt;br&gt;
The resources are defined in &lt;a href="https://github.com/eelayoubi/aws-examples/blob/master/backend/iot/template.yml"&gt;template.yml&lt;/a&gt;, and we deploy the backend using &lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html"&gt;AWS SAM&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To know how to deploy it, please follow the instructions in the project's &lt;a href="https://github.com/eelayoubi/aws-examples/blob/master/README.md"&gt;README&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At a high level, the template.yml defines multiple resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AnimalsRealtime, the AWS IoT thing&lt;/li&gt;
&lt;li&gt;InsertAnimalFunction, a Lambda function invoked when the API endpoint &lt;em&gt;/add-animal&lt;/em&gt; is called&lt;/li&gt;
&lt;li&gt;GetAnimalsFunction, a Lambda function invoked when the API endpoint &lt;em&gt;/get-animals&lt;/em&gt; is called&lt;/li&gt;
&lt;li&gt;AlertIOTFunction, a Lambda function triggered by a DynamoDB Stream&lt;/li&gt;
&lt;li&gt;AnimalsAPI, an API Gateway&lt;/li&gt;
&lt;li&gt;AnimalsTable, the DynamoDB table that stores the items&lt;/li&gt;
&lt;li&gt;UserPool &amp;amp; UserIdentity, which give the frontend access to subscribe to the IoT topic&lt;/li&gt;
&lt;/ul&gt;
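&lt;p&gt;The AlertIOTFunction's code is not shown here; conceptually, it receives DynamoDB stream records, extracts the new item from each record's NewImage (which arrives in DynamoDB attribute-value format), and publishes it to the &lt;em&gt;animals-realtime&lt;/em&gt; topic. The unmarshalling step could look roughly like this (a sketch under an assumed record shape, not the project's actual code):&lt;/p&gt;

```typescript
// Hypothetical sketch: convert a DynamoDB stream NewImage (attribute-value
// format) into a plain object, as an AlertIOTFunction-style Lambda might do
// before publishing it to the IoT topic. Only a few attribute types are
// handled, to keep the example short.
interface AttributeValue { S?: string; N?: string; BOOL?: boolean; }
interface StreamImage { [key: string]: AttributeValue; }
interface PlainObject { [key: string]: string | number | boolean; }

function fromStreamImage(image: StreamImage): PlainObject {
    const result: PlainObject = {};
    for (const key of Object.keys(image)) {
        const value = image[key];
        if (value.S !== undefined) {
            result[key] = value.S;            // string attribute
        } else if (value.N !== undefined) {
            result[key] = Number(value.N);    // numbers arrive as strings
        } else if (value.BOOL !== undefined) {
            result[key] = value.BOOL;         // boolean attribute
        }
    }
    return result;
}
```

&lt;p&gt;In a real Lambda you would more likely rely on the AWS SDK's unmarshall utility rather than hand-rolling this conversion.&lt;/p&gt;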

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;To sum things up, there are many ways to decouple the frontend from asynchronous, long-running backend processes. One of these approaches is leveraging the IoT publish/subscribe model: a client triggers an action and subscribes to a &lt;em&gt;topic&lt;/em&gt;, and when the backend finishes processing the needed tasks, it publishes the results/notifications to that topic.&lt;/p&gt;

&lt;p&gt;In our case it was a simple action, returning the newly added animal to the frontend. It can be more complicated than that, such as handling payments, approvals ...&lt;/p&gt;

&lt;p&gt;I hope you have found this article useful. Please feel free to leave your remarks/questions in comments 🙏&lt;/p&gt;

</description>
      <category>aws</category>
      <category>angular</category>
      <category>serverless</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
