<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: CloudForecast.io</title>
    <description>The latest articles on Forem by CloudForecast.io (@cloudforecast).</description>
    <link>https://forem.com/cloudforecast</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F640%2F2a025d82-7721-4c3b-a044-f2ee0c6f8379.png</url>
      <title>Forem: CloudForecast.io</title>
      <link>https://forem.com/cloudforecast</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/cloudforecast"/>
    <language>en</language>
    <item>
      <title>Basics of AWS Tags &amp; Terraform with S3 - Part 1</title>
      <dc:creator>Tony Chan</dc:creator>
      <pubDate>Fri, 11 Mar 2022 14:57:20 +0000</pubDate>
      <link>https://forem.com/cloudforecast/basics-of-aws-tags-terraform-with-s3-part-1-577i</link>
      <guid>https://forem.com/cloudforecast/basics-of-aws-tags-terraform-with-s3-part-1-577i</guid>
      <description>&lt;p&gt;Managing &lt;a href="https://aws.amazon.com/"&gt;AWS resources&lt;/a&gt; can be an extremely arduous process. AWS doesn't have logical resource groups and other niceties that Azure and GCP have. This nonwithstanding, AWS is still far and away the most popular cloud provider in the world. Therefore, it's still very important to find ways to organize your resources effectively.&lt;/p&gt;

&lt;p&gt;One of the most important ways to organize and filter your resources is by using &lt;a href="https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html"&gt;AWS tags.&lt;/a&gt; While tagging can be a tedious process, Terraform can help ease the pain by providing &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/guides/resource-tagging"&gt;several ways to tag&lt;/a&gt; your AWS resources. In this blog and accompanying video series, we're going to take a look at various methods and strategies to tag your resources and keep them organized efficiently.&lt;/p&gt;

&lt;p&gt;These posts are written so that you can follow along. You will just need an environment that has access to the AWS API in your region. I typically use &lt;a href="https://aws.amazon.com/cloud9/"&gt;AWS Cloud9&lt;/a&gt; for this purpose, but any environment with access will do.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/-U6k0eQSVfc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Github repo:&lt;/strong&gt; &lt;a href="https://github.com/CloudForecast/aws-tagging-with-terraform"&gt;https://github.com/CloudForecast/aws-tagging-with-terraform&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tag Blocks
&lt;/h2&gt;

&lt;p&gt;The first method we can use to tag resources is by using a basic tag block. Let's create a &lt;code&gt;main.tf&lt;/code&gt; file and configure an S3 bucket to take a look at this.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure Terraform to use the AWS provider
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Configure the AWS Provider
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-west-2"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Create a random ID to prevent bucket name clashes
&lt;/h3&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "random_id" "s3_id" {
  byte_length = 2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We use the &lt;code&gt;random_id&lt;/code&gt; &lt;a href="https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/id"&gt;resource&lt;/a&gt; to add the entropy our bucket names need: S3 bucket names are globally unique, so this keeps us from colliding with an existing bucket.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create an S3 Bucket w/ Terraform and Tag It
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "devops_bucket" {
  bucket = "devops-bucket-${random_id.s3_id.dec}"

  tags = {
    Env     = "dev"
    Service = "s3"
    Team    = "devops"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, let's run &lt;code&gt;terraform apply -auto-approve&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Once the apply is finished, let's run &lt;code&gt;terraform console&lt;/code&gt; and then run &lt;code&gt;aws_s3_bucket.devops_bucket.tags&lt;/code&gt; to verify the tags:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; aws_s3_bucket.devops_bucket.tags
tomap({
  "Env" = "dev"
  "Service" = "s3"
  "Team" = "devops"
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;To exit the console, run &lt;code&gt;exit&lt;/code&gt; or press &lt;code&gt;Ctrl+C&lt;/code&gt;. You can also run &lt;code&gt;terraform state show aws_s3_bucket.devops_bucket&lt;/code&gt;, &lt;code&gt;terraform show&lt;/code&gt;, or simply scroll back through the apply output to see the tags.&lt;/p&gt;

&lt;p&gt;As you can see, AWS tags can be specified on AWS resources with a &lt;code&gt;tags&lt;/code&gt; block within a resource. This is a simple way to ensure each S3 bucket has tags, but it is not efficient: tagging every resource this way is tedious and the complete opposite of the DRY (Don't Repeat Yourself) principle. Fortunately, much of the repetition is avoidable.&lt;/p&gt;
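
&lt;p&gt;Before reaching for provider-level defaults, note that you can also cut the repetition with a &lt;code&gt;locals&lt;/code&gt; block and Terraform's built-in &lt;code&gt;merge()&lt;/code&gt; function. This is a sketch, not part of the example above, and the shared tag values are illustrative:&lt;/p&gt;

```hcl
# Define the shared tags once...
locals {
  common_tags = {
    Env     = "dev"
    Service = "s3"
  }
}

# ...and merge per-resource tags on top; later maps win on key conflicts
resource "aws_s3_bucket" "devops_bucket" {
  bucket = "devops-bucket-${random_id.s3_id.dec}"

  tags = merge(local.common_tags, {
    Team = "devops"
  })
}
```

&lt;p&gt;&lt;code&gt;merge()&lt;/code&gt; keeps tagging DRY while still allowing per-resource overrides, which is handy when a provider-wide default is too broad.&lt;/p&gt;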

&lt;h2&gt;
  
  
  Default AWS Tags &amp;amp; Terraform
&lt;/h2&gt;

&lt;p&gt;To specify deployment-wide tags, you can add a &lt;code&gt;default_tags&lt;/code&gt; block within the provider block. The tags you define there are applied to every resource the provider manages and merged with each resource's own &lt;code&gt;tags&lt;/code&gt;; if a resource defines the same tag key, the resource-level value takes precedence. Let's take a look:&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Terraform to Create a Second S3 bucket
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "finance_bucket" {
  bucket = "finance-bucket-${random_id.s3_id.dec}"

  tags = {
    Env = "dev"
    Service = "s3"
    Team = "finance"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once you have added the second bucket definition and saved the file, apply the configuration with &lt;code&gt;terraform apply -auto-approve&lt;/code&gt;.&lt;br&gt;
When the apply completes, run &lt;code&gt;terraform console&lt;/code&gt; and inspect both buckets by their resource addresses:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; aws_s3_bucket.devops_bucket.tags
tomap({
  "Env" = "dev"
  "Service" = "s3"
  "Team" = "devops"
})
&amp;gt; aws_s3_bucket.finance_bucket.tags
tomap({
  "Env" = "dev"
  "Service" = "s3"
  "Team" = "finance"
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If we were to deploy tens, hundreds, or even thousands of resources, repeating these tags would not be very efficient. Let's add default tags to fix that:&lt;/p&gt;

&lt;h3&gt;
  
  
  Add Default AWS Tags w/ Terraform
&lt;/h3&gt;

&lt;p&gt;Within the &lt;code&gt;provider&lt;/code&gt; block of our configuration, add a default tag to assign both resources the &lt;code&gt;Env&lt;/code&gt; tag:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-west-2"
    default_tags {
      tags = {
          Env = "dev"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Remove Env tags w/ Terraform
&lt;/h3&gt;

&lt;p&gt;Now that we've added the default tags, let's remove the &lt;code&gt;Env&lt;/code&gt; tag from the AWS S3 buckets:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "devops_bucket" {
    bucket = "devops-bucket-${random_id.s3_id.dec}"

    tags = {
        Service = "s3"
        Team = "devops"
    }
}

resource "aws_s3_bucket" "finance_bucket" {
    bucket = "finance-bucket-${random_id.s3_id.dec}"

    tags = {
        Service = "s3"
        Team = "finance"
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run &lt;code&gt;terraform apply -auto-approve&lt;/code&gt; again and, once it's finished deploying,&lt;br&gt;
run &lt;code&gt;terraform console&lt;/code&gt;. Within the console, type the resource address of each S3 bucket and view the output:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; aws_s3_bucket.devops_bucket.tags
tomap({
  "Service" = "s3"
  "Team" = "devops"
})
&amp;gt; aws_s3_bucket.finance_bucket.tags
tomap({
  "Service" = "s3"
  "Team" = "finance"
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Do you notice something missing? Default tags are not displayed within the &lt;code&gt;tags&lt;/code&gt; attribute; they appear in the &lt;code&gt;tags_all&lt;/code&gt; attribute instead. Re-run the previous commands with &lt;code&gt;tags_all&lt;/code&gt; in place of &lt;code&gt;tags&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; aws_s3_bucket.devops_bucket.tags_all
tomap({
  "Env" = "dev"
  "Service" = "s3"
  "Team" = "devops"
})
&amp;gt; aws_s3_bucket.finance_bucket.tags_all
tomap({
  "Env" = "dev"
  "Service" = "s3"
  "Team" = "finance"
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;There they are! Keep this in mind: if you are querying state to act on tags, use the &lt;code&gt;tags_all&lt;/code&gt; attribute rather than &lt;code&gt;tags&lt;/code&gt;, or you will miss the provider-level defaults.&lt;/p&gt;
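
&lt;p&gt;Conceptually, &lt;code&gt;tags_all&lt;/code&gt; is just the union of the provider's &lt;code&gt;default_tags&lt;/code&gt; and the resource's own &lt;code&gt;tags&lt;/code&gt;, with resource-level keys winning on conflict. A rough illustration of the merge semantics in Python:&lt;/p&gt;

```python
# Provider-level defaults and resource-level tags from the example above
default_tags = {"Env": "dev"}
resource_tags = {"Service": "s3", "Team": "devops"}

# tags_all: defaults first, then resource tags layered on top
tags_all = {**default_tags, **resource_tags}
print(tags_all)  # {'Env': 'dev', 'Service': 's3', 'Team': 'devops'}

# If a resource re-declares a default key, the resource value wins
override = {**default_tags, **{"Env": "prod"}}
print(override["Env"])  # prod
```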

&lt;h2&gt;
  
  
  Tag Precedence
&lt;/h2&gt;

&lt;p&gt;Now, for one last quick test to see the tag precedence in action, let's add the &lt;code&gt;Env&lt;/code&gt; tag back to our finance bucket, but define it as &lt;code&gt;prod&lt;/code&gt; instead of &lt;code&gt;dev&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "finance_bucket" {
  bucket = "finance-bucket-${random_id.s3_id.dec}"

  tags = {
    Env = "prod"
    Service = "s3"
    Team    = "finance"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Run &lt;code&gt;terraform apply -auto-approve&lt;/code&gt; again:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  # aws_s3_bucket.finance_bucket will be updated in-place
  ~ resource "aws_s3_bucket" "finance_bucket" {
        id                                   = "finance-bucket-52680"
      ~ tags                                 = {
          + "Env"     = "prod"
            # (2 unchanged elements hidden)
        }
      ~ tags_all                             = {
          ~ "Env"     = "dev" -&amp;gt; "prod"
            # (2 unchanged elements hidden)
        }
        # (17 unchanged attributes hidden)
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice the changes made, then run &lt;code&gt;terraform console&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; aws_s3_bucket.finance_bucket.tags_all
tomap({
  "Env" = "prod"
  "Service" = "s3"
  "Team" = "finance"
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Notice the &lt;code&gt;Env&lt;/code&gt; tag is now &lt;code&gt;prod&lt;/code&gt;, our updated value: the resource-level tag overrides the provider's default.&lt;/p&gt;

&lt;h3&gt;
  
  
  Destroy Resources
&lt;/h3&gt;

&lt;p&gt;Now, if you're ready, go ahead and destroy your resources!&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform destroy -auto-approve&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Alright, so now that we have an idea of how to assign custom tags and default tags, join me on the next part in this series where we dive deeper!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.cloudforecast.io/blog/terraform-s3-bucket-aws-tags/"&gt;Original Post&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS DynamoDB Pricing and Cost Optimization Guide</title>
      <dc:creator>Tony Chan</dc:creator>
      <pubDate>Tue, 11 Jan 2022 17:20:15 +0000</pubDate>
      <link>https://forem.com/cloudforecast/aws-dynamodb-pricing-and-cost-optimization-guide-3ljm</link>
      <guid>https://forem.com/cloudforecast/aws-dynamodb-pricing-and-cost-optimization-guide-3ljm</guid>
      <description>&lt;p&gt;Amazon Web Services &lt;a href="https://aws.amazon.com/dynamodb/"&gt;DynamoDB&lt;/a&gt;, a NoSQL database service, is excellent for applications that require low-latency data access, such as web, mobile, IoT, and gaming apps. It improves the durability of an application by handling large amounts of data quickly and efficiently. Among its features are built-in caching and security and backup support for web applications. Because AWS DynamoDB supports ACID transactions it can be scaled at the enterprise level, allowing for the development of business-critical applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--txYt9fUk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/HEfwEEL.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--txYt9fUk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/HEfwEEL.png" alt="DynamoDB home" width="880" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS DynamoDB does offer a free tier, but beyond that, its paid usage is charged based on six factors:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The amount of data storage&lt;/li&gt;
&lt;li&gt;The amount of data you read and write&lt;/li&gt;
&lt;li&gt;The amount of data transfer&lt;/li&gt;
&lt;li&gt;Backup and restore operations you performed&lt;/li&gt;
&lt;li&gt;DynamoDB streams&lt;/li&gt;
&lt;li&gt;Amount of write request units replicated when using global tables&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This article will thoroughly explain DynamoDB’s pricing structure and how to reduce your DynamoDB expenses by taking the above factors into account, so you can get the best performance at the lowest cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB Pricing
&lt;/h2&gt;

&lt;p&gt;DynamoDB can be extremely expensive to use. There are two pricing structures to choose from: provisioned capacity and on-demand capacity.&lt;/p&gt;

&lt;h3&gt;
  
  
  DynamoDB Provisioned Capacity
&lt;/h3&gt;

&lt;p&gt;In this Amazon DynamoDB pricing plan, you’re billed hourly for provisioned capacity units (read and write capacity units), whether or not you consume them. You can control costs by specifying the maximum capacity each database table being managed may use. Provisioned capacity can adapt dynamically to increases in traffic, but it will not react to sudden changes in data traffic unless autoscaling is enabled.&lt;/p&gt;

&lt;h3&gt;
  
  
  DynamoDB On-demand Pricing
&lt;/h3&gt;

&lt;p&gt;This plan is billed per request units (or read and write request units). You’re only charged for the requests you make, making this a truly serverless choice. This choice can become expensive when handling large production workloads, though. The on-demand capacity method is perfect for autoscaling if you’re not sure how much traffic to expect.&lt;/p&gt;

&lt;p&gt;Knowing which capacity best suits your requirements is the first step in optimizing your costs with DynamoDB. Here are some factors to consider before making your choice.&lt;/p&gt;

&lt;p&gt;You should use provisioned capacity when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have an idea of the maximum workload your application will have&lt;/li&gt;
&lt;li&gt;Your application’s traffic is consistent and does not require scaling (unless you enable the autoscaling feature, which costs more)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should use on-demand capacity when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You’re not sure about the workload your application will have&lt;/li&gt;
&lt;li&gt;You don’t know how consistent your application’s data traffic will be&lt;/li&gt;
&lt;li&gt;You only want to pay for what you use&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can learn more about how costs are calculated for both pricing structures &lt;a href="https://aws.amazon.com/dynamodb/pricing/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  DynamoDB Pricing Calculator
&lt;/h3&gt;

&lt;p&gt;There are a few options available to help estimate and calculate what you might pay for AWS DynamoDB. The best we've found is AWS's own &lt;a href="https://calculator.aws/#/createCalculator/DynamoDB"&gt;Pricing Calculator for DynamoDB&lt;/a&gt;. With this calculator, you can easily pick and choose DynamoDB features, provisioned capacity, and read/write settings, then get a clear estimate:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--8xzqpvRv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.cloudforecast.io/blog/assets/media/dynamodb-aws-pricing-calculator.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--8xzqpvRv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.cloudforecast.io/blog/assets/media/dynamodb-aws-pricing-calculator.png" alt="DynamoDB AWS Pricing Calculator" width="880" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB’s Read and Write Capacity
&lt;/h2&gt;

&lt;p&gt;The read capacity of a DynamoDB table indicates how much you can read from it. Read capacity units (RCUs) are used to measure the read capacity of a table. For an item up to 4 KB in size, one RCU equals one strongly &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html"&gt;consistent read&lt;/a&gt; per second, or two eventually consistent reads per second.&lt;/p&gt;

&lt;p&gt;The write capacity of a DynamoDB table indicates how much you can write to it. Write capacity units (WCUs) denote one write per second for an item up to 1 KB in size.&lt;/p&gt;
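
&lt;p&gt;Those unit definitions make capacity planning a small arithmetic exercise. A minimal sketch (function names are ours; it ignores transactional operations, which cost double):&lt;/p&gt;

```python
import math

def read_capacity_units(item_size_kb, reads_per_second, eventually_consistent=False):
    """RCUs needed: each RCU covers one strongly consistent read per second
    of an item up to 4 KB; eventually consistent reads cost half."""
    units_per_read = math.ceil(item_size_kb / 4)
    total = units_per_read * reads_per_second
    return math.ceil(total / 2) if eventually_consistent else total

def write_capacity_units(item_size_kb, writes_per_second):
    """WCUs needed: each WCU covers one write per second of an item up to 1 KB."""
    return math.ceil(item_size_kb / 1) * writes_per_second

# 6 KB items, 100 strongly consistent reads/s -> 2 RCUs per read -> 200 RCUs
print(read_capacity_units(6, 100))        # 200
print(read_capacity_units(6, 100, True))  # 100 (eventually consistent)
print(write_capacity_units(2.5, 50))      # 150 (3 WCUs per 2.5 KB write)
```

&lt;p&gt;Note the rounding: a 6 KB item consumes two full RCUs per strongly consistent read, not 1.5, so item size can matter as much as request rate.&lt;/p&gt;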

&lt;h2&gt;
  
  
  DynamoDB Autoscaling
&lt;/h2&gt;

&lt;p&gt;Database behaviors can be tricky to measure, which makes scaling problematic. Underscaling your database can lead to catastrophe, while overscaling can lead to a waste of resources. The DynamoDB autoscaling functionality configures suitable read and write throughput to meet the request rate of your application. This means that when your workload changes, DynamoDB automatically adjusts and dynamically redistributes your database partitions to better fit changes in read throughput, write throughput, and storage.&lt;/p&gt;

&lt;p&gt;Autoscaling is the default capacity setting when you create a DynamoDB table, but you can activate it on any table. In DynamoDB, you define autoscaling by specifying the minimum and maximum levels of read and write capacity, as well as the desired usage percentage. When the amount of consumed reads or writes exceeds the desired usage percentage for two minutes in a row, the upper threshold alarm is activated. The lower threshold alarm is triggered when traffic falls below the desired utilization, minus twenty percent, for fifteen minutes in a row. When either alarm fires, the corresponding scaling action begins.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring DynamoDB Resources
&lt;/h2&gt;

&lt;p&gt;Monitoring AWS resources such as DynamoDB for latency, traffic, errors, and saturation is called resource monitoring. It makes scaling your DynamoDB database easier since you can get the metrics you need, like network throughput, CPU utilization, or read/write operations. For example, after monitoring your database, you discover that the database experienced a high surge in traffic. This suggests that a large amount of data is being either read from or written to your database. You may decide to increase the read capacity of your database to accommodate more read requests or increase the write capacity to accommodate more write requests.&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB Cost Optimization
&lt;/h2&gt;

&lt;p&gt;Now that you know some of the factors behind how you are billed when using AWS DynamoDB, here are some suggestions for making these factors work in your favor and ensuring DynamoDB is cost-effective.&lt;/p&gt;

&lt;h3&gt;
  
  
  Picking the Right Capacity
&lt;/h3&gt;

&lt;p&gt;You may already know which capacity structure you’d like to adopt, but keep these points in mind as you make your final choice.&lt;/p&gt;

&lt;h3&gt;
  
  
  DynamoDB On-Demand Capacity Cost Optimization
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://hackernoon.com/understanding-the-scaling-behaviour-of-dynamodb-ondemand-tables-80d80734798f"&gt;Yan Cui's calculations&lt;/a&gt; suggest that on-demand tables cost roughly five to six times more per request than fully utilized provisioned tables. If your workload maintains consistent usage with no unexpected spikes, but you’re unsure about future usage, consider using provisioned mode with autoscaling enabled.&lt;/p&gt;
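
&lt;p&gt;You can make that comparison concrete with a quick break-even calculation. The on-demand rate below is an indicative us-east-1 price from around the time of writing, not a quote; check the AWS pricing page for current numbers:&lt;/p&gt;

```python
# Indicative prices (us-east-1, early 2022) -- check the AWS pricing page
on_demand_per_million_writes = 1.25   # USD per 1M on-demand write request units
provisioned_wcu_per_hour = 0.00065    # USD per provisioned WCU-hour

# One WCU fully utilized performs 3600 writes per hour
provisioned_per_million_at_full_use = provisioned_wcu_per_hour / 3600 * 1_000_000
print(round(provisioned_per_million_at_full_use, 3))  # ~0.181

# Provisioned cost per request scales with 1/utilization, so break-even is:
break_even_utilization = provisioned_per_million_at_full_use / on_demand_per_million_writes
print(f"{break_even_utilization:.0%}")  # ~14%
```

&lt;p&gt;In other words, once a table’s provisioned capacity is busy more than roughly 14% of the time, provisioned mode wins on per-request cost, which roughly lines up with the multiple quoted above.&lt;/p&gt;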

&lt;h3&gt;
  
  
  DynamoDB Provisioned Capacity Cost Optimization
&lt;/h3&gt;

&lt;p&gt;If you use provisioned capacity and your capacity exceeds 100 units, consider purchasing &lt;a href="https://aws.amazon.com/blogs/aws/dynamodb-price-reduction-and-new-reserved-capacity-model/"&gt;reserved capacity&lt;/a&gt;. Compared to standard provisioned throughput, reserved capacity delivers a seventy-six percent discount over a three-year term and a fifty-three percent discount over a one-year term.&lt;/p&gt;

&lt;h3&gt;
  
  
  Finding Unused DynamoDB Tables
&lt;/h3&gt;

&lt;p&gt;Unused DynamoDB tables are a waste of resources and unnecessarily raise your costs. You have two options for handling this. You can use the on-demand capacity mode to ensure you only pay for the tables you actually read from and write to. Alternatively, detect the unused tables and eliminate them by reviewing the read/write operations on each table: if a table has had no read/write activity in the last ninety days, treat it as unused.&lt;/p&gt;
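
&lt;p&gt;In practice you’d pull &lt;code&gt;ConsumedReadCapacityUnits&lt;/code&gt; and &lt;code&gt;ConsumedWriteCapacityUnits&lt;/code&gt; datapoints from CloudWatch for each table; the decision rule itself is trivial. A sketch with hypothetical datapoints:&lt;/p&gt;

```python
def is_unused(read_datapoints, write_datapoints):
    """A table is 'unused' if every consumed read/write datapoint
    over the review window (e.g. 90 days) is zero."""
    return sum(read_datapoints) == 0 and sum(write_datapoints) == 0

# Hypothetical daily consumption totals pulled from CloudWatch
print(is_unused([0, 0, 0], [0, 0, 0]))   # True  -> candidate for deletion
print(is_unused([0, 12, 0], [0, 0, 0]))  # False -> still read occasionally
```

&lt;p&gt;Before deleting anything, take a final backup; an on-demand backup is cheap insurance against a table that is merely dormant.&lt;/p&gt;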

&lt;h3&gt;
  
  
  Reduce DynamoDB Backup Needs
&lt;/h3&gt;

&lt;p&gt;Building your own backup pipeline for Amazon DynamoDB (for example, a scheduled job that copies and drops table data) can considerably raise your costs: it constantly consumes WCUs to write backup copies of your tables and RCUs to read them back when restoring, and the backup tables can grow without bounds. Instead, use DynamoDB’s native backup feature, which doesn’t consume your tables’ provisioned capacity and gives you a predictable backup size to plan around.&lt;/p&gt;

&lt;h3&gt;
  
  
  Using AWS Cheaper Regions
&lt;/h3&gt;

&lt;p&gt;Some &lt;a href="https://www.cloudsavvyit.com/2368/which-aws-region-should-you-choose/"&gt;AWS regions&lt;/a&gt; are more expensive than others. If you’re not concerned about your data location, choose the cheapest region you can get.&lt;/p&gt;

&lt;p&gt;The cheapest regions are us-east-1, us-east-2, and us-west-2, costing $0.25 per GB/month for storage, $0.00065 per WCU/hour, and $0.00013 per RCU/hour.&lt;/p&gt;
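
&lt;p&gt;With those rates, a back-of-the-envelope monthly estimate for a provisioned-mode table is straightforward (this ignores the free tier and the other billing factors listed earlier):&lt;/p&gt;

```python
# Rates for the cheapest regions quoted above
storage_per_gb_month = 0.25
wcu_per_hour = 0.00065
rcu_per_hour = 0.00013
hours_per_month = 730  # approximate

def monthly_cost(storage_gb, wcus, rcus):
    return (storage_gb * storage_per_gb_month
            + wcus * wcu_per_hour * hours_per_month
            + rcus * rcu_per_hour * hours_per_month)

# e.g. 25 GB stored, 100 provisioned WCUs, 200 provisioned RCUs
print(round(monthly_cost(25, 100, 200), 2))  # 72.68
```

&lt;p&gt;In this example the throughput charges dwarf the storage charge, which is typical for read/write-heavy tables.&lt;/p&gt;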

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Amazon DynamoDB can be an important tool for your software projects, but it can also be an expensive tool if you’re not careful. Following these tips to optimize your costs can help you keep your budget down and free you up to focus on other aspects of your business.&lt;/p&gt;

&lt;p&gt;This article was originally posted on the CloudForecast Blog: &lt;a href="https://www.cloudforecast.io/blog/dynamodb-pricing/"&gt;https://www.cloudforecast.io/blog/dynamodb-pricing/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>AWS EMR Cost Optimization Guide</title>
      <dc:creator>Tony Chan</dc:creator>
      <pubDate>Tue, 14 Dec 2021 17:19:06 +0000</pubDate>
      <link>https://forem.com/cloudforecast/aws-emr-cost-optimization-guide-3eba</link>
      <guid>https://forem.com/cloudforecast/aws-emr-cost-optimization-guide-3eba</guid>
      <description>&lt;p&gt;AWS EMR (&lt;a href="https://aws.amazon.com/emr/"&gt;Elastic MapReduce&lt;/a&gt;) is Amazon’s managed big data platform which allows clients who need to process gigabytes or petabytes of data to create EC2 instances running the &lt;a href="https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html"&gt;Hadoop File System (HDFS)&lt;/a&gt;. AWS generally bills storage and compute together inside instances, but AWS EMR allows you to scale them independently, so you can have huge amounts of data without necessarily requiring large amounts of compute. AWS EMR clusters integrate with a wide variety of &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-file-systems.html"&gt;storage options&lt;/a&gt;. The most common and cost-effective are Simple Storage Service (S3) buckets and the HDFS. You can also integrate with dozens of other AWS services, including RDS, S3 Glacier, Redshift, and Data Pipeline.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--g6K8iMjp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.cloudforecast.io/blog/assets/media/v2zzfi1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--g6K8iMjp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.cloudforecast.io/blog/assets/media/v2zzfi1.png" alt="EMR Data Store Options" width="512" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS EMR is powerful, but understanding pricing can be a challenge. Because the service has several unique features and extensively utilizes other AWS services, it’s easy to lose track of all the elements factored into your monthly spend. In this article, I’ll share an overview of AWS EMR’s pricing model, some tips for controlling your AWS EMR costs, and resources for monitoring your EMR spend. While it’s hard to generalize advice for EMR because each data warehouse is different, this article should give you a starting point for understanding how your use case will be priced by Amazon.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS EMR Cluster Pricing
&lt;/h2&gt;

&lt;p&gt;Most of the costs of running an AWS EMR cluster come from the utilization of other AWS resources, like EC2 instances and S3 storage. To run a job, an AWS EMR cluster must have at least one primary node and one core node. The EC2 instances in the cluster are &lt;a href="https://aws.amazon.com/emr/pricing/"&gt;charged by the minute&lt;/a&gt; based on instance size. Every instance in a cluster is created with an attached, ephemeral EBS volume with &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-custom-ami-boot-volume-size.html"&gt;10 GiB of provisioned space&lt;/a&gt; (instances without attached instance storage are &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-storage.html"&gt;given more&lt;/a&gt;) to hold HDFS data and any temporary data like caching or buffers. These volumes are &lt;a href="https://aws.amazon.com/ebs/pricing/"&gt;charged per GiB provisioned&lt;/a&gt; and prorated over the time the instance runs.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;data to process&lt;/strong&gt; and &lt;strong&gt;data processing application&lt;/strong&gt; are stored in S3 buckets, where you're charged per gibibyte (GiB) per month. A job is &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/AddingStepstoaJobFlow.html"&gt;submitted&lt;/a&gt; to the EMR cluster via &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-work-with-steps.html"&gt;steps&lt;/a&gt; or &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/interactive-jobs.html"&gt;a Hadoop job&lt;/a&gt;. You can also &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/making_api_requests.html"&gt;automate cluster launch&lt;/a&gt; using a service like &lt;a href="https://aws.amazon.com/blogs/big-data/automating-emr-workloads-using-aws-step-functions/"&gt;Step Functions&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-manage-recurring.html"&gt;Data Pipeline&lt;/a&gt;, or &lt;a href="https://docs.aws.amazon.com/step-functions/latest/dg/callback-task-sample-sqs.html"&gt;Lambda Functions&lt;/a&gt;. To start the job, the &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-fs.html"&gt;EMR File System (EMRFS)&lt;/a&gt; retrieves data from S3 (adding GET request fees to the S3 bucket). Any buckets in a different region will also be charged per GiB for data transferred to the cluster.&lt;/p&gt;

&lt;p&gt;You can set the minimum and maximum number of EC2 instances your EMR cluster uses to help control your costs vs. availability. For example, this cluster uses &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-managed-scaling.html"&gt;managed scaling&lt;/a&gt;, and has the maximum number of non-primary &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/managed-scaling-allocation-strategy.html"&gt;nodes&lt;/a&gt; set to “3”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NKaoJh49--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/YLG7E5D.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NKaoJh49--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/YLG7E5D.png" alt="Cluster Scaling Options" width="880" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the job starts, EMR will monitor utilization and add nodes if needed. EMR managed scaling adds nodes in a specific order: &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-instance-purchasing-options.html"&gt;On-Demand&lt;/a&gt; core nodes, On-Demand task nodes, &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-instances-guidelines.html#emr-plan-spot-instances"&gt;Spot instance&lt;/a&gt; core nodes, and Spot instance task nodes. These additional instances also have costs and attached EBS volumes.&lt;/p&gt;

&lt;p&gt;Other data, like configurations for auto-scaling instances and log data can also be stored in S3 buckets. Once the job completes (or finishes a step), intermediate data can be stored in HDFS for more processing or written to an S3 bucket (with a PUT request fee, storage costs, and any cross regions data-transfer charges). At the end of the job, EMR will terminate idle instances (and attached EBS volumes) down to the minimum, while remaining instances wait for the next workload.&lt;/p&gt;

&lt;p&gt;In short, if your EMR cluster is sitting idle, waiting for data, it should scale down appropriately, but you'll still pay storage costs during downtime. While capping the number of EC2 instances EMR uses helps you control your costs, your data warehouse might struggle if it's hit with a sudden spike.&lt;/p&gt;
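
&lt;p&gt;To see how those pieces combine, here is a rough cost model for a single cluster run. Every rate below is a placeholder, not a quote; look up your instance type, the EMR per-instance fee, and your EBS volume pricing before relying on the numbers:&lt;/p&gt;

```python
# All rates below are placeholders -- substitute your region's actual pricing
ec2_per_hour = 0.192     # hypothetical On-Demand rate per instance
emr_per_hour = 0.048     # hypothetical EMR per-instance fee
ebs_per_gb_month = 0.10  # hypothetical EBS rate, prorated over the run

def cluster_run_cost(nodes, hours, ebs_gib_per_node=10):
    instance_cost = nodes * hours * (ec2_per_hour + emr_per_hour)
    # EBS is charged per GiB provisioned, prorated over the instance lifetime
    ebs_cost = nodes * ebs_gib_per_node * ebs_per_gb_month * hours / 730
    return instance_cost + ebs_cost

# 1 primary + 2 core nodes running a 6-hour job
print(round(cluster_run_cost(3, 6), 2))  # 4.34
```

&lt;p&gt;Note that S3 storage, request fees, and any cross-region transfer described above would sit on top of this per-run figure.&lt;/p&gt;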

&lt;h2&gt;
  
  
  AWS EMR Cost Optimization Tips
&lt;/h2&gt;

&lt;p&gt;Now that we've laid the groundwork for how pricing in EMR works, let's look at some of the levers you can pull to decrease your EMR costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prepare Your Data
&lt;/h3&gt;

&lt;p&gt;When you’re working at petabyte scale, disorganized data can dramatically increase costs by increasing the time it takes to find the data you intend to process. Data partitioning, compression, and formatting are good ways to improve the efficiency of your EMR cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data partitioning&lt;/strong&gt; is vital to ensure you’re not wading through an entire data lake to find the few lines of data you want to process, racking up bandwidth and compute costs in the process. You can partition data by carefully planning to &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html"&gt;use prefixes&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/dev/selecting-content-from-objects.html"&gt;S3 Select&lt;/a&gt;. Or use a Hadoop tool like &lt;a href="https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-metastore-external-hive.html"&gt;Hive&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-presto-glue.html"&gt;Presto&lt;/a&gt;, or &lt;a href="https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-spark-glue.html"&gt;Spark&lt;/a&gt; in tandem with a metadata storage service like &lt;a href="https://aws.amazon.com/glue/pricing/"&gt;Glue&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Partitioning by date is common and suits many tasks, but you can partition by any key. A daily partition could prevent an EMR cluster from requesting and scanning a week’s worth of data. Much like database indexing, some partitioning is extremely useful, but over-partitioning can hurt performance by forcing the primary node to track additional metadata and distribute many small files. When reading data, aim to keep partitions larger than 128 MB (the default HDFS &lt;a href="https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hdfs-config.html"&gt;block size&lt;/a&gt;) to avoid the performance hit associated with loading many small files.&lt;/p&gt;
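&lt;p&gt;As a sketch of what date-based partitioning looks like in practice, Hive-style partition keys are encoded directly into S3 prefixes (the bucket and table names below are hypothetical), so a query filtered to one day only touches that day's prefix:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3://example-bucket/events/year=2022/month=03/day=10/part-00000.snappy.parquet
s3://example-bucket/events/year=2022/month=03/day=11/part-00000.snappy.parquet
s3://example-bucket/events/year=2022/month=03/day=11/part-00001.snappy.parquet
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;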

&lt;p&gt;&lt;strong&gt;Data compression&lt;/strong&gt; has the obvious benefit of reducing storage space. It also saves on bandwidth for data passed in and out of your cluster. Hadoop can handle reading gzip, bzip2, LZO, and snappy compressed files without any &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/HowtoProcessGzippedFiles.html"&gt;additional configuration&lt;/a&gt;. Gzip is not splittable after compression, so it’s not as appealing as other compression formats. You can also configure EMR to &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-output-compression.html"&gt;compress the output&lt;/a&gt; of your job, saving bandwidth and storage in both directions.&lt;/p&gt;
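&lt;p&gt;You can get a feel for the size savings locally before committing to a format. This small sketch (using throwaway sample data, and assuming &lt;code&gt;gzip&lt;/code&gt; and &lt;code&gt;bzip2&lt;/code&gt; are installed) compresses the same file both ways and compares byte counts:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Generate a repetitive sample file, keeping a copy on disk
seq 1 100000 | tee sample.csv | wc -l

# Compress with gzip and bzip2, keeping the original (-k)
gzip -k sample.csv
bzip2 -k sample.csv

# Compare sizes in bytes; both compressed files are far smaller than the original
wc -c sample.csv sample.csv.gz sample.csv.bz2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Real-world ratios depend heavily on how repetitive your data is, so benchmark with a representative sample.&lt;/p&gt;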

&lt;p&gt;&lt;strong&gt;Data formatting&lt;/strong&gt; is another place to make gains. When dealing with huge amounts of data, finding the data you need can take up a significant amount of your compute time. &lt;a href="https://parquet.apache.org/"&gt;Apache Parquet&lt;/a&gt; and &lt;a href="https://orc.apache.org/"&gt;Apache ORC&lt;/a&gt; are &lt;a href="https://aws.amazon.com/nosql/columnar/"&gt;columnar&lt;/a&gt; data formats optimized for analytics that pre-aggregate metadata about columns. If your EMR jobs run column-intensive aggregations like &lt;code&gt;sum&lt;/code&gt;, &lt;code&gt;max&lt;/code&gt;, or &lt;code&gt;count&lt;/code&gt;, you can see significant speed improvements by reformatting data &lt;a href="https://www.cloudforecast.io/blog/Athena-to-transform-CSV-to-Parquet/"&gt;like CSVs&lt;/a&gt; into one of these columnar formats.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use the Right Instance Type
&lt;/h3&gt;

&lt;p&gt;Once your data is stored efficiently, you can turn your attention to optimizing how that data is processed. The EC2 instances EMR uses to process data and run the cluster are charged per second. The cost of EC2 instances scales with size, so doubling the size of an instance doubles the hourly cost, but the cost of the EMR overhead for a cluster sometimes remains fixed. For many instance families, the hourly EMR fee for an 8xlarge is the same as the hourly EMR fee for a 24xlarge machine. This means larger machines running many tasks are more cost-efficient: they decrease the percentage of your budget spent on EMR overhead.&lt;/p&gt;
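&lt;p&gt;To make the overhead argument concrete, here's a quick back-of-the-envelope calculation. The hourly rates below are hypothetical placeholders, not real AWS prices; the point is only that a fixed EMR fee shrinks as a share of a bigger instance's cost:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical hourly rates (illustrative only; check current AWS pricing)
awk 'BEGIN {
    emr_fee  = 0.27   # flat hourly EMR fee, same for both sizes
    ec2_8xl  = 1.50   # hourly EC2 cost, 8xlarge
    ec2_24xl = 4.50   # hourly EC2 cost, 24xlarge
    printf "EMR fee as a share of 8xlarge cost:  %.1f%%\n", emr_fee / ec2_8xl  * 100
    printf "EMR fee as a share of 24xlarge cost: %.1f%%\n", emr_fee / ec2_24xl * 100
}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With these placeholder numbers, the fee is 18.0% of the smaller instance's cost but only 6.0% of the larger one's.&lt;/p&gt;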

&lt;h3&gt;
  
  
  Choose Your AWS EC2 Pricing
&lt;/h3&gt;

&lt;p&gt;There are four options for purchasing EC2 instances:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/ec2/pricing/on-demand/"&gt;&lt;strong&gt;On-Demand instances&lt;/strong&gt;&lt;/a&gt; can be started or shut down at any time with no commitment and are the most expensive. The upside is that they'll always be available and can't be taken away (like spot instances can).&lt;/li&gt;
&lt;li&gt;One- and three-year &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-reserved-instances.html"&gt;&lt;strong&gt;reserved instances&lt;/strong&gt;&lt;/a&gt; are On-Demand EC2 instances you reserve in exchange for discounts of 40% to 70%, but you're locked into a long-term commitment to a specific &lt;a href="https://aws.amazon.com/ec2/instance-types/"&gt;instance family&lt;/a&gt; within a specific region.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/savingsplans/"&gt;Savings Plans&lt;/a&gt; are a slightly more flexible version of reserved instances. You still commit to purchase a certain amount of computer for a one or three year term, but you can choose to change instance family and region. This contract for compute can also be applied to AWS Fargate and AWS Lambda usage.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/ec2/spot/use-case/emr/"&gt;&lt;strong&gt;Spot instances&lt;/strong&gt;&lt;/a&gt; allow clients to purchase unused EC2 capacity, with discounts that can reach 90% and are tied to demand over time. The downside is that spot instances could be claimed back at any time, so they aren't appropriate for most long-running jobs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a3fXBTpc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/HXB20g5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a3fXBTpc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/HXB20g5.png" alt="The Minimum Necessary Cluster" width="880" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The best way to determine &lt;strong&gt;which instance type to use&lt;/strong&gt; is by testing your application in EMR while monitoring your cluster through the &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-manage-view-clusters.html"&gt;EMR management console&lt;/a&gt;, &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-manage-view-web-log-files.html"&gt;log files&lt;/a&gt;, and &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/UsingEMR_ViewingMetrics.html"&gt;CloudWatch metrics&lt;/a&gt;. You want to be fully utilizing as much of your EMR system as possible, making sure you don’t have large amounts of compute idling while ensuring you can reliably hit your SLAs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oI_kifZz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/bxzFc4D.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oI_kifZz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/bxzFc4D.png" alt="Cluster Console Metrics" width="880" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Use the Right Number of Primary, Core, and Task Nodes
&lt;/h3&gt;

&lt;p&gt;There are &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-master-core-task-nodes.html"&gt;three types of nodes&lt;/a&gt; in an EMR cluster. It's important to understand what they do so you can devote the right number of instances to each of these types.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;primary node&lt;/strong&gt; (there can only be one or &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-ha-launch.html"&gt;three&lt;/a&gt; running) manages the cluster and tracks the health of nodes by running the YARN Resource Manager and the HDFS Name Node Service. These machines don’t run tasks and can be smaller than other nodes (the exception is if your cluster runs &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-concurrent-steps.html"&gt;multiple steps in parallel&lt;/a&gt;). Having three primary nodes gives you redundancy in case one goes down, but you will obviously pay three times as much for the peace of mind.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core nodes&lt;/strong&gt; run tasks and speak to HDFS by running the HDFS DataNode Daemon and the YARN Node Manager service. These are the workhorses of any EMR cluster and can be scaled up or down as needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task nodes&lt;/strong&gt; don’t know about HDFS and only run the YARN Node Manager service. They are best suited for parallel computation, like Hadoop MapReduce tasks and Spark executors. Because they can be reclaimed without risking losing data stored in HDFS, they are ideal candidates to become Spot instances.&lt;/p&gt;

&lt;p&gt;While EMR handles the scaling up and down of core and task nodes, you can set minimums and maximums. If your maximum is too low, large jobs might back up and take a long time to run. If your minimum is too low, spikes in data take longer to process while additional instances ramp up. On the flip side, if your maximum is too high, an error in your data pipeline could lead to huge cost increases.&lt;/p&gt;

&lt;h3&gt;
  
  
  Instance Configuration
&lt;/h3&gt;

&lt;p&gt;Once you've tested and selected the appropriate instance types, sizes, and number of nodes, you have to make a configuration decision. You can deploy an &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-instance-fleet.html"&gt;instance fleet&lt;/a&gt; or &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-uniform-instance-group.html"&gt;uniform instance groups&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0i1fXTXP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/pqNkNQn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0i1fXTXP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/pqNkNQn.png" alt="EMR Groups or Fleets?" width="880" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instance fleets are flexible and designed to utilize Spot instances effectively. When creating an instance fleet, you specify up to five instance types, a range of availability zones (avoid saving a few cents on instances only to spend them transferring data between zones), a target for Spot and On-Demand instances, and a maximum price you’d pay for a Spot instance. When the fleet launches, EMR provisions instances until your targets are met.&lt;/p&gt;

&lt;p&gt;You can set a provisioning timeout, which allows you to terminate the cluster or switch to On-Demand instances if no Spot instances are available. Instance fleets also support &lt;a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html#fixed-duration-spot-instances"&gt;Spot instances for predefined durations&lt;/a&gt;, allowing your cluster to confidently access a Spot instance for 1 to 6 hours.&lt;/p&gt;
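&lt;p&gt;As a rough sketch of what an instance-fleet launch looks like from the CLI (the name, release label, instance types, and capacity targets below are hypothetical; consult the EMR CLI reference for the exact shorthand syntax), you give each fleet its targets for On-Demand and Spot capacity:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# All names, counts, and types below are illustrative placeholders
aws emr create-cluster \
    --name "fleet-example" \
    --release-label emr-6.5.0 \
    --instance-fleets \
        InstanceFleetType=MASTER,TargetOnDemandCapacity=1,InstanceTypeConfigs=['{InstanceType=m5.xlarge}'] \
        InstanceFleetType=CORE,TargetOnDemandCapacity=2,TargetSpotCapacity=2,InstanceTypeConfigs=['{InstanceType=m5.xlarge},{InstanceType=m5.2xlarge}']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Listing multiple instance types per fleet gives EMR more Spot pools to draw from, which improves the odds of filling your Spot target.&lt;/p&gt;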

&lt;p&gt;Running all your EMR clusters on Spot instances would be great for your budget but would leave you with a system you can’t always rely on to process work promptly. Evaluate your requirements, and plan to have more expensive, more reliable long-running instances to ensure you meet your SLAs while adding cheaper, less reliable Spot instances to handle spikes in demand.&lt;/p&gt;

&lt;p&gt;Uniform instance groups are more targeted, requiring you to specify a single instance type and decide between On-Demand and Spot instances before launching. Instance groups are perfect for &lt;strong&gt;tasks that are well understood and need a concrete, consistent amount of resources&lt;/strong&gt;. Instance fleets are great for grabbing Spot instances where possible while allowing the cluster to fall back to On-Demand instances if needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Scaling AWS EMR Clusters
&lt;/h3&gt;

&lt;p&gt;AWS EMR clusters are big, powerful, and expensive. EMR utilization also often comes in peaks and valleys, making &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-scale-on-demand.html"&gt;scaling&lt;/a&gt; your cluster a good cost-saving option when handling usage spikes. Instance fleets and uniform instance groups can both use &lt;a href="https://aws.amazon.com/blogs/big-data/introducing-amazon-emr-managed-scaling-automatically-resize-clusters-to-lower-cost/"&gt;&lt;strong&gt;EMR Managed Scaling&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This scaling service automatically adds nodes when utilization is high and removes them when it decreases. Unfortunately, it's only available for applications that use Yet Another Resource Negotiator (YARN) (sorry, &lt;a href="https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-presto.html"&gt;Presto&lt;/a&gt;). If you're running an instance group, you can also &lt;a href="https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-automatic-scaling.html"&gt;specify your own scaling policy&lt;/a&gt; using a CloudWatch metric and other parameters. This gives you more fine-grained control over scaling, but it's more complicated to set up.&lt;/p&gt;
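&lt;p&gt;A custom policy for an instance group is defined as JSON. This sketch (the thresholds and capacities are hypothetical examples, not recommendations) adds one node whenever available YARN memory stays below 15% for a five-minute period:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Constraints": {"MinCapacity": 2, "MaxCapacity": 10},
  "Rules": [
    {
      "Name": "ScaleOutOnLowYarnMemory",
      "Action": {
        "SimpleScalingPolicyConfiguration": {
          "AdjustmentType": "CHANGE_IN_CAPACITY",
          "ScalingAdjustment": 1,
          "CoolDown": 300
        }
      },
      "Trigger": {
        "CloudWatchAlarmDefinition": {
          "MetricName": "YARNMemoryAvailablePercentage",
          "ComparisonOperator": "LESS_THAN",
          "Threshold": 15,
          "Period": 300,
          "EvaluationPeriods": 1,
          "Statistic": "AVERAGE",
          "Unit": "PERCENT"
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note how &lt;code&gt;MaxCapacity&lt;/code&gt; doubles as the cost guardrail mentioned above: a runaway pipeline can never scale past it.&lt;/p&gt;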

&lt;h3&gt;
  
  
  Terminate AWS EMR clusters
&lt;/h3&gt;

&lt;p&gt;Another fundamental decision you'll have to make about every EMR cluster you spin up is whether it should &lt;strong&gt;terminate&lt;/strong&gt; after running the job or &lt;strong&gt;keep running&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DbQ9HMOK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/gQtfyEI.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DbQ9HMOK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/gQtfyEI.png" alt="Whether 'tis nobler to suffer the EC2 instance costs" width="865" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Terminating clusters after running jobs is great for saving money - you no longer pay for an instance or its attached storage - but auto-terminating clusters also has drawbacks. Any data in the HDFS is &lt;strong&gt;lost forever upon cluster termination&lt;/strong&gt;, so you will have to write stateless jobs that rely on a metadata store in S3 or Glue.&lt;/p&gt;

&lt;p&gt;Auto-termination is also inefficient when running many small jobs. It generally takes less than 15 minutes for a cluster to get provisioned and start processing data, but if a job takes 5 minutes to run and you’re running 10 in a row, auto-termination quickly takes a toll.&lt;/p&gt;
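&lt;p&gt;For transient jobs where termination is the right call, you can request it at launch. A minimal sketch (the cluster name, instance settings, and step path are hypothetical) uses the &lt;code&gt;--auto-terminate&lt;/code&gt; flag so the cluster shuts down after its steps finish:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Names, sizes, and the S3 path are illustrative placeholders
aws emr create-cluster \
    --name "nightly-batch" \
    --release-label emr-6.5.0 \
    --instance-type m5.xlarge \
    --instance-count 3 \
    --steps Type=Spark,Name="nightly job",ActionOnFailure=TERMINATE_CLUSTER,Args=[s3://example-bucket/jobs/nightly.py] \
    --auto-terminate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;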

&lt;p&gt;To get the most out of long-running clusters, try to smooth out utilization over time. Scatter jobs throughout the day, and try to draw all the EMR users in your organization to the cluster to fill gaps in utilization. Long-running, large clusters can be quite cost-effective: the EMR fees on top often don’t increase with instance size, so the overhead shrinks relative to the compute you get.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monitoring AWS EMR Costs
&lt;/h2&gt;

&lt;p&gt;Now that your AWS EMR cluster has instances scaling smoothly while reading beautifully compressed and formatted data, check &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/"&gt;Cost Explorer&lt;/a&gt; to track your cost reduction progress.&lt;/p&gt;

&lt;p&gt;Cost Explorer gives you a helpful map of what’s running in your organization but requires some attention to yield the greatest gains. &lt;a href="https://www.cloudforecast.io/blog/aws-tagging-best-practices/"&gt;Tag your resources&lt;/a&gt;, giving each cluster an owner and business unit to attribute your costs to. Tagging is especially important for EMR because it relies on so many other Amazon services. It can be really hard to differentiate between the many resources used by your EMR cluster when they're in the same AWS account as your production application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other AWS EMR Resources
&lt;/h2&gt;

&lt;p&gt;If you're looking for more practical articles on this subject, another solid resource is the &lt;a href="https://medium.com/teads-engineering/reducing-aws-emr-data-processing-costs-7c12a8df6f2a"&gt;Teads engineering blog&lt;/a&gt;. In an article there, &lt;a href="https://medium.com/@wassimmaaoui"&gt;Wassim Almaaoui&lt;/a&gt; describes three measures that helped them significantly lower their data-processing costs on EMR:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Running workloads on Spot instances&lt;/li&gt;
&lt;li&gt;Leveraging the EMR pricing of bigger EC2 instances&lt;/li&gt;
&lt;li&gt;Automatically detecting idle clusters (and terminating them ASAP)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check this article out when you get a chance to supplement this one: &lt;a href="https://medium.com/teads-engineering/reducing-aws-emr-data-processing-costs-7c12a8df6f2a"&gt;Reducing AWS EMR data processing costs&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Knowing Your Requirements, Meeting Your Goals
&lt;/h2&gt;

&lt;p&gt;In this post, you've learned how EMR pricing works and what you can do to minimize and track your EMR costs. It's possible to grow your EMR infrastructure while controlling costs, but it will likely take a deep understanding of your data processing requirements and a little trial and error. EMR can eat up huge amounts of storage, so be sure you partition, compress, and format data to reduce your storage costs. Categorize jobs according to priority, schedule, and resource requirements, then nestle them into the right instance types. Consider cluster termination for jobs that run only occasionally and auto-scale long-running instances when appropriate.&lt;/p&gt;

&lt;p&gt;If this is overwhelming, &lt;a href="https://www.cloudforecast.io/"&gt;CloudForecast&lt;/a&gt; can help. Reach out to our CTO, &lt;a href="mailto:francois@cloudforecast.io"&gt;Francois (francois@cloudforecast.io)&lt;/a&gt;, if you’d like help implementing a long-term cost-reduction strategy for EMR.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was originally published on the CloudForecast Blog: &lt;a href="https://www.cloudforecast.io/blog/aws-emr-cost-optimization-guide/"&gt;AWS EMR Cost Optimization Guide&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Node Exporter and Kubernetes Guide</title>
      <dc:creator>Tony Chan</dc:creator>
      <pubDate>Wed, 10 Nov 2021 17:02:48 +0000</pubDate>
      <link>https://forem.com/cloudforecast/node-exporter-and-kubernetes-guide-1070</link>
      <guid>https://forem.com/cloudforecast/node-exporter-and-kubernetes-guide-1070</guid>
      <description>&lt;p&gt;Monitoring is essential to a reliable system. It helps keep your services consistent and available by preemptively alerting you to important issues. In legacy (non-Kubernetes) systems, monitoring is simple. You only need to set up dashboards and alerts on two components: the application and the host. But when it comes to Kubernetes, monitoring is significantly more challenging.&lt;/p&gt;

&lt;p&gt;In this guide, we explore the challenges associated with Kubernetes monitoring, how to set up a monitoring and alert system for your Kubernetes clusters using Grafana and Prometheus, a pricing comparison of four different hosted monitoring systems, and some key metrics that you can set up as alerts in your system.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Is Kubernetes Monitoring Different From Traditional Systems?
&lt;/h2&gt;

&lt;p&gt;Kubernetes is highly distributed, being composed of several different nested components. You need to monitor your application and hosts, as well as your containers and clusters. Kubernetes also has an additional layer of complexity—automated scheduling.&lt;/p&gt;

&lt;p&gt;The scheduler manages your workloads and resources optimally, creating a moving target. As the architect, you can’t be certain of the identity or number of nodes your pods are running on. You can either manually schedule your pods (not recommended) or deploy a robust tagging system alongside logging. Tagging allows you to collect information from your clusters, which exposes the metrics to an endpoint for a service to scrape.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is a Node Exporter by Prometheus
&lt;/h2&gt;

&lt;p&gt;The most popular service that tags and exports metrics in Kubernetes is Node Exporter by &lt;a href="https://prometheus.io/"&gt;Prometheus&lt;/a&gt;, an open-source service that installs through a single static binary. Node Exporter monitors a host by exposing its hardware and OS metrics, which Prometheus scrapes.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Install Node Exporter
&lt;/h2&gt;

&lt;p&gt;To monitor your entire deployment, you’ll need a node exporter running on each node—this can be configured through a &lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/"&gt;DaemonSet&lt;/a&gt;. Prometheus has a &lt;a href="https://github.com/prometheus-operator/kube-prometheus#quickstart"&gt;good quick start resource&lt;/a&gt; on this in their public repo.&lt;/p&gt;

&lt;p&gt;You can use Helm—the package manager—to install Prometheus in one line:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm install prometheus-operator stable/prometheus-operator --namespace monitor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Alternatively, you can &lt;code&gt;wget&lt;/code&gt; a &lt;code&gt;tar&lt;/code&gt; file through GitHub and unzip it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/prometheus/node_exporter/releases/download/v*/node_exporter-*.*-amd64.tar.gz
tar xvfz node_exporter-*.*-amd64.tar.gz
cd node_exporter-*.*-amd64
./node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;After installation, verify that the monitoring components are running as Pods:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods -n monitor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You should see Prometheus, &lt;a href="https://grafana.com/"&gt;Grafana&lt;/a&gt; (the bundled open-source analytics platform), node-exporter, and kube-state-metrics among the Pods.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Use the Node Exporter and View Your Kubernetes Metrics
&lt;/h2&gt;

&lt;p&gt;The Prometheus Node Exporter exposes an endpoint, &lt;code&gt;/metrics&lt;/code&gt;, which you can &lt;code&gt;grep&lt;/code&gt;. Prometheus scrapes targets at a specified interval and attaches labels to the samples; the exporter itself serves its metrics as text or a protocol buffer, available by default at &lt;code&gt;http://localhost:9100/metrics&lt;/code&gt;.&lt;/p&gt;
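&lt;p&gt;To see what that text format looks like without a live exporter, here's a small sketch that writes a couple of sample lines (the values are made up) in the Prometheus exposition format and filters them with &lt;code&gt;grep&lt;/code&gt;, just as you might against the real endpoint:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Write a tiny sample of the /metrics text format, then filter it with grep.
# Against a live exporter you would pipe "curl -s localhost:9100/metrics" instead.
printf '%s\n' \
    '# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.' \
    '# TYPE node_cpu_seconds_total counter' \
    'node_cpu_seconds_total{cpu="0",mode="idle"} 362812.7' \
    'node_memory_MemAvailable_bytes 2.5e+09' \
    | tee metrics.txt | grep '^node_cpu'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each line is a metric name, an optional set of labels in braces, and a sample value; the &lt;code&gt;HELP&lt;/code&gt; and &lt;code&gt;TYPE&lt;/code&gt; comments describe the metric.&lt;/p&gt;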

&lt;p&gt;You can also explore these metrics through the Prometheus console dashboard to get specific information. The dashboard and Prometheus metrics can be seen through &lt;code&gt;http://localhost:8080/docker/prometheus&lt;/code&gt;. This is different from the &lt;a href="https://prometheus.io/docs/visualization/browser/"&gt;Prometheus web UI&lt;/a&gt;, where you can explore container metrics through expressions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mqQe1eKS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/IyGepHM.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mqQe1eKS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/IyGepHM.png" alt="Prometheus metrics dashboard" width="880" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The scraped metrics get saved to a database that you can &lt;a href="https://prometheus.io/docs/prometheus/latest/querying/basics/"&gt;query using PromQL&lt;/a&gt; through this web console. For example, a query to select all non-&lt;code&gt;GET&lt;/code&gt; HTTP requests received in your staging, testing, and development environments would be:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http_requests_total{environment=~"staging|testing|development",method!="GET"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;It is intuitive to query for a number of other summary metrics such as average, max, min, or specific percentiles. Prometheus also allows you to set up alerts through email, Slack, and other supported mediums that get triggered based on conditions via Alertmanager. For example, you can set a trigger to send a high-priority Slack message when account-creation failures breach a P90 threshold.&lt;/p&gt;
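&lt;p&gt;The alerting conditions themselves live in Prometheus rule files. Here's a sketch of one rule (the threshold, duration, and labels are hypothetical examples, and the metric name may be &lt;code&gt;node_cpu&lt;/code&gt; on older node_exporter versions) that fires when CPU usage stays above 90% for five minutes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groups:
  - name: node-alerts
    rules:
      - alert: HostHighCpuUsage
        # Threshold and duration are illustrative; tune them to your SLAs
        expr: 100 - (avg(irate(node_cpu_seconds_total{mode="idle"}[1m])) * 100) &gt; 90
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "CPU usage above 90% for 5 minutes"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Alertmanager then routes anything with a matching &lt;code&gt;severity&lt;/code&gt; label to the receiver you configure, such as a Slack channel.&lt;/p&gt;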

&lt;h2&gt;
  
  
  How to Set Up Alertmanager
&lt;/h2&gt;

&lt;p&gt;There are a couple of options to install Alertmanager. You can bootstrap a &lt;a href="https://artifacthub.io/packages/helm/prometheus-community/prometheus"&gt;Prometheus deployment through Helm Charts&lt;/a&gt; directly with a single command.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Otherwise, you can download and extract the latest Alertmanager &lt;code&gt;tar&lt;/code&gt; from Prometheus's official &lt;a href="https://prometheus.io/download/#alertmanager"&gt;download link&lt;/a&gt;. This will link you to the latest version on GitHub, which you can fetch using the code snippet below.&lt;/p&gt;
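&lt;p&gt;Mirroring the Node Exporter steps above, the download and extraction look like this (substitute the current version numbers for the wildcards):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/prometheus/alertmanager/releases/download/v*/alertmanager-*.*-amd64.tar.gz
tar xvfz alertmanager-*.*-amd64.tar.gz
cd alertmanager-*.*-amd64
./alertmanager
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;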

&lt;p&gt;After installation, you can start Alertmanager on localhost port 9093 and begin setting up alerts through the communication channels mentioned above.&lt;/p&gt;

&lt;p&gt;Note that &lt;a href="https://github.com/google/cadvisor"&gt;CAdvisor&lt;/a&gt;, Google’s solution for natively monitoring Kubernetes, can also be &lt;a href="https://prometheus.io/docs/guides/cadvisor/"&gt;used alongside Prometheus&lt;/a&gt; to view metrics out of the box. You can explore the metrics through the CAdvisor web UI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managed Hosting Through Prometheus, Grafana, New Relic, and Datadog
&lt;/h2&gt;

&lt;p&gt;Once you set up the metrics you're interested in, you can aggregate and display them through dashboards on a hosted backend. Hosted backends include Grafana Cloud, New Relic, Datadog, or you can self-host through Prometheus as discussed earlier.&lt;/p&gt;

&lt;p&gt;There are some benefits to keeping your metrics servers on-premises, but it’s generally poor practice. Exceptions include very large entities with data centers distributed across the world, or those with strong restrictions around highly sensitive data.&lt;/p&gt;

&lt;p&gt;Keeping metrics on-premises creates a single point of failure: if your metrics stack goes down, you lose the very visibility you’d need to root-cause the outage. The benefits include control over your security protocols and monitoring services, but these will be less robust than cloud platforms unless very well planned.&lt;/p&gt;

&lt;p&gt;Thankfully, there are several hosted options for Prometheus and metrics dashboarding. Let’s break down the differences between the most popular:&lt;/p&gt;

&lt;h2&gt;
  
  
  Hosted Prometheus Pricing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pricing: Freemium. Can also be run as a managed service on the major cloud providers.

&lt;ul&gt;
&lt;li&gt;AWS example: up to 40 million samples and 10 GB of queries free; $0.90 per 10 million samples for the first 2 billion samples.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Type: Dashboard&lt;/li&gt;
&lt;li&gt;My Thoughts: Self-hosting your metrics and alerting instances creates a single point of failure that can be critical. Alternatively, you can host it on a cloud provider and replicate across multiple AZs/regions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Grafana Cloud
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pricing: Freemium. Free account with 10,000 series for Prometheus metrics, 50 GB of logs, 50 GB of traces, and 3 team members. 14-day free trial for Pro; $49/mo + usage afterward.&lt;/li&gt;
&lt;li&gt;Type: Dashboard&lt;/li&gt;
&lt;li&gt;My Thoughts: Grafana is a good tool that is natively bundled with Prometheus for dashboarding. Grafana Cloud maintains Grafana for you—including updates, support, and guaranteed uptime.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  New Relic
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pricing: Freemium. Free account for 1 admin user and unlimited viewing users with 100 GB/mo for ingestion and 8+ days retention, with unlimited querying, alerts, and anomaly detection.

&lt;ul&gt;
&lt;li&gt;$0.25/GB above 100 GB.&lt;/li&gt;
&lt;li&gt;$99/user above the first.&lt;/li&gt;
&lt;li&gt;$0.50/event above 1000 incidents.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Type: Monitoring tool&lt;/li&gt;
&lt;li&gt;My Thoughts: New Relic can be cheaper when you have lots of hosts, while providing a feature set competitive with Datadog's. This is because you pay for data ingested rather than per host.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Datadog
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pricing: Freemium.

&lt;ul&gt;
&lt;li&gt;Free account for up to 5 hosts.&lt;/li&gt;
&lt;li&gt;Infrastructure: $15/mo per host for the Pro plan; $23/mo per host for the Enterprise plan.&lt;/li&gt;
&lt;li&gt;Logging: $0.10/GB for ingestion; $1.70 per million events for 15-day retention.&lt;/li&gt;
&lt;li&gt;See the pricing page for other services.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Type: Monitoring tool&lt;/li&gt;
&lt;li&gt;My Thoughts: Datadog is a pricier option, as you need to opt into multiple services while the others are bundled.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Kubernetes Metrics to Monitor
&lt;/h2&gt;

&lt;p&gt;Once you've chosen a service provider, it’s time to set up a list of key metrics to alert on. Grading alerts by severity is a good approach here. You can set up a range of alerts, from &lt;code&gt;Severity 1&lt;/code&gt; for individual-productivity problems, to &lt;code&gt;Severity 5&lt;/code&gt; for breaking issues impacting customers worldwide. For these alerts, consider metrics around system performance: CPU, memory, disk space, and network usage, plus their trends. Below are a few examples to start off:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CPU Usage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;100 - (avg(irate(node_cpu{mode="idle", instance=~"$instance"}[1m])) * 100)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory Usage (10^9 refers to GB)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;node_memory_MemAvailable{instance="$instance"}/10^9&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;node_memory_MemTotal{instance="$instance"}/10^9&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disk Space Free (in GB)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;node_filesystem_free{mountpoint="/", instance="$instance"}/10^9&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Ingress&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;node_network_receive_bytes_total{instance="$instance"}/10^9&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Egress&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;node_network_transmit_bytes_total{instance="$instance"}/10^9&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster CPU Usage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sum (rate (container_cpu_usage_seconds_total{id="/"}[1m])) / sum (machine_cpu_cores) * 100&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pod CPU Usage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sum (rate (container_cpu_usage_seconds_total{image!=""}[1m])) by (pod_name)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IO Usage by Container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sum(container_fs_io_time_seconds_total{name=~"./"}) by (name)&lt;/code&gt;&lt;/p&gt;
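&lt;p&gt;As a rough illustration of what the CPU usage expression computes, here is a small Python sketch that derives the same percentage from two samples of a per-core idle-time counter: &lt;code&gt;irate()&lt;/code&gt; is approximately the counter delta divided by the sampling interval.&lt;/p&gt;

```python
# Sketch of the "100 - avg(irate(idle)) * 100" CPU usage query.
# idle_t0 / idle_t1 are per-core idle-time counters (in seconds),
# sampled interval_s seconds apart, as node_cpu{mode="idle"} reports.
def cpu_usage_percent(idle_t0, idle_t1, interval_s):
    # Per-core idle rate: fraction of each second the core spent idle.
    rates = [(b - a) / interval_s for a, b in zip(idle_t0, idle_t1)]
    # Average across cores, then invert: busy = 100% - idle%.
    return 100 - (sum(rates) / len(rates)) * 100
```

&lt;p&gt;For example, two cores that accumulate 30 s and 45 s of idle time over a 60-second window are, on average, 37.5% busy.&lt;/p&gt;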

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Although Kubernetes monitoring is more complex, there are a number of viable options to facilitate the process. We discussed how Node Exporter can help you export metrics, compared different hosted monitoring service options, and explored some key metrics to utilize for monitoring memory, network, and CPU usage.&lt;/p&gt;

&lt;p&gt;Once you’ve decided to implement your monitoring stack, consider revisiting your Kubernetes administration and exploring cost reduction through CloudForecast’s new k8s cost management tool, &lt;a href="https://cloudforecast.io/kubernetes-eks-and-ecs-cost-management.html"&gt;Barometer&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was originally published on: &lt;a href="https://www.cloudforecast.io/blog/node-exporter-and-kubernetes/"&gt;https://www.cloudforecast.io/blog/node-exporter-and-kubernetes/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>AWS NAT Gateway Pricing and Cost Reduction Guide</title>
      <dc:creator>Tony Chan</dc:creator>
      <pubDate>Wed, 03 Nov 2021 17:25:27 +0000</pubDate>
      <link>https://forem.com/cloudforecast/aws-nat-gateway-pricing-and-cost-reduction-guide-2lk9</link>
      <guid>https://forem.com/cloudforecast/aws-nat-gateway-pricing-and-cost-reduction-guide-2lk9</guid>
      <description>&lt;h2&gt;
  
  
  What Is a NAT Device?
&lt;/h2&gt;

&lt;p&gt;A NAT device is a server that relays packets between devices on a private subnet and the internet. It relays responses back to the server that sent the original request. Since it only sends response packets to the private subnet, it keeps your private subnet secure.&lt;/p&gt;

&lt;p&gt;The NAT works by replacing the source address of outbound packets with its own address and forwarding them to their destination on the internet. Similarly, when the NAT receives a response packet from the internet, it replaces the destination address with the address of the server on the private subnet that sent the initial request.&lt;/p&gt;
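&lt;p&gt;The address rewriting described above can be sketched in a few lines of Python. This is a deliberately simplified model that keys sessions on the remote address only; real NAT devices also track ports (PAT) so many private hosts can share one public address.&lt;/p&gt;

```python
# Toy model of NAT address rewriting. Packets are (source, destination) pairs.
class SimpleNat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.sessions = {}  # remote address -> private source that initiated

    def outbound(self, src, dst):
        # Rewrite the source of an outgoing packet to the NAT's own address.
        self.sessions[dst] = src
        return (self.public_ip, dst)

    def inbound(self, src, dst):
        # Rewrite the destination of a response back to the private host.
        if dst != self.public_ip or src not in self.sessions:
            return None  # unsolicited traffic is dropped, keeping the subnet private
        return (src, self.sessions[src])
```

&lt;p&gt;A host at &lt;code&gt;10.0.1.5&lt;/code&gt; sending to &lt;code&gt;198.51.100.9&lt;/code&gt; appears to the internet as the NAT's public IP, and only the matching response is relayed back in.&lt;/p&gt;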

&lt;p&gt;The most common use case for a NAT device in AWS is to download updates on instances in a private subnet, but the NAT can be used any time you want to keep a subnet private and still allow it to talk to the internet.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS NAT Devices
&lt;/h2&gt;

&lt;p&gt;You can use two different types of NAT devices in your VPC. The oldest of the two is called a NAT Instance, and the newer one is called a NAT Gateway.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is an AWS NAT Instance?
&lt;/h3&gt;

&lt;p&gt;An AWS NAT Instance is really just an EC2 instance running a NAT service in a public subnet.&lt;/p&gt;

&lt;p&gt;Amazon currently provides a NAT AMI. You can find these AMIs by searching for &lt;code&gt;amzn-ami-vpc-nat&lt;/code&gt; in the name. You may, however, need to build your own NAT images in the future since Amazon built the NAT AMIs on a version of Amazon Linux that is EOL, and they &lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat.html#vpc-nat-ami"&gt;don't plan on updating the NAT images&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eLBHYET8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://imgur.com/KToUuZk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eLBHYET8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://imgur.com/KToUuZk.png" alt="Example NAT Instance setup" width="568" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on the &lt;a href="https://aws.amazon.com/compliance/shared-responsibility-model/"&gt;AWS shared responsibility model&lt;/a&gt;, you will need to manage updating and scaling your NAT instance. As a tradeoff, you get more control over traffic routing, and you can run software on the instance beyond just a NAT service.&lt;/p&gt;

&lt;p&gt;Performance of your NAT Instance will be up to you, since it can vary based on the &lt;a href="https://aws.amazon.com/ec2/instance-types/"&gt;instance type that you choose&lt;/a&gt;. For example, a &lt;code&gt;t3.micro&lt;/code&gt; instance can have up to 5 Gbps, but a &lt;code&gt;m5n.12xlarge&lt;/code&gt; can get 50 Gbps of bandwidth.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Is AWS NAT Gateway?
&lt;/h3&gt;

&lt;p&gt;AWS NAT Gateway is the new, managed solution to setting up a NAT device in your VPC. Since it's a managed device, you can set it up once and forget about it. AWS will take care of automatically scaling and updating it as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iZ6dPJo4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://imgur.com/lhii3oW.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iZ6dPJo4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://imgur.com/lhii3oW.png" alt="Example NAT Gateway" width="568" height="592"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The AWS NAT Gateway can scale to allow up to 45 Gbps through it. If you need more bandwidth, you can always create another one and send different subnet traffic through different gateways.&lt;/p&gt;

&lt;h2&gt;
  
  
  NAT Gateway vs. NAT Instance Pricing
&lt;/h2&gt;

&lt;p&gt;The cost of an AWS NAT instance is just like any other EC2 instance. It’s determined by the type of instance and the amount of data transferred out to the internet.&lt;/p&gt;

&lt;p&gt;When you use an AWS NAT Gateway, you're &lt;a href="https://aws.amazon.com/vpc/pricing/"&gt;charged for two things&lt;/a&gt;: a flat rate for every hour that it's running, and a fee for every GB that passes through it.&lt;/p&gt;

&lt;h3&gt;
  
  
  NAT Gateway Pricing
&lt;/h3&gt;

&lt;p&gt;You can use the &lt;a href="https://calculator.aws/"&gt;AWS Pricing Calculator&lt;/a&gt; to estimate the costs of VPC configurations. Using the example of the auto repair shop from the introduction, you can calculate some example costs. We'll assume that you'll be transferring 100 GB every month.&lt;/p&gt;

&lt;p&gt;You can use the &lt;code&gt;t3.micro&lt;/code&gt; and &lt;code&gt;m5n.12xlarge&lt;/code&gt; instance types from earlier to get an idea of the range of instance costs. Assuming that you want to run the instance all the time and use an EC2 Instance Savings Plan, you will get the following values:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;t3.micro&lt;/td&gt;
&lt;td&gt;$7.75&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;m5n.12xlarge&lt;/td&gt;
&lt;td&gt;$1,316.27&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NAT Gateway&lt;/td&gt;
&lt;td&gt;$37.35&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Ensuring High Availability
&lt;/h3&gt;

&lt;p&gt;If you follow AWS best practices in your VPC, you'll need to set up &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/use-fault-isolation-to-protect-your-workload.html"&gt;redundancy across multiple availability zones&lt;/a&gt; to ensure your application is highly available. This means you'll need to create a NAT Instance or Gateway for each availability zone (AZ). Depending on your availability requirements, this means you'll need to multiply each of the costs in the table by 2 at minimum.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reduce AWS NAT Gateway Costs
&lt;/h2&gt;

&lt;p&gt;Now that you know how much a NAT device is going to cost, you may be wondering if there's a way to reduce your AWS bill. Read on to learn a few of my favorite methods.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use the Right Tool
&lt;/h3&gt;

&lt;p&gt;You can see from the comparison table above that the prices of NAT Instances can vary greatly. If you need bandwidth close to 45 Gbps, then you should definitely use the NAT Gateway. In the example above, you would save $1,278.92 and offload maintenance work onto Amazon.&lt;/p&gt;

&lt;p&gt;On the other hand, if you need to run a bastion server and 5 Gbps is enough bandwidth, the &lt;code&gt;t3.micro&lt;/code&gt; is plenty. This would save $29.60 every month. While it's not as big of a savings as switching from an &lt;code&gt;m5n&lt;/code&gt; instance to the NAT Gateway, you do gain the option of using it as a bastion server, too.&lt;/p&gt;
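&lt;p&gt;The figures above can be reproduced with a few lines of arithmetic. The NAT Gateway rates used here ($0.045/hour and $0.045/GB processed) are an assumption based on us-east-1 list pricing, which happens to match the $37.35 in the comparison table; your region's rates may differ.&lt;/p&gt;

```python
HOURS_PER_MONTH = 730

# Assumed us-east-1 NAT Gateway rates: $0.045/hour plus $0.045/GB processed.
gw_hourly, gw_per_gb, data_gb = 0.045, 0.045, 100
nat_gateway = round(HOURS_PER_MONTH * gw_hourly + data_gb * gw_per_gb, 2)  # 37.35

# Monthly savings-plan instance costs from the comparison table.
t3_micro, m5n_12xlarge = 7.75, 1316.27

big_instance_savings = round(m5n_12xlarge - nat_gateway, 2)  # switch m5n -> gateway
small_instance_savings = round(nat_gateway - t3_micro, 2)    # switch gateway -> t3.micro
```

&lt;p&gt;This recovers the $1,278.92 saved by replacing the &lt;code&gt;m5n.12xlarge&lt;/code&gt; with a gateway, and the $29.60 saved by replacing the gateway with a &lt;code&gt;t3.micro&lt;/code&gt;.&lt;/p&gt;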

&lt;p&gt;Another consideration is maintenance time and costs. If you’re at a smaller company where everyone has multiple roles, offloading maintenance time to AWS can provide a substantial productivity boost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Take Advantage of Maintenance Windows
&lt;/h3&gt;

&lt;p&gt;In the auto repair shop example, you need to keep the NAT device running all the time if the service places orders for parts throughout the day. If you change the service to place vendor orders at a specific time every day, you could instead run a NAT instance on a schedule.&lt;/p&gt;

&lt;p&gt;You can create an EC2 Auto Scaling Group that spins up your NAT Instance &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html"&gt;a few minutes before your maintenance window&lt;/a&gt; and scales down to zero when &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html"&gt;incoming network traffic dies off&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If you want to use a NAT Gateway on a schedule, you certainly could do that. It'd be a bit more complicated, though, since you would need to create and destroy the gateway on a schedule. You could set up a CloudWatch Events rule that triggers a Lambda function to update your VPC infrastructure.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Has a Gift for You
&lt;/h3&gt;

&lt;p&gt;The easiest way to save some money is to take advantage of AWS's &lt;a href="https://aws.amazon.com/free/?all-free-tier.sort-by=item.additionalFields.SortRank&amp;amp;all-free-tier.sort-order=asc&amp;amp;awsf.Free%20Tier%20Types=tier%23always-free&amp;amp;awsf.Free%20Tier%20Categories=*all"&gt;always free resources&lt;/a&gt;. You can run a single &lt;code&gt;t3.micro&lt;/code&gt; for 750 hours a month for free. So if you don't have a lot of traffic (less than 5 Gbps), throw up a free NAT instance and use that money on something else.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;A NAT device acts as a secure bridge between your private subnet and the internet. AWS provides two NAT device types: a NAT instance that you manage yourself, and a NAT gateway. Since there are some tradeoffs on performance, cost, maintenance, and configurability, you'll need to evaluate both options for your project.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was originally published on: &lt;a href="https://www.cloudforecast.io/blog/aws-nat-gateway-pricing-and-cost/"&gt;https://www.cloudforecast.io/blog/aws-nat-gateway-pricing-and-cost/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Kubernetes Cost Management and Analysis Guide</title>
      <dc:creator>Tony Chan</dc:creator>
      <pubDate>Wed, 22 Sep 2021 17:13:04 +0000</pubDate>
      <link>https://forem.com/cloudforecast/kubernetes-cost-management-and-analysis-guide-1e1b</link>
      <guid>https://forem.com/cloudforecast/kubernetes-cost-management-and-analysis-guide-1e1b</guid>
      <description>&lt;p&gt;The popularity of Kubernetes is constantly increasing, with more and more companies moving their workload to this way of orchestration. Some organizations exclusively develop new applications on Kubernetes, taking advantage of the architecture designs it enables. Other organizations move their current infrastructure to Kubernetes in a lift-and-shift manner. While some tools offer native solutions to cost analysis, these can quickly become too simple of an overview.&lt;/p&gt;

&lt;p&gt;Having your workload running in Kubernetes can bring lots of benefits, but costs become difficult to manage and monitor. In this article, we’ll examine the key reasons why cost can be so difficult to manage in Kubernetes. Plus, you’ll gain insight into how you can improve your cost management significantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Traditional vs. Kubernetes Resource Management
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FNgZvnRf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FNgZvnRf.png" alt="Architecture Overview"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before diving into cost management, it's important to first understand how the underlying resources differ. We’ll use the simple webshop above as an example. This webshop contains three distinct components: a frontend service, a cart service, and a product service. The frontend service is responsible for serving everything visually. The cart service is responsible for saving a customer's order in the database. Lastly, the product service is an API that other services, like the frontend, can query in order to get product information. An actual webshop will naturally be more complicated, but we'll stick with this as an example.&lt;/p&gt;

&lt;h3&gt;
  
  
  Traditional Architecture
&lt;/h3&gt;

&lt;p&gt;Traditionally you would spin up each service on their own pools of VMs, giving these pools the appropriate sizes. This makes it easy to see the cost of each service; you just need to look at the bill. For example, you can quickly figure out the product service is taking up a lot of resources, which you can then start looking into.&lt;/p&gt;

&lt;p&gt;Since traditional architecture has been around for so long, many tools—especially cloud providers—are used to reporting costs this way. This isn't the case for Kubernetes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes Architecture
&lt;/h3&gt;

&lt;p&gt;It's possible to re-create the traditional architecture in Kubernetes with a dedicated node pool for each service, but this isn’t the best practice. Ideally, you should use a single or a few pools to host your applications, meaning the three distinct services can run on the same set of nodes. Because of this, your bill can’t tell you what service is taking up what amount of resources.&lt;/p&gt;

&lt;p&gt;Kubernetes does provide you with standard metrics like CPU and RAM usage per application, but it’s still tough to decipher not only what is costing you a lot, but specifically how you can lower costs. Given Kubernetes’ various capabilities, many strategies can be implemented to lower costs.&lt;/p&gt;

&lt;p&gt;Strategies can involve rightsizing nodes, which isn't too different from a traditional architecture, but Kubernetes offers something new. Kubernetes lets you rightsize Pods. Using limits and requests, as well as specifying the right size of your nodes, you can make sure Pods are efficiently stacked on your nodes for optimal utilization.&lt;/p&gt;
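&lt;p&gt;A quick sketch shows why request sizing drives utilization. This toy calculation (my own illustration, not a scheduler implementation) counts how many Pods fit on a node given their resource requests; real schedulers also reserve capacity for system daemons.&lt;/p&gt;

```python
import math

# How many Pods with the given requests fit on one node?
# A Pod fits only while BOTH its CPU and memory requests fit,
# so over-requesting either resource strands the other.
def pods_per_node(node_cpu, node_mem_gib, pod_cpu_request, pod_mem_request_gib):
    return min(math.floor(node_cpu / pod_cpu_request),
               math.floor(node_mem_gib / pod_mem_request_gib))
```

&lt;p&gt;On a 4 vCPU / 16 GiB node, Pods requesting 0.5 vCPU and 2 GiB pack eight per node, while over-requesting 1 vCPU halves that to four, doubling the nodes you pay for.&lt;/p&gt;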

&lt;h3&gt;
  
  
  Comparing Architectures
&lt;/h3&gt;

&lt;p&gt;While Kubernetes offers many advantages over a traditional architecture, moving your workload to this orchestrator does present challenges. Kubernetes requires extra focus on cost; it won’t be possible to simply look at the bill and know what resources are costing a lot.&lt;/p&gt;

&lt;p&gt;With Kubernetes, you should look into using specialized tools for cost reporting. Many of these tools include recommendations on how to lower your cost, which is especially useful. Let's take a deeper dive into how you would manage cost in a Kubernetes setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Kubernetes Costs
&lt;/h2&gt;

&lt;p&gt;Managing costs in Kubernetes is not a one-and-done process. There are a number of pitfalls that, if overlooked, could result in businesses experiencing higher costs than what they may have predicted. Let’s talk about some areas where you should be on the lookout for opportunities to mitigate costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes Workload Considerations
&lt;/h3&gt;

&lt;p&gt;First, understand the nature of your application and how it translates to a cluster environment. Does it consist of long-lived services or batch operations that run when triggered? Are its components stateful (i.e., databases) or stateless?&lt;/p&gt;

&lt;p&gt;The answers to these questions should inform the decision-making process around what Kubernetes objects need to be created. Ensuring that your environment only runs the necessary resources is a key step to cost optimization.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kubernetes Workload Resource Management
&lt;/h3&gt;

&lt;p&gt;Once you have a clear picture of your resources, you can set some limits and configure features like Horizontal Pod Autoscaling (HPA) to scale pods up and down based on utilization. HPAs can be configured to operate based on metrics like CPU and memory out of the box, and can be additionally configured to operate on custom metrics. As you analyze your workload, you can further modify the settings that determine the behavior of your resources.&lt;/p&gt;
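&lt;p&gt;The heart of the HPA's scaling decision is a single documented formula: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A minimal sketch of just that formula (the real controller additionally applies a tolerance band and min/max replica bounds):&lt;/p&gt;

```python
import math

# Core of the Horizontal Pod Autoscaler algorithm:
# scale the replica count by the ratio of observed to target metric value,
# rounding up so the target is never exceeded per replica.
def desired_replicas(current_replicas, current_metric, target_metric):
    return math.ceil(current_replicas * current_metric / target_metric)
```

&lt;p&gt;For example, 3 replicas averaging 90% CPU against a 60% target scale up to 5; at 30% they scale down to 2.&lt;/p&gt;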

&lt;h3&gt;
  
  
  Kubernetes Infrastructure Resource Management
&lt;/h3&gt;

&lt;p&gt;Managing Kubernetes costs around infrastructure can be especially tricky as you try to figure out the right type of nodes to support your workloads. Your node types will depend on the applications, their resource requirements, and factors related to scaling.&lt;/p&gt;

&lt;p&gt;Operators can configure monitoring and alerts to keep track of how nodes are coping and what occurrences in your workload may be triggering scaling events. These kinds of activities can help organizations save costs related to overprovisioning by leveraging scaling features and tools like &lt;a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="noopener noreferrer"&gt;Cluster Autoscaler&lt;/a&gt; to scale nodes when necessary.&lt;/p&gt;

&lt;h3&gt;
  
  
  Leveraging Observability
&lt;/h3&gt;

&lt;p&gt;In the same vein as the previous point, your organization can make more informed decisions regarding your Kubernetes cluster size and node types by monitoring custom application metrics (e.g., requests per second) along with CPU, memory, network, and storage utilization by Pods.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimizing Kubernetes Cost with Monitoring
&lt;/h2&gt;

&lt;p&gt;One of the main ways to optimize the costs associated with running Kubernetes clusters is to set up the correct tooling for monitoring. You’ll also need to know how to react to the information you receive, and make sure it’s given to you effectively.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloudforecast.io/kubernetes-eks-and-ecs-cost-management.html" rel="noopener noreferrer"&gt;Barometer&lt;/a&gt; is coming soon to CloudForecast, which will be helpful for use cases like this.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monitoring Kubernetes Cluster Cost
&lt;/h3&gt;

&lt;p&gt;The first things you need to monitor in your Kubernetes Cluster are CPU and memory usage. These metrics give you a quick overview of how many resources your Kubernetes cluster is using. By making sure resources in your Kubernetes cluster are correctly tagged using labels or namespaces, you’ll quickly learn what services are costing the most in your organization.&lt;/p&gt;

&lt;p&gt;The easiest way to monitor these metrics is via automated reporting. &lt;a href="https://cloudforecast.io/kubernetes-eks-and-ecs-cost-management.html" rel="noopener noreferrer"&gt;CloudForecast’s upcoming tool&lt;/a&gt; will be able to consolidate these reports and deliver them to your team by email or Slack. This ensures each team is aware of how their services are performing, and whether they're using up too many costly resources.&lt;/p&gt;

&lt;p&gt;Setting up a general overview is highly recommended. Additionally, you should also ensure you get notified if something out of the ordinary happens. For example, you’ll want to be notified if the product service suddenly starts costing a lot more; this allows you to troubleshoot why and work on fixes.&lt;/p&gt;

&lt;p&gt;Kubernetes comes with various metrics you can use to determine the cost of a specific service. Using the &lt;code&gt;/metrics&lt;/code&gt; endpoint provided by the Kubernetes API, you can get a view into &lt;code&gt;pod_cpu_utilization&lt;/code&gt; and &lt;code&gt;pod_memory_utilization&lt;/code&gt;. With these metrics, it becomes easier to see which workloads are drawing which costs. Tools like CloudForecast’s Barometer use these metrics to calculate how many dollars every pod is spending. Having this overview and getting a baseline cost of your Kubernetes cluster will help you know when costs are rising too rapidly, and exactly where it’s happening. Knowing how cAdvisor works with Prometheus, and the metrics they collectively expose, is incredibly valuable when you want to examine your clusters.&lt;/p&gt;
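&lt;p&gt;To make such a metrics endpoint concrete, here is a minimal Python sketch that parses one line of the Prometheus text exposition format that these endpoints emit. The metric name and label values are illustrative, and the parser is deliberately naive (real parsers handle escaping and commas inside label values):&lt;/p&gt;

```python
import re

# One illustrative line in Prometheus text exposition format.
SAMPLE = 'pod_cpu_utilization{pod="product-service",namespace="shop"} 0.42'

# metric_name { label="value", ... } numeric_value
LINE_RE = re.compile(r'^(\w+)\{(.*)\}\s+([0-9.eE+-]+)$')

def parse_metric(line):
    name, raw_labels, value = LINE_RE.match(line).groups()
    labels = {}
    for pair in raw_labels.split(","):  # naive: breaks on commas inside values
        key, val = pair.split("=", 1)
        labels[key] = val.strip('"')
    return name, labels, float(value)
```

&lt;p&gt;Labels like &lt;code&gt;pod&lt;/code&gt; and &lt;code&gt;namespace&lt;/code&gt; are exactly what lets a cost tool attribute each sample back to a team or service.&lt;/p&gt;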

&lt;p&gt;While there are many metrics that can be analyzed, RAM and CPU are typically the ones you want to focus on, as these are the ones that drive your provider to allocate more resources. You can think of RAM and CPU metrics as the symptoms of your cost. With a proper overview they will allow you to know what workloads are costing you more than normal, and from there you start drilling into the service to figure out why it’s happening.&lt;/p&gt;

&lt;h3&gt;
  
  
  Acting on Monitoring Data
&lt;/h3&gt;

&lt;p&gt;Once you've been notified of irregularities in your Kubernetes cluster, you need to act. There are many valid strategies for lowering k8s cluster cost. As mentioned earlier, a good first step in Kubernetes is to rightsize your nodes and Pods so they run efficiently. Whatever steps you take to optimize cost, doing it manually is tough.&lt;/p&gt;

&lt;p&gt;Tools can automatically suggest why your cost is high and how to reduce it. This allows you not only to implement cost optimizations quickly, but also to uncover solutions that otherwise wouldn't have come to mind.&lt;/p&gt;

&lt;h3&gt;
  
  
  What to Monitor
&lt;/h3&gt;

&lt;p&gt;Tools can help a lot, but they’re wasted without a good foundation. To set up a good foundation, you should determine a set of Key Performance Indicators (KPIs) to monitor. A great KPI example is the number of untagged resources. Having your resources tagged allows your tool reports to be more precise, delivering better optimizations.&lt;/p&gt;

&lt;p&gt;You could also monitor the total cost of your untagged resources. This can act as motivation for getting your resources tagged, and remind your team to maintain a good baseline approach when setting up new resources. Tracking your KPIs before and after the introduction of a tool is a great way to determine how much it actually helps. In any case, determining KPIs will make sure you’re on top of what's happening in your Kubernetes cluster.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Develop a Unit Cost Calculator
&lt;/h2&gt;

&lt;p&gt;Crucial to understanding your cost is knowing how to use the &lt;a href="https://calculator.aws/" rel="noopener noreferrer"&gt;AWS Pricing Calculator&lt;/a&gt;. This helps you compare costs associated with running a self-hosted Kubernetes cluster versus with an Amazon EKS cluster.&lt;/p&gt;

&lt;p&gt;The CPU (vCPUs) and Memory (GiB) specified in the following example are just for demonstrative purposes and will vary depending on workload requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS EKS Cluster Cost and Pricing Estimation
&lt;/h3&gt;

&lt;p&gt;The following calculations are for a Highly Available (HA) Kubernetes cluster with a control plane managed by AWS (EKS) and three worker nodes with 4 vCPUs and 16 GiB of memory each. The instance type used in this case is a t4g.xlarge Reserved EC2 instance (1 year period). This instance type is automatically generated as a recommendation based on the CPU and memory requirements that are specified.&lt;/p&gt;

&lt;h4&gt;
  
  
  Unit Calculations
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Instance Savings Plans rate for t4g.xlarge in the EU (Ireland) for 1 Year term and No Upfront is 0.0929 USD&lt;/li&gt;
&lt;li&gt;Hours in the commitment: 365 days x 24 hours x 1 year = 8760.0 hours&lt;/li&gt;
&lt;li&gt;Total Commitment: 0.0929 USD x 8760 hours = 813.8 USD&lt;/li&gt;
&lt;li&gt;Upfront: No Upfront (0% of 813.804) = 0 USD&lt;/li&gt;
&lt;li&gt;Hourly cost for EC2 Instance Savings Plans = (Total Commitment - Upfront cost)/Hours in the term: (813.804 - 0)/8760 = 0.0929 USD&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Please note that you will pay an hourly commitment for the Savings Plans and your usage will be accrued at a discounted rate against this commitment.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Pricing Calculations
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;1 Cluster x 0.10 USD per hour x 730 hours per month = 73 USD&lt;/li&gt;
&lt;li&gt;[worker nodes] 3 instances x 0.0929 USD x 730 hours in month = 203.45 USD (monthly instance savings cost)&lt;/li&gt;
&lt;li&gt;30 GB x 0.11 USD x 3 instances = 9.90 USD (EBS Storage Cost)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Monthly Cost: 286.35 USD&lt;br&gt;
Annual Cost: 3,436.20 USD&lt;/p&gt;
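&lt;p&gt;The EKS estimate above can be reproduced directly. The rates are the example's EU (Ireland) figures; your region's prices will differ.&lt;/p&gt;

```python
HOURS_PER_MONTH = 730

# EKS managed control plane: flat $0.10/hour per cluster.
eks_control_plane = round(1 * 0.10 * HOURS_PER_MONTH, 2)   # 73.00 USD
# Three t4g.xlarge workers at the savings-plan rate of $0.0929/hour.
worker_nodes = round(3 * 0.0929 * HOURS_PER_MONTH, 2)      # 203.45 USD
# 30 GB of EBS per instance at $0.11/GB-month.
ebs_storage = round(30 * 0.11 * 3, 2)                      # 9.90 USD

monthly = round(eks_control_plane + worker_nodes + ebs_storage, 2)  # 286.35 USD
annual = round(monthly * 12, 2)                                     # 3436.20 USD
```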

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F6ReUhLG.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2F6ReUhLG.png" alt="HA EKS with Reserved Instances"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Self-Hosted Kubernetes Cluster Pricing and Cost Estimation
&lt;/h3&gt;

&lt;p&gt;The following calculations are for a custom Highly Available (HA) Kubernetes cluster that is self hosted in AWS, and also consists of three worker nodes with 4 vCPUs and 16 GiB of memory each. Similar to the previous analysis of EKS cluster cost estimations, this analysis will use the same instance type for the same reasons detailed above.&lt;/p&gt;

&lt;h4&gt;
  
  
  Unit Conversions
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Instance Savings Plans rate for t4g.xlarge in the EU (Ireland) for 1 Year term and No Upfront is 0.0929 USD&lt;/li&gt;
&lt;li&gt;Hours in the commitment: 365 days * 24 hours * 1 year = 8760.0 hours&lt;/li&gt;
&lt;li&gt;Total Commitment: 0.0929 USD * 8760 hours = 813.8 USD&lt;/li&gt;
&lt;li&gt;Upfront: No Upfront (0% of 813.804) = 0 USD&lt;/li&gt;
&lt;li&gt;Hourly cost for EC2 Instance Savings Plans = (Total Commitment - Upfront cost)/Hours in the term: (813.804 - 0)/8760 = 0.0929 USD&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Please note that you will pay an hourly commitment for Savings Plans and your usage will be accrued at a discounted rate against this commitment.&lt;/em&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Pricing Calculations
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;[control-plane nodes] 3 instances x 0.0929 USD x 730 hours in month = 203.45 USD (monthly instance savings cost)&lt;/li&gt;
&lt;li&gt;[worker nodes] 3 instances x 0.0929 USD x 730 hours in month = 203.45 USD (monthly instance savings cost)&lt;/li&gt;
&lt;li&gt;30 GB x 0.11 USD x 6 instances (control plane and workers) = 19.80 USD (EBS Storage Cost)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Monthly Cost: 426.70 USD&lt;br&gt;
Annual Cost: 5,120.40 USD&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FzbTlSQj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fi.imgur.com%2FzbTlSQj.png" alt="HA Custom Control Plane with Saving Plans"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By now you've learned how Kubernetes architecture differs from traditional architecture. You've learned what challenges arise once you start to manage costs in Kubernetes, and how to keep them under control. Features like labeling and namespacing can have a great impact on the traceability of your cost, allowing you to reap the full benefits of a Kubernetes architecture. Also, you’ve learned how using the AWS Pricing Calculator can help you estimate the costs associated with running your workloads on a custom Kubernetes cluster compared to running an EKS cluster.&lt;/p&gt;

&lt;p&gt;Using a tool like CloudForecast’s &lt;a href="https://cloudforecast.io/kubernetes-eks-and-ecs-cost-management.html" rel="noopener noreferrer"&gt;Barometer&lt;/a&gt; can greatly improve the tracking of cost in your cluster. Barometer not only offers you an effective general overview, it also gives you actionable cost optimization insights.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This article was originally published on: &lt;a href="https://www.cloudforecast.io/blog/kubernetes-cost-management-and-analysis/" rel="noopener noreferrer"&gt;https://www.cloudforecast.io/blog/kubernetes-cost-management-and-analysis/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>k8s</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>AWS RDS Pricing and Optimization Guide</title>
      <dc:creator>Tony Chan</dc:creator>
      <pubDate>Tue, 17 Nov 2020 16:33:40 +0000</pubDate>
      <link>https://forem.com/cloudforecast/aws-rds-pricing-and-optimization-guide-46g9</link>
      <guid>https://forem.com/cloudforecast/aws-rds-pricing-and-optimization-guide-46g9</guid>
      <description>&lt;p&gt;Amazon Web Services makes getting your data into their &lt;a href="https://aws.amazon.com/rds/" rel="noopener noreferrer"&gt;Relational Database Service&lt;/a&gt;(RDS) relatively easy. Import costs are free, and you can store up to &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.Limits" rel="noopener noreferrer"&gt;100 terabytes across all your instances&lt;/a&gt;. AWS RDS hosts your relational databases in the cloud, and their engineers handle patching, monitoring, availability, and some security concerns.&lt;/p&gt;

&lt;p&gt;These factors make getting started with AWS RDS easy, but understanding and controlling your costs is another matter entirely. In this article, you’ll learn about the following aspects of AWS RDS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
AWS RDS Cost and Pricing

&lt;ul&gt;
&lt;li&gt;AWS RDS Database Engine&lt;/li&gt;
&lt;li&gt;RDS Instance Sizes&lt;/li&gt;
&lt;li&gt;Reserved Instances&lt;/li&gt;
&lt;li&gt;RDS Storage: Aurora and Autoscaling&lt;/li&gt;
&lt;li&gt;RDS Backups&lt;/li&gt;
&lt;li&gt;Regions&lt;/li&gt;
&lt;li&gt;Multi-AZ Deployments&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

RDS Cost Monitoring

&lt;ul&gt;
&lt;li&gt;AWS Cost Explorer&lt;/li&gt;
&lt;li&gt;RDS Management Console and Enhanced Monitoring&lt;/li&gt;
&lt;li&gt;CloudForecast&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

AWS RDS Cost Optimization

&lt;ul&gt;
&lt;li&gt;Right Sizing your Instances&lt;/li&gt;
&lt;li&gt;Database Hygiene&lt;/li&gt;
&lt;li&gt;RDS IOPS&lt;/li&gt;
&lt;li&gt;RDS CloudWatch Metrics&lt;/li&gt;
&lt;li&gt;RDS Data Transfer Cost&lt;/li&gt;
&lt;li&gt;RDS Snapshots&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;




&lt;h2&gt;
  
  
  AWS RDS Cost and Pricing
&lt;/h2&gt;

&lt;p&gt;Instance usage, storage, I/O, backups, and data transfers drive the bulk of your AWS RDS costs. Instance usage and storage are unavoidable, but generally, you should minimize them while adequately addressing your needs. AWS RDS offers some I/O and backup capability bundled into the cost of storage, but you might need more. Moving data into RDS from the internet is free, but moving it out of RDS can get expensive.&lt;/p&gt;

&lt;p&gt;In this section, I’ll dive deeper into each of AWS RDS’s pricing factors to help you understand how your usage might affect your monthly bill.&lt;/p&gt;

&lt;h3&gt;
  
  
  RDS Database Engine
&lt;/h3&gt;

&lt;p&gt;Amazon currently offers six database engines: &lt;a href="https://aws.amazon.com/rds/aurora" rel="noopener noreferrer"&gt;Amazon Aurora&lt;/a&gt;, &lt;a href="https://aws.amazon.com/rds/postgresql/" rel="noopener noreferrer"&gt;PostgreSQL&lt;/a&gt;, &lt;a href="https://aws.amazon.com/rds/mysql/" rel="noopener noreferrer"&gt;MySQL&lt;/a&gt;, &lt;a href="https://aws.amazon.com/rds/mariadb/" rel="noopener noreferrer"&gt;MariaDB&lt;/a&gt;, &lt;a href="https://aws.amazon.com/rds/oracle/" rel="noopener noreferrer"&gt;Oracle&lt;/a&gt;, and &lt;a href="https://aws.amazon.com/rds/sqlserver/" rel="noopener noreferrer"&gt;Microsoft SQL Server&lt;/a&gt;. Normally you won’t be able to change your database engine, but you can choose to optimize for memory, performance, or I/O.&lt;/p&gt;

&lt;p&gt;The three open-source databases (Postgres, MySQL, and MariaDB) are similar in price. Depending on the size, PostgreSQL instances are five to ten percent more expensive per hour, but PostgreSQL, MySQL, and MariaDB share pricing for storage, provisioned I/O, and data transfer.&lt;/p&gt;

&lt;p&gt;AWS Aurora is Amazon's proprietary database, so it gets special treatment. AWS offers a &lt;a href="https://aws.amazon.com/rds/aurora/serverless/" rel="noopener noreferrer"&gt;serverless option&lt;/a&gt;, making it ideal for applications that don’t need to be on all the time, like test environments. Aurora also has a &lt;a href="https://aws.amazon.com/rds/aurora/global-database/" rel="noopener noreferrer"&gt;multi-zone backup system&lt;/a&gt; that charges per million replicated I/O operations. Storage per gibibyte (GiB) is a few cents more expensive, but if you're dealing with intermittent usage or need fast failovers and many read replicas, Aurora can save money over implementing these features on other engines.&lt;/p&gt;

&lt;p&gt;Oracle and SQL Server aren't open-source or owned by Amazon. To accommodate licensing, hourly instances can cost nearly twice as much. You can self-license with Oracle, which brings the cost in line with open-source options. Other fees, like storage and data transfer, match their open-source counterparts.&lt;/p&gt;

&lt;p&gt;While this guide isn’t intended to help you choose the best database engine, it is important to note that pricing varies based on the engine you choose.&lt;/p&gt;

&lt;h3&gt;
  
  
  RDS Instance Sizes
&lt;/h3&gt;

&lt;p&gt;Once you select an engine, you have to select an RDS &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html" rel="noopener noreferrer"&gt;instance size&lt;/a&gt; with the appropriate computational (vCPU), network (Mbps), and memory capacity (GiB RAM). RDS offers instances ranging from &lt;code&gt;db.t3.micro&lt;/code&gt; (2 vCPUS, 1 GiB RAM, 2085 Mbps) to &lt;code&gt;db.m5.24xlarge&lt;/code&gt; (96 vCPUS, 384 GiB RAM, 19,000 Mbps).&lt;/p&gt;

&lt;p&gt;Selecting the right RDS instance size can be challenging. To estimate the RDS instance size you’ll need, estimate or track the amount of data your queries need (called your &lt;a href="http://www.tocker.ca/2013/05/31/estimating-mysqls-working-set-with-information_schema.html" rel="noopener noreferrer"&gt;working set&lt;/a&gt;), then select an instance that can fit your working set into &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html#CHAP_BestPractices.Performance.RAM" rel="noopener noreferrer"&gt;memory&lt;/a&gt;. I’ll touch on RDS monitoring and right-sizing your RDS instances later in this guide.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reserved Instances
&lt;/h3&gt;

&lt;p&gt;Without additional configuration, RDS instances are created on-demand. These instances are billed in one-second increments from the moment the instance starts to its termination. You can stop, start, or change an on-demand &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html" rel="noopener noreferrer"&gt;instance size&lt;/a&gt; at any time.&lt;/p&gt;

&lt;p&gt;The alternative to on-demand pricing is &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-optimization/reserved-instances/" rel="noopener noreferrer"&gt;Reserved Instances&lt;/a&gt; for RDS. You commit to lease an RDS instance for a set period (1 or 3+ years) in exchange for discounts up to 60%. AWS offers sizing flexibility for all Reserved Instance engines except SQL Server and License Included Oracle, allowing administrators to freely change instance size within the same family. If you're able to commit to RDS for a year or three and have monitored your requirements enough to develop a solid performance baseline, you can save money by trading away the flexibility to turn off or downsize your databases.&lt;/p&gt;
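
&lt;p&gt;As a back-of-the-envelope sketch of that trade-off, the snippet below annualizes the saving from a Reserved Instance. The hourly rate and discount used are illustrative placeholders, not quoted AWS prices:&lt;/p&gt;

```python
def reserved_savings(on_demand_hourly: float, discount: float, years: int) -> float:
    """Rough saving over the term from a Reserved Instance, assuming the
    discount (e.g. 0.40 for 40%) applies to the on-demand hourly rate and
    the instance would otherwise run 24/7 for the whole term."""
    hours = 8760 * years  # hours in a (non-leap) year
    return on_demand_hourly * discount * hours

# Hypothetical $0.17/hr instance with a 40% one-year discount:
print(round(reserved_savings(0.17, 0.40, 1), 2))  # 595.68
```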

&lt;h3&gt;
  
  
  RDS Storage: Aurora and Autoscaling
&lt;/h3&gt;

&lt;p&gt;For most engines, you buy storage per GiB in advance. Aurora is the exception: you only pay for what you use. It's important to accurately predict your monthly storage needs as you &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/rds-db-storage-size/" rel="noopener noreferrer"&gt;cannot reduce storage&lt;/a&gt; on an instance (&lt;a href="https://aws.amazon.com/about-aws/whats-new/2020/10/amazon-aurora-enables-dynamic-resizing-database-storage-space/" rel="noopener noreferrer"&gt;except Aurora&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;AWS can &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling" rel="noopener noreferrer"&gt;auto-scale&lt;/a&gt; your storage when an instance has under 10% space remaining for more than 5 minutes. This option has the benefit of keeping storage costs low, but can surprise you if something unexpected happens. To protect against auto-scaling to 65,536 GiB, set a maximum storage threshold for your instance.&lt;/p&gt;
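
&lt;p&gt;The trigger is easy to state in code. This sketch only mirrors the rule described above (free space under 10% of allocated storage for more than five minutes); it is not AWS's implementation:&lt;/p&gt;

```python
def should_autoscale(free_gib: float, allocated_gib: float, minutes_below: float) -> bool:
    """True when RDS storage auto-scaling would kick in: free space has
    stayed under 10% of allocated storage for more than five minutes."""
    return free_gib < 0.10 * allocated_gib and minutes_below > 5

print(should_autoscale(8, 100, 6))   # True: 8% free for 6 minutes
print(should_autoscale(15, 100, 6))  # False: still above the 10% floor
```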

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cloudforecast.io%2Fblog%2Fassets%2Fmedia%2Fautoscaling.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cloudforecast.io%2Fblog%2Fassets%2Fmedia%2Fautoscaling.png" title="RDS Aurora and Autoscaling Thresholds" alt="Enabling a storage threshold for auto-scaling"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you can accurately predict your storage needs, manually provisioning is the cheapest option. If you're facing unpredictability or unused storage, consider auto-provisioning and focus on maintaining a reasonable storage maximum.&lt;/p&gt;

&lt;h3&gt;
  
  
  RDS Backups
&lt;/h3&gt;

&lt;p&gt;AWS backs up 100% of the storage you've purchased in any zone for free. If you buy 20 GiB of storage across two instances, it includes 20 GiB of backup space.&lt;/p&gt;

&lt;p&gt;If you need more space for backups, you pay per GiB at a slightly lower rate than regular storage costs. RDS automatically backs up each storage volume every day. These backups are stored according to the backup retention period. Automated backups will not occur if the DB's state is not &lt;code&gt;AVAILABLE&lt;/code&gt; (for example, if the state is &lt;code&gt;STORAGE_FULL&lt;/code&gt;). Users can also create manual backups. These never expire and count against your backup storage total.&lt;/p&gt;
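
&lt;p&gt;Since only backup space beyond your total provisioned storage is billed, the billable amount is a simple difference. A minimal sketch of that rule:&lt;/p&gt;

```python
def billable_backup_gib(provisioned_gib: float, backup_gib: float) -> float:
    """Backup space up to total provisioned storage is free; only the
    excess is billed, at a slightly lower per-GiB rate than storage."""
    return max(0.0, backup_gib - provisioned_gib)

print(billable_backup_gib(20, 35))  # 15.0 GiB billed
print(billable_backup_gib(20, 12))  # 0.0 -- fully covered by the free allowance
```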

&lt;h3&gt;
  
  
  Regions
&lt;/h3&gt;

&lt;p&gt;As with most AWS services, RDS costs are specific to a &lt;a href="https://aws.amazon.com/about-aws/global-infrastructure/regions_az/" rel="noopener noreferrer"&gt;region&lt;/a&gt;. Choose your region carefully because the most expensive regions double the hourly cost of instances, add a few cents per GiB to storage, and can quintuple inter-zone data transfer pricing.&lt;/p&gt;

&lt;p&gt;On the other hand, if your database is located further from your application servers, you’ll add latency to every database call. If this latency degrades user experience, you probably don’t want to use a distant, slightly cheaper region.&lt;/p&gt;

&lt;h3&gt;
  
  
  Multi-AZ Deployments
&lt;/h3&gt;

&lt;p&gt;If you need availability when an AWS regional data center encounters trouble, you can enable &lt;a href="https://aws.amazon.com/rds/features/multi-az/" rel="noopener noreferrer"&gt;Multi-AZ deployments&lt;/a&gt;. This creates a backup database instance and replicates your data to a second AWS data center. Be aware that this &lt;strong&gt;doubles your monthly instance and storage costs&lt;/strong&gt; but enhances the reliability of critical services. If you need a Multi-AZ deployment, focus on reducing your storage needs and instance size; these gains are won twice.&lt;/p&gt;




&lt;h2&gt;
  
  
  RDS Cost Monitoring
&lt;/h2&gt;

&lt;p&gt;Once you understand the many options RDS offers and set up an instance, there are a few tools that will help you audit your current usage and predict future requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Cost Explorer
&lt;/h3&gt;

&lt;p&gt;The best way to audit your RDS spending is the &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/" rel="noopener noreferrer"&gt;AWS Cost Explorer&lt;/a&gt;. Activating and examining your daily or monthly spend is an excellent way to visualize your organization's priorities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cloudforecast.io%2Fblog%2Fassets%2Fmedia%2Frds_costexplorer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cloudforecast.io%2Fblog%2Fassets%2Fmedia%2Frds_costexplorer.png" title="AWS Cost Explorer" alt="Using the AWS Cost Explorer to see your RDS spend"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.cloudforecast.io/blog/aws-tagging-best-practices/" rel="noopener noreferrer"&gt;Tagging your resources&lt;/a&gt; helps you understand which projects and teams are using which databases. Cost Explorer also offers &lt;a href="https://docs.aws.amazon.com/savingsplans/latest/userguide/sp-recommendations.html" rel="noopener noreferrer"&gt;suggestions&lt;/a&gt; for using reserved instances based on your past usage.&lt;/p&gt;

&lt;h3&gt;
  
  
  RDS Management Console and Enhanced Monitoring
&lt;/h3&gt;

&lt;p&gt;AWS provides a “Monitoring” tab in the RDS Console that displays free-tier CloudWatch metrics like the number of connections and CPU utilization. Keeping an eye on your usage in the console can help you prepare to right-size your storage or purchase reserved instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cloudforecast.io%2Fblog%2Fassets%2Fmedia%2Frds_cloudwatch.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cloudforecast.io%2Fblog%2Fassets%2Fmedia%2Frds_cloudwatch.png" title="CloudWatch" alt="Using the RDS Console to see your RDS utilization"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS gives you the option to activate additional monitoring services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/rds/performance-insights/" rel="noopener noreferrer"&gt;&lt;strong&gt;Performance Insights&lt;/strong&gt;&lt;/a&gt; gathers data about the database load. This tool has its own &lt;a href="https://aws.amazon.com/rds/performance-insights/pricing" rel="noopener noreferrer"&gt;pricing model&lt;/a&gt; with a free tier that includes 7-day retention.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html" rel="noopener noreferrer"&gt;&lt;strong&gt;Enhanced Monitoring&lt;/strong&gt;&lt;/a&gt; is stored and priced as &lt;a href="https://aws.amazon.com/cloudwatch/pricing/" rel="noopener noreferrer"&gt;CloudWatch logs&lt;/a&gt;. It reports metrics from a user agent instead of the hypervisor, allowing you to examine running processes and the OS, which is useful for examining the resource usage of individual queries.&lt;/p&gt;

&lt;p&gt;Finally, you can enable and access &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html" rel="noopener noreferrer"&gt;database logs&lt;/a&gt; directly for the price of storing the files.&lt;/p&gt;

&lt;h3&gt;
  
  
  CloudForecast
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.cloudforecast.io/" rel="noopener noreferrer"&gt;CloudForecast&lt;/a&gt;supplements AWS’s Cost Explorer through proactive monitoring and optimization reports that keeps your RDS cost in check.&lt;/p&gt;

&lt;p&gt;Through the &lt;a href="https://cloudforecast.io/aws-daily-cost-report-tool.html" rel="noopener noreferrer"&gt;daily cost reports&lt;/a&gt;, you'll receive a daily report via email or Slack that details your RDS cost in relation to your overall spend and alerts you to any cost anomalies with RDS.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.cloudforecast.io/aws-tagging-compliance-report.html" rel="noopener noreferrer"&gt;tagging compliance report&lt;/a&gt; helps make sure your RDS instances are properly tagged and lets you know exactly which RDS resources are out of compliance.&lt;/p&gt;

&lt;p&gt;Finally, the &lt;a href="https://www.cloudforecast.io/kb/docs/general-info/whatis/#aws-cost-optimization" rel="noopener noreferrer"&gt;ZeroWaste Health Report&lt;/a&gt; (beta) lets you know about possible inefficiencies by identifying all your over-provisioned and unused RDS instances in a single report.&lt;/p&gt;




&lt;h2&gt;
  
  
  RDS Cost Optimization
&lt;/h2&gt;

&lt;p&gt;Armed with insights into your requirements and RDS's abilities, it's time to put cost-saving measures and AWS &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html" rel="noopener noreferrer"&gt;best practices&lt;/a&gt; to use. In the rest of this guide, I’ll offer some strategies for decreasing your RDS costs using the insights you gathered above.&lt;/p&gt;

&lt;h3&gt;
  
  
  Right Sizing your Instances
&lt;/h3&gt;

&lt;p&gt;In short, turn off anything that's not being used.&lt;/p&gt;

&lt;p&gt;Every month, you pay for instances and storage and the infrastructure attached to them. You can check each database’s utilization using the connections metric in the RDS Console. On-demand RDS instances can be stopped for up to 7 days. When stopped, you aren't charged for DB Instance hours, but you are charged for storage. You can use an AWS Lambda &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents-tutorial.html" rel="noopener noreferrer"&gt;scheduled event&lt;/a&gt; and DB Instance API calls to programmatically stop and start instances.&lt;/p&gt;

&lt;p&gt;To practice &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-optimization/right-sizing/" rel="noopener noreferrer"&gt;right-sizing&lt;/a&gt;, act on your monitoring and purchase the machine with the minimum requirements to meet your needs. Multiple RDS instances can also be consolidated into a single instance to minimize costs. This is especially helpful for development environments where a low number of users can access several databases running on a single instance. &lt;/p&gt;

&lt;h3&gt;
  
  
  Database Hygiene
&lt;/h3&gt;

&lt;p&gt;Indexing and database sanitation are both important for controlling costs. Proper indexing is important for performance and I/O, as it allows your instance size to remain small and minimizes bottlenecks.&lt;/p&gt;

&lt;p&gt;Removing unused tables, columns, and indexes directly impacts your storage costs. &lt;a href="https://aws.amazon.com/caching/database-caching/" rel="noopener noreferrer"&gt;Caching&lt;/a&gt; and batching statements can improve performance. You can use &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html" rel="noopener noreferrer"&gt;Enhanced Monitoring&lt;/a&gt; and database-specific tools like the &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.MySQL.html#USER_LogAccess.MySQL.Generallog" rel="noopener noreferrer"&gt;MySQL slow query log&lt;/a&gt; to track and examine outliers that take lots of resources, then optimize them.&lt;/p&gt;

&lt;h3&gt;
  
  
  RDS IOPS
&lt;/h3&gt;

&lt;p&gt;Input/Output (I/O) operations are extremely important to databases. You use an input operation to write to the database and an output operation to read data. You can monitor read and write I/O per second (IOPS) from the RDS Console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cloudforecast.io%2Fblog%2Fassets%2Fmedia%2Frdsiops-1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.cloudforecast.io%2Fblog%2Fassets%2Fmedia%2Frdsiops-1.png" alt="I/O Management in RDS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#Concepts.Storage.GeneralSSD" rel="noopener noreferrer"&gt;General Purpose SSD&lt;/a&gt; instances start fully stocked with 5.4 million IOPS credits, enough to perform 3,000 operations a second for 30 minutes. They also generate I/O credits at a rate of &lt;em&gt;3 IOPS per GiB of storage&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In the first months of an RDS transition, keep &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MonitoringOverview.html#USER_Monitoring" rel="noopener noreferrer"&gt;an eye&lt;/a&gt; on your IOPS credit balance (also reported in the RDS console) so you don't run out.&lt;/p&gt;
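
&lt;p&gt;Those burst numbers can be turned into a quick estimate of how long a workload can run above the baseline before the credit balance drains. This is a simplified reading of the published credit model, for illustration only:&lt;/p&gt;

```python
def gp2_burst_minutes(allocated_gib: int, workload_iops: int) -> float:
    """Minutes a steady workload can burst before draining the initial
    5.4 million I/O credits. General Purpose volumes earn back 3 IOPS
    per GiB of storage, so only the excess above baseline drains credits."""
    baseline_iops = 3 * allocated_gib
    if workload_iops <= baseline_iops:
        return float("inf")  # balance never drains at or below baseline
    drained_per_second = workload_iops - baseline_iops
    return 5_400_000 / drained_per_second / 60

# 100 GiB volume (300 IOPS baseline) running steadily at 3,000 IOPS:
print(round(gp2_burst_minutes(100, 3000)))  # 33 (minutes)
```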

&lt;p&gt;The options for directly increasing I/O are purchasing more storage or switching to the &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#USER_PIOPS" rel="noopener noreferrer"&gt;Provisioned IOPS storage type&lt;/a&gt;, where you pay a fixed price per thousand IOPS. Indexing and increasing the instance size can also increase I/O speed as each item in the queue is handled more efficiently.&lt;/p&gt;

&lt;h3&gt;
  
  
  RDS CloudWatch Metrics
&lt;/h3&gt;

&lt;p&gt;CloudWatch is an excellent monitoring tool, but it incurs its own &lt;a href="https://aws.amazon.com/cloudwatch/pricing/" rel="noopener noreferrer"&gt;costs&lt;/a&gt;. It’s important to consider your monitoring frequency. Going from 5-minute monitoring to 1-minute monitoring will dramatically increase the size of your logs. If your CloudWatch budget is slowly expanding, consider modifying the &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#SettingLogRetention" rel="noopener noreferrer"&gt;log data retention period&lt;/a&gt; and auditing your &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html" rel="noopener noreferrer"&gt;alarms&lt;/a&gt; to be sure they're relevant.&lt;/p&gt;

&lt;h3&gt;
  
  
  RDS Data Transfer Cost
&lt;/h3&gt;

&lt;p&gt;Importing data to RDS from the internet &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/rds-import-data/" rel="noopener noreferrer"&gt;is free&lt;/a&gt;, but you often want to use your data elsewhere. Data transfer out to the internet costs between $0.09 and $0.13 per GiB. Data transfer prices between zones depend heavily on the zones involved, ranging from $0.02 to $0.13 per GiB.&lt;/p&gt;

&lt;p&gt;Be aware that you're actually charged twice: once when the data leaves a zone and again when it enters a target zone. To reduce these costs, minimize the amount of data you're sending. Limit queries, and don't re-run reports. Transferring data between Amazon RDS and EC2 Instances in the same Availability Zone is free, so you can sidestep some data fees by consolidating services into a single zone.&lt;/p&gt;
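
&lt;p&gt;Because cross-zone traffic is billed on both sides, the effective rate is the sum of the outbound and inbound charges. A minimal sketch, with hypothetical per-GiB rates:&lt;/p&gt;

```python
def inter_zone_transfer_cost(gib: float, out_rate: float, in_rate: float) -> float:
    """Cross-zone data is charged twice: once leaving the source zone
    and once entering the target zone. Rates are USD per GiB."""
    return gib * (out_rate + in_rate)

# 500 GiB moved at an illustrative $0.01/GiB in each direction:
print(inter_zone_transfer_cost(500, 0.01, 0.01))  # 10.0
```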

&lt;h3&gt;
  
  
  RDS Snapshots
&lt;/h3&gt;

&lt;p&gt;Database snapshots contribute to your storage costs as well. You can reduce backup storage by reducing your backup retention period and deleting manually created snapshots, which are never automatically removed. If you need to store snapshots long-term, move them somewhere cheaper, &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ExportSnapshot.html" rel="noopener noreferrer"&gt;like an S3 bucket&lt;/a&gt;. For MySQL, storing snapshots in S3 is about one-fifth as expensive as keeping them in RDS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;AWS RDS is extremely powerful and deeply customizable, but the complex pricing model means that your monthly spend can grow quickly and unexpectedly.&lt;/p&gt;

&lt;p&gt;RDS optimization starts with understanding your current requirements using monitoring tools like CostExplorer and CloudWatch. Once you know the resources you need, you can select the best region, instance engine, and size. Then, keep an eye on your IOPS credits, data transfers between zones and out to the internet, and backup requirements. Database hygiene matters even more when you’re paying per GiB, so optimize queries and use efficient indexes.&lt;/p&gt;

&lt;p&gt;If you’re struggling to understand your AWS spend, &lt;a href="https://www.cloudforecast.io/" rel="noopener noreferrer"&gt;CloudForecast&lt;/a&gt; can help. Reach out to our CTO, &lt;a href="mailto:francois@cloudforecast.io"&gt;francois@cloudforecast.io&lt;/a&gt;, if you’d like help tagging your resources or implementing a long-term cost-reduction strategy.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This was originally posted on our blog on 11/17/2020: &lt;a href="https://www.cloudforecast.io/blog/aws-rds-pricing-and-optimization/" rel="noopener noreferrer"&gt;https://www.cloudforecast.io/blog/aws-rds-pricing-and-optimization/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>AWS Tagging Best Practices Guide: Part 1 of 3</title>
      <dc:creator>Tony Chan</dc:creator>
      <pubDate>Tue, 18 Aug 2020 15:19:52 +0000</pubDate>
      <link>https://forem.com/cloudforecast/aws-tagging-best-practices-guide-part-1-of-3-3f85</link>
      <guid>https://forem.com/cloudforecast/aws-tagging-best-practices-guide-part-1-of-3-3f85</guid>
      <description>&lt;p&gt;&lt;em&gt;This was originally posted on our blog on August 12, 2020: &lt;a href="https://www.cloudforecast.io/blog/aws-tagging-best-practices/"&gt;https://www.cloudforecast.io/blog/aws-tagging-best-practices/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Part 1: An Introduction to AWS Tagging Strategies
&lt;/h1&gt;

&lt;p&gt;If you've worked in Amazon Web Services for long, you've probably seen or used &lt;a href="https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html"&gt;AWS cost allocation tags&lt;/a&gt; to organize your team's resources. AWS tags allow you to attach metadata to most resources in the form of key-value pairs. In this guide (the first in a three-part series), we'll cover some of the most common use cases for AWS tags and look at some AWS tagging best practices for selecting and organizing your tags. Finally, we'll explore some examples of AWS resource tagging strategies used by real companies to improve visibility into their resource utilization in Amazon Web Services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why use AWS Tags?
&lt;/h2&gt;

&lt;p&gt;AWS tags can help you &lt;a href="https://www.cloudforecast.io/blog/how-tagging-resources-can-reduce-your-aws-bill/"&gt;understand and control your AWS costs&lt;/a&gt;. &lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/"&gt;AWS Cost Explorer&lt;/a&gt; allows you to use tags to break down your AWS resource usage over time, while tools like &lt;a href="https://www.cloudforecast.io/?utm_source=blog&amp;amp;utm_medium=banner&amp;amp;utm_campaign=tagging_part1"&gt;CloudForecast&lt;/a&gt; keep you informed of your spending proactively.&lt;/p&gt;

&lt;p&gt;Understanding and controlling your costs isn’t the only reason to use AWS tags on your resources. You can use AWS tags to answer a variety of questions, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which team member is the point of contact for this AWS resource?&lt;/li&gt;
&lt;li&gt;How many of our servers have been updated with the latest version of our operating system?&lt;/li&gt;
&lt;li&gt;How many of our services have alerting enabled?&lt;/li&gt;
&lt;li&gt;Which AWS resources are unnecessary at low-load hours?&lt;/li&gt;
&lt;li&gt;Who should have access to this resource?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before you start adding AWS tags to all of your AWS resources, it's essential to create a strategy that will help you sustainably manage your tags. AWS tags can be helpful, but without a consistently applied plan, they can become an unsustainable mess.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Read also: &lt;a href="https://www.cloudforecast.io/blog/how-tagging-resources-can-reduce-your-aws-bill/"&gt;How Tagging AWS Resources can Reduce Your Bill&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Tagging Best Practices
&lt;/h2&gt;

&lt;p&gt;While there isn't a perfect AWS tagging strategy that works for every organization, there are a few AWS tagging best practices that you should be familiar with.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Know how each tag you create will be used
&lt;/h3&gt;

&lt;p&gt;AWS cites &lt;a href="https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html#tag-categories"&gt;four categories for cost allocation tags&lt;/a&gt;: technical, business, security, and automation. Consider which of these categories you will need when creating your AWS tagging strategy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Technical Tags&lt;/strong&gt; help engineers identify and work with the resource. These might include an application or service name, an environment, or a version number.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business Tags&lt;/strong&gt; allow stakeholders to analyze costs and the teams or business units responsible for each resource. For example, you might want to know what percentage of your AWS spend is going towards the new product you launched last year so you can determine the return on investment of that effort.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Tags&lt;/strong&gt; ensure compliance and security standards are met across the organization. These tags might be used to limit access or denote specific data security requirements for &lt;a href="https://en.wikipedia.org/wiki/Health_Insurance_Portability_and_Accountability_Act"&gt;HIPAA&lt;/a&gt; or &lt;a href="https://aws.amazon.com/compliance/soc-faqs/"&gt;SOC&lt;/a&gt; compliance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automation Tags&lt;/strong&gt; can be used to automate the cleanup, shutdown, or usage rules for each resource in your account. For example, you could tag sandbox servers and run a script to delete them after they're no longer in use.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Decide which AWS tags will be mandatory
&lt;/h3&gt;

&lt;p&gt;As you decide which AWS tags you need and how you will use them, set rules about their usage. Decide which AWS tags will be mandatory, what character should be used as a delimiter, and who will be responsible for creating them. If you already have many resources, you may have to delegate tag assignment to the teams who use them.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Develop a consistent AWS tag naming convention
&lt;/h3&gt;

&lt;p&gt;Choosing a consistent and scalable naming convention for your AWS tag keys and values can be complicated. There are rules about which characters you can use and how long AWS tag keys and values can be. Be sure to &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions"&gt;read up on these tag restrictions&lt;/a&gt; before you select an AWS tag naming convention.&lt;/p&gt;

&lt;p&gt;A common AWS tag naming convention pattern is to use lowercase letters with hyphens between words and colons to namespace them. For example, you might use something like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tag Key&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;mifflin:eng:os-version&lt;/td&gt;
&lt;td&gt;1.0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Where &lt;code&gt;mifflin&lt;/code&gt; is the name of your company, &lt;code&gt;eng&lt;/code&gt; designates this tag as being relevant to the engineering team, &lt;code&gt;os-version&lt;/code&gt; indicates the purpose of the tag, and &lt;code&gt;1.0&lt;/code&gt; is the value.&lt;/p&gt;
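
&lt;p&gt;A small helper can enforce this convention at tag-creation time. The validation rules below (lowercase letters, digits, and hyphens, joined by colons) are one reasonable reading of the pattern above, not an AWS requirement:&lt;/p&gt;

```python
def make_tag_key(company: str, team: str, purpose: str) -> str:
    """Builds a namespaced tag key like 'mifflin:eng:os-version',
    rejecting parts that break the lowercase/hyphen convention."""
    parts = (company, team, purpose)
    for part in parts:
        ok = part and all(c.islower() or c.isdigit() or c == "-" for c in part)
        if not ok:
            raise ValueError(f"tag part {part!r} must be lowercase letters, digits, or hyphens")
    return ":".join(parts)

print(make_tag_key("mifflin", "eng", "os-version"))  # mifflin:eng:os-version
```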

&lt;h3&gt;
  
  
  4. Limit the number of AWS tags you adopt
&lt;/h3&gt;

&lt;p&gt;There are technical and practical limits to the number of tags you should use. First, AWS enforces a 50-tag limit on each resource. More importantly, engineers will have a hard time keeping track of and remembering how to properly use tags if you require too many.&lt;/p&gt;

&lt;p&gt;Fortunately, many tags can be avoided by relying on AWS's built-in resource metadata. For example, you don't have to store the creator of an EC2 instance because &lt;a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html"&gt;Amazon adds a &lt;code&gt;createdBy&lt;/code&gt; tag by default&lt;/a&gt;. Decide which tags you need and try to limit the creation of new tags.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Automate AWS tag management
&lt;/h3&gt;

&lt;p&gt;As the number of AWS resources in your account grows, keeping up with your AWS tags, enforcing conventions, and updating tags will get increasingly difficult. In Parts 2 and 3 of this guide, we'll look at how you can use &lt;a href="https://terraform.io"&gt;Terraform&lt;/a&gt;, &lt;a href="https://aws.amazon.com/cloudformation/"&gt;CloudFormation&lt;/a&gt;, and &lt;a href="https://cloudcustodian.io/"&gt;Cloud Custodian&lt;/a&gt; to manage tags across your resources.&lt;/p&gt;

&lt;p&gt;Amazon also offers &lt;a href="https://aws.amazon.com/blogs/aws/new-use-tag-policies-to-manage-tags-across-multiple-aws-accounts/"&gt;tag policies&lt;/a&gt;, &lt;a href="https://aws.amazon.com/blogs/aws/resource-groups-and-tagging/"&gt;tagging by resource group&lt;/a&gt;, and a &lt;a href="https://aws.amazon.com/blogs/aws/new-aws-resource-tagging-api/"&gt;resource tagging API&lt;/a&gt; to help you govern and assign tags in bulk. Automating as much of the tag management process as possible will result in higher quality, more maintainable tags in the long run.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Plan to audit and maintain AWS tags
&lt;/h3&gt;

&lt;p&gt;You will undoubtedly need to revisit your AWS tags periodically to make sure they're still useful and accurate. Depending on how many resources you deploy, this might mean setting a reminder to audit your tags every quarter, or it might mean creating a committee to review and update tags every month. We'll look at some tools and strategies for managing your tags in Part 3 of this guide.&lt;/p&gt;

&lt;p&gt;Amazon Web Services provides a &lt;a href="https://d1.awsstatic.com/whitepapers/aws-tagging-best-practices.pdf"&gt;comprehensive document of their recommended practices&lt;/a&gt; for tagging resources. Be sure to review it if you're new to AWS tags and want to dive deeper into some of these AWS tagging best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Example AWS Tagging Strategies
&lt;/h2&gt;

&lt;p&gt;Let's look at a few real-world tagging strategies. These are adapted from real companies that use AWS tags to organize their resources for various reasons. While they may differ from your use case, they'll offer you insight into how you might tag your resources in AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 1: A Service-Based AWS Tagging Strategy
&lt;/h3&gt;

&lt;p&gt;A widespread pattern for tagging resources is by service and environment. For example, if an organization has two services (&lt;code&gt;cart&lt;/code&gt; and &lt;code&gt;search&lt;/code&gt;) and two environments (&lt;code&gt;prod&lt;/code&gt; and &lt;code&gt;dev&lt;/code&gt;), they might set up the following tags:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Key&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;service&lt;/td&gt;
&lt;td&gt;cart or search&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;contact&lt;/td&gt;
&lt;td&gt;Name of the engineer who maintains this resource&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;env&lt;/td&gt;
&lt;td&gt;prod or dev&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If these two services share a single &lt;a href="https://aws.amazon.com/rds/"&gt;RDS&lt;/a&gt; instance, then the database can be tagged &lt;code&gt;service=cart|search&lt;/code&gt; (to indicate that this resource serves both services) and the architecture might look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d6F1_qje--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/nPLHmfp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d6F1_qje--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/nPLHmfp.png" alt="A service-based tagging strategy in AWS" width="880" height="1264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you choose an AWS tagging strategy like the one above, you have to consider how tags will change over time. For example, if you add a new service that shares the same RDS instance, you’ll have to update the database’s tags to include the name of the new service. For this reason, some teams opt to use a single tag value to indicate that a resource may be used by all services (e.g., &lt;code&gt;service=common&lt;/code&gt;).&lt;/p&gt;
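
&lt;p&gt;If you adopt a multi-value convention like &lt;code&gt;service=cart|search&lt;/code&gt;, a tiny helper keeps it consistent wherever you interpret the tag. This is an illustrative sketch of the convention above (the function name is ours, not an AWS API), treating &lt;code&gt;common&lt;/code&gt; as matching every service:&lt;/p&gt;

```javascript
// Hypothetical helper: does a resource's service tag value cover the
// given service? A pipe-separated value lists specific services, and
// the sentinel value "common" means the resource is shared by all.
function servesService(tagValue, service) {
  if (tagValue === "common") {
    return true;
  }
  return tagValue.split("|").includes(service);
}
```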

&lt;p&gt;Service-based tagging strategies like this are usually a good starting point if you'd like to understand which services contribute the most to your AWS costs. The business team can use these tags to see how much they're paying for each service or environment and reach out to the appropriate contact if they have questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 2: A Compliance AWS Tagging Strategy
&lt;/h3&gt;

&lt;p&gt;AWS cost allocation tags may also help organizations manage governance or compliance. These AWS tags might be used to limit access or run extra security checks on particular resources. If you tag your resources in specific ways, tools like &lt;a href="https://www.cloudforecast.io"&gt;CloudForecast&lt;/a&gt;, through their &lt;a href="https://www.cloudforecast.io/aws-tagging-compliance-report.html"&gt;AWS Tagging Compliance feature&lt;/a&gt;, can help you maintain tagging compliance.&lt;/p&gt;

&lt;p&gt;In this example, the company tags resources that contain user data with &lt;code&gt;user-data=true&lt;/code&gt; so that they can audit them more frequently and ensure they meet specific standards. All resources have a &lt;code&gt;contact&lt;/code&gt; and &lt;code&gt;env&lt;/code&gt; tag to designate the responsible team member and ensure someone is accountable for keeping them up to date.&lt;/p&gt;
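
&lt;p&gt;An audit like this is straightforward to script once tags are in place. Here's a hedged sketch, assuming each resource is an object with a &lt;code&gt;Tags&lt;/code&gt; map (the shape is illustrative, not an actual AWS SDK response):&lt;/p&gt;

```javascript
// Hypothetical audit helper: return resources tagged user-data=true
// that are missing the contact tag, i.e. audited data with no owner.
function findUnownedUserDataResources(resources) {
  return resources.filter(function (resource) {
    if (resource.Tags["user-data"] !== "true") {
      return false; // not user data; out of scope for this audit
    }
    return resource.Tags["contact"] === undefined;
  });
}
```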

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--shDPinIK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/VfWPQ6O.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--shDPinIK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/VfWPQ6O.png" alt="A compliance-based tagging strategy in AWS" width="880" height="959"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Using a compliance tagging strategy does not preclude you from using other strategies as well. One of the advantages of AWS tags is that they let you segment your AWS resources in a nearly infinite number of ways.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example 3: Account Segmented Environments
&lt;/h3&gt;

&lt;p&gt;The final example we'll look at is an account-segmented tagging strategy. While AWS's IAM permissions allow you to assign access to users, roles, and teams granularly, some organizations may want to go a step further.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"When resources across heterogeneous logical environments are colocated, it is deceptively easy to accidentally use resources from another environment if you're not extraordinarily careful when provisioning resources and designing network/IAM policies" - Platform Engineer at &lt;a href="https://www.cars.com/"&gt;Cars.com&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this example, the organization assigned business-unit and team tags to each resource, with each environment living in a separate AWS account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DvIdZJtH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/Lg36Cug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DvIdZJtH--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://i.imgur.com/Lg36Cug.png" alt="An account-segmented environment and tagging strategy in AWS" width="880" height="1528"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This allows them to generate reports in each environment to see what their resource costs are for the marketing (&lt;code&gt;mktg&lt;/code&gt;) unit vs. the data warehousing (&lt;code&gt;data&lt;/code&gt;) unit. If the team uses this method of account-segmented tagging, they’ll need to &lt;a href="https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/consolidated-billing.html"&gt;use a master account&lt;/a&gt; to see resource usage across their entire organization. You can also use &lt;a href="https://www.cloudforecast.io"&gt;CloudForecast&lt;/a&gt; to generate regular cost reports and breakdowns across multiple AWS accounts.&lt;/p&gt;
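
&lt;p&gt;At its core, such a report is just a roll-up of per-resource costs by tag. The sketch below groups hypothetical monthly costs by a &lt;code&gt;bu&lt;/code&gt; (business unit) tag; the data shape and numbers are invented for illustration:&lt;/p&gt;

```javascript
// Hypothetical cost roll-up by business-unit tag.
function costByBusinessUnit(resources) {
  const totals = {};
  resources.forEach(function (resource) {
    const unit = resource.tags.bu;
    const soFar = totals[unit] === undefined ? 0 : totals[unit];
    totals[unit] = soFar + resource.monthlyCost;
  });
  return totals;
}
```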

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Any organization that uses AWS at scale will need to develop a tagging strategy that works for them. Consider the AWS tagging best practices and examples above, as well as your organization's goals.&lt;/p&gt;

&lt;p&gt;Once you decide on an AWS tagging strategy, you will need a plan for adding and maintaining AWS cost allocation tags. In the next part of this guide, we'll look at tools you can adopt to ensure your engineering teams are using AWS tags consistently across all your AWS resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.cloudforecast.io/aws-tagging-compliance-report.html?utm_source=blog&amp;amp;utm_medium=banner&amp;amp;utm_campaign=tagging_part1"&gt;&lt;br&gt;
&lt;img alt="Need help with finding all your untagged AWS resources? Start a trial and discover all your resources that required AWS tags." src="https://res.cloudinary.com/practicaldev/image/fetch/s--usGvWsuK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://www.cloudforecast.io/blog/assets/media/tag_trial_revised.png" width="880" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awstags</category>
    </item>
    <item>
      <title>Introduction to AWS Reserved Instances</title>
      <dc:creator>Hiroko Nishimura</dc:creator>
      <pubDate>Wed, 06 Nov 2019 16:46:45 +0000</pubDate>
      <link>https://forem.com/cloudforecast/introduction-to-aws-reserved-instances-1e6f</link>
      <guid>https://forem.com/cloudforecast/introduction-to-aws-reserved-instances-1e6f</guid>
      <description>&lt;p&gt;There is nothing I love more than saving money (except maybe delicious food). And I don’t know too many people who don’t love saving money, not to mention making the accounting department happy!&lt;/p&gt;

&lt;p&gt;Today, I’m here to introduce a way to save money on AWS that is often feared because of its perceived (or actual) complexity. It’s called “&lt;strong&gt;Reserved Instances (RI)&lt;/strong&gt;,” and it’s available for many popular AWS services such as Amazon EC2, Amazon RDS, Amazon Elasticsearch, Amazon ElastiCache, Amazon DynamoDB, and Amazon Redshift.&lt;/p&gt;

&lt;h1&gt;
  
  
  What are AWS Reserved Instances?
&lt;/h1&gt;

&lt;p&gt;You might remember buying coupons on Groupon for 50% off a $100 meal at a local restaurant for a frugal date night. In exchange for making a commitment and paying in advance, you locked in a deal for a discounted rate.&lt;/p&gt;

&lt;p&gt;AWS &lt;strong&gt;Reserved Instances (RI)&lt;/strong&gt; work in similar ways, allowing you to pay steeply discounted rates, compared to paying hourly. For Amazon EC2, AWS’s virtual server service, you could save up to 75% off hourly rates by using Reserved Instances!&lt;/p&gt;

&lt;p&gt;Capacity reservations using Reserved Instances are available with Amazon EC2, Amazon RDS, Amazon Elasticsearch, Amazon ElastiCache, and Amazon Redshift.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why Would I buy AWS Reserved Instances?
&lt;/h1&gt;

&lt;p&gt;There are many moving parts in determining how much resource capacity you need for your products or services. It’s hard to guess what the demands will be for specific AWS resources, especially at product launch. You don’t want to commit to too little… or too much for any of your resources. The allure of Cloud Computing platforms like Amazon Web Services is that you can use as much… or as little as you want, month to month, and you will only be billed for what you use.&lt;/p&gt;

&lt;p&gt;So why would you want to reserve capacities with AWS Reserved Instances? Doesn’t that go against the “On-Demand” feature of Cloud Computing?&lt;/p&gt;

&lt;p&gt;You’re absolutely right.&lt;/p&gt;

&lt;p&gt;By purchasing AWS Reserved Instances, you are committing to pay discounted prices in advance, which is a little like purchasing that physical server for your on-premises data center before you start a new project. However, unlike physical servers, doing so allows you to save a lot of money… As long as you are utilizing the Reserved Instances beyond their “break even point.”&lt;/p&gt;

&lt;p&gt;The “break even point” is the point at which the savings from reserving the instance is realized. If you bought a “$200 for $100” voucher for your favorite Italian restaurant, your “break even point” is eating at least $100 worth of food so that you get your money’s worth. Beyond that, you’re saving money.&lt;/p&gt;
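
&lt;p&gt;For an all-upfront reservation, the break-even point is easy to estimate: it's the number of hours at which the same usage would have cost as much On-Demand. A back-of-the-envelope sketch (the example rates are made up, not real AWS prices):&lt;/p&gt;

```javascript
// Hours of usage at which an all-upfront RI matches what the same
// usage would have cost On-Demand. Run longer than this and you save.
function breakEvenHours(upfrontCost, onDemandHourlyRate) {
  return upfrontCost / onDemandHourlyRate;
}

// Hypothetical example: a $2,190 all-upfront 1-year reservation against
// a $0.50/hour On-Demand rate breaks even at 4,380 hours (about 6 months).
```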

&lt;p&gt;As your products mature, you might begin seeing patterns for resource usage. You might find that every month, you are using a certain amount of compute resources with your Amazon EC2 instances. Or you might find that your database usage is fairly consistent month to month up to a certain point, and feel ready to commit to reserving that amount of Amazon Redshift resources upfront.&lt;/p&gt;

&lt;p&gt;When you establish predictable resource usage for certain services, it might be time to consider paying upfront to make capacity reservations to save big.&lt;/p&gt;

&lt;h1&gt;
  
  
  What are my Options with AWS Reserved Instances?
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Reservation Area: Regional or Availability Zone
&lt;/h3&gt;

&lt;p&gt;There are two ways to make reservations for EC2 Reserved Instances: Regional and Zonal. &lt;strong&gt;Regional Reserved Instances&lt;/strong&gt; are purchased for a whole Region, and provide Availability Zone flexibility. &lt;strong&gt;Zonal Reserved Instances&lt;/strong&gt; are assigned to a specific Availability Zone, and cannot be moved from one Availability Zone to another. (Remember: a Region includes multiple Availability Zones.)&lt;/p&gt;

&lt;h3&gt;
  
  
  Terms: 1 or 3 Years
&lt;/h3&gt;

&lt;p&gt;You can make a capacity reservation for a term of 1 or 3 years. This means that you are committing to purchase a certain amount of capacity for 1 or 3 years. In exchange, you will be able to purchase the capacity at substantially lower prices than if you purchased it On-Demand. As you might expect, a 3-year term offers bigger discounts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rEtIJy7f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gr3qlkvzhidy0uj2qnmn.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rEtIJy7f--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/gr3qlkvzhidy0uj2qnmn.jpg" alt="AWS Reserved Instances Terms"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Types: Standard and Convertible
&lt;/h3&gt;

&lt;p&gt;There are two types of Reserved Instances: Standard and Convertible. &lt;strong&gt;Standard Reserved Instances&lt;/strong&gt; have some modifiable features after purchase, like instance size, but you cannot change the instance family. With a &lt;strong&gt;Convertible Reserved Instance&lt;/strong&gt;, you can exchange the instance for another Convertible Reserved Instance with new attributes like instance family, type, and platform. You can also modify some attributes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xmxH5x8E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/thdxehsjl4lsgot6wvc0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xmxH5x8E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/thdxehsjl4lsgot6wvc0.png" alt="AWS Reserved Instances Types"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With a Standard RI, you can modify but not exchange, and with a Convertible RI, you can both modify and exchange features. As you might expect, Standard RIs are cheaper than Convertible RIs, but they lack the flexibility, which may be detrimental depending on your situation.&lt;/p&gt;

&lt;p&gt;There are four &lt;strong&gt;attributes&lt;/strong&gt; that affect pricing for EC2 Reserved Instances: &lt;strong&gt;instance type&lt;/strong&gt;, &lt;strong&gt;platform&lt;/strong&gt;, &lt;strong&gt;scope&lt;/strong&gt;, and &lt;strong&gt;tenancy&lt;/strong&gt;. You can think of these attributes as the defining features of a server, such as the operating system it runs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Payment: All Upfront, Partial Upfront, or No Upfront
&lt;/h3&gt;

&lt;p&gt;There are also different ways to pay for Reserved Instances. You can pay for everything &lt;strong&gt;upfront&lt;/strong&gt;, &lt;strong&gt;partially upfront&lt;/strong&gt;, or pay &lt;strong&gt;nothing upfront&lt;/strong&gt;. The more you pay upfront, the less you pay overall. Whatever you don’t pay upfront, you will pay in monthly installments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6TnkFPnC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/89k3937ddpbd8ubcoear.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6TnkFPnC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/89k3937ddpbd8ubcoear.jpg" alt="AWS Reserved Instances Payment Options"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;No upfront reservation might seem counter-intuitive. Isn’t it the same as On-Demand? It differs from On-Demand because you are committing to a term of 1 or 3 years, even though you are paying monthly. So you are still eligible for discounts (though not as large as paying all or partially upfront) because you’ve made a reservation commitment. It’s worth noting, though, that even if you don’t end up using your RIs, you still have to pay for them.&lt;/p&gt;
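
&lt;p&gt;To see how the three options compare over a full term, here's an illustrative calculation. The quotes below are hypothetical; real prices come from the AWS pricing pages:&lt;/p&gt;

```javascript
// Total cost of a reservation over its term: whatever was not paid
// upfront is paid as monthly installments.
function totalReservationCost(option) {
  return option.upfront + option.monthly * option.termMonths;
}

// Hypothetical 1-year quotes for the same instance:
const allUpfront     = { upfront: 1000, monthly: 0,  termMonths: 12 };
const partialUpfront = { upfront: 520,  monthly: 42, termMonths: 12 };
const noUpfront      = { upfront: 0,    monthly: 90, termMonths: 12 };
// Paying more upfront yields the lowest total: 1000 vs. 1024 vs. 1080.
```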

&lt;p&gt;Have multiple AWS accounts in your organization? No problem! Your Reserved Instances can “float” across all of your linked accounts.&lt;/p&gt;

&lt;p&gt;You can link multiple AWS accounts using a feature in AWS called &lt;strong&gt;Consolidated Billing&lt;/strong&gt;, and doing so allows you to utilize many resources and discounts as though all of the accounts are “one” big account! Thankfully, Reserved Instances are one of the resources that can be utilized across all linked accounts.&lt;/p&gt;

&lt;h1&gt;
  
  
  How Do I Buy AWS Reserved Instances?
&lt;/h1&gt;

&lt;p&gt;You can purchase your Reserved Instances using the &lt;strong&gt;AWS Management Console&lt;/strong&gt; or &lt;strong&gt;API tools&lt;/strong&gt;. For example, to purchase Reserved Instances for EC2, you can log into your AWS Management Console, navigate to the EC2 section, and find “Reserved Instances” in the left navigation pane.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6DWgNu7g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/z0og9b15rol9xyxrrvn2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6DWgNu7g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/z0og9b15rol9xyxrrvn2.jpg" alt="Purchase AWS Reserved Instances"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the process, you might find even further discounted capacities from 3rd party sellers! Third party sellers are people or companies like you or me, who bought Reserved Instances, but later found that they no longer need them. You don’t have to worry about the quality of the instance you will be buying either; all Reserved Instances sold by 3rd party sellers are legitimate and identical to what you would get if you bought them directly from AWS.&lt;/p&gt;

&lt;p&gt;This also means that if you bought more capacity than you need, you also have the option of selling on the Reserved Instance Marketplace as a 3rd party seller to potentially mitigate the financial losses. This feature is important, because purchases of AWS Reserved Instances are non-refundable. However, keep in mind that Convertible Reserved Instances cannot be resold in the &lt;strong&gt;Marketplace&lt;/strong&gt;.&lt;/p&gt;

&lt;h1&gt;
  
  
  Considering Utilizing AWS Reserved Instances?
&lt;/h1&gt;

&lt;p&gt;Unfortunately, your work is not done once you click that “buy” button on your Reserved Instances. You need to actively monitor them to make sure you are utilizing them to their fullest potential. What’s the “break-even point” of each RI? Are you hitting it, or are you way off the mark? Thankfully, there’s a service that can help you monitor all that and send you daily reports, called &lt;a href="https://www.cloudforecast.io/"&gt;CloudForecast&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;CloudForecast can help you &lt;a href="https://www.cloudforecast.io/blog/AWS-Reserved-Instances-Weekly-Report/"&gt;monitor Reserved Instances utilization and expiration&lt;/a&gt; so your capacity reservations don’t go to waste! You can get daily reports in multiple channels like email, Slack, and PagerDuty, written in plain language that’s easy to understand!&lt;/p&gt;

&lt;p&gt;Take the guesswork out of your Reserved Instances utilization by giving &lt;a href="https://www.cloudforecast.io/"&gt;CloudForecast&lt;/a&gt; a try… And your to-do lists and accounting department will thank you!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This post was originally published on &lt;a href="https://www.cloudforecast.io/blog/Introduction-to-AWS-Reserved-Instances/"&gt;CloudForecast Blog&lt;/a&gt;!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>reservedinstances</category>
      <category>amazonwebservices</category>
    </item>
    <item>
      <title>Analyzing the Cost of Your Serverless Functions Using Faast.js</title>
      <dc:creator>Kyle Galbraith</dc:creator>
      <pubDate>Tue, 02 Jul 2019 20:14:45 +0000</pubDate>
      <link>https://forem.com/cloudforecast/analyzing-the-cost-of-your-serverless-functions-using-faast-js-4oai</link>
      <guid>https://forem.com/cloudforecast/analyzing-the-cost-of-your-serverless-functions-using-faast-js-4oai</guid>
      <description>&lt;h3&gt;
  
  
  What is faast.js?
&lt;/h3&gt;

&lt;p&gt;Faast.js is an open source project that streamlines invoking serverless functions like AWS Lambda. It allows you to invoke your serverless functions as if they were regular functions in your day to day code. But the benefits don’t stop there. It allows you to spin up your serverless infrastructure when the function is actually invoked. No more upfront provisioning of your serverless environments.&lt;/p&gt;

&lt;p&gt;This is an interesting take on Infrastructure as Code. With faast.js we are no longer defining our infrastructure in a language like HCL or YAML. Instead, this is more akin to Pulumi, where our infrastructure lives in the code we actually use in our services. The big difference is that our infrastructure is provisioned when our function is called.&lt;/p&gt;

&lt;p&gt;But wait, if my infrastructure is allocated on demand for my serverless pipeline, how will I know what it costs to run it? &lt;/p&gt;

&lt;p&gt;Faast.js has you covered there as well. You can estimate your costs in real time using the cost snapshot functionality. If you need a deeper look you can use the cost analyzer to estimate the cost of many configurations in parallel. &lt;/p&gt;

&lt;p&gt;In this post, we are going to explore how we can use faast.js to provision a serverless function in AWS Lambda. We are going to create a simple serverless function and invoke it using faast.js to see how our workload is dynamically created and destroyed. We will also dive into some of the slick features like cost analysis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Our serverless function using faast.js
&lt;/h3&gt;

&lt;p&gt;To get started we first need to have our AWS CLI configured. This is required for faast.js to know which cloud provider our serverless function is using. By installing the CLI with the correct access keys our faast setup will detect that we are using AWS Lambda for our environment.&lt;/p&gt;

&lt;p&gt;Once we are all configured to use AWS as our cloud provider we can get started with faast by installing the library into our project.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm install faastjs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, let’s create our serverless function implementation inside of a file named &lt;code&gt;functions.js&lt;/code&gt;. Our function is going to be very simple for this blog post. We want to focus on the benefits faast provides but we need a realistic serverless function to do that.&lt;/p&gt;

&lt;p&gt;An important thing to remember when using faast is that our serverless function must be idempotent. This means that it takes an input and produces the same output every time it is invoked with that input. This is because the abstraction faast provides leaves the door open to functions being retried.&lt;/p&gt;

&lt;p&gt;For our purpose let’s create a simple function that takes an array of numbers and multiplies them, returning the result. This is a naive example, but it will allow us to demonstrate how we can use faast to scale out our invocations as well as estimate the cost of our function. It’s also a basic example of idempotency: the same input will always result in the same product.&lt;/p&gt;

&lt;p&gt;Let’s dive into what the code looks like for our serverless function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;multiply&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;numbers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;numbers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;reduce&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;currTotal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;num&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;currTotal&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="nx"&gt;num&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pretty straightforward right? We have a one-line function that takes an array of numbers and returns the final product of them all.&lt;/p&gt;

&lt;p&gt;Now that we have our basic serverless function, let’s incorporate faast.js into our setup. Inside of our &lt;code&gt;index.js&lt;/code&gt; file we are going to start off by creating some random number arrays. We can then use those arrays to invoke our serverless function many times in parallel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;faast&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;faastjs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;funcs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./functions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;testArrays&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;randomLength&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;floor&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nb"&gt;Math&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;random&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="kd"&gt;var&lt;/span&gt; &lt;span class="nx"&gt;arr&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;

        &lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="nx"&gt;randomLength&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="nx"&gt;testArrays&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;arr&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;


    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Invoking serverless functions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;invokeFunctions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;testArrays&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Done invoking serverless functions&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are generating 1000 random length arrays and then passing them to our &lt;code&gt;invokeFunctions&lt;/code&gt; function. It is that function that makes use of faast to invoke our multiplication serverless function in parallel.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;invokeFunctions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;arrays&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;invoker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;faast&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;funcs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;promises&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;arrays&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;promises&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;invoker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;functions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;multiply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;arrays&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;


    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;promises&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;invoker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cleanup&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Invocation results&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our &lt;code&gt;invokeFunctions&lt;/code&gt; function creates our faast invoker and then invokes our &lt;code&gt;multiply&lt;/code&gt; function for each test array passed into it. Each invocation returns a promise that we add to a &lt;code&gt;promises&lt;/code&gt; array so we can &lt;code&gt;await&lt;/code&gt; all of our invocations together. Once all of our serverless functions complete, we call the &lt;code&gt;cleanup&lt;/code&gt; method on our invoker to destroy the infrastructure that was created.&lt;/p&gt;

&lt;h3&gt;
  
  
  Running our serverless function
&lt;/h3&gt;

&lt;p&gt;Now that we have our serverless function and the outer invocation logic that faast will use to invoke it, it’s time to test things out.&lt;/p&gt;

&lt;p&gt;This is done with a &lt;code&gt;node&lt;/code&gt; call to our entry-point script. From the root of the directory where our code lives, run the following commands. Note that &lt;code&gt;&amp;lt;your-entry-point&amp;gt;.js&lt;/code&gt; should be replaced with the name of the file where the faast.js invoker calls your serverless function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm install
$ node src/&amp;lt;your-entry-point&amp;gt;.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That’s it! We just invoked our serverless function via the faast.js framework. We should see logs in our output that look something like this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ node src/index.js
Invoking serverless functions
Invocation results
[ 720,
  6,
  40320,
  720,
  3628800,
  120,
  3628800,
.....]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Pretty cool, right? We were able to write our serverless function in its own module and then invoke it as if it were any old function from our code using faast.js. There was no upfront provisioning of our AWS infrastructure, no need to handle retries or errors, and everything was cleaned up for us.&lt;/p&gt;

&lt;p&gt;We can see this for ourselves by checking out the CloudWatch log groups that were created for each of our functions. You can view these logs by going to CloudWatch Logs in your AWS account and then filtering for the prefix &lt;code&gt;/aws/lambda/faast&lt;/code&gt;.&lt;/p&gt;
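&lt;p&gt;If you prefer the terminal, the same listing is available via the AWS CLI, assuming it is installed and your credentials are configured:&lt;/p&gt;

```shell
# List the CloudWatch log groups that faast.js created for its Lambda functions
aws logs describe-log-groups --log-group-name-prefix /aws/lambda/faast
```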

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--UgC82_e---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.cloudforecast.io/blog/assets/images/posts/lambdafaast.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--UgC82_e---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://www.cloudforecast.io/blog/assets/images/posts/lambdafaast.png" alt="CloudWatch Log Groups" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is an exciting take on infrastructure as code. It removes the need to provision infrastructure ahead of time: we don&amp;#39;t have to configure these Lambda functions in advance, because they are created dynamically when our faast module is invoked. That alone is very exciting because it allows developers to invoke serverless workloads as if they were functions in our everyday code.&lt;/p&gt;

&lt;p&gt;But it gets even better.&lt;/p&gt;

&lt;h3&gt;
  
  
  How much did our invocations cost?
&lt;/h3&gt;

&lt;p&gt;With great power comes the risk of doing things very wrong. Or put in terms of AWS, getting a high bill at the end of the month because you got some configuration wrong.&lt;/p&gt;

&lt;p&gt;It turns out faast can help us with that as well, via its built-in cost analyzer. Let’s update our logic to make use of the cost analyzer so we can see a breakdown of what our invocations are costing us.&lt;/p&gt;

&lt;p&gt;All we need to do is invoke a function called &lt;code&gt;costSnapshot&lt;/code&gt; on our faast invoker to get a full breakdown of what our serverless invocations are costing us. Here is the updated code that handles this.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nx"&gt;invokeFunctions&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;arrays&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;invoker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;faast&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;funcs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;promises&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;arrays&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;promises&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;push&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;invoker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;functions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;multiply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;arrays&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]))&lt;/span&gt;


    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;promises&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;invoker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;cleanup&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;results&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;costSnapshot&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;invoker&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;costSnapshot&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;costSnapshot&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So what does our current serverless pipeline cost us? Here is the log output from the call to &lt;code&gt;costSnapshot&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;functionCallDuration  $0.00002813/second          100.1 seconds    $0.00281588    91.9%  [1]
functionCallRequests  $0.00000020/request          1001 requests   $0.00020020     6.5%  [2]
outboundDataTransfer  $0.09000000/GB         0.00052891 GB         $0.00004760     1.6%  [3]
sqs                   $0.00000040/request             0 request    $0              0.0%  [4]
sns                   $0.00000050/request             0 request    $0              0.0%  [5]
logIngestion          $0.50000000/GB                  0 GB         $0              0.0%  [6]
---------------------------------------------------------------------------------------
                                                                   $0.00306368 (USD)

  * Estimated using highest pricing tier for each service. Limitations apply.
 ** Does not account for free tier.





[6]: https://aws.amazon.com/cloudwatch/pricing/ - Log ingestion costs not currently included.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we see that we had 1001 function requests with a total duration of about 100 seconds and a small amount of outbound data transfer. All of this for a grand total of about $0.003.&lt;/p&gt;

&lt;h3&gt;
  
  
  Putting it all together
&lt;/h3&gt;

&lt;p&gt;What we have demonstrated is that we can build a serverless function that requires no upfront infrastructure. Our multiply function is provisioned on the fly via faast. We can even dump cost snapshots from faast to see what our invocations are costing us as a whole and on a per-request basis.&lt;/p&gt;

&lt;p&gt;What this allows us as developers to do is to abstract away the serverless world but still gain all the advantages of it.&lt;/p&gt;

&lt;p&gt;Imagine if our invoker wrapper wasn’t a script that we run from the command line but rather another function that is invoked in an API that we are building. The developer of the API needs to only know how to invoke our function in JavaScript. All the serverless knowledge and infrastructure is completely abstracted from them. To their code, it’s nothing more than another function.&lt;/p&gt;

&lt;p&gt;This is a great abstraction layer for folks that are new to the serverless world. It gives you all the advantages of it without climbing some of the learning curve.&lt;/p&gt;

&lt;p&gt;But it does come with a cost. Done wrong, our serverless costs could go through the roof. If the API developer invokes our function in a &lt;code&gt;while&lt;/code&gt; loop without understanding the ramifications of that, our AWS bill at the end of the month could make us cry.&lt;/p&gt;
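&lt;p&gt;One simple guard is to cap how many invocations can be in flight at once (faast.js also exposes its own &lt;code&gt;concurrency&lt;/code&gt; option for this). As an illustration only, here is a hypothetical generic helper, not part of faast.js, that applies the idea to any async function:&lt;/p&gt;

```javascript
// Hypothetical helper: map an async function over items while keeping
// at most `limit` calls in flight at any one time.
async function mapWithConcurrency(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0; // index of the next item to claim

  // Each worker repeatedly claims the next unprocessed index.
  async function worker() {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }

  // Start up to `limit` workers and wait for them to drain the queue.
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, () => worker())
  );
  return results;
}
```

&lt;p&gt;With a helper like that, the invocation loop could become something like &lt;code&gt;const results = await mapWithConcurrency(arrays, 50, (a) =&amp;gt; invoker.functions.multiply(a));&lt;/code&gt;.&lt;/p&gt;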

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;Faast.js is a very cool idea from a serverless and infrastructure as code perspective. The best code is the code you never have to write. Faast gives us that by provisioning our infrastructure for us when we need it. It also allows us to treat our serverless workloads as just another function in our code.&lt;/p&gt;

&lt;p&gt;It does come with costs and some hiccups that might not fit all use cases. For example, the role that is created for the Lambda functions has Administrator access, and there is no way to configure that, which is not a security best practice. Resources can also be left lying around in your account if the &lt;code&gt;cleanup&lt;/code&gt; method is not called.&lt;/p&gt;

&lt;p&gt;These are things that I am sure the project is looking to address. In the meantime I would suggest trying out Faast in a development/test context to gain an understanding of what your serverless workloads are going to cost you at scale.&lt;/p&gt;

&lt;p&gt;If you have any questions about Faast.js or serverless in general feel free to ping me via twitter &lt;a class="mentioned-user" href="https://dev.to/kylegalbraith"&gt;@kylegalbraith&lt;/a&gt; or leave a comment below. Also, check out my weekly &lt;a href="https://kylegalbraith.com/learn-by-doing"&gt;Learn by Doing newsletter&lt;/a&gt; or my &lt;a href="https://kylegalbraith.com/learn-aws"&gt;Learn AWS By Using It course&lt;/a&gt; to learn even more about the cloud, coding, and DevOps.&lt;/p&gt;

&lt;p&gt;If you have questions about CloudForecast to help you monitor and optimize your AWS cost, feel free to ping Tony: &lt;a href="mailto:tony@cloudforecast.io"&gt;tony@cloudforecast.io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>aws</category>
      <category>node</category>
      <category>devops</category>
    </item>
    <item>
      <title>Chicago AWS Summit: Things to do, eat and see. Recommendations from a local south-sider.</title>
      <dc:creator>Francois LAGIER</dc:creator>
      <pubDate>Wed, 22 May 2019 18:03:28 +0000</pubDate>
      <link>https://forem.com/cloudforecast/chicago-aws-summit-things-to-do-eat-and-see-recommendations-from-a-local-south-sider-4kdd</link>
      <guid>https://forem.com/cloudforecast/chicago-aws-summit-things-to-do-eat-and-see-recommendations-from-a-local-south-sider-4kdd</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published by co-founder &lt;a href="https://twitter.com/toeknee123" rel="noopener noreferrer"&gt;Tony Chan&lt;/a&gt; on &lt;a href="https://cloudforecast.io/blog" rel="noopener noreferrer"&gt;cloudforecast.io/blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Heading to the AWS Summit in Chicago on May 30th and need recommendations on things to eat, see or drink near McCormick Place? &lt;/p&gt;

&lt;p&gt;For those who have been to McCormick Place, you already know the area is lacking quality restaurants, bars, or things to do. The surrounding area, South Loop and Motor Row, is still “up and coming”, but it’s 3-5 years out from having more viable options for people attending a conference. Do a quick Google Maps search for “restaurants” around the area and you’ll see what I mean. &lt;/p&gt;

&lt;p&gt;I have listed a few of my favorite places near McCormick Place, but I recommend venturing off to the surrounding neighborhoods (Bridgeport, Pilsen, Chinatown) for more options and a unique experience. I grew up in Bridgeport, was a tour guide in Chinatown in college, and still reside in the area. I hope my local knowledge of the “South Side” will make you feel a bit better about venturing into what, for many, is uncharted territory. This is definitely not a complete list, but it's a starting point with a select few places.&lt;/p&gt;

&lt;p&gt;If you need more recommendations or have other places to add, feel free to reach out to me directly, &lt;a href="mailto:tony@cloudforecast.io"&gt;tony@cloudforecast.io&lt;/a&gt;. Happy to help in any way! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudforecast.io%2Fblog%2Fassets%2Fimages%2Fposts%2Fsouthloop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudforecast.io%2Fblog%2Fassets%2Fimages%2Fposts%2Fsouthloop.png" alt="South Loop and Motor Row"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Walking distance, &amp;lt;15-20 minute walk. (South Loop and Motor Row).
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Places to eat&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pizannos&lt;/strong&gt; - Sit down restaurant. One of the best restaurants for Chicago-style thin crust. They also have deep dish if that is what you are looking for. The crust is buttery and cornmeal-based, similar to Lou’s and Pequod’s. &lt;a href="https://goo.gl/maps/S7D2xnx9q9BqvDSj9" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;14 Parish&lt;/strong&gt; - Sit down restaurant. Carribean food within &amp;lt;5 min walk of McCormick Place. This will be a good option if you need a spot for a business meeting. &lt;a href="https://goo.gl/maps/fhqetANyijKh43ZA6" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Opart Thai&lt;/strong&gt; - Sit down restaurant. Considered one of the best Thai restaurants in Chicago and a neighborhood staple.  Another good option if you need a spot for a business meeting. &lt;a href="https://goo.gl/maps/WsTphXpCZ2yq4mvH6" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chef Lucianos&lt;/strong&gt; - Quick bite. Hole in the wall restaurant for quick take out or lunch. A lot of their dishes are Indian inspired and unique. I recommend the fried chicken or Chicken curry. &lt;a href="https://goo.gl/maps/fdtnQxZh5gtSMFR2A" rel="noopener noreferrer"&gt;Google Maps link&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Places for a drink&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Weather Mark Tavern&lt;/strong&gt; - Nautical themed dive bar. Great drink menu and daily food specials. &lt;a href="https://goo.gl/maps/sZG9Zqw3AuuSADrGA" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reggies Chicago&lt;/strong&gt; - Another dive bar and neighborhood staple. Live music every day with a rooftop patio. Very grungy vibe, but a very fun bar. &lt;a href="https://goo.gl/maps/6978caWmbavTcYya7" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fat Pour:&lt;/strong&gt;  Relatively new place for the area. Wide beer selection but gets pretty busy during conferences due to lack of options around the area. &lt;a href="https://goo.gl/maps/gEdXLgo9hMpipGGV9" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Other options:&lt;/strong&gt; Vu Rooftop Bar, Woven and Bound Bar.
Further South Loop options: First Draft, Kasey’s Tavern &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Coffee shops to work out of&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Spoke and Bird&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;TeaPotBrew Bakery&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudforecast.io%2Fblog%2Fassets%2Fimages%2Fposts%2Fcbp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcloudforecast.io%2Fblog%2Fassets%2Fimages%2Fposts%2Fcbp.png" alt="South Loop and Motor Row"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &amp;lt;15 min rideshare distance (Chinatown, Bridgeport and Pilsen)
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Side note: You can get to Chinatown via the 21 Cermak bus which should take less than 10 minutes. Alternatively, it’s about a 20 minute walk.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Places to eat&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chinatown:&lt;/strong&gt; Chi Cafe - Sit down restaurant. Hong Kong-style Chinese food. Great stop for a quick lunch. I recommend the Sizzling Beef in Sake Sauce. &lt;a href="https://goo.gl/maps/SbZoT5ySK6d3i6CR9" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chinatown:&lt;/strong&gt; Yummy Yummy Noodles - Sit down restaurant. Small family-owned place that is known for their noodle soups. &lt;a href="https://goo.gl/maps/mfgRHQ9ep7mpZXNP6" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chinatown:&lt;/strong&gt; Ming Hin - Sit down restaurant. One of the better places for dim sum. Picture menu and iPads for easy ordering. &lt;a href="https://goo.gl/maps/aghZ7YhgnV3Tmhri7" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Other options in Chinatown&lt;/strong&gt;: Dolo, Qing Xiang Yuan Dumplings, Happy Lamb Hot Pot &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bridgeport:&lt;/strong&gt; Ricobenes - Quick bite. Chicago south side staple. Known for their breaded steak sandwiches, vesuvio chicken sandwiches and pizza. &lt;a href="https://goo.gl/maps/bYKt7GF9a9GcgQDX7" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Bridgeport:&lt;/strong&gt; Phil’s Pizza - Sit down restaurant. One of the best places in the city for Chicago tavern style pizza. Opens at 4pm. &lt;a href="https://goo.gl/maps/SNQGsYwfAmwnstMy6" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Other options in Bridgeport:&lt;/strong&gt; Freddies, The Duck Inn, Nana’s. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pilsen:&lt;/strong&gt; Carnitas don Pedro - Quick bite. Hole-in-the-wall restaurant serving the best Michoacán-style carnitas in the city. Pig cooked in its own fat; what can go wrong? Cash only. I recommend a mixed plate of carnitas to try out a little bit of everything. Not open for dinner. &lt;a href="https://goo.gl/maps/TckVpxTeb19Vjzse6" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Pilsen:&lt;/strong&gt; &lt;a href="https://goo.gl/maps/ZXKHbbKxCHuJH8Zn9" rel="noopener noreferrer"&gt;Tortillería y Taquerías Atotonilco&lt;/a&gt; or &lt;a href="https://goo.gl/maps/K1nTQ7fHhUYMsqv97" rel="noopener noreferrer"&gt;El Milagro&lt;/a&gt; for tacos.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Other options in Pilsen:&lt;/strong&gt; HaiSous, S.K.Y., Monnie Burkes or La Vaca. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Places for a drink&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bridgeport:&lt;/strong&gt; Marias and Kimski - Fun little dive bar with a wide beer selection and Korean/Polish food. Great option for a business meeting and drinks. &lt;a href="https://goo.gl/maps/WnA1FY7j9VMCcX8E6" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bridgeport:&lt;/strong&gt; Marz Brewery - One of the best breweries in the city with great food. Their beers are unique and everything here is delicious. It’s off the beaten path, but worth going to if you need to find a place for drinks. &lt;a href="https://goo.gl/maps/cHJqiek1sh26ReEz7" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pilsen:&lt;/strong&gt; Moody Tongue - 12-layer chocolate cake and beers. Enough said. &lt;a href="https://goo.gl/maps/Uctm3cTLcLZ8EYie6" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Coffee shops to work out of:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bridgeport:&lt;/strong&gt; Bridgeport Coffee &lt;a href="https://goo.gl/maps/xob7QHMEiZ8JcLA68" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bridgeport:&lt;/strong&gt; Red Line Cafe &lt;a href="https://goo.gl/maps/Uctm3cTLcLZ8EYie6" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bridgeport:&lt;/strong&gt; Jackalope &lt;a href="https://goo.gl/maps/RnVTfbHNM6h5jwW48" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pilsen:&lt;/strong&gt; La Catrina Cafe &lt;a href="https://goo.gl/maps/zLHL6jsbjovG2A8f8" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pilsen:&lt;/strong&gt; Cafe Jumping Bean &lt;a href="https://goo.gl/maps/wZGJQEBJP84aNED49" rel="noopener noreferrer"&gt;Google Maps Link&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>awssummitchicago</category>
    </item>
    <item>
      <title>Interview with GitLab CEO, Sid Sijbrandij - Keys to Developing a Self-Hosted Version of Your App.</title>
      <dc:creator>Francois LAGIER</dc:creator>
      <pubDate>Thu, 09 May 2019 18:46:35 +0000</pubDate>
      <link>https://forem.com/cloudforecast/interview-with-gitlab-ceo-sid-sijbrandij-keys-to-developing-a-self-hosted-version-of-your-app-j53</link>
      <guid>https://forem.com/cloudforecast/interview-with-gitlab-ceo-sid-sijbrandij-keys-to-developing-a-self-hosted-version-of-your-app-j53</guid>
      <description>&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://cloudforecast.io/blog/"&gt;cloudforecast.io/blog&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A few weeks ago, we had the opportunity to chat with &lt;a href="https://twitter.com/sytses"&gt;Sid&lt;/a&gt;, the CEO of &lt;a href="https://gitlab.com/?utm_source=cloudforecast"&gt;GitLab&lt;/a&gt;, about what it takes to offer a self-hosted version of your software. &lt;a href="https://gitlab.com/?utm_source=cloudforecast"&gt;GitLab&lt;/a&gt; is a single application for the entire software development lifecycle, from project planning and source code management to CI/CD, monitoring, and security. &lt;/p&gt;

&lt;p&gt;In addition to a SaaS product, &lt;a href="https://gitlab.com/?utm_source=cloudforecast"&gt;GitLab&lt;/a&gt; offers a self-managed/self-hosted version of their app that has been a great business model for them. We’ve been working through the idea of offering a self-hosted version of &lt;a href="https://www.cloudforecast.io/?utm_source=blog"&gt;CloudForecast&lt;/a&gt; and knew that Sid would be a great resource to learn from.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR summary:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Leverage third-party software to deploy faster:&lt;/strong&gt; Look into third party software such as  &lt;a href="https://www.replicated.com/?utm_source=cloudforecast"&gt;Replicated&lt;/a&gt; to help speed up your deployment process. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Support is important:&lt;/strong&gt; Keep it simple and include support for all your pricing tiers. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Offer a freemium version of your app:&lt;/strong&gt; Consider a free version of your software with a few limitations in place. That limitation can be a 30-45 day free trial and flexibility to extend further. This will help you get your foot in the door and create sales opportunities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Focus your messaging around data security:&lt;/strong&gt; “Data does not leave your network” is the single most important thing you need to focus on while building a self-hosted version of your software. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build a product that saves time:&lt;/strong&gt; Engineers do not want to fiddle with tools; they want to build software. Focusing on an out of the box experience that can get them results in 5 minutes or less is very important. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here are a few highlights from our conversation. We hope this can help your company if you are considering building a self-hosted version of your app. The full interview video is available after this post. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;There are a lot of technical challenges involved with building a self-hosted version of your software. We found a third-party company called Replicated that makes it easy for SaaS and software companies to make the move into selling modern, on-prem software.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are your thoughts with using software like &lt;a href="https://www.replicated.com/?utm_source=cloudforecast"&gt;Replicated&lt;/a&gt;?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Faster is better, especially if you are not sure if there is a giant market. It makes a lot of sense to go that route. I know the people from &lt;a href="https://www.replicated.com/?utm_source=cloudforecast"&gt;Replicated&lt;/a&gt; - they are good and they know their stuff. When we first started packaging GitLab, Replicated was not there yet or we were not aware of them at the time. &lt;/p&gt;

&lt;p&gt;There are a few lessons that we ran into while doing it on our own and things you need to consider. First of all, it’s A LOT of work to package up your stuff. We have an entire distribution team that is focused on this. We based our packaging on a technology from Chef called Omnibus. The end user experience is great, the team did a great job, but it is a significant investment. That speaks for using a third party tool like &lt;a href="https://www.replicated.com/?utm_source=cloudforecast"&gt;Replicated&lt;/a&gt; to speed up your process. &lt;/p&gt;

&lt;p&gt;A few things to be aware of in the enterprise space: not everyone uses Docker and Kubernetes. That might be different for your customers, but we found that a lot of our customers do not like running them in production. They might have other tools that assume a boring Unix app. For example: "Oh! I want to provision it as the expected user", or "I want to run an agent on the system". They cannot accept a black box; they want more flexibility. With that being said, I think time is the most important thing you have in a startup, which makes going with a solution like &lt;a href="https://www.replicated.com/?utm_source=cloudforecast"&gt;Replicated&lt;/a&gt; make sense. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How should we consider pricing if we were to go with a self-hosted version of CloudForecast? What are the lessons you’ve learned while trying to figure out a pricing model for GitLab?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;We're frequently bought by companies that don't do a consumption pricing model. They want a predictable amount and they know what they will be charged at the end of the month.&lt;/p&gt;

&lt;p&gt;They get a budget for it and they have to put in a dollar number when they request it. Making it variable makes it difficult to purchase. It might not be difficult, though, if you can figure out a way to lump it into the existing cloud provider bill. Cloud providers are starting to offer marketplaces that allow you to bill for your software. In that case, your variable pricing can lump right into the cloud provider bill.&lt;/p&gt;

&lt;p&gt;A boring solution would be to have a few tiers with fixed pricing based on their monthly spend with their cloud provider. It’s not ideal, but it makes it a lot easier to purchase because they know exactly what they will be billed for using your software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do you all handle and price support out for GitLab? Any lessons there with approaches you’ve tried out?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You should never sell without support. You should always include support, because otherwise you’ll end up with a situation like, "Well, I can’t use it, I have a problem with it, and they say tough luck, you didn’t buy support." The reverse can also be true with ONLY selling support. We did that with GitLab in the beginning. People didn’t have any problems and they cancelled their support contracts. It made sense for them since they were able to use the product with no issues.&lt;/p&gt;

&lt;p&gt;To handle both scenarios, we just include support whether they want it or not. They pay the same price either way. I am a huge proponent of building it in no matter what: there is one subscription price and it includes support. You can also layer support into a good, better, best model. For us, the best tier means you get support that is more timely, more extensive, and with more hand holding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the sales process like selling into enterprise companies? Who should we be talking to? Any tips on ways we can start selling into these companies?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitLab is an open core company, so a lot of the people we sell to start with our open source offering. This has really helped us get our foot in the door early on, and it helps speed up the sales cycle because they are already using your product. The typical sales cycle on average for GitLab: less than $10k is about 60 days, $10k-$100k is about 90 days, and $100k and more is about 120 days.&lt;/p&gt;

&lt;p&gt;If you don’t want to open source your software, you should consider a free-tier version of CloudForecast. There is a whole big thing called “Shadow IT”, where people don’t like going through the process and want to run something that can help them. They’ll frequently use free tools since they don’t have to expense them.&lt;/p&gt;

&lt;p&gt;However, you still have to consider that they might not be able to use the free SaaS version, since they cannot share confidential data or credentials. If you had an on-premises, free version of your software, they could use your tool and still be good stewards of the private data. Becoming an approved vendor is less of a problem since the data never leaves the network.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To follow up on a free version of your product, would you recommend maybe offering a limited version of your software? What are some variants you’ve seen that have worked well?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There can be many variants of a free version of your product. For GitLab, we’re open core, so our free version will always be free and then we charge for additional features on top. With a lot of proprietary software, the limited version might have a fixed time limit, so you cannot continue after X days. It doesn’t have to be 30 days - it can be a whole year as an extreme example.&lt;/p&gt;

&lt;p&gt;Consider someone who is spending too much with their cloud provider: their boss has tasked them with solving that problem right away. They have to go through a purchasing process of 30-60 days to get the vendor approved when they need a solution right now. They might not object to a 30-45 day free trial to see how the product can work for them. From what I’ve seen, that seems to be the industry standard, and it makes a lot of sense. For GitLab, we’ve settled on offering 30 days, and if people ask for an extension, we are happy to oblige and extend their free trial.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are some messaging points and important features to consider that resonate with a decision maker looking for an on-prem solution?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The most important thing to highlight in your messaging is “Data does not leave your network”. For CloudForecast, getting your first cost report in less than five minutes should be a key message. You also want to highlight the average percentage someone can save. Highlighting pricing is also very important; we’ve found that the pricing page is the most popular page on any website.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With your experience at GitLab, what are your customers looking for and considering in an on-prem solution?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;GitLab wins because we're a single application for the entire DevOps life cycle. We save them time because they don't have to chain a bunch of other applications together and spend a lot of time integrating them. All that leads to a subpar experience anyway, since they are all still separate apps.&lt;/p&gt;

&lt;p&gt;You’ll start to understand quickly that people have very limited time. They have this assignment to save cost and they need results quickly. People want to make software, not string together 50 different DevOps tools to get it. That out-of-the-box experience that gets you results in five minutes or less is very important. I’d encourage you to show a video of downloading and installing your software, putting in AWS credentials, and getting a super cool cost report as the result. That is insightful. If you can make that video in less than three minutes, that is awesome.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do you have a rough idea how long it takes from signing up to installing it and having a working GitLab on your servers? Are there any manual processes involved, or is it a fully automated process that takes less than 10 minutes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Five minutes, including registering the subdomain, is what we are shooting for. In previous years I went online and live streamed myself doing it, and I learned a lot from that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Not a customer and interested in trying us out?&lt;/strong&gt; Sign up today and get started with a risk-free 30 day free-trial with us: &lt;a href="https://app.cloudforecast.io/users/sign_up?utm_source=blog"&gt;Start 30 Day Free-trial&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Please reach out to us if you have any feedback for us or have suggestions on what we should build next.&lt;/strong&gt; We would love to hear from you: &lt;a href="mailto:hello@cloudforecast.io"&gt;hello@cloudforecast.io&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Lo0bejtOnQc"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>gitlab</category>
      <category>marketing</category>
    </item>
  </channel>
</rss>
