<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Forem: Ricardo Cino</title>
    <description>The latest articles on Forem by Ricardo Cino (@ricardocino).</description>
    <link>https://forem.com/ricardocino</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2445846%2F62965fd9-2eec-41ec-b986-eb70ff483ff5.jpeg</url>
      <title>Forem: Ricardo Cino</title>
      <link>https://forem.com/ricardocino</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://forem.com/feed/ricardocino"/>
    <language>en</language>
    <item>
      <title>AWS Lambda Stubs for unit testing</title>
      <dc:creator>Ricardo Cino</dc:creator>
      <pubDate>Mon, 03 Nov 2025 11:37:55 +0000</pubDate>
      <link>https://forem.com/aws-builders/aws-lambda-stubs-for-unit-testing-96p</link>
      <guid>https://forem.com/aws-builders/aws-lambda-stubs-for-unit-testing-96p</guid>
      <description>&lt;p&gt;When working with AWS Lambda functions, unit testing can be a challenge due to the need to mocking all external dependencies, including AWS SDK clients. But also to provide the correct input when calling your Lambda handler functions. To make this easier, I created a small npm package that provides stubs for AWS Lambda handler functions, allowing you to easily provide a default event and customize the input with minimal changes.&lt;/p&gt;

&lt;h2&gt;
  Introducing aws-lambda-stubs
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.npmjs.com/package/aws-lambda-stubs" rel="noopener noreferrer"&gt;aws-lambda-stubs&lt;/a&gt; package provides a set of stubs for AWS Lambda handler functions. It makes unit testing your Lambda functions easier by providing default events and letting you customize the input as needed, which keeps your tests cleaner and more maintainable.&lt;/p&gt;

&lt;p&gt;A simple example provided by the README of the package looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;SQSEventStub&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws-lambda-stubs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;it&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vitest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../src/sqs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sqs handler&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;should log the received event&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mockEvent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SQSEventStub&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Hello, World!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]);&lt;/span&gt;

    &lt;span class="c1"&gt;// assert on output&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mockEvent&lt;/span&gt;&lt;span class="p"&gt;)).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;({});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This clearly demonstrates how easy it is to create a mock SQS event with a custom message body. The output of the above example is a completely valid SQS event that you can pass to your Lambda handler function for testing, looking like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Records"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"messageId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"receiptHandle"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"MessageReceiptHandle"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"body"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"{&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;key&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;value&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;}"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"attributes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ApproximateReceiveCount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"SentTimestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1523232000000"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"SenderId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"123456789012"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ApproximateFirstReceiveTimestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1523232000001"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"messageAttributes"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{},&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"md5OfBody"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ea7e1632014d79bb59dd5e08c6aaea39"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"eventSource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"aws:sqs"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"eventSourceARN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:sqs:us-east-1:012345678901:queue-name"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"awsRegion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  Why did I build this?
&lt;/h3&gt;

&lt;p&gt;After working on multiple projects, seeing numerous git repositories, and watching everyone either write their own stubs for AWS Lambda events or skip reusable stubs entirely, bloating unit tests with boilerplate for preparing empty test data, I decided to create a reusable package that can be shared across projects. This way, I can avoid duplicating code and make it easier for developers to write unit tests for their Lambda functions. Making it open source makes it available for everyone to use and contribute to.&lt;/p&gt;

&lt;h3&gt;
  What is supported?
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://www.npmjs.com/package/aws-lambda-stubs" rel="noopener noreferrer"&gt;aws-lambda-stubs&lt;/a&gt; package aims to support all payloads given to AWS Lambda functions. Currently, it supports all the EventTypes provided by &lt;a href="https://www.npmjs.com/package/@types/aws-lambda" rel="noopener noreferrer"&gt;@types/aws-lambda&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  Important when using
&lt;/h2&gt;

&lt;p&gt;What remains very important when writing unit tests is that you always test against the input you expect your Lambda function to receive. The stubs provided by this package are just a starting point; you should always customize the input to match your specific use case. This way, you can ensure that your unit tests are accurate and reliable.&lt;/p&gt;

&lt;p&gt;Should the defaults inside the package ever change, your tests might break, so always verify that the input you provide to your Lambda handler functions matches your expectations.&lt;/p&gt;
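The pattern behind such stubs is simple: merge caller-supplied overrides into a complete default event. The helper and defaults below are a hypothetical sketch of that idea, not the package's actual implementation (the default values are taken from the example output above):

```typescript
// Hypothetical sketch of the stub pattern, not the package's real code:
// a complete default SQS record that overrides are merged into.
const defaultRecord = {
  messageId: "1",
  receiptHandle: "MessageReceiptHandle",
  body: '{"key":"value"}',
  eventSource: "aws:sqs",
  awsRegion: "us-east-1",
};

// Each caller-supplied partial record is merged over the default, so a
// test only spells out the fields it actually cares about.
function makeSQSEvent(partials: object[]) {
  return { Records: partials.map((p) => ({ ...defaultRecord, ...p })) };
}

const event = makeSQSEvent([
  { body: JSON.stringify({ message: "Hello, World!" }) },
]);
```

Keeping the defaults shaped like a real SQS payload is exactly what makes the resulting event valid input for a handler, while the override keeps the test focused on the one field that matters.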

&lt;h2&gt;
  Next steps
&lt;/h2&gt;

&lt;p&gt;While this package is already usable, there is still work to be done. My goal is to provide more and more stubs with valuable presets that can be used out of the box. If you want to contribute, please check the repository at &lt;a href="https://github.com/cino/aws-lambda-stubs" rel="noopener noreferrer"&gt;https://github.com/cino/aws-lambda-stubs&lt;/a&gt;, as I would love to make this not just mine, but a community-driven project.&lt;/p&gt;

</description>
      <category>testing</category>
      <category>typescript</category>
      <category>node</category>
      <category>lambda</category>
    </item>
    <item>
      <title>InvalidSignature in Node with AWS SDK</title>
      <dc:creator>Ricardo Cino</dc:creator>
      <pubDate>Sun, 28 Sep 2025 18:19:37 +0000</pubDate>
      <link>https://forem.com/aws-builders/invalidsignature-in-node-with-aws-sdk-2nao</link>
      <guid>https://forem.com/aws-builders/invalidsignature-in-node-with-aws-sdk-2nao</guid>
      <description>&lt;p&gt;Sometimes you run into a weird issue that is hard to debug. This was one of those times. I was working on a Lambda function that was supposed to retrieve a secret from AWS Secrets Manager. However, I kept getting random &lt;code&gt;InvalidSignatureException&lt;/code&gt; errors in production.&lt;/p&gt;

&lt;h2&gt;
  What is the issue?
&lt;/h2&gt;

&lt;p&gt;The issue arose in our production environment, where the Lambda function would sometimes throw &lt;code&gt;InvalidSignatureException&lt;/code&gt; errors when making calls to AWS services. This was perplexing because around 99% of the requests were successful, and the errors seemed to occur randomly.&lt;/p&gt;

&lt;h3&gt;
  Root cause
&lt;/h3&gt;

&lt;p&gt;After extensive debugging and investigation, I discovered that the root cause of the issue was related to how the AWS SDK client was being initialized in our codebase. We had implemented a Repository pattern, and in the constructor of the repository, we were initializing the AWS SDK client to retrieve secrets from AWS Secrets Manager.&lt;/p&gt;

&lt;p&gt;However, we were initializing the repository outside of the Lambda handler function, which meant that the same instance of the AWS SDK client was being reused across multiple invocations of the Lambda function.&lt;/p&gt;

&lt;p&gt;This was because we stored the SDK client in a property of the repository class, which was instantiated only once when the Lambda function cold-started. As a result, the client retained the same timestamp for signing requests, leading to signature mismatches and the resulting &lt;code&gt;InvalidSignatureException&lt;/code&gt; errors.&lt;/p&gt;

&lt;h3&gt;
  Solution
&lt;/h3&gt;

&lt;p&gt;While it's normally good practice to initialize the repository outside of the handler, in this case the solution was to move the initialization of the AWS SDK client inside the Lambda handler function. This ensures that a fresh client with a current timestamp is used for each invocation, preventing the &lt;code&gt;InvalidSignatureException&lt;/code&gt; errors.&lt;/p&gt;

&lt;p&gt;Imagine having a repository like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MyRepository&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;PrismaClient&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, &lt;code&gt;getClient&lt;/code&gt; retrieves the client instance used to interact with the database, and within this function the secrets are fetched from AWS Secrets Manager, which is where an AWS SDK client is initialized. By doing all of this outside of the handler, we would always reuse the same instance of the AWS SDK client, which retained the same timestamp for signing requests.&lt;/p&gt;

&lt;p&gt;So going from this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;MyRepository&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./my-repository&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;repository&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MyRepository&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// implement&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;MyRepository&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./my-repository&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;repository&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MyRepository&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="c1"&gt;// implement&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  The downside
&lt;/h3&gt;

&lt;p&gt;Because the repository is now initialized on every invocation, we lose the benefit of reusing the same instance across invocations, which means the secrets are retrieved from AWS Secrets Manager on every invocation. This can lead to increased latency and costs, especially if the Lambda function is invoked frequently.&lt;/p&gt;

&lt;p&gt;However, we have implemented caching mechanisms within the repository to mitigate this issue. The repository caches the secrets after the first retrieval, so subsequent invocations can use the cached values instead of making repeated calls to AWS Secrets Manager.&lt;/p&gt;
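One way to sketch such a cache, assuming an injected fetchSecret function that stands in for the real Secrets Manager call (the class below is an illustration of the idea, not our actual repository code):

```typescript
// Illustrative in-memory cache: the secret is fetched once, then reused
// until the time-to-live elapses, so repeated invocations in a warm
// Lambda container do not call Secrets Manager every time.
class SecretCache {
  private value: string | null = null;
  private fetchedAt = 0;

  constructor(
    private readonly fetchSecret: () => any, // hypothetical fetcher
    private readonly ttlMs: number,
  ) {}

  async get() {
    const expired = Date.now() - this.fetchedAt > this.ttlMs;
    if (this.value === null || expired) {
      this.value = await this.fetchSecret();
      this.fetchedAt = Date.now();
    }
    return this.value;
  }
}

// The fetcher is injected, which also makes the cache easy to unit test.
const cache = new SecretCache(async () => "s3cret-value", 5000);
```

Injecting the fetcher keeps the caching logic independent of the AWS SDK, which is also what makes the repository itself straightforward to unit test with stubs.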

&lt;p&gt;This approach strikes a balance between ensuring valid signatures for AWS SDK requests and optimizing performance by reducing redundant calls to Secrets Manager.&lt;/p&gt;

&lt;p&gt;An easy way to implement caching is to use the &lt;a href="https://docs.powertools.aws.dev/lambda/typescript/latest/features/parameters/#fetching-secrets" rel="noopener noreferrer"&gt;AWS Powertools Parameters utility&lt;/a&gt;, which provides a simple way to cache parameters and secrets in memory and automatically refresh them after a specified duration. Without any additional configuration, it defaults to caching secrets for 5 seconds:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;getSecret&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@aws-lambda-powertools/parameters/secrets&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Retrieve a single secret&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secret&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;getSecret&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my-secret&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;secret&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(example from &lt;a href="https://docs.powertools.aws.dev/lambda/typescript/latest/features/parameters/#fetching-secrets" rel="noopener noreferrer"&gt;AWS Powertools documentation&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>aws</category>
      <category>secretsmanager</category>
      <category>awssdk</category>
    </item>
    <item>
      <title>Testing in a serverless environment</title>
      <dc:creator>Ricardo Cino</dc:creator>
      <pubDate>Fri, 22 Aug 2025 13:38:13 +0000</pubDate>
      <link>https://forem.com/aws-builders/testing-in-a-serverless-environment-31l4</link>
      <guid>https://forem.com/aws-builders/testing-in-a-serverless-environment-31l4</guid>
      <description>&lt;p&gt;Working in a pure serverless environment presents a distinct set of challenges when testing your software, particularly when multiple services interact with each other. In this post, I will explore some strategies for testing in a serverless environment.&lt;/p&gt;

&lt;h2&gt;
  Types of testing
&lt;/h2&gt;

&lt;p&gt;When it comes to testing, there are different types of tests that you can implement in your software development lifecycle. Here are some of the most common types of tests:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unit tests&lt;/strong&gt;: These tests focus on individual components or functions of your codebase, ensuring that each part works as intended in isolation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration tests&lt;/strong&gt;: These tests evaluate how different components of your application work together, verifying that they interact correctly and produce the expected results.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure tests&lt;/strong&gt;: Tests to confirm that the infrastructure outputs the expected results and behaves as intended.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;End-to-end tests&lt;/strong&gt;: These tests simulate real user scenarios, testing the entire application stack from the frontend to the backend, including all external dependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What we are going to focus on is primarily &lt;strong&gt;integration tests&lt;/strong&gt;, which verify the interactions between different services in a serverless architecture.&lt;/p&gt;

&lt;h3&gt;
  Unit testing
&lt;/h3&gt;

&lt;p&gt;Before we continue to integration tests, it's important to mention that in my current environment &lt;strong&gt;everything&lt;/strong&gt; is unit tested, and we aim for high (90%+) test coverage on our codebase. Obviously, test coverage is not the end goal, and you should think carefully about what you are testing.&lt;/p&gt;

&lt;p&gt;The reason we focus so much on unit testing is that these are the cheapest and fastest tests to execute. We test every path of our code with unit tests to ensure the results are what we expect. Keeping these checks in unit tests lets us build many of them and run them fast locally, while the integration tests make sure everything also runs in the cloud.&lt;/p&gt;

&lt;h3&gt;
  Integration testing
&lt;/h3&gt;

&lt;p&gt;Now for the fun part: how do you properly test your serverless application, and why is it so important? The challenge with serverless is that, besides your own codebase, there are many different pieces of infrastructure to consider, such as API gateways, Lambda functions, databases, and queues. When all these parts work together, there is one thing that unit tests will really not cover: the authorization between these components.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can your Lambda function reach the database?&lt;/li&gt;
&lt;li&gt;Does it have permissions to access the database?&lt;/li&gt;
&lt;li&gt;Do you have the correct IAM permissions to send messages to SQS?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;So how do we test this?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To make sure we can safely test our serverless application, we can do a number of things. You can use a tool like &lt;a href="https://www.localstack.cloud/" rel="noopener noreferrer"&gt;LocalStack&lt;/a&gt;, which I haven't used before but am certainly looking at for a next adventure. Or you can spin up a full environment for your change request and execute tests against that new environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj83vpxd59ckmq3cxd3rf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj83vpxd59ckmq3cxd3rf.png" alt="diagram" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What we like to do in our project is, on each pull request, deploy a completely new environment prefixed with &lt;code&gt;pr-{pr-number}&lt;/code&gt;. This does take a while to complete, depending on the resources you deploy, but it will be worth your while.&lt;/p&gt;

&lt;p&gt;Once this is done, we start tests, which will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Directly invoke our APIs and expect a specific response. This proves that the Lambda behind our API can execute what is necessary and that the API Gateway is configured correctly.&lt;/li&gt;
&lt;li&gt;Push messages directly into queues or event buses, then wait until a result appears in a database or S3 bucket.&lt;/li&gt;
&lt;/ul&gt;
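The "wait until a result appears" step can be captured by a generic polling helper. The sketch below is illustrative, not our actual test code; the probe is any async check your project provides, such as a database lookup or an S3 listing:

```typescript
// Sleep helper used between polling attempts.
function sleep(ms: number) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Repeatedly run an async probe until it returns true or the timeout
// elapses; integration tests use this shape to wait for a message pushed
// into a queue to show up downstream.
async function waitFor(probe: () => any, timeoutMs = 10000, intervalMs = 500) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    if (await probe()) {
      return;
    }
    if (Date.now() >= deadline) {
      throw new Error("condition was not met within " + timeoutMs + " ms");
    }
    await sleep(intervalMs);
  }
}
```

A test would push a message to the queue and then call, for example, await waitFor(async () => recordExists(id), 30000) before asserting on the stored result, where recordExists is whatever lookup your project provides.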

&lt;p&gt;This gives us great confidence that the software that we deploy can be executed with the right permissions and configuration in place.&lt;/p&gt;

&lt;p&gt;In contrast to the unit tests, the integration tests do not exercise all paths, as those are already covered by the unit tests. What we do test is that each external service (database, queue, or any other service needing specific configuration) works as expected. In some cases that is a single test; if there is a non-default path where a message goes to a specific queue or database, there will be a second test, and so on.&lt;/p&gt;

&lt;h2&gt;
  Infrastructure tests
&lt;/h2&gt;

&lt;p&gt;Apart from testing your code, you also want to make sure that your infrastructure creates the resources you expect. As such, you should include tests that validate the infrastructure itself. This can include checking that the correct number of resources is created, that they are configured correctly, and that they have the right permissions (I know, this sounds redundant compared to integration tests, but think of it as layers of tests).&lt;/p&gt;

&lt;p&gt;When working with AWS CDK, you can do this in multiple ways. For one, you can use &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/testing.html#testing-snapshot" rel="noopener noreferrer"&gt;Snapshot tests&lt;/a&gt;, which capture the current state of your infrastructure and compare it to a state you previously stored inside the repository. This is useful for quickly asserting whether anything has changed, and it forces you to update the snapshot when it does.&lt;/p&gt;

&lt;p&gt;Or you can use my favourite: &lt;a href="https://docs.aws.amazon.com/cdk/v2/guide/testing.html#testing-fine-grained" rel="noopener noreferrer"&gt;fine-grained assertions&lt;/a&gt;. With these you can specifically check whether the output of your CDK app matches your expectations, for example whether a Lambda environment variable correctly references an SSM parameter, or whether the Lambda function has the memory configuration it requires to execute correctly.&lt;/p&gt;
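In aws-cdk-lib/assertions this is what Template.fromStack(stack).hasResourceProperties does. The stripped-down version below works on a plain, hand-written template object just to show the idea; both the template and the helper are illustrations, not the CDK implementation:

```typescript
// A hand-written stand-in for a synthesized CloudFormation template.
const template = {
  Resources: {
    MyFunction: {
      Type: "AWS::Lambda::Function",
      Properties: { MemorySize: 512, Timeout: 30 },
    },
  },
};

// Simplified version of a fine-grained assertion: does any resource of
// the given type carry all of the expected property values?
function hasResourceProperties(tpl: any, type: string, props: any) {
  const ofType = Object.values(tpl.Resources).filter((r: any) => r.Type === type);
  return ofType.some((r: any) =>
    Object.entries(props).every(([key, val]) => r.Properties[key] === val),
  );
}
```

The real CDK helper is far more capable (matchers, captures, resource counts), but the principle is the same: assert on the synthesized template, not on your CDK code.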

&lt;h2&gt;
  Private testing
&lt;/h2&gt;

&lt;p&gt;If you are a frequent reader of my blog, it must come as no surprise that I am a big advocate for building private architecture when you can. This, however, will give you an additional challenge if you are deploying all your resources in a private network and trying to run an end-to-end test in your CI/CD pipeline.&lt;/p&gt;

&lt;p&gt;When working with GitHub Actions, for example, in a default workflow, your tests will be running in a GitHub runner, which would not have any access to your private network. To overcome this limitation, you can use &lt;a href="https://docs.github.com/en/actions/concepts/runners/self-hosted-runners" rel="noopener noreferrer"&gt;self-hosted&lt;/a&gt; runners that are deployed within your private network. In our company, this is provided out-of-the-box by the Cloud Center of Excellence, and there is a &lt;a href="https://medium.com/postnl-engineering/building-scalable-ci-cd-pipelines-with-self-hosted-github-actions-on-amazon-codebuild-6a82150a3eb2" rel="noopener noreferrer"&gt;great article&lt;/a&gt; about it written by a colleague at PostNL and fellow AWS Community Builder &lt;a href="https://awsbythebook.com/" rel="noopener noreferrer"&gt;Matheus das Mercês&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As you can see, testing your software in a serverless environment can be challenging, but with the right strategies and tools, you can ensure that your application is working as expected. By implementing a combination of unit tests, integration tests, infrastructure tests, and end-to-end tests, you can build a robust testing strategy that will help you catch issues early and ensure that your application is reliable and scalable.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ci</category>
      <category>testing</category>
      <category>serverless</category>
    </item>
    <item>
      <title>DynamoDB Streams with more than 24 hour retention</title>
      <dc:creator>Ricardo Cino</dc:creator>
      <pubDate>Tue, 24 Jun 2025 22:00:00 +0000</pubDate>
      <link>https://forem.com/aws-builders/dynamodb-streams-with-more-than-24-hour-retention-1739</link>
      <guid>https://forem.com/aws-builders/dynamodb-streams-with-more-than-24-hour-retention-1739</guid>
      <description>&lt;p&gt;That was kind of a misleading title, but I wanted to get your attention. The truth is that DynamoDB Streams have a maximum retention period of 24 hours and there is no way to extend that. When you do need more than 24 hours the default solution is to use Kinesis Data Streams, which can retain data for up to &lt;a href="https://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html" rel="noopener noreferrer"&gt;365 days&lt;/a&gt;. While it would be easy to just move to Kinesis, this comes with extra cost which may not be justified for all use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not Kinesis?
&lt;/h2&gt;

&lt;p&gt;At the time of writing I am actually using Kinesis Data Streams for a project, but only because we need to retain data for more than 24 hours in case of a failure in the processing Lambda. When this was built, time was of the essence and we needed a solution that worked out of the box. Kinesis Data Streams is a fully managed service that integrates well with Lambda, and it was the quickest way to get started.&lt;/p&gt;

&lt;p&gt;Time has passed, and now that we have more room to think about the architecture, we are considering alternatives to Kinesis Data Streams. The main reason is that Kinesis Data Streams is more expensive than DynamoDB Streams, and we are not using all the features that Kinesis provides. We only need to retain data for a few days, and we can achieve that with DynamoDB Streams and a custom solution.&lt;/p&gt;

&lt;p&gt;Combined with a workflow where we spin up a new environment for every pull request, this increases the cost drastically. When you don't necessarily need the performance of Kinesis Data Streams, you can save a lot of money by using DynamoDB Streams instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  DynamoDB Streams vs. Kinesis Data Streams
&lt;/h3&gt;

&lt;p&gt;DynamoDB Streams and Kinesis Data Streams are both services that allow you to process data in real time, but they have different use cases and features. Here are some key differences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retention Period&lt;/strong&gt;: DynamoDB Streams have a maximum retention period of 24 hours, while Kinesis Data Streams can retain data for up to 365 days.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost&lt;/strong&gt;: DynamoDB Streams are generally cheaper than Kinesis Data Streams, especially for small workloads. Kinesis Data Streams can become expensive as the number of shards increases, while DynamoDB Streams are priced based on the number of read requests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  The custom solution
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flab54i9a8istur1xrsks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flab54i9a8istur1xrsks.png" alt="DynamoDB 24 Hour Retention Architecture" width="800" height="207"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This might look a bit odd with two queues, but there is a logical reason for it. The first queue is there because when you put an &lt;code&gt;onFailure&lt;/code&gt; destination on a DynamoDB Stream handler, it will not place the DynamoDB record change in the queue, but rather an object with the &lt;em&gt;location in the stream&lt;/em&gt; of the record change. This means that if you try to process the message after 24 hours, it will not contain the actual data of the record change, only the information on where to extract it. The message looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"requestContext"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"requestId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"12a8a0fa-f8d2-4d42-a6e3-72bae7d1e2f9"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"functionArn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:&amp;lt;region&amp;gt;:&amp;lt;account&amp;gt;:function:stream-handler"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"RetryAttemptsExhausted"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"approximateInvokeCount"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"responseContext"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"statusCode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"executedVersion"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"$LATEST"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"functionError"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Unhandled"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-06-05T18:22:26.479Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"DDBStreamBatchInfo"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"shardId"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"shardId-10101010101010101010-10101010"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"startSequenceNumber"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"202328600001712385967261751"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"endSequenceNumber"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"202328600001712385967261751"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"approximateArrivalOfFirstRecord"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-06-05T18:21:49Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"approximateArrivalOfLastRecord"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2025-06-05T18:21:49Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"batchSize"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"streamArn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:dynamodb:&amp;lt;region&amp;gt;:&amp;lt;account&amp;gt;:table/table/stream/2025-04-23T13:37:16.991"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can imagine, processing this message after 24 hours would not be very useful, because the data at that location in the DynamoDB Stream would already have been deleted. This is why we use a second queue: the first queue is used to retrieve the actual data from the DynamoDB Stream, and the second queue is used to store that data for a longer period of time. In my case I am using SQS as the second queue, which is not enabled by default, but you can enable it in the Lambda function configuration. This way, you can process the data from the first queue and store it in the second queue for later processing.&lt;/p&gt;
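The first step of the re-drive Lambda behind that first queue is to pull the stream pointer out of the failure-destination message shown above. A minimal TypeScript sketch (the function and type names here are mine, not from any AWS SDK):

```typescript
// Shape of the pointer inside a Lambda failure-destination message for a
// DynamoDB Stream event source (see the JSON example above).
interface DdbStreamBatchInfo {
  shardId: string;
  startSequenceNumber: string;
  endSequenceNumber: string;
  streamArn: string;
}

// Extract the stream pointer from a raw queue message body, or return null
// when the body is not a failure-destination message.
function extractBatchInfo(messageBody: string): DdbStreamBatchInfo | null {
  const parsed = JSON.parse(messageBody);
  const info = parsed?.DDBStreamBatchInfo;
  if (!info?.shardId || !info?.streamArn) return null;
  return {
    shardId: info.shardId,
    startSequenceNumber: info.startSequenceNumber,
    endSequenceNumber: info.endSequenceNumber,
    streamArn: info.streamArn,
  };
}
```

With this pointer the Lambda would then read the actual record change from the DynamoDB Streams API (within the 24-hour window) and forward it to the second queue.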

&lt;p&gt;Whenever there is a real failure of our DynamoDB Stream handler, this gives us time to investigate and resolve the issue before we enable the re-drive Lambda and process the messages in the second queue. This way we can ensure that we are not losing any data and can process it later, once we have had time to investigate the issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Idempotency
&lt;/h2&gt;

&lt;p&gt;When you are using a solution like this, you want to make sure that your processing is idempotent, meaning that processing the same message multiple times has no additional side effects. In our case, we used &lt;a href="https://docs.powertools.aws.dev/lambda/typescript/latest/utilities/idempotency/" rel="noopener noreferrer"&gt;AWS Lambda Powertools for TypeScript&lt;/a&gt; (also available, with more features, for other languages: &lt;a href="https://docs.powertools.aws.dev/lambda/python/latest/" rel="noopener noreferrer"&gt;Python&lt;/a&gt;, &lt;a href="https://docs.powertools.aws.dev/lambda/dotnet/" rel="noopener noreferrer"&gt;.NET&lt;/a&gt;, and &lt;a href="https://docs.powertools.aws.dev/lambda/java/latest/" rel="noopener noreferrer"&gt;Java&lt;/a&gt;) to implement idempotency in our DynamoDB Stream handler at the critical points where we do not want to reprocess a step unless it failed there. I can highly recommend this library, as it provides a lot of useful utilities for working with AWS Lambda, including idempotency, logging, and tracing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Obviously this is not a standard solution, but it's a cost-effective way to retain data for more than 24 hours without using Kinesis Data Streams. It does require some extra work to set up, but it can save you money in the long run if you don't need the full capabilities of Kinesis. If you do need more than 24 hours of retention, Kinesis Data Streams is still a great option, but if you can work with the limitations of DynamoDB Streams, this custom solution can be a good alternative.&lt;/p&gt;

</description>
      <category>dynamodb</category>
      <category>kinesis</category>
      <category>lambda</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Private AppSync with custom dns</title>
      <dc:creator>Ricardo Cino</dc:creator>
      <pubDate>Sat, 07 Jun 2025 13:20:08 +0000</pubDate>
      <link>https://forem.com/aws-builders/private-appsync-with-custom-dns-1a72</link>
      <guid>https://forem.com/aws-builders/private-appsync-with-custom-dns-1a72</guid>
      <description>&lt;p&gt;In the last year I've been working a &lt;strong&gt;lot&lt;/strong&gt; with AppSync and I have to say it didn't come without challenges. One of the biggest challenges was to create a private AppSync API with a custom domain. This is something that is not natively supported by AWS, but it is possible to achieve it using a combination of services.&lt;/p&gt;

&lt;p&gt;While this is now possible for API Gateway, AppSync is a different beast. There is no native way to create a private AppSync API with a private custom domain. This is something that I had to figure out the hard way, and I want to share my findings with you.&lt;/p&gt;

&lt;p&gt;This might become a bit of a long read, but if you are looking for a way to create a private AppSync API with a custom domain, you are in the right place. I will try to explain the steps I took to achieve this and the challenges I faced along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AppSync?
&lt;/h2&gt;

&lt;p&gt;AppSync is a fully managed service that makes it easy for developers to build scalable GraphQL APIs on AWS. It allows you to create a flexible API for applications that require real-time data, such as mobile or web apps. With GraphQL support, AppSync helps you define the structure of your data and how it can be queried.&lt;/p&gt;

&lt;p&gt;With AppSync, you can easily connect to various data sources, including DynamoDB, Lambda, Elasticsearch, and more. It also provides built-in support for real-time subscriptions, offline data synchronization, and security features like authentication and authorization.&lt;/p&gt;

&lt;p&gt;While there are other challenges with Authorization, which I will not cover in this article (but &lt;em&gt;surely&lt;/em&gt; in a follow-up), I will focus on how to create a private AppSync API with a custom domain and all the challenges that come with it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Target Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz8jvy1ly6x8u54715h9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz8jvy1ly6x8u54715h9.png" alt="Target architecture" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;(spoiler: We will &lt;strong&gt;not&lt;/strong&gt; reach this architecture)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This would be the ideal situation I'd like to see; however, it is not possible with the current state of AppSync, the main reason being that you currently cannot add a custom domain to a private AppSync API.&lt;/p&gt;

&lt;h2&gt;
  
  
  Private AppSync API
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Making it private
&lt;/h3&gt;

&lt;p&gt;Making AWS AppSync private itself is not a big deal: you just need to toggle the &lt;code&gt;Use private API features&lt;/code&gt; option in the AppSync console. This allows you to create a private AppSync API that is only accessible from within your VPC. This is done by creating a VPC endpoint for AppSync, which lets you access the service without going through the public internet.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer 1&lt;/strong&gt;: You cannot make a public AppSync API private; this needs to be configured when the API is created. This is a limitation of AppSync itself and not something that can be changed later on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer 2&lt;/strong&gt;: When you make an AppSync API private, you will no longer be able to access it from the AWS console, losing the ability to use the console to test your API. This can be overcome by using &lt;a href="https://cino.io/2024/aws-cloudshell-in-your-own-vpc/" alt="AWS CloudShell in your own vpc" rel="noopener noreferrer"&gt;CloudShell&lt;/a&gt; from your VPC or by deploying an EC2 instance inside your VPC.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Private AppSync API with custom domain
&lt;/h3&gt;

&lt;p&gt;Now this is the part where it becomes challenging, because AppSync does not support private APIs with custom domains. This means that you will not be able to use the default AppSync endpoint, which is something that you will have to work around by using additional services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why a custom domain again?
&lt;/h2&gt;

&lt;p&gt;While you could use the default AppSync endpoint, this is not recommended for production use. In case of a service replacement, you would have to update all your clients with the new endpoint. This is not a big deal if you have a single client, but with multiple clients (e.g. mobile and web) it can become a nightmare.&lt;/p&gt;

&lt;p&gt;Even if you decide to switch to hosting your own GraphQL server, you might be able to swap the back-end without informing the client, as long as you keep supporting exactly the same features. If you are using AppSync Subscriptions, this becomes more challenging due to the implementation of AWS AppSync's authorization mechanism.&lt;/p&gt;

&lt;h3&gt;
  
  
  Creating custom DNS for your AppSync API
&lt;/h3&gt;

&lt;p&gt;To make your AppSync API accessible from a custom domain, you have to use the same approach as for &lt;a href="https://cino.io/2024/private-api-gateway-with-dns/" rel="noopener noreferrer"&gt;API Gateway before they introduced native support&lt;/a&gt;. This means creating a Route 53 hosted zone for your custom domain and a CNAME record that points to an Application Load Balancer which routes traffic to the AppSync VPC endpoint.&lt;/p&gt;

&lt;p&gt;However, this is not enough: while it allows you to access the AppSync API from your custom domain, it does not allow you to &lt;em&gt;only&lt;/em&gt; use the custom domain. You additionally have to send an HTTP header (X-AppSync-Domain) containing the default AppSync domain name, so the VPC endpoint can route the traffic to the correct AppSync API. In that case you still need to share the AppSync API endpoint with the end user.&lt;/p&gt;

&lt;p&gt;While it is technically possible to add the header to all your requests, this will only work for HTTP requests. If you are using WebSockets for subscriptions, you will not be able to add the header to the request, because the WebSocket protocol does not support custom headers in the web browser. When using a different client (e.g. mobile or server-side) you might be able to add the header, but this is not a solution that works for all clients.&lt;/p&gt;
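For plain HTTP requests, attaching the header client-side could look like this (the helper and the example domain are hypothetical; as described above, a browser WebSocket cannot do this, which is what forces the proxy):

```typescript
// Build a fetch-style request against the custom domain, attaching the
// X-AppSync-Domain header so the VPC endpoint can route to the right API.
// This only works for HTTP; browser WebSockets cannot set custom headers.
function buildGraphqlRequest(
  customDomain: string,  // e.g. "https://api.internal.example.com" (hypothetical)
  appsyncDomain: string, // "<identifier>.appsync-api.<region>.amazonaws.com"
  query: string,
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `${customDomain}/graphql`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-AppSync-Domain": appsyncDomain,
      },
      body: JSON.stringify({ query }),
    },
  };
}
```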

&lt;p&gt;The only way to avoid this limitation is to use a proxy that routes the traffic to the AppSync API. This can be done with an Application Load Balancer with a custom domain and a target group that points to a proxy (e.g. NGINX), which routes the traffic to the AppSync API. This way you can use the custom domain to access the AppSync API without exposing the AppSync API endpoint to the end user.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer 3&lt;/strong&gt;: Whenever you use a custom private domain, the default libraries provided by AWS (the AppSync SDK and the Amplify Framework's AppSync client; see &lt;a href="https://github.com/aws-amplify/amplify-data/issues/469" rel="noopener noreferrer"&gt;this issue&lt;/a&gt;, especially the comment by &lt;code&gt;sleepwithcoffee&lt;/code&gt;) will not work. This is because these libraries use the default AppSync endpoint, not the custom domain. This means you will have to use a custom implementation of the AppSync client that supports custom domains.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To implement such a proxy you can use NGINX, a popular web server that can act as a reverse proxy. NGINX can be configured to route traffic to the AppSync API and add the required HTTP header (X-AppSync-Domain) to each request.&lt;/p&gt;

&lt;h4&gt;
  
  
  Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  location /graphql {
    proxy_set_header X-AppSync-Domain ${identifier}.appsync-api.eu-west-1.amazonaws.com;
    proxy_pass https://${url};
    proxy_pass_request_headers on;
    proxy_ssl_session_reuse off;
    proxy_cache_bypass $http_upgrade;
    proxy_redirect off;
    ...
  }

  location /graphql/ws {
    proxy_set_header X-AppSync-Domain ${identifier}.appsync-realtime-api.eu-west-1.amazonaws.com;
    proxy_pass https://${url};
    proxy_pass_request_headers on;
    proxy_ssl_session_reuse off;
    proxy_cache_bypass $http_upgrade;
    proxy_redirect off;
    ...
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Final architecture
&lt;/h3&gt;

&lt;p&gt;If we want to invoke our AppSync API only via our custom DNS, the architecture will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18ubee1yik60437hqmjo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18ubee1yik60437hqmjo.png" alt="Final architecture" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Why is that? Well, the only way to access an AppSync API privately from your VPC is to use a VPC endpoint. This means you have to create a VPC endpoint for AppSync and route traffic from your custom domain to it. This is done by creating an Application Load Balancer with the desired custom DNS. Behind the ALB there are two target groups: one pointing at the private IP address of the AppSync VPC endpoint, to access the AppSync API directly, and another for the proxy that is used to access the AppSync API from the browser.&lt;/p&gt;

&lt;p&gt;With this setup, you can access the AppSync API from your custom domain without exposing the AppSync API endpoint to the end user. The proxy will handle the routing of the traffic to the AppSync API and will add the required HTTP header (X-AppSync-Domain) to the request.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wishes
&lt;/h2&gt;

&lt;p&gt;Like API Gateway, AppSync is a service that is constantly evolving. I hope that in the future we will have a native way to create private AppSync APIs with custom domains. This would make our lives a lot easier and would eliminate the need for workarounds like the one I described in this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Enterprise Support&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AppSync/latest/devguide/using-private-apis.html" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="https://docs.aws.amazon.com/AppSync/latest/devguide/using-private-apis.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AppSync/latest/devguide/using-private-apis.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AppSync/latest/devguide/real-time-websocket-client.html" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="https://docs.aws.amazon.com/AppSync/latest/devguide/real-time-websocket-client.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AppSync/latest/devguide/real-time-websocket-client.html&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/aws-amplify/amplify-data/issues/469#issuecomment-2578784043" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;a href="https://github.com/aws-amplify/amplify-data/issues/469#issuecomment-2578784043" rel="noopener noreferrer"&gt;https://github.com/aws-amplify/amplify-data/issues/469#issuecomment-2578784043&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope this post was helpful to you and if you have any questions or remarks feel free to reach out to me on &lt;a href="https://bsky.app/profile/cino.io" rel="noopener noreferrer"&gt;Bluesky&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/cinoricardo/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>appsync</category>
      <category>private</category>
      <category>networking</category>
    </item>
    <item>
      <title>Modifying DynamoDB TTL with CDK</title>
      <dc:creator>Ricardo Cino</dc:creator>
      <pubDate>Sun, 23 Mar 2025 09:00:00 +0000</pubDate>
      <link>https://forem.com/aws-builders/modifying-dynamodb-ttl-with-cdk-3g4e</link>
      <guid>https://forem.com/aws-builders/modifying-dynamodb-ttl-with-cdk-3g4e</guid>
      <description>&lt;p&gt;Ever tried to update the TTL attribute of a DynamoDB table using the AWS CDK and got a &lt;code&gt;InvalidRequest&lt;/code&gt; in CDK or a &lt;code&gt;ValidationException&lt;/code&gt; via the CLI? I did, and it took me a while to figure out why. In this post, I'll explain what happened and how to avoid the same issue in the future. This is a short post, but I think it's worth sharing because it can save you some time and frustration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Very recently I spent a few hours trying to figure out why I was getting a &lt;code&gt;ValidationException&lt;/code&gt; when trying to update the TTL attribute of a DynamoDB table using the AWS CDK. The error message was not very helpful, and I couldn't find any documentation that explained the issue. After some digging (and asking around on Slack), I discovered that the problem was that the TTL attribute was already set to a different value, and I was trying to change it to a new one, which is not allowed in a single deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modifying DynamoDB TTL with CDK
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Error(s)
&lt;/h3&gt;

&lt;p&gt;When you try to update the TTL attribute of a DynamoDB table using the AWS CDK, you might encounter the following error:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Invalid request provided: Cannot change time-to-live attribute name. To update this property, you must first disable TTL then enable TTL with the new attribute name.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For me this was a surprise; I wasn't aware this limitation existed. After this error I thought: let me try this via the CLI and see if I can update it manually before deploying the CDK stack. To clarify: I was trying to disable the TTL via the CLI and then re-run the stack with the new TTL attribute.&lt;/p&gt;

&lt;p&gt;Of course that didn't work either, because the CloudFormation stack was still in the previous state, so validating the stack threw the exact same error.&lt;/p&gt;

&lt;p&gt;Next up I thought, let's try to disable the TTL via the CLI and then re-enable it with the new attribute name. This is where I got the following error:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An error occurred (ValidationException) when calling the UpdateTimeToLive operation: Time to live has been modified multiple times within a fixed interval&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Why was I confused
&lt;/h3&gt;

&lt;p&gt;To me this was an experience full of surprises. I looked at the documentation and it didn't mention anything about this limitation, so I thought I could just change the TTL attribute name and be done with it. This was mainly due to the DynamoDB CloudFormation documentation for the &lt;code&gt;TimeToLiveSpecification&lt;/code&gt; property, which doesn't mention this limitation. It actually states that you can modify this property without any interruption on &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-timetolivespecification" rel="noopener noreferrer"&gt;this page&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And yes, there is a note below saying there are limitations on DynamoDB, linking to another page, which does &lt;em&gt;not&lt;/em&gt; mention the &lt;code&gt;TimeToLiveSpecification&lt;/code&gt; property.&lt;/p&gt;

&lt;p&gt;Now the second error I got was even more confusing. I was trying to disable the TTL and then re-enable it with a new attribute name, but I got an error saying that the TTL had been modified multiple times within a fixed interval. Again, this was something I hadn't spotted in any of the documentation, which was an error on my side. I opened the documentation for &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTimeToLive.html" rel="noopener noreferrer"&gt;UpdateTimeToLive&lt;/a&gt; and immediately scrolled down to the &lt;code&gt;Errors&lt;/code&gt; section, which does not include the &lt;code&gt;ValidationException&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;However, at the top of the page that discusses the &lt;code&gt;UpdateTimeToLive&lt;/code&gt; operation, it clearly states that:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;TimeToLiveSpecification. It can take up to one hour for the change to fully process. Any additional UpdateTimeToLive calls for the same table during this one hour duration result in a ValidationException.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So, if you try to disable the TTL and then re-enable it with a new attribute name within that one hour duration, you'll get the &lt;code&gt;ValidationException&lt;/code&gt; error.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion: if you use any kind of infrastructure-as-code tool, like the AWS CDK, you need to be aware of this limitation when modifying the TTL attribute of a DynamoDB table. You can't just change the attribute name and expect it to work. You'll need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Disable the TTL&lt;/li&gt;
&lt;li&gt;Deploy the change&lt;/li&gt;
&lt;li&gt;Wait for the change to be applied (up to one hour)&lt;/li&gt;
&lt;li&gt;Enable the TTL with the new attribute name&lt;/li&gt;
&lt;li&gt;Deploy the change&lt;/li&gt;
&lt;li&gt;Wait for the change to be applied (up to one hour)&lt;/li&gt;
&lt;/ol&gt;
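The sequencing rule above can be made explicit in a small, entirely hypothetical helper, just to illustrate why a rename needs two deployments:

```typescript
// TTL state is the attribute name, or null when TTL is disabled.
type TtlState = string | null;

// Plan the deployments needed to move from one TTL state to another.
// Renaming the attribute in a single deployment is exactly what
// CloudFormation rejects, so a rename expands into disable + enable.
function planTtlChange(current: TtlState, desired: TtlState): string[] {
  if (current === desired) return [];
  if (current !== null && desired !== null) {
    return [
      `disable TTL (${current}) and deploy`,
      "wait for the change to be applied (up to one hour)",
      `enable TTL with attribute "${desired}" and deploy`,
      "wait for the change to be applied (up to one hour)",
    ];
  }
  return [
    desired === null
      ? `disable TTL (${current}) and deploy`
      : `enable TTL with attribute "${desired}" and deploy`,
    "wait for the change to be applied (up to one hour)",
  ];
}
```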

&lt;p&gt;In hindsight, yes, I could've spotted this in the documentation, but I clearly didn't, and as a result I lost multiple hours researching, deploying, and waiting for the changes to be applied. I hope this post helps you avoid the same issue in the future.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html" rel="noopener noreferrer"&gt;docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-timetolivespecification" rel="noopener noreferrer"&gt;docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-dynamodb-table.html#cfn-dynamodb-table-timetolivespecification&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/update-time-to-live.html" rel="noopener noreferrer"&gt;awscli.amazonaws.com/v2/documentation/api/latest/reference/dynamodb/update-time-to-live.html&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>dynamodb</category>
      <category>ttl</category>
      <category>invalidrequest</category>
    </item>
    <item>
      <title>Always set AWS CDK Defaults</title>
      <dc:creator>Ricardo Cino</dc:creator>
      <pubDate>Mon, 25 Nov 2024 09:33:41 +0000</pubDate>
      <link>https://forem.com/ricardocino/always-set-aws-cdk-defaults-45jn</link>
      <guid>https://forem.com/ricardocino/always-set-aws-cdk-defaults-45jn</guid>
      <description>&lt;p&gt;We are nearing the end of the year, the time to reflect on the past year and definitely share the things that went "wrong" or in this case the things that could have been done better. This is one of those things that I wish I knew earlier, and I hope it helps you too.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS CDK Defaults... they change
&lt;/h2&gt;

&lt;p&gt;If you're reading this, you most likely already know what AWS CDK is and are probably using it too. If you are not, I'd suggest taking a look at it; it's a great way to manage your AWS infrastructure in a programmatic way (nothing against the likes of Terraform and/or Pulumi, CDK is just my chosen evil (and mandated by the company)).&lt;/p&gt;

&lt;p&gt;CDK is designed to be as user-friendly as possible, allowing you to create Amazon resources easily and quickly. To do so, it comes with a lot of defaults, which is great, but it can also be a bit risky.&lt;/p&gt;

&lt;p&gt;My reasons for saying this are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I was not aware of the defaults, and I was not aware that CDK's defaults are NOT the same as AWS CloudFormation's&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Defaults change, and they can change without you knowing it&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What happened?
&lt;/h2&gt;

&lt;p&gt;In the projects I work on we always have a full CI/CD setup with a lot of tests. For each pull request we open, we deploy a full environment with AWS CDK and run a lot of tests against it. This is great and really helps us catch a lot of issues before they hit production.&lt;/p&gt;

&lt;p&gt;While deploying all the resources is great, we also need to destroy the resources we created, and this is where the issue started. Before a &lt;a href="https://github.com/aws/aws-cdk/pull/30037" rel="noopener noreferrer"&gt;certain change in CDK&lt;/a&gt; the default RemovalPolicy for a Kinesis Stream was &lt;code&gt;DESTROY&lt;/code&gt;, which is great for testing but not so great for production. After the change the default became &lt;code&gt;RETAIN&lt;/code&gt;, which I support, as that is what you would want for production.&lt;/p&gt;

&lt;p&gt;However, because we were not setting the RemovalPolicy ourselves, we were relying on the default. This meant that after the change our Kinesis Streams were no longer deleted when we tore an environment down.&lt;/p&gt;

&lt;p&gt;What surprised me about the change is that it was part of a very large &lt;a href="https://github.com/aws/aws-cdk/pull/30037" rel="noopener noreferrer"&gt;pull-request&lt;/a&gt; where a lot of changes were made, and it was not mentioned in the release notes. This is not a critique, just something to be aware of.&lt;/p&gt;

&lt;h2&gt;
  
  
  How did I find out?
&lt;/h2&gt;

&lt;p&gt;You don't want to hear this, but yes: through the Cost Explorer. While randomly checking our Dev environment, where the pull-request environments are deployed, I noticed a surprise peak in Kinesis cost. Not sure why, I started digging into the resources and found out that many Kinesis streams were still there after their pull requests were closed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yq14mrtbwu1owh5kq3f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yq14mrtbwu1owh5kq3f.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Learnings
&lt;/h2&gt;

&lt;p&gt;This taught me to always override CDK's defaults, even if they are the same as the AWS defaults. This way you can be sure that you are in control of the resources you are creating and are not relying on the defaults of CDK, giving you no surprises when a sudden change happens.&lt;/p&gt;
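
&lt;p&gt;As a sketch of what that looks like in practice (this lives inside a Stack constructor; the stream id and the &lt;code&gt;isProduction&lt;/code&gt; flag are illustrative, not part of the original incident):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;import { RemovalPolicy } from 'aws-cdk-lib';
import { Stream } from 'aws-cdk-lib/aws-kinesis';

const stream = new Stream(this, 'EventStream');

// Pin the removal policy explicitly instead of relying on the
// construct's default, which can change between CDK releases.
stream.applyRemovalPolicy(
  isProduction ? RemovalPolicy.RETAIN : RemovalPolicy.DESTROY,
);
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;One line per resource, and a changed upstream default can no longer silently change your stacks.&lt;/p&gt;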

</description>
    </item>
    <item>
      <title>Private API Gateway with DNS</title>
      <dc:creator>Ricardo Cino</dc:creator>
      <pubDate>Thu, 21 Nov 2024 10:03:46 +0000</pubDate>
      <link>https://forem.com/ricardocino/private-api-gateway-with-dns-aep</link>
      <guid>https://forem.com/ricardocino/private-api-gateway-with-dns-aep</guid>
      <description>&lt;p&gt;At PostNL we are building most of our applications with &lt;a href="https://medium.com/postnl-engineering/business-overview-f7c8d8ebee2c" rel="noopener noreferrer"&gt;Serverless&lt;/a&gt; in mind, let me rephrase that, we build all our applications within our own landing zone with Serverless only. There is no option to deploy any kind of EC2 and if you need containers you'd be running them on Fargate only.&lt;/p&gt;

&lt;p&gt;Given that, we use quite a bunch of API Gateways in the projects I'm working on. As PostNL is a big corporate company, we have a strong focus on security and compliance, and that's why we build our applications &lt;strong&gt;Private first&lt;/strong&gt;: when there is no need to be public, it shouldn't be.&lt;/p&gt;

&lt;p&gt;This is, however, easier said than done with API Gateway, and I'll show you why.&lt;/p&gt;

&lt;h2&gt;
  
  
  API Gateway
&lt;/h2&gt;

&lt;p&gt;To start, let's have a look at Amazon API Gateway. When you create a gateway you'll be provided with an endpoint that is public by default and hosted under the Amazon domain (*.execute-api.{region}.amazonaws.com). This is not what we want; we want our API Gateway to be private and only accessible from within our VPC.&lt;/p&gt;

&lt;p&gt;For public API Gateways you can use a custom domain name and use Route 53 to point to the API Gateway endpoint. Neither is possible for private API Gateways: you can't use a custom domain name, and you can't point Route 53 at the endpoint.&lt;/p&gt;

&lt;p&gt;When you deploy a simple API Gateway without any custom DNS, your architecture looks as simple as this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsy46aeyg2olcohvucjx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flsy46aeyg2olcohvucjx.png" alt="API Gateway with Lambda" width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, when you'd like to add your own custom DNS to a public API Gateway, you can do so by creating a custom domain name and using Route 53 to point to the API Gateway endpoint (leveraging the &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html" rel="noopener noreferrer"&gt;custom domain name feature of API Gateway&lt;/a&gt;). This would look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82yt7z18g0u5snpc7wkm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82yt7z18g0u5snpc7wkm.png" alt="API Gateway with Lambda and Custom DNS" width="800" height="582"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point it still looks simple, but a private API Gateway supports neither of these: no custom domain name, and no Route 53 record pointing to the endpoint. This is where it gets tricky.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why a custom domain?
&lt;/h2&gt;

&lt;p&gt;Even though it's not supported for private APIs, you might wonder why you would need a custom domain on your API Gateway at all. To me there is a simple reason: within a larger organization on AWS, you often have other teams / AWS accounts invoking your APIs.&lt;/p&gt;

&lt;p&gt;When you are in a situation like this, you often have two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;There is a central team that manages all the APIs and provides you with the endpoint to invoke them (which still means you need a private network connection to the central API team).&lt;/li&gt;
&lt;li&gt;You allow another team to invoke your APIs by providing them with the endpoint of the API Gateway.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you don't have custom DNS on your API Gateway, the only option to expose the API to a different account is to provide them with the API Gateway ID, which they'll use together with a VPC Endpoint in their own account to reach your API Gateway. This is not a very user-friendly way to provide an API to another team.&lt;/p&gt;
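
&lt;p&gt;To illustrate how unfriendly that is: without custom DNS, the consumer typically has to call their VPC Endpoint's DNS name and identify your API with the &lt;code&gt;x-apigw-api-id&lt;/code&gt; header. A sketch (the endpoint name, region, API ID, and path are placeholders):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Invoke a private API through an interface VPC Endpoint,
# passing the API Gateway ID as a header.
curl https://vpce-0abc123-xyz789.execute-api.eu-west-1.vpce.amazonaws.com/prod/hello \
  -H "x-apigw-api-id: a1b2c3d4e5"
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Every consumer now carries your API Gateway ID in their configuration, which is exactly the coupling a custom domain name avoids.&lt;/p&gt;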

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x9cr2els0crna0aubea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x9cr2els0crna0aubea.png" alt="API Gateway with Lambda and VPC Endpoint" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A downside of this is that you &lt;em&gt;lose&lt;/em&gt; the flexibility to replace or update the API Gateway without having to update all the consumers that are using the API Gateway. When you have a custom domain name you can point the domain to a new API Gateway and the consumers don't have to change anything.&lt;/p&gt;

&lt;p&gt;When you have a custom domain name you can provide the other team with a domain name that they can use to invoke the API Gateway. This is a much more user-friendly way to provide an API to another team.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;As noted before, it is not possible to use a custom domain name on a private API Gateway, which means you can't use Route 53 to point to the endpoint either. The actual technical reason is still unknown to me, but I do know that the API Gateway is not deployed inside your own VPC.&lt;/p&gt;

&lt;p&gt;The only way to reach the API Gateway is through a VPC Endpoint: a private connection between your VPC and the API Gateway. This is a great way to get a private connection to the API Gateway, but it doesn't give you a way to reach it via a custom domain name.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;The solution to this problem is to put an Application Load Balancer (ALB) in front of the VPC Endpoint. You retrieve the private IP addresses of the VPC Endpoint and point a target group at them. The ALB is deployed in your VPC and is reachable via a custom domain name. This way you can reach the API Gateway via the custom domain name, with fully private networking.&lt;/p&gt;

&lt;p&gt;The beauty of this solution is that you can provide the custom domain name to other teams, and they can invoke the API Gateway without having to know its specifics. Because the consumer is not aware of what is behind the DNS, you can replace it with whatever you want without having to update the consumers.&lt;/p&gt;

&lt;p&gt;This is what the architecture would look like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kdmq2o954mrmusw7um4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kdmq2o954mrmusw7um4.png" alt="Private API Gateway with Lambda and Route53" width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  CDK Example
&lt;/h2&gt;

&lt;p&gt;It's not a post from me without a CDK example, so here it is. In this example we create an API Gateway with a simple mock integration response that is reachable via private DNS. The API Gateway is only reachable when invoked via the VPC Endpoint. To be able to use private DNS, we deploy an Application Load Balancer that points towards the VPC Endpoint. Based on the domain name, which points to the Application Load Balancer and is configured on the API Gateway, we can reach the API Gateway via private DNS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;apiFqn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;api.cino.dev&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Step 1: Retrieve vpc / hosted zones&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;publicHostedZone&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;privateHostedZone&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getNetwork&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Step 2: Create API Gateway VPC Endpoint&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;apiGatewayVpcEndpoint&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createApiGatewayVpcEndpoint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Step 3: Create certificate for the API Gateway / ALB&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;acmCertificate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createCertificate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;apiFqn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;publicHostedZone&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Step 3: Create ALB&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;alb&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createApplicationLoadBalancer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;apiGatewayVpcEndpoint&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Step 4: Create API Gateway&lt;/span&gt;
&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createApiGateway&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;apiFqn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;acmCertificate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;apiGatewayVpcEndpoint&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Step 5: Create ALB Listener for API Gateway VPC Endpoint&lt;/span&gt;
&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createApiGatewayVpcEndpointListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;acmCertificate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;alb&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;apiGatewayVpcEndpoint&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Step 6 Create Route53 records pointing towards&lt;/span&gt;
&lt;span class="c1"&gt;// the Application Load Balancer&lt;/span&gt;
&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createDnsRecords&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;privateHostedZone&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;alb&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;apiFqn&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, we need the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;Hosted Zones (Public &amp;amp; Private)

&lt;ul&gt;
&lt;li&gt;Public Hosted Zone is used for the certificate&lt;/li&gt;
&lt;li&gt;Private Hosted Zone is used for the DNS records&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;API Gateway VPC Endpoint&lt;/li&gt;

&lt;li&gt;Certificate for the API Gateway / ALB&lt;/li&gt;

&lt;li&gt;Application Load Balancer

&lt;ul&gt;
&lt;li&gt;Listener on port 443 for your domain pointing towards the VPC Endpoint's private IP addresses&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;API Gateway&lt;/li&gt;

&lt;li&gt;Custom Domain Name pointing towards the Application Load Balancer&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;For the full example you can visit the &lt;a href="https://github.com/cino/cdk-examples/blob/main/private-api-gateway-dns/lib/private-api-gateway-dns-stack.ts" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;, as I'll only show the most important parts of the code here. That said, let's jump straight to step 5, where we create the API Gateway VPC Endpoint listener.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nf"&gt;createApiGatewayVpcEndpointListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;acmCertificate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;Certificate&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IVpc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;alb&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ApplicationLoadBalancer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;apiGatewayVpcEndpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InterfaceVpcEndpoint&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ipAddresses&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getVpcEndpointIpAddresses&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;apiGatewayVpcEndpoint&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;targets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ipAddresses&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;IpTarget&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ip&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;apiGatewayTargetGroup&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ApplicationTargetGroup&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="s2"&gt;`PrivateApiGatewayTargetGroup`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;targetType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;TargetType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;IP&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;targetGroupName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`priv-apigw-tg`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;targets&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ApplicationProtocol&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HTTPS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nx"&gt;apiGatewayTargetGroup&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;configureHealthCheck&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`/test/hello`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;healthyHttpCodes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;200,403&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;443&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="nx"&gt;alb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addListener&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PrivateApiGatewayListener&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;protocol&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ApplicationProtocol&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;HTTPS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;certificates&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;acmCertificate&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="na"&gt;defaultTargetGroups&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;apiGatewayTargetGroup&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nf"&gt;getVpcEndpointIpAddresses&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;IVpc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;vpcEndpoint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;InterfaceVpcEndpoint&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;vpcEndpointProps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AwsCustomResource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;VpcEndpointEnis&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;onUpdate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;EC2&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;describeVpcEndpoints&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;Filters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
          &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vpc-endpoint-id&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="na"&gt;Values&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;vpcEndpoint&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpcEndpointId&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
          &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;physicalResourceId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PhysicalResourceId&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AwsCustomResourcePolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromSdkCalls&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AwsCustomResourcePolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ANY_RESOURCE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;vpcEndpointIps&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AwsCustomResource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vpc-endpoint-ip-lookup&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;onUpdate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;EC2&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;describeNetworkInterfaces&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;outputPaths&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;availabilityZones&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="s2"&gt;`NetworkInterfaces.&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.PrivateIpAddress`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}),&lt;/span&gt;
        &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;NetworkInterfaceIds&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;availabilityZones&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;vpcEndpointProps&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getResponseField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
              &lt;span class="s2"&gt;`VpcEndpoints.0.NetworkInterfaceIds.&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;
            &lt;span class="p"&gt;);&lt;/span&gt;
          &lt;span class="p"&gt;}),&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;physicalResourceId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;PhysicalResourceId&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;of&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toString&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="na"&gt;policy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AwsCustomResourcePolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromSdkCalls&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AwsCustomResourcePolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ANY_RESOURCE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;availabilityZones&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;vpcEndpointIps&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getResponseField&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="s2"&gt;`NetworkInterfaces.&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.PrivateIpAddress`&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important part of this code is that it wires up a listener for the private IP addresses of the created VPC endpoint: a target group is created with those private IP addresses, and a listener on the Application Load Balancer forwards traffic to that target group.&lt;/p&gt;

&lt;p&gt;However, the private IP addresses of the VPC endpoint are not directly available in the CDK, so we need Custom Resources to look them up: one Custom Resource retrieves the network interface IDs of the VPC endpoint, and a second Custom Resource retrieves the private IP address of each of those network interfaces.&lt;/p&gt;
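&lt;p&gt;To make the mapping concrete, the response-field paths consumed by those two lookups can be sketched as plain helper functions (a sketch for illustration only; in the stack the count comes from &lt;code&gt;vpc.availabilityZones.length&lt;/code&gt;):&lt;/p&gt;

```typescript
// Response-field paths used by the two custom resources above.
// `azCount` stands in for vpc.availabilityZones.length.
function eniIdFields(azCount: number): string[] {
  // First lookup: the network interface IDs of the VPC endpoint
  return Array.from(
    { length: azCount },
    (_, i) => `VpcEndpoints.0.NetworkInterfaceIds.${i}`
  );
}

function privateIpFields(azCount: number): string[] {
  // Second lookup: the private IP address of each network interface
  return Array.from(
    { length: azCount },
    (_, i) => `NetworkInterfaces.${i}.PrivateIpAddress`
  );
}
```

&lt;p&gt;One path is produced per availability zone, which is why both custom resources map over &lt;code&gt;vpc.availabilityZones&lt;/code&gt;.&lt;/p&gt;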

&lt;p&gt;The full example can be found &lt;a href="https://github.com/cino/cdk-examples/blob/main/private-api-gateway-dns/lib/private-api-gateway-dns-stack.ts" rel="noopener noreferrer"&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this post I've shown you how to create a private API Gateway with DNS. It is not a straightforward process and requires some additional resources to make it work, but when you need a private API Gateway with DNS, this is the way to go.&lt;/p&gt;

&lt;p&gt;Please have a good look at the GitHub link above, as there are many more details to be found there, and it is a fully working example that deploys with a single &lt;code&gt;cdk deploy&lt;/code&gt; command (provided you supply your own Route 53 hosted zone).&lt;/p&gt;

&lt;p&gt;I hope this post was helpful to you and if you have any questions or remarks feel free to reach out to me on &lt;a href="https://bsky.app/profile/cino.io" rel="noopener noreferrer"&gt;Bluesky&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/cinoricardo/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>networking</category>
    </item>
    <item>
      <title>AWS CloudShell in your own VPC</title>
      <dc:creator>Ricardo Cino</dc:creator>
      <pubDate>Thu, 21 Nov 2024 07:30:00 +0000</pubDate>
      <link>https://forem.com/ricardocino/aws-cloudshell-in-your-own-vpc-7ho</link>
      <guid>https://forem.com/ricardocino/aws-cloudshell-in-your-own-vpc-7ho</guid>
      <description>&lt;p&gt;Until recently, I was completely unaware of AWS CloudShell, and I’m glad I finally decided to give it a try. CloudShell provides a shell environment right in your browser, and to my surprise, you can start an instance within your own VPC!&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudShell in your own VPC
&lt;/h2&gt;

&lt;p&gt;Why is this so cool? Well, I was working on a new example to show on my website and needed to access my private resources. While I have a full setup at my day job, with a VPN connection into my VPC, I don't have that in my sandbox account. I was thinking about how I could access my private resources when I remembered CloudShell.&lt;/p&gt;

&lt;p&gt;When you start CloudShell you can select the VPC and subnet you want to use. This means you can access your private resources without having to set up a VPN connection, which is great for quick experiments in a sandbox account.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost
&lt;/h3&gt;

&lt;p&gt;I was a bit worried about the cost of running a CloudShell environment in my VPC, but it turns out to be free! You can run an environment in your VPC at no charge, and it will be terminated after 30 minutes of inactivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  My use case
&lt;/h2&gt;

&lt;p&gt;What I was really trying to do was resolve my private DNS entries. By creating a CloudShell environment in my own VPC, which was associated with the private hosted zone, I was able to resolve those entries and test them without having to set up a VPN connection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2umqtzwzb7pza2x3x3c9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2umqtzwzb7pza2x3x3c9.png" alt="CloudShell example with traceroute to private dns record" width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;traceroute was not actually installed, you need to do that yourself ;)&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Avoiding CloudFormation Stack Outputs</title>
      <dc:creator>Ricardo Cino</dc:creator>
      <pubDate>Sun, 17 Nov 2024 17:47:15 +0000</pubDate>
      <link>https://forem.com/ricardocino/avoiding-cloudformation-stack-outputs-bho</link>
      <guid>https://forem.com/ricardocino/avoiding-cloudformation-stack-outputs-bho</guid>
      <description>&lt;p&gt;Recently I’ve been working on a new project where we created many resources in a lot of different stacks. A feature of CloudFormation is that you can output values from your stack, which is great for referencing resources in other stacks. However, while there is a use-case for this, I’ve found that it’s better to avoid using these outputs and instead use SSM parameters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why avoid outputs?
&lt;/h2&gt;

&lt;p&gt;There is a simple reason for me to avoid using outputs, and that is &lt;strong&gt;dependencies&lt;/strong&gt;. When you output a value and import it in a different stack, you create a hard dependency between the two stacks. This means you can't delete the stack that is being referenced without first deleting the stack that is referencing it.&lt;/p&gt;

&lt;p&gt;It gets worse: you can't remove the stack output itself without first removing the stack (or the reference) that consumes it. This becomes a problem when you want to remove a resource from a stack. Even when you update multiple stacks in the same deployment it will fail, because CloudFormation will typically update the stack that produces the output before the stack that references it. So even if the consuming stack no longer needs the output after that same deployment, the update order still causes a reference error.&lt;/p&gt;

&lt;p&gt;This is good, right? Well, yes. There is absolutely a case to be made that this gives you safety and stops you from deleting resources that are still in use. However, in my case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We deploy stacks often while building test environments, and we want to be able to remove resources without having to delete every stack that references them each time.&lt;/li&gt;
&lt;li&gt;When you have an actual issue in production, untangling stack dependencies is not something you want to be doing at that moment.&lt;/li&gt;
&lt;li&gt;Destroying all your stacks can take a long time as they need to be deleted in the correct order.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Surprise outputs by AWS CDK
&lt;/h2&gt;

&lt;p&gt;If you are using AWS CDK for your infrastructure as code, you might be surprised by how many outputs you are already using! When you reference a construct from another stack, CDK automatically creates an output (and export) for the resource's identifier. This is a convenient feature, but it can catch you off guard: even if you think you are not using outputs, you are.&lt;/p&gt;

&lt;p&gt;For example, the following stacks deploy two resources: a DynamoDB table in the DataStack and a single Lambda function in the AppStack. The Lambda function is given permission to read the DynamoDB table by passing the table into the second stack and calling &lt;code&gt;table.grantReadData(lambda)&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DataStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DataStack&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AppStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AppStack&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Full example: &lt;a href="https://github.com/cino/cdk-examples/blob/main/avoid-cloudformation-outputs/lib/stacks/example-with-stack-outputs.ts" rel="noopener noreferrer"&gt;example-with-stack-outputs.ts&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When building your infrastructure in this way you will see that the CDK will automatically output the identifier of the DynamoDB table in CloudFormation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwnkb1soe7rc3q9bm4o2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcwnkb1soe7rc3q9bm4o2.png" alt="CloudFormation stack output example" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;
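&lt;p&gt;In template form, the generated export and the matching import look roughly like this (an illustrative fragment; the logical IDs are taken from the error message quoted further down):&lt;/p&gt;

```yaml
# DataStack template (generated by CDK) -- the surprise output:
Outputs:
  ExportsOutputFnGetAttDataStackTableB5A722F5ArnBE341008:
    Value: !GetAtt DataStackTableB5A722F5.Arn
    Export:
      Name: DataStack:ExportsOutputFnGetAttDataStackTableB5A722F5ArnBE341008

# AppStack template (generated by CDK) -- the Lambda's IAM policy then
# references the table through that export:
#   Resource:
#     Fn::ImportValue: DataStack:ExportsOutputFnGetAttDataStackTableB5A722F5ArnBE341008
```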

&lt;p&gt;This value is imported by the AppStack via &lt;code&gt;Fn::ImportValue&lt;/code&gt;, wired up during the synthesis phase, and used to reference the DynamoDB table. Now you have a hard dependency between the two stacks. When we remove the AppStack from the CDK app and deploy, we get the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;table&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DataStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DataStack&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// new AppStack(app, "AppStack", { table });&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Delete canceled. Cannot delete export DataStack:ExportsOutputFnGetAttDataStackTableB5A722F5ArnBE341008 as it is in use by AppStack.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You'll see that CDK now automatically tries to delete the generated output, since from its point of view it is no longer in use by the AppStack. However, because the CDK app no longer knows about the AppStack, it only performs an update on the DataStack, and that update fails because the deployed AppStack still references the export.&lt;/p&gt;

&lt;p&gt;This is just a small example of how you can get into trouble with outputs without even knowing it.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to avoid this?
&lt;/h2&gt;

&lt;p&gt;A pattern to avoid this is to store identifiers in SSM parameters. You can then reference the SSM parameter in your other stacks and delete the resource without having to delete the stacks that reference it. This does mean that when you delete a resource, you have to verify yourself that no other stack still depends on it, so you take on more risk of deleting resources that are still in use.&lt;/p&gt;
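&lt;p&gt;A minimal sketch of the pattern, assuming a hypothetical naming convention for the parameters (the CDK calls in the comments are &lt;code&gt;StringParameter&lt;/code&gt; from &lt;code&gt;aws-cdk-lib/aws-ssm&lt;/code&gt;):&lt;/p&gt;

```typescript
// Hypothetical naming convention for cross-stack SSM parameters.
const ssmPath = (env: string, name: string): string => `/my-app/${env}/${name}`;

// In the producing stack (DataStack) you would write the identifier:
//
//   new ssm.StringParameter(this, "TableArnParameter", {
//     parameterName: ssmPath("dev", "table-arn"),
//     stringValue: table.tableArn,
//   });
//
// In the consuming stack (AppStack) you would read it back at deploy time,
// without creating a CloudFormation export/import pair:
//
//   const tableArn = ssm.StringParameter.valueForStringParameter(
//     this,
//     ssmPath("dev", "table-arn")
//   );
```

&lt;p&gt;Because the consuming stack resolves the value from SSM instead of a CloudFormation export, deleting or replacing the table no longer blocks on the AppStack.&lt;/p&gt;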

&lt;p&gt;Personally, this risk is worth it to me, because you can still create dependencies between stacks at deployment time, as in the example below. This way you ensure the resources are created in the correct order, without CloudFormation outputs and their hard dependencies.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dataStack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DataStackWithSsmParameters&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;DataStackWithSsmParameters&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;appStack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;AppStackWithSsmParameters&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AppStackWithSsmParameters&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;appStack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addDependency&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dataStack&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Full example: &lt;a href="https://github.com/cino/cdk-examples/blob/main/avoid-cloudformation-outputs/lib/stacks/example-with-ssm-parameters.ts" rel="noopener noreferrer"&gt;example-with-ssm-parameters.ts&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you are using CloudFormation (or CDK), you can avoid situations where you can't update a stack because another stack references it by using SSM parameters instead of outputs. This way you can delete resources without having to delete the stacks that reference them.&lt;/p&gt;

&lt;p&gt;In all the projects I'm working on myself, I will no longer use any outputs and will use SSM parameters instead. I've been hurt too many times by the hard dependencies between stacks and the time it takes to delete all the stacks in the correct order.&lt;/p&gt;

&lt;p&gt;On one project I had to delete more than ten stacks, which took over an hour in a pipeline due to the dependencies between them. Removing the outputs and importing the values from SSM parameters instead brought the deletion time down to roughly 20 minutes. I admit that is still a long time, but that is because the codebase included a CloudFront distribution and a Fargate container (CloudFront being the slowest to delete).&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;✅ Pros&lt;/th&gt;
&lt;th&gt;❌ Cons&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Fewer conflicts when updating your stacks&lt;/td&gt;
&lt;td&gt;Less "safety" out of the box from CloudFormation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Faster deletion of all your stacks in test environments&lt;/td&gt;
&lt;td&gt;Having to declare dependencies between stacks yourself&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
    </item>
  </channel>
</rss>
